Re: [DISCUSSION] Restructure Storm documentation

2016-01-22 Thread Nathan Marz
At the very least, the Javadocs should be available by version. This is
something I used to do but looks like we forgot to keep doing that after
the transition to Apache. Maintaining other docs (tutorials, etc.) by
version is more difficult as those are rarely updated at the time of
release.

On Fri, Jan 22, 2016 at 2:01 PM, Bobby Evans  wrote:

> It doesn't have to be Taylor cutting releases.  The only major requirement
> around that is that the PMC votes on the release.
>  - Bobby
>
> On Friday, January 22, 2016 3:48 PM, Kyle Nusbaum
>  wrote:
>
>
>  Yep, that's precisely what I was thinking.
>
> I don't really see a problem with the process being manual. It won't be
> *too* much work, and we do releases infrequently enough that I don't see it
> as a burden. A small helper script would probably be trivial to write.
>
> Of course, Taylor is the one cutting the releases, so I'll defer to him on
> the automated/manual issue. -- Kyle
>
> On Friday, January 22, 2016 3:45 PM, P. Taylor Goetz <
> ptgo...@gmail.com> wrote:
>
>
>  I’m definitely open to improving the process such that we can have
> version-specific documentation, and finding a way to automate updating the
> asf-site branch during the release process. I’m also okay if that process
> is somewhat manual.
>
> I’ve thought about it a little but haven’t really come up with a process.
>
> Ideally we’d do something that would do a snapshot of the docs at release
> time and create a subdirectory in the asf-site website (e.g. “1.0.0-docs”).
>
> I’m open to suggestions.
>
> -Taylor
>
> > On Jan 22, 2016, at 4:25 PM, Kyle Nusbaum 
> wrote:
> >
> > The new website is awesome.
> >
> > It would be great to keep tabs on documentation for different versions
> of Storm and host those different versions on the site.
> >
> > I don't care too much for having all the documentation in its own
> branch. I would suggest that each version branch of Storm keeps its own
> version of the docs -- or keeps any modifications to the docs, if not the
> entire collection, in order to keep the common parts in sync -- and that
> these docs get merged into the asf-site branch in their own version
> directory as part of the release process.
> > Please let me know what you think and I'll file Jira issues as
> necessary. -- Kyle
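A release-time snapshot step like the one Taylor describes above (copying the docs tree into a version-named subdirectory of the asf-site branch, e.g. "1.0.0-docs") could be sketched as a small helper. This is only an illustration of the idea; the class name, paths, and naming scheme are assumptions, not an agreed process:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.stream.Stream;

public class SnapshotDocs {
    /** Copy the docs tree at src into a version-named subdirectory under siteRoot. */
    static void snapshot(Path src, Path siteRoot, String version) throws IOException {
        Path dest = siteRoot.resolve(version + "-docs");   // e.g. "1.0.0-docs"
        try (Stream<Path> walk = Files.walk(src)) {
            // Files.walk visits directories before their contents,
            // so parent directories always exist before we copy files.
            for (Path p : (Iterable<Path>) walk::iterator) {
                Path target = dest.resolve(src.relativize(p));
                if (Files.isDirectory(p)) {
                    Files.createDirectories(target);
                } else {
                    Files.copy(p, target, StandardCopyOption.REPLACE_EXISTING);
                }
            }
        }
    }
}
```

Run against a release branch's docs directory, this would produce the versioned subdirectory Taylor suggests, ready to commit to asf-site.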



-- 
Twitter: @nathanmarz
http://nathanmarz.com


Re: [VOTE] Accept Alibaba JStorm Code Donation

2015-10-27 Thread Nathan Marz
+1 (binding)

On Tue, Oct 27, 2015 at 5:30 PM, 임정택  wrote:

> +1
>
> 2015-10-28 2:48 GMT+09:00 P. Taylor Goetz :
>
> > All,
> >
> > The IP Clearance process for the Alibaba JStorm code donation has
> > completed.
> >
> > The IP Clearance Status document can be found here:
> >
> > http://incubator.apache.org/ip-clearance/storm-jstorm.html
> >
> > The source code can be found at https://github.com/alibaba/jstorm with
> > the following git commit SHA: e935da91a897797dad56e24c4ffa57860ac91878
> >
> > This is a VOTE to accept the code donation, and import the donated code
> > into the Apache Storm git repository. Discussion regarding how to proceed
> > with merging the codebases can take place in a separate thread.
> >
> > [ ] +1 Accept the Alibaba JStorm code donation.
> > [ ] +0 Indifferent
> > [ ] -1 Do not accept the code donation because…
> >
> > This VOTE will be open for at least 72 hours.
> >
> > -Taylor
> >
> >
> >
>
>
> --
> Name : 임 정택
> Blog : http://www.heartsavior.net / http://dev.heartsavior.net
> Twitter : http://twitter.com/heartsavior
> LinkedIn : http://www.linkedin.com/in/heartsavior
>





Re: Upgrading Storm to use Netty 4.0 or higher

2015-04-20 Thread Nathan Marz
I would like to know what the benefits of upgrading would be. Upgrading a
dependency, especially something as core as this, carries with it risk.
Once we know the benefits we can weigh that against the risk.

On Mon, Apr 20, 2015 at 1:54 PM, Bobby Evans ev...@yahoo-inc.com.invalid
wrote:

 We have not really explored going to netty 4.0.  I know at some point we
 will probably have to switch over to use it, but for the time being most of
 the dependencies that we have seen still use netty 3.X, but if you have
 code to use netty 4.x and want to contribute I would be happy to review it
 and see what we can do to support that.  Even if it involves doing some
 shading to support it.
  - Bobby
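The shading Bobby mentions would amount to relocating the Netty packages inside a Storm artifact so that Storm's Netty version cannot clash with a user's. A rough maven-shade-plugin sketch, where the shaded package name is a hypothetical choice and not an existing Storm convention:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <!-- Rewrite Netty 4.x classes into a private namespace -->
          <relocation>
            <pattern>io.netty</pattern>
            <shadedPattern>org.apache.storm.shade.io.netty</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```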



  On Friday, April 17, 2015 10:48 PM, Julian Stephen 
 julian.step...@gmail.com wrote:


  Hi All,
   I am quite curious to know if there is any interest or activity around
 upgrading Storm to use Netty 4.0 or higher. From what I understand, Storm
 (at least as of the 0.10.0 dev branch I checked out) still depends on Netty
 3.x, and Netty 4.x is not compatible with 3.x code (see
 http://netty.io/wiki/new-and-noteworthy-in-4.0.html). I could not find any
 related issues in the Apache Storm Jira, and am curious to know if anyone
 has explored the implications and impact of such a change.

 Regards,
 Julian









Re: New Committer/PMC Member: Parth Brahmbhatt

2015-03-30 Thread Nathan Marz
Congrats Parth!

On Mon, Mar 30, 2015 at 4:59 PM, P. Taylor Goetz ptgo...@apache.org wrote:

 Please join me in welcoming Parth Brahmbhatt as a new Apache Storm
 Committer/PMC member.

 Parth has demonstrated a strong commitment to the Apache Storm community
 through active participation and mentoring on the Storm mailing lists.
 Furthermore, he has authored many enhancements and bug fixes spanning both
 Storm’s core codebase and numerous integration components.

 Congratulations and welcome Parth!

 -Taylor






Re: [VOTE] Adopt Apache Storm Project Bylaws

2015-02-19 Thread Nathan Marz
+1

On Thu, Feb 19, 2015 at 8:15 AM, Andy Feng andy.f...@gmail.com wrote:

 +1

 Andy Feng

 Sent from my iPhone

  On Feb 18, 2015, at 1:43 PM, P. Taylor Goetz ptgo...@apache.org wrote:
 
  As a follow-up to the previous discussion regarding adopting project
 bylaws, I’d like to start an official VOTE to formally adopt the bylaws as
 listed below.
 
  Please vote on adopting the proposed bylaws.
 
  [+1] Adopt the bylaws as listed
  [+0] No opinion
  [-1] Do not adopt the bylaws because…
 
  This vote will be 2/3 Majority as described below, and open for 6 days.
 
  -Taylor
 
  -
 
  # Apache Storm Project Bylaws
 
 
  ## Roles and Responsibilities
 
  Apache projects define a set of roles with associated rights and
 responsibilities. These roles govern what tasks an individual may perform
 within the project. The roles are defined in the following sections:
 
  ### Users:
 
  The most important participants in the project are people who use our
 software. The majority of our developers start out as users and guide their
 development efforts from the user's perspective.
 
  Users contribute to the Apache projects by providing feedback to
 developers in the form of bug reports and feature suggestions. As well,
 users participate in the Apache community by helping other users on mailing
 lists and user support forums.
 
  ### Contributors:
 
  Contributors are all of the volunteers who are contributing time, code,
 documentation, or resources to the Storm Project. A contributor that makes
 sustained, welcome contributions to the project may be invited to become a
 Committer, though the exact timing of such invitations depends on many
 factors.
 
  ### Committers:
 
  The project's Committers are responsible for the project's technical
 management. Committers have access to all project source repositories.
 Committers may cast binding votes on any technical discussion regarding
 storm.
 
  Committer access is by invitation only and must be approved by lazy
 consensus of the active PMC members. A Committer is considered emeritus by
 their own declaration or by not contributing in any form to the project for
 over six months. An emeritus Committer may request reinstatement of commit
 access from the PMC. Such reinstatement is subject to lazy consensus
 approval of active PMC members.
 
  All Apache Committers are required to have a signed Contributor License
 Agreement (CLA) on file with the Apache Software Foundation. There is a
 [Committers' FAQ](https://www.apache.org/dev/committers.html) which
 provides more details on the requirements for Committers.
 
  A Committer who makes a sustained contribution to the project may be
 invited to become a member of the PMC. The form of contribution is not
 limited to code. It can also include code review, helping out users on the
 mailing lists, documentation, testing, etc.
 
  ### Project Management Committee (PMC):
 
  The PMC is responsible to the board and the ASF for the management and
 oversight of the Apache Storm codebase. The responsibilities of the PMC
 include:
 
   * Deciding what is distributed as products of the Apache Storm project.
 In particular all releases must be approved by the PMC.
   * Maintaining the project's shared resources, including the codebase
 repository, mailing lists, websites.
   * Speaking on behalf of the project.
   * Resolving license disputes regarding products of the project.
   * Nominating new PMC members and Committers.
   * Maintaining these bylaws and other guidelines of the project.
 
  Membership of the PMC is by invitation only and must be approved by a
 consensus approval of active PMC members. A PMC member is considered
 emeritus by their own declaration or by not contributing in any form to
 the project for over six months. An emeritus member may request
 reinstatement to the PMC. Such reinstatement is subject to consensus
 approval of the active PMC members.
 
  The chair of the PMC is appointed by the ASF board. The chair is an
 office holder of the Apache Software Foundation (Vice President, Apache
 Storm) and has primary responsibility to the board for the management of
 the projects within the scope of the Storm PMC. The chair reports to the
 board quarterly on developments within the Storm project.
 
  The chair of the PMC is rotated annually. When the chair is rotated or
 if the current chair of the PMC resigns, the PMC votes to recommend a new
 chair using Single Transferable Vote (STV) voting. See
 http://wiki.apache.org/general/BoardVoting for specifics. The decision
 must be ratified by the Apache board.
 
  ## Voting
 
  Decisions regarding the project are made by votes on the primary project
 development mailing list (dev@storm.apache.org). Where necessary, PMC
 voting may take place on the private Storm PMC mailing list. Votes are
 clearly indicated by subject line starting with [VOTE]. Votes may contain
 multiple items for approval and these should be clearly separated. 

[jira] [Commented] (STORM-677) Maximum retries strategy may cause data loss

2015-02-19 Thread Nathan Marz (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14327642#comment-14327642
 ] 

Nathan Marz commented on STORM-677:
---

Option 2 doesn't have to be long term, as it should be easy to implement. I do 
not view the options as looking very similar: I think Option 2 will be 
significantly more robust – getting out of a weird state as fast as possible is 
really important.

"If that itself can cause other workers to give up on a connection it could 
result in the topology never reaching a stable state." – This is exactly why 
the amount of time spent attempting to make a connection must be related to the 
start timeout for a worker.

 Maximum retries strategy may cause data loss
 

 Key: STORM-677
 URL: https://issues.apache.org/jira/browse/STORM-677
 Project: Apache Storm
  Issue Type: Bug
Affects Versions: 0.9.3, 0.10.0
Reporter: Michael Noll
Priority: Minor
  Labels: Netty

 h3. Background
 Storm currently supports the configuration setting 
 storm.messaging.netty.max_retries.  This setting is supposed to limit the 
 number of reconnection attempts a Netty client will perform in case of a 
 connection loss.
 Unfortunately users have run into situations where this behavior will result 
 in data loss:
 {quote}
 https://github.com/apache/storm/pull/429/files#r24681006
 This could be a separate JIRA, but we ran into a situation where we hit the 
 maximum number of reconnection attempts, and the exception was eaten because 
 it was thrown from a background thread and it just killed the background 
 thread. This code appears to do the same thing.
 {quote}
 The problem can be summarized by the following example:  Once a Netty client 
 hits the maximum number of connection retries, it will stop trying to 
 reconnect (as intended) but will also continue to run forever without being 
 able to send any messages to its designated remote targets.  At this point 
 data will be lost because any messages that the Netty client is supposed to 
 send will be dropped (by design).  And since the Netty client is still alive 
 and thus considered functional, Storm is not able to do something about 
 this data loss situation.
 For a more detailed description please take a look at the discussion in 
 https://github.com/apache/storm/pull/429/files#r24742354.
 h3. Possible solutions
 (Most of this section is copy-pasted from an [earlier discussion on this 
 problem|https://github.com/apache/storm/pull/429/files#r24742354].)
 There are at least three approaches we may consider:
 # Let the Netty client die if max retries is reached, so that the Storm task 
 has the chance to re-create a client and thus break out of the client's 
 discard-messages-forever state.
 # Let the parent Storm task die if (one of its possibly many) Netty clients 
 dies, so that by restarting the task we'll also get a new Netty client.
 # Remove the max retries semantics as well as the corresponding setting from 
 Storm's configuration. Here, a Netty client will continue to reconnect to a 
 remote destination forever. The possible negative impact of these reconnects 
 (e.g. number of TCP connection attempts in a cluster) are kept in check by 
 our exponential backoff policy for such connection retries.
 My personal opinion on these three approaches:
 - I do not like (1) because I feel it introduces potentially confusing 
 semantics: We keep having a max retries setting, but it is not really a hard 
 limit anymore. It rather becomes a "max retries until we recreate a Netty 
 client", and would also reset any exponential backoff strategy of the 
 previous Netty client instance (cf. StormBoundedExponentialBackoffRetry). 
 If we do want such resets (but I don't think we do at this point), then a 
 cleaner approach would be to implement such resetting inside the retry policy 
 (again, cf. StormBoundedExponentialBackoffRetry).
 - I do not like (2) because a single bad Netty client would be able to take 
 down a Storm task, which among other things would also impact any other, 
 working Netty clients of the Storm task.
 - Option (3) seems a reasonable approach, although it breaks backwards 
 compatibility with regard to Storm's configuration (because we'd now ignore 
 storm.messaging.netty.max_retries).
 Here's initial feedback from other developers:
 {quote}
 https://github.com/apache/storm/pull/429/files#r24824540
 revans2: I personally prefer option 3, no maximum number of reconnection 
 attempts. Having the client decide that it is done, before nimbus does feels 
 like it is asking for trouble.
 {quote}
 {quote}
 https://github.com/ptgoetz
 ptgoetz: I'm in favor of option 3 as well. I'm not that concerned about 
 storm.messaging.netty.max_retries being ignored. We could probably just log a 
 warning
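The exponential backoff policy referenced above (cf. StormBoundedExponentialBackoffRetry) can be sketched roughly as follows; this is a hypothetical illustration of the idea, not Storm's actual class:

```java
// Hypothetical sketch of a bounded exponential backoff policy, in the
// spirit of StormBoundedExponentialBackoffRetry (not the real class).
public class BoundedExponentialBackoff {
    private final long baseSleepMs;  // delay before the first retry
    private final long maxSleepMs;   // hard cap on any single delay
    private int retryCount = 0;

    public BoundedExponentialBackoff(long baseSleepMs, long maxSleepMs) {
        this.baseSleepMs = baseSleepMs;
        this.maxSleepMs = maxSleepMs;
    }

    /** Delay doubles on each retry until it reaches the cap. */
    public long nextSleepMs() {
        // Clamp the shift so the multiplier cannot overflow a long.
        long sleep = baseSleepMs * (1L << Math.min(retryCount, 30));
        retryCount++;
        return Math.min(sleep, maxSleepMs);
    }
}
```

Under option 3 (no max retries), a client would consult such a policy before every reconnect attempt, forever; the cap is what keeps the cluster-wide rate of TCP connection attempts bounded.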

Re: [DISCUSS] Adopt Apache Storm Bylaws

2015-02-12 Thread Nathan Marz
Yes, I would like to codify it. It's not about there being a bug with a
patch – it's about realizing that particular patch does not fit in with a
coherent vision of Storm, or that functionality could be achieved in a
completely different way. So basically, preventing bloat. With that change
I'm +1 to the bylaws and I believe we would have a consensus.

On Wed, Feb 11, 2015 at 7:34 PM, P. Taylor Goetz ptgo...@gmail.com wrote:

 I have no problem with your proposal. Actually I never even considered
 setting a timeline for a revert. I've always felt that if there was any
 problem with a patch/modification, it could be reverted at any time -- no
 deadline. If we find a problem, we fix it. We've reverted changes in the
 past, and lived to tell about it :).

 So I would think we don't even have to mention any revert timeline. If we
 feel the need to codify that, I'm okay with it.

 -Taylor

  On Feb 11, 2015, at 9:06 PM, Nathan Marz nat...@nathanmarz.com wrote:
 
  I'm -1 on these bylaws. This commit process encourages merging as fast as
  possible and does not give adequate time for dissenting opinions to veto
 a
  patch. I'm concerned about two things:
 
  1. Regressions - Having too lax of a merge process will lead to
 unforeseen
  regressions. We all saw this first hand with ZeroMQ: I had to freeze the
  version of ZeroMQ used by Storm because subsequent versions would regress
  in numerous ways.
  2. Bloat – All software projects have a tendency to become bloated and
  build complexity because things were added piecemeal without a coherent
  vision.
 
  These are very serious issues, and I've seen too many projects become
  messes because of them. The only way to control these problems are with
  -1's. Trust isn't even the issue here – one committer may very well
 think a
  new feature looks fine and why not let it in, while another will
  recognize that the feature is unnecessary, adds complexity, and/or can be
  addressed via better means. As is, the proposed bylaws are attempting to
  make vetoing very difficult.
 
  I have a proposal which I believe gets the best of all worlds: allowing
 for
  fast responsiveness on contributions while allowing for regressions and
  bloat to be controlled. It is just a slight modification of the current
  bylaws:
 
  A minimum of one +1 from a Committer other than the one who authored the
  patch, and no -1s. The code can be committed after the first +1. If a -1
 is
  received to the patch within 7 days after the patch was posted, it may be
  reverted immediately if it was already merged.
 
  To be clear, if a patch was posted on the 7th and merged on the 10th, it
  may be -1'd and reverted until the 14th.
 
  With this process patches can be merged just as fast as before, but it
 also
  allows for committers with a more holistic or deeper understanding of a
  part of Storm to prevent unnecessary complexity.
 
 
  On Tue, Feb 10, 2015 at 7:48 AM, Bobby Evans ev...@yahoo-inc.com.invalid
 
  wrote:
 
  I am fine with this. I mostly want a starting point, and we can adjust
  things from there if need be.
  - Bobby
 
 
  On Sunday, February 8, 2015 8:39 PM, Harsha st...@harsha.io
 wrote:
 
 
 
  Thanks for putting this together. Proposed bylaws looks good to
  me. -Harsha
 
 
  On Thu, Feb 5, 2015, at 02:10 PM, P. Taylor Goetz wrote:
  Associated pull request can be found here:
  https://github.com/apache/storm/pull/419
 
 
  This is another attempt at gaining consensus regarding adopting
  official bylaws for the Apache Storm project. The changes are minor
  and should be apparent in the pull request diff.
 
  In earlier discussions, there were concerns raised about certain
  actions requiring approval types that were too strict. In retrospect,
  and after reviewing the bylaws of other project (Apache Drill [1],
  Apache Hadoop [2]) as well as the official Glossary of Apache-Related
  Terms [3], it seems that some of those concerns were somewhat
  unfounded, and stemmed from the fact that different projects use
  different and inconsistent names for various approval types.
 
  In an effort to remedy the situation, I have modified the “Approvals”
  table to use the same names as the Glossary of Apache-Related Terms
  [3]. The table below provides a mapping between the terms used in this
  proposed update to the Apache Storm bylaws, the Apache Glossary, the
  Apache Drill bylaws, and the Apache Hadoop bylaws.
 
 
   | Proposed Storm Bylaws | Apache Glossary | Apache Drill | Apache Hadoop | Definition |
   |---|---|---|---|---|
   | Consensus Approval | Consensus Approval | Lazy Consensus | Consensus Approval | 3 binding +1 votes and no binding -1 votes |
   | Majority Approval | Majority Approval | Lazy Majority | Lazy Majority | At least 3 binding +1 votes and more +1 votes than -1 votes |
   | Lazy Consensus | Lazy Consensus | Lazy Approval | Lazy Consensus | No -1 votes ("silence gives assent") |
   | 2/3 Majority | N/A | 2/3 Majority* | Lazy 2/3 Majority | At least 3 +1 votes and twice as many +1 votes as -1 votes |

Re: [DISCUSS] Adopt Apache Storm Bylaws

2015-02-12 Thread Nathan Marz
+1

On Thu, Feb 12, 2015 at 5:57 PM, P. Taylor Goetz ptgo...@gmail.com wrote:

 Pull request updated.

 Here’s a link to the latest commit:
 https://github.com/ptgoetz/storm/commit/18a68a074570db01fc6377a269feb90ecda898ab

 - Taylor

 On Feb 12, 2015, at 8:41 PM, P. Taylor Goetz ptgo...@gmail.com wrote:

   Great to hear. I will update the pull request accordingly.
 
  -Taylor
 
 
  On Feb 12, 2015, at 5:24 PM, Derek Dagit der...@yahoo-inc.com.INVALID
 wrote:
 
  I am OK with codifying the retroactive -1 as proposed by Nathan, and I
  am otherwise OK with the proposed bylaws.
  --
  Derek
 
 
 
  - Original Message -
  From: Bobby Evans ev...@yahoo-inc.com.INVALID
  To: dev@storm.apache.org dev@storm.apache.org
  Cc:
  Sent: Thursday, February 12, 2015 8:12 AM
  Subject: Re: [DISCUSS] Adopt Apache Storm Bylaws
 
  That seems fine to me.  Most other projects I have worked on follow a
 similar procedure, and a retroactive -1 can be applied, without having it
 codified, but making it official seems fine to me.
  I am +1 for those changes.
  - Bobby
 
 
 
  On Thursday, February 12, 2015 2:23 AM, Nathan Marz 
 nat...@nathanmarz.com wrote:
 
 
  Yes, I would like to codify it. It's not about there being a bug with a
  patch – it's about realizing that particular patch does not fit in with
 a
  coherent vision of Storm, or that functionality could be achieved in a
  completely different way. So basically, preventing bloat. With that
 change
  I'm +1 to the bylaws and I believe we would have a consensus.
 
  On Wed, Feb 11, 2015 at 7:34 PM, P. Taylor Goetz ptgo...@gmail.com
 wrote:
 
  I have no problem with your proposal. Actually I never even considered
  setting a timeline for a revert. I've always felt that if there was any
  problem with a patch/modification, it could be reverted at any time --
 no
  deadline. If we find a problem, we fix it. We've reverted changes in
 the
  past, and lived to tell about it :).
 
  So I would think we don't even have to mention any revert timeline. If
 we
  feel the need to codify that, I'm okay with it.
 
  -Taylor
 
  On Feb 11, 2015, at 9:06 PM, Nathan Marz nat...@nathanmarz.com
 wrote:
 
  I'm -1 on these bylaws. This commit process encourages merging as
 fast as
  possible and does not give adequate time for dissenting opinions to
 veto
  a
  patch. I'm concerned about two things:
 
  1. Regressions - Having too lax of a merge process will lead to
  unforeseen
  regressions. We all saw this first hand with ZeroMQ: I had to freeze
 the
  version of ZeroMQ used by Storm because subsequent versions would
 regress
  in numerous ways.
  2. Bloat – All software projects have a tendency to become bloated and
  build complexity because things were added piecemeal without a
 coherent
  vision.
 
  These are very serious issues, and I've seen too many projects become
  messes because of them. The only way to control these problems are
 with
  -1's. Trust isn't even the issue here – one committer may very well
  think a
  new feature looks fine and why not let it in, while another will
  recognize that the feature is unnecessary, adds complexity, and/or
 can be
  addressed via better means. As is, the proposed bylaws are attempting
 to
  make vetoing very difficult.
 
  I have a proposal which I believe gets the best of all worlds:
 allowing
  for
  fast responsiveness on contributions while allowing for regressions
 and
  bloat to be controlled. It is just a slight modification of the
 current
  bylaws:
 
  A minimum of one +1 from a Committer other than the one who authored
 the
  patch, and no -1s. The code can be committed after the first +1. If a
 -1
  is
  received to the patch within 7 days after the patch was posted, it
 may be
  reverted immediately if it was already merged.
 
  To be clear, if a patch was posted on the 7th and merged on the 10th,
 it
  may be -1'd and reverted until the 14th.
 
  With this process patches can be merged just as fast as before, but it
  also
  allows for committers with a more holistic or deeper understanding of
 a
  part of Storm to prevent unnecessary complexity.
 
 
  On Tue, Feb 10, 2015 at 7:48 AM, Bobby Evans
 ev...@yahoo-inc.com.invalid
 
  wrote:
 
  I am fine with this. I mostly want a starting point, and we can
 adjust
  things from there if need be.
  - Bobby
 
 
 On Sunday, February 8, 2015 8:39 PM, Harsha st...@harsha.io
  wrote:
 
 
 
  Thanks for putting this together. Proposed bylaws looks good to
  me. -Harsha
 
 
  On Thu, Feb 5, 2015, at 02:10 PM, P. Taylor Goetz wrote:
  Associated pull request can be found here:
  https://github.com/apache/storm/pull/419
 
 
  This is another attempt at gaining consensus regarding adopting
  official bylaws for the Apache Storm project. The changes are minor
  and should be apparent in the pull request diff.
 
  In earlier discussions, there were concerns raised about certain
  actions requiring approval types that were too strict. In
 retrospect,
  and after reviewing

Re: [DISCUSS] Adopt Apache Storm Bylaws

2015-02-11 Thread Nathan Marz
I'm -1 on these bylaws. This commit process encourages merging as fast as
possible and does not give adequate time for dissenting opinions to veto a
patch. I'm concerned about two things:

1. Regressions - Having too lax of a merge process will lead to unforeseen
regressions. We all saw this first hand with ZeroMQ: I had to freeze the
version of ZeroMQ used by Storm because subsequent versions would regress
in numerous ways.
2. Bloat – All software projects have a tendency to become bloated and
build complexity because things were added piecemeal without a coherent
vision.

These are very serious issues, and I've seen too many projects become
messes because of them. The only way to control these problems are with
-1's. Trust isn't even the issue here – one committer may very well think a
new feature looks fine and why not let it in, while another will
recognize that the feature is unnecessary, adds complexity, and/or can be
addressed via better means. As is, the proposed bylaws are attempting to
make vetoing very difficult.

I have a proposal which I believe gets the best of all worlds: allowing for
fast responsiveness on contributions while allowing for regressions and
bloat to be controlled. It is just a slight modification of the current
bylaws:

A minimum of one +1 from a Committer other than the one who authored the
patch, and no -1s. The code can be committed after the first +1. If a -1 is
received to the patch within 7 days after the patch was posted, it may be
reverted immediately if it was already merged.

To be clear, if a patch was posted on the 7th and merged on the 10th, it
may be -1'd and reverted until the 14th.

With this process patches can be merged just as fast as before, but it also
allows for committers with a more holistic or deeper understanding of a
part of Storm to prevent unnecessary complexity.


On Tue, Feb 10, 2015 at 7:48 AM, Bobby Evans ev...@yahoo-inc.com.invalid
wrote:

 I am fine with this. I mostly want a starting point, and we can adjust
  things from there if need be.
  - Bobby


  On Sunday, February 8, 2015 8:39 PM, Harsha st...@harsha.io wrote:



 Thanks for putting this together. Proposed bylaws looks good to
 me. -Harsha


 On Thu, Feb 5, 2015, at 02:10 PM, P. Taylor Goetz wrote:
  Associated pull request can be found here:
  https://github.com/apache/storm/pull/419
 
 
  This is another attempt at gaining consensus regarding adopting
  official bylaws for the Apache Storm project. The changes are minor
  and should be apparent in the pull request diff.
 
  In earlier discussions, there were concerns raised about certain
  actions requiring approval types that were too strict. In retrospect,
  and after reviewing the bylaws of other project (Apache Drill [1],
  Apache Hadoop [2]) as well as the official Glossary of Apache-Related
  Terms [3], it seems that some of those concerns were somewhat
  unfounded, and stemmed from the fact that different projects use
  different and inconsistent names for various approval types.
 
  In an effort to remedy the situation, I have modified the “Approvals”
  table to use the same names as the Glossary of Apache-Related Terms
  [3]. The table below provides a mapping between the terms used in this
  proposed update to the Apache Storm bylaws, the Apache Glossary, the
  Apache Drill bylaws, and the Apache Hadoop bylaws.
 
 
   | Proposed Storm Bylaws | Apache Glossary | Apache Drill | Apache Hadoop | Definition |
   |---|---|---|---|---|
   | Consensus Approval | Consensus Approval | Lazy Consensus | Consensus Approval | 3 binding +1 votes and no binding -1 votes |
   | Majority Approval | Majority Approval | Lazy Majority | Lazy Majority | At least 3 binding +1 votes and more +1 votes than -1 votes |
   | Lazy Consensus | Lazy Consensus | Lazy Approval | Lazy Consensus | No -1 votes ("silence gives assent") |
   | 2/3 Majority | N/A | 2/3 Majority* | Lazy 2/3 Majority | At least 3 +1 votes and twice as many +1 votes as -1 votes |

   * The Apache Drill bylaws do not define "2/3 Majority" in the
     Approvals table, but it is used in the Actions table.
 
   Please keep these differences in terminology in mind when comparing the
   proposed bylaws with those of other projects.
 
  I would like to use this DISCUSS thread as a forum for reaching
  consensus to approve the proposed bylaws and to discuss any changes
  needed to reach that point. If successful, the VOTE to officially
  adopt the bylaws should be a technicality and pass without dissent.
 
  -Taylor
 
 
  [1]https://cwiki.apache.org/confluence/display/DRILL/Project+Bylaws
  [2]http://hadoop.apache.org/bylaws.html
  [3]http://www.apache.org/foundation/glossary.html









Re: [DISCUSS] Logging framework logback - log4j 2.x

2015-02-09 Thread Nathan Marz
The critical feature is putting a cap on the total size of log files kept
for a worker. As long as log4j2 allows you to put a hard limit (e.g. 1GB
total across all log files for a worker, with older files being deleted as
limit is exceeded), then I don't mind switching.
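For reference, log4j2 can express that kind of hard cap via size-based rollover with a bounded number of archived files. A configuration sketch, where the file names, sizes, and layout are illustrative assumptions rather than a proposed Storm config:

```xml
<Configuration>
  <Appenders>
    <!-- One active file of up to 100 MB plus at most 9 rolled files:
         roughly a 1 GB ceiling, oldest archive deleted on rollover. -->
    <RollingFile name="worker" fileName="logs/worker.log"
                 filePattern="logs/worker.log.%i">
      <PatternLayout pattern="%d %-5p %c{1} - %m%n"/>
      <SizeBasedTriggeringPolicy size="100 MB"/>
      <DefaultRolloverStrategy max="9"/>
    </RollingFile>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="worker"/>
    </Root>
  </Loggers>
</Configuration>
```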

On Mon, Feb 9, 2015 at 9:43 AM, Bobby Evans ev...@yahoo-inc.com.invalid
wrote:

 I'm not totally positive on this, but the little test I ran did not cause
 any serious issues.  I created a small project that just logs using slf4j
 and log4j 1.2 API with the slf4j log4j2 bridge and the log4j1.2
 compatibility bridge on the classpath.

 ```java
 package test;

 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;

 public class Test {
     private static final Logger LOG = LoggerFactory.getLogger(Test.class);
     private static final org.apache.log4j.Logger logger =
             org.apache.log4j.Logger.getLogger(Test.class);

     public static void main(String[] args) {
         System.out.println("Testing...");
         LOG.error("slf4j Testing...");
         logger.error("log4j Testing...");
     }
 }
 ```
 I then manipulated the classpath to have log4j-1.2 and slf4j-log4j12 at
 the end of the classpath so that the log4j2 jars would override any log4j1
 jars.

 mvn exec:exec -Dexec.executable=java -Dexec.args="-cp
 %classpath:~/.m2/repository/org/slf4j/slf4j-log4j12/1.7.10/slf4j-log4j12-1.7.10.jar:~/.m2/repository/log4j/log4j/1.2.17/log4j-1.2.17.jar
 test.Test"

 I got the log messages I expected, and an error message about
 multiple bindings that I think we can ignore.
 SLF4J: Class path contains multiple SLF4J bindings.
 SLF4J: Found binding in
 [jar:file:/Users/evans/.m2/repository/org/apache/logging/log4j/log4j-slf4j-impl/2.1/log4j-slf4j-impl-2.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: Found binding in
 [jar:file:/Users/evans/.m2/repository/org/slf4j/slf4j-log4j12/1.7.10/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
 explanation.
 SLF4J: Actual binding is of type
 [org.apache.logging.slf4j.Log4jLoggerFactory]
 ERROR StatusLogger No log4j2 configuration file found. Using default
 configuration: logging only errors to the console.
 Testing...
 11:36:53.880 [main] ERROR test.Test - slf4j Testing...
 11:36:53.881 [main] ERROR test.Test - log4j Testing...
  I can live with SLF4J spitting out error messages; at least all of
 the logs come out.  With our current setup, if someone doesn't exclude
 things properly, it crashes.

 - Bobby


  On Monday, February 9, 2015 10:59 AM, Michael Rose 
 mich...@fullcontact.com wrote:


  slf4j-log4j12 would still need to be excluded with log4j2, as you must use
 slf4j-log4j2. log4j2 itself has a package and coordinate change, so now
 people would be excluding slf4j-log4j12, log4j 1.2, and logback. Switching
 to log4j2 does not solve that particular issue and perhaps slightly
 exacerbates it.

 If the only reason is to have a RFC5424-compliant syslog appender, why not
 fix logback's or build a separate one?

 *Michael Rose*
 Senior Platform Engineer
 *Full*Contact | fullcontact.com
 
 
 m: +1.720.837.1357 | t: @xorlev


 

 On Mon, Feb 9, 2015 at 9:35 AM, Harsha st...@harsha.io wrote:

  I am +1 on switching to log4j2. I second Bobby on the pain of excluding
  log4j bindings; new users/devs run into this issue quite often.
  Thanks,
  Harsha
 
  On Mon, Feb 9, 2015, at 08:28 AM, Bobby Evans wrote:
   I haven't seen any reply to this yet. It is a real pain to repeatedly
    tell our downstream users to run mvn dependency:tree, look for slf4j
    log4j bindings, and exclude them.  That alone is enough for me to say
    let's switch.
- Bobby
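For context, the exclusion dance Bobby describes looks roughly like this in a downstream pom.xml. The coordinates below are illustrative stand-ins for whatever third-party dependency happens to pull in a conflicting log4j binding:

```xml
<!-- Illustrative only: strip a transitive slf4j-log4j12/log4j binding from
     a hypothetical third-party dependency so it cannot clash with Storm's
     logback binding on the classpath. -->
<dependency>
  <groupId>some.thirdparty</groupId>
  <artifactId>their-library</artifactId>
  <version>1.0</version>
  <exclusions>
    <exclusion>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-log4j12</artifactId>
    </exclusion>
    <exclusion>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```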
  
  
On Monday, February 2, 2015 3:07 PM, Derek Dagit
der...@yahoo-inc.com.INVALID wrote:
  
  
In the past, the storm project used log4j version 1.x as its logging
   framework.  Around the time of 0.9.0, before moving to Apache, the
   project
   moved to using logback for two reasons:
  
    1) logback supported rolling log files, which was critical for managing
       disk space usage.
    2) logback supported dynamically updating its logging configuration
       files.
  
  
    Recently, we have met a new requirement that we send logs to a syslog
    daemon for further processing.  The syslog daemon expects the particular
    format described in RFC 5424, and using it basically means that things
    like stack traces have their newlines properly contained within a single
    logging event, instead of being written raw into the log, which makes
    extra parsing necessary.
  
   log4j version 

[jira] [Commented] (STORM-561) Add ability to create topologies dynamically

2014-12-03 Thread Nathan Marz (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14233947#comment-14233947
 ] 

Nathan Marz commented on STORM-561:
---

[~amontalenti] Why are you using the Clojure DSL for streamparse? I think 
you'll find things a lot easier to just create the Thrift objects directly in 
Python code. Storm is designed so that you don't even have to touch Java or any 
JVM language in order to construct and submit topologies.

Storm already has the ability to create topologies completely dynamically, 
where we define dynamically as not needing any recompilation. Storm's Thrift 
interfaces allow you to:

1. Create spout and bolt objects of any language implementation from any other 
language. The ComponentObject struct can be specified either as a serialized 
java object, a java class name + arguments, or as a ShellComponent (for running 
spouts/bolts via subprocesses to allow for other languages).
2. Define topologies from any language. Storm gets this for free by the nature 
of topologies being Thrift objects.
3. Submit topologies from any language. Again, Storm gets this for free by the 
nature of Nimbus being a Thrift server and topologies being Thrift objects.

This means that the majority of what's been proposed in this issue is redundant 
with what's already in Storm. The exception is Trident, but that should have 
its own issue opened for this feature.

The direction I'd like to see a patch for this issue go is making a nice 
*library* in an interpreted language (like Python) that makes a pretty wrapper 
interface around the Thrift stuff, since manipulating Thrift objects directly 
is a little verbose. In addition, it can handle packaging of any artifacts the 
topology will need (like spout and bolt implementations) into a .jar file. The 
generated Python code for manipulating the Thrift structures is packaged with 
Storm at storm-core/src/py/. 
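To make the idea of such a wrapper library concrete, here is a rough sketch in Python. The classes below are illustrative stand-ins modeled loosely on the structs in storm.thrift (SpoutSpec, Bolt, StormTopology, ShellComponent); real code would use the generated Thrift module shipped under storm-core/src/py/ instead of defining them by hand, and the builder API shown is hypothetical, not an existing interface:

```python
from dataclasses import dataclass, field
from typing import Dict

# Stand-ins for the Thrift-generated structs (names modeled on storm.thrift).
@dataclass
class ShellComponent:
    """Runs a spout/bolt as a subprocess, enabling non-JVM languages."""
    execution_command: str
    script: str

@dataclass
class SpoutSpec:
    spout_object: ShellComponent
    parallelism_hint: int = 1

@dataclass
class Bolt:
    bolt_object: ShellComponent
    inputs: Dict[str, str] = field(default_factory=dict)  # upstream -> grouping
    parallelism_hint: int = 1

@dataclass
class StormTopology:
    spouts: Dict[str, SpoutSpec] = field(default_factory=dict)
    bolts: Dict[str, Bolt] = field(default_factory=dict)

class TopologyBuilder:
    """Thin wrapper hiding the struct plumbing, per the suggestion above."""
    def __init__(self):
        self._topology = StormTopology()

    def set_spout(self, name, command, script, parallelism=1):
        self._topology.spouts[name] = SpoutSpec(
            ShellComponent(command, script), parallelism)
        return self

    def set_bolt(self, name, command, script, inputs, parallelism=1):
        self._topology.bolts[name] = Bolt(
            ShellComponent(command, script), dict(inputs), parallelism)
        return self

    def build(self):
        return self._topology

# Define a two-component shell topology entirely from Python.
topology = (TopologyBuilder()
            .set_spout("words", "python", "word_spout.py")
            .set_bolt("count", "python", "count_bolt.py",
                      {"words": "shuffle"}, parallelism=4)
            .build())
```

A real library would additionally serialize this StormTopology via Thrift and submit it to Nimbus over the Thrift interface.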


 Add ability to create topologies dynamically
 

 Key: STORM-561
 URL: https://issues.apache.org/jira/browse/STORM-561
 Project: Apache Storm
  Issue Type: Improvement
Reporter: Nathan Leung
Assignee: Nathan Leung
   Original Estimate: 336h
  Remaining Estimate: 336h

 It would be nice if a storm topology could be built dynamically, instead of 
 requiring a recompile to change parameters (e.g. number of workers, number of 
 tasks, layout, etc).
 I would propose the following data structures for building core storm 
 topologies.  I haven't done a design for trident yet but the intention would 
 be to add trident support when core storm support is complete (or in parallel 
 if there are other people working on it):
 {code}
 // fields value and arguments are mutually exclusive
 class Argument {
     String argumentType;       // Class used to look up arguments in method/constructor
     String implementationType; // Class used to create this argument
     String value;              // String used to construct this argument
     List<Argument> arguments;  // arguments used to build this argument
 }
 class Dependency {
     String upstreamComponent;  // name of upstream component
     String grouping;
     List<Argument> arguments;  // arguments for the grouping
 }
 class StormSpout {
     String name;
     String klazz;              // Class of this spout
     List<Argument> arguments;
     int numTasks;
     int numExecutors;
 }
 class StormBolt {
     String name;
     String klazz;              // Class of this bolt
     List<Argument> arguments;
     int numTasks;
     int numExecutors;
     List<Dependency> dependencies;
 }
 class StormTopologyRepresentation {
     String name;
     List<StormSpout> spouts;
     List<StormBolt> bolts;
     Map config;
     int numWorkers;
 }
 {code}
 Topology creation will be built on top of the data structures above.  The 
 benefits:
 * Dependency free.  Code to unmarshal from json, xml, etc, can be kept in 
 extensions, or as examples, and users can write a different unmarshaller if 
 they want to use a different text representation.
 * support for arbitrary spout and bolt types
 * support for all groupings and streams, via reflection
 * ability to specify the configuration map via a config file
 * reification of spout / bolt / dependency arguments
 ** recursive argument reification for complex objects
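The recursive argument reification mentioned above could be sketched roughly as follows. This is a hypothetical illustration (written in Python, since the wrapper-library discussion points that way); the dict shape mirrors the Argument class in the proposal, and none of this is proposed API:

```python
import importlib

def reify(arg):
    """Recursively turn an Argument-like dict into a live object.

    A leaf argument carries a string 'value'; a composite argument carries
    a list of sub-arguments that are reified first, then passed to the
    implementation class's constructor.
    """
    impl = arg.get("implementationType")
    if "value" in arg:
        # Leaf: construct from the string value (default to str if no class).
        cls = _resolve(impl) if impl else str
        return cls(arg["value"])
    children = [reify(a) for a in arg.get("arguments", [])]
    return _resolve(impl)(*children)

def _resolve(dotted_name):
    """Look up a class by its fully qualified dotted name."""
    module, _, name = dotted_name.rpartition(".")
    return getattr(importlib.import_module(module or "builtins"), name)

# e.g. build a decimal.Decimal from a plain-data description:
value = reify({"implementationType": "decimal.Decimal", "value": "3.5"})
```

A Java implementation would do the same thing with Class.forName and constructor lookup via reflection.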



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (STORM-561) Add ability to create topologies dynamically

2014-12-03 Thread Nathan Marz (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14233947#comment-14233947
 ] 

Nathan Marz edited comment on STORM-561 at 12/4/14 6:36 AM:


[~amontalenti] Why are you using the Clojure DSL for streamparse? I think 
you'll find things a lot easier to just create the Thrift objects directly in 
Python code. Storm is designed so that you don't even have to touch Java or any 
JVM language in order to construct and submit topologies.

Storm already has the ability to create topologies completely dynamically, 
where we define dynamically as not needing any recompilation. Storm's Thrift 
interfaces allow you to:

1. Create spout and bolt objects of any language implementation from any other 
language. The ComponentObject struct can be specified either as a serialized 
java object, a java class name + arguments, or as a ShellComponent (for running 
spouts/bolts via subprocesses to allow for other languages).
2. Define topologies from any language. Storm gets this for free by the nature 
of topologies being Thrift objects.
3. Submit topologies from any language. Again, Storm gets this for free by the 
nature of Nimbus being a Thrift server and topologies being Thrift objects.

This means that the majority of what's been proposed in this issue is redundant 
with what's already in Storm. The exception is Trident, but that should have 
its own issue opened for this feature.

The direction I'd like to see a patch for this issue go is making a nice 
*library* in an interpreted language (like Python) that makes a pretty wrapper 
interface around the Thrift stuff, since manipulating Thrift objects directly 
is a little verbose. In addition, it can handle packaging of any artifacts the 
topology will need (like spout and bolt implementations) into a .jar file. The 
generated Python code for manipulating the Thrift structures is packaged with 
Storm at storm-core/src/py/. 

For reference, here's the storm.thrift file: 
https://github.com/apache/storm/blob/master/storm-core/src/storm.thrift






Re: [DISCUSS] Release Apache Storm 0.9.3

2014-11-14 Thread Nathan Marz
-1. Looking at https://issues.apache.org/jira/browse/STORM-350 it seems the
upgrade to disruptor caused message loss issues. That upgrade should be
reverted, or I'd like @clockfly to provide more insight.

On Fri, Nov 14, 2014 at 5:19 PM, Harsha st...@harsha.io wrote:

 I am +1 releasing 0.9.3
 +1 on including STORM-555.
 Thanks,
 Harsha

 On Fri, Nov 14, 2014, at 02:29 PM, P. Taylor Goetz wrote:
  I’d like to get the community’s opinion on releasing 0.9.3 with what is
  currently in the 0.9.3 branch. This would be the official release
  (skipping the unofficial rc2).
 
  The only addition I’d like to include is STORM-555, which should be
  eligible for merging early next week.
 
  Thoughts?
 
  -Taylor
 
 




-- 
Twitter: @nathanmarz
http://nathanmarz.com


[jira] [Resolved] (STORM-523) Project.clj missing

2014-10-09 Thread Nathan Marz (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nathan Marz resolved STORM-523.
---
Resolution: Not a Problem

-1, Storm's build is Maven now. Maintaining two separate builds would be a 
nightmare.

 Project.clj missing
 ---

 Key: STORM-523
 URL: https://issues.apache.org/jira/browse/STORM-523
 Project: Apache Storm
  Issue Type: Improvement
Reporter: nicolas ginder
Priority: Minor

 project.clj files are missing. There are only pom.xml files; it would be good 
 to generate project.clj from pom.xml. 





[jira] [Commented] (STORM-404) Worker on one machine crashes due to a failure of another worker on another machine

2014-10-05 Thread Nathan Marz (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14159592#comment-14159592
 ] 

Nathan Marz commented on STORM-404:
---

Do you see this problem with the ZeroMQ transport or just with Netty? The big 
change from 0.8.* to 0.9.* was making Netty the default. The behavior of the 
transport is supposed to be to just drop messages if it can't connect to the 
other worker (or buffer and drop once the reassignment is received). It sounds 
like the Netty transport might not be doing this.

 Worker on one machine crashes due to a failure of another worker on another 
 machine
 ---

 Key: STORM-404
 URL: https://issues.apache.org/jira/browse/STORM-404
 Project: Apache Storm
  Issue Type: Bug
Affects Versions: 0.9.2-incubating
Reporter: Itai Frenkel

 I have two workers (one on each machine). The first worker (10.30.206.125) had 
 a problem starting (it could not find the Nimbus host); however, the second 
 worker crashed too, since it could not connect to the first worker.
 This looks like a cascading failure, which seems like a bug.
 2014-07-15 17:43:32 b.s.m.n.Client [INFO] Reconnect started for 
 Netty-Client-ip-10-30-206-125.ec2.internal/10.30.206.125:6700... [17]
 2014-07-15 17:43:33 b.s.m.n.Client [INFO] Reconnect started for 
 Netty-Client-ip-10-30-206-125.ec2.internal/10.30.206.125:6700... [18]
 2014-07-15 17:43:34 b.s.m.n.Client [INFO] Reconnect started for 
 Netty-Client-ip-10-30-206-125.ec2.internal/10.30.206.125:6700... [19]
 2014-07-15 17:43:35 b.s.m.n.Client [INFO] Reconnect started for 
 Netty-Client-ip-10-30-206-125.ec2.internal/10.30.206.125:6700... [20]
 2014-07-15 17:43:36 b.s.m.n.Client [INFO] Reconnect started for 
 Netty-Client-ip-10-30-206-125.ec2.internal/10.30.206.125:6700... [21]
 2014-07-15 17:43:37 b.s.m.n.Client [INFO] Reconnect started for 
 Netty-Client-ip-10-30-206-125.ec2.internal/10.30.206.125:6700... [22]
 2014-07-15 17:43:38 b.s.m.n.Client [INFO] Reconnect started for 
 Netty-Client-ip-10-30-206-125.ec2.internal/10.30.206.125:6700... [23]
 2014-07-15 17:43:39 b.s.m.n.Client [INFO] Reconnect started for 
 Netty-Client-ip-10-30-206-125.ec2.internal/10.30.206.125:6700... [24]
 2014-07-15 17:43:40 b.s.m.n.Client [INFO] Reconnect started for 
 Netty-Client-ip-10-30-206-125.ec2.internal/10.30.206.125:6700... [25]
 2014-07-15 17:43:41 b.s.m.n.Client [INFO] Reconnect started for 
 Netty-Client-ip-10-30-206-125.ec2.internal/10.30.206.125:6700... [26]
 2014-07-15 17:43:42 b.s.m.n.Client [INFO] Reconnect started for 
 Netty-Client-ip-10-30-206-125.ec2.internal/10.30.206.125:6700... [27]
 2014-07-15 17:43:43 b.s.m.n.Client [INFO] Reconnect started for 
 Netty-Client-ip-10-30-206-125.ec2.internal/10.30.206.125:6700... [28]
 2014-07-15 17:43:44 b.s.m.n.Client [INFO] Reconnect started for 
 Netty-Client-ip-10-30-206-125.ec2.internal/10.30.206.125:6700... [29]
 2014-07-15 17:43:45 b.s.m.n.Client [INFO] Reconnect started for 
 Netty-Client-ip-10-30-206-125.ec2.internal/10.30.206.125:6700... [30]
 2014-07-15 17:43:46 b.s.m.n.Client [INFO] Closing Netty Client 
 Netty-Client-ip-10-30-206-125.ec2.internal/10.30.206.125:6700
 2014-07-15 17:43:46 b.s.m.n.Client [INFO] Waiting for pending batchs to be 
 sent with Netty-Client-ip-10-30-206-125.ec2.internal/10.30.206.125:6700..., 
 timeout: 60ms, pendings: 0
 2014-07-15 17:43:46 b.s.util [ERROR] Async loop died!
 java.lang.RuntimeException: java.lang.RuntimeException: Client is being 
 closed, and does not take requests any more
 at 
 backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:128)
  ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
 at 
 backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:99)
  ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
 at 
 backtype.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:80)
  ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
 at 
 backtype.storm.disruptor$consume_loop_STAR_$fn__758.invoke(disruptor.clj:94) 
 ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
 at backtype.storm.util$async_loop$fn__457.invoke(util.clj:431) 
 ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
 at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
 at java.lang.Thread.run(Thread.java:745) [na:1.7.0_60]
 Caused by: java.lang.RuntimeException: Client is being closed, and does not 
 take requests any more
 at backtype.storm.messaging.netty.Client.send(Client.java:194) 
 ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
 at backtype.storm.utils.TransferDrainer.send(TransferDrainer.java:54) 
 ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
 at 
 backtype.storm.daemon.worker$mk_transfer_tuples_handler$fn__5927$fn__5928.invoke(worker.clj:322)
  ~[storm-core-0.9.2-incubating.jar:0.9.2