OK, that's reasonable -- it's certainly more of an enhancement than a
critical bug fix.  I'd like to get this in for 1.1.0, though, so let's
talk through the right way to do that on the PR.

In the meantime, the best alternative is to run with lax firewall settings;
the exposure can be somewhat mitigated by narrowing the kernel's ephemeral
port range.
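
For reference, the knob in question on Linux is the
net.ipv4.ip_local_port_range sysctl (e.g. sysctl -w
net.ipv4.ip_local_port_range="40000 41000").  A quick, Linux-only sketch
for checking the current range from the JVM side, in case it helps anyone
sizing their firewall rules:

    import scala.io.Source

    // Read the kernel's ephemeral port range, e.g. "32768 60999".
    // Narrowing it shrinks the span of ports that Spark's ephemeral
    // listeners can land on, and hence what the firewall must allow.
    val parts = Source
      .fromFile("/proc/sys/net/ipv4/ip_local_port_range")
      .mkString.trim.split("\\s+")
    println(s"ephemeral port range: ${parts(0)}-${parts(1)}")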

Thanks!
Andrew


On Sun, Jun 29, 2014 at 11:14 PM, Reynold Xin <r...@databricks.com> wrote:

> Hi Andrew,
>
> The port work is great to have, but these are pretty big changes to the
> core that introduce new features rather than fix important bugs. For this
> reason, it probably can't block a release (I'm not even sure it should go
> into a maintenance release, where we fix critical bugs for Spark core).
>
> We should definitely include them for 1.1.0 though (~Aug).
>
>
>
>
> On Sun, Jun 29, 2014 at 11:09 PM, Andrew Ash <and...@andrewash.com> wrote:
>
> > Thanks for helping shepherd the voting on 1.0.1, Patrick.
> >
> > I'd like to call attention to
> > https://issues.apache.org/jira/browse/SPARK-2157 and
> > https://github.com/apache/spark/pull/1107 -- "Ability to write tight
> > firewall rules for Spark"
> >
> > I'm currently unable to run Spark on some projects because our cloud ops
> > team is uncomfortable with the firewall situation around Spark at the
> > moment.  Spark starts listening on random ephemeral ports and does
> > server-to-server communication on them.  This keeps the team from
> > writing tight firewall rules between the services -- they get real queasy
> > when asked to open inbound connections to the entire ephemeral port range
> > of a cluster.  We can tighten the size of the ephemeral range using
> > kernel settings to mitigate the issue, but it doesn't actually solve
> > the problem.
> >
> > The PR above aims to make every listening port on JVMs in a Spark
> > standalone cluster configurable with an option.  If not set, the current
> > behavior stands (start listening on an ephemeral port).  Is this
> > something the Spark team would consider merging into 1.0.1?
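> >
> > Concretely, the configuration could end up looking something like the
> > sketch below (the property names and values are placeholders, not
> > necessarily what the PR will ship with):
> >
> >     import org.apache.spark.SparkConf
> >
> >     // Hypothetical fixed-port settings; anything left unset would keep
> >     // today's behavior of binding to a random ephemeral port.
> >     val conf = new SparkConf()
> >       .setAppName("firewall-friendly-app")
> >       .set("spark.driver.port", "40000")        // driver's listening port
> >       .set("spark.blockManager.port", "40001")  // block transfer service
> >
> >     // Pass this conf to SparkContext / spark-submit as usual; ops can
> >     // then open just these fixed ports between nodes instead of the
> >     // whole ephemeral range.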
> >
> > Thanks!
> > Andrew
> >
> >
> >
> > On Sun, Jun 29, 2014 at 10:54 PM, Patrick Wendell <pwend...@gmail.com>
> > wrote:
> >
> > > Hey All,
> > >
> > > We're going to move on to another RC because of this vote.
> > > Unfortunately, with the summit activities I haven't been able to usher
> > > in the necessary patches and cut the RC. I will do so as soon as
> > > possible, and then we can commence official voting.
> > >
> > > - Patrick
> > >
> > > On Sun, Jun 29, 2014 at 4:56 PM, Reynold Xin <r...@databricks.com>
> > > wrote:
> > > > We should make sure we include the following two patches:
> > > >
> > > > https://github.com/apache/spark/pull/1264
> > > >
> > > > https://github.com/apache/spark/pull/1263
> > > >
> > > >
> > > >
> > > >
> > > > On Fri, Jun 27, 2014 at 8:39 PM, Krishna Sankar <ksanka...@gmail.com>
> > > > wrote:
> > > >
> > > >> +1
> > > >> Compiled for CentOS 6.5, deployed in our 4 node cluster (Hadoop 2.2,
> > > >> YARN)
> > > >> Smoke Tests (sparkPi, spark-shell, web UI) successful
> > > >>
> > > >> Cheers
> > > >> <k/>
> > > >>
> > > >>
> > > >> On Thu, Jun 26, 2014 at 7:06 PM, Patrick Wendell <pwend...@gmail.com>
> > > >> wrote:
> > > >>
> > > >> > Please vote on releasing the following candidate as Apache Spark
> > > >> > version 1.0.1!
> > > >> >
> > > >> > The tag to be voted on is v1.0.1-rc1 (commit 7feeda3):
> > > >> > https://git-wip-us.apache.org/repos/asf?p=spark.git;a=commit;h=7feeda3d729f9397aa15ee8750c01ef5aa601962
> > > >> >
> > > >> > The release files, including signatures, digests, etc. can be
> > > >> > found at:
> > > >> > http://people.apache.org/~pwendell/spark-1.0.1-rc1/
> > > >> >
> > > >> > Release artifacts are signed with the following key:
> > > >> > https://people.apache.org/keys/committer/pwendell.asc
> > > >> >
> > > >> > The staging repository for this release can be found at:
> > > >> > https://repository.apache.org/content/repositories/orgapachespark-1020/
> > > >> >
> > > >> > The documentation corresponding to this release can be found at:
> > > >> > http://people.apache.org/~pwendell/spark-1.0.1-rc1-docs/
> > > >> >
> > > >> > Please vote on releasing this package as Apache Spark 1.0.1!
> > > >> >
> > > >> > The vote is open until Monday, June 30, at 03:00 UTC and passes if
> > > >> > a majority of at least 3 +1 PMC votes are cast.
> > > >> >
> > > >> > [ ] +1 Release this package as Apache Spark 1.0.1
> > > >> > [ ] -1 Do not release this package because ...
> > > >> >
> > > >> > To learn more about Apache Spark, please see
> > > >> > http://spark.apache.org/
> > > >> >
> > > >> > === About this release ===
> > > >> > This release fixes a few high-priority bugs in 1.0 and has a
> > > >> > variety of smaller fixes. The full list is here:
> > > >> > http://s.apache.org/b45. Some of the more visible patches are:
> > > >> >
> > > >> > SPARK-2043: ExternalAppendOnlyMap doesn't always find matching keys
> > > >> > SPARK-2156 and SPARK-1112: Issues with jobs hanging due to akka
> > > >> > frame size.
> > > >> > SPARK-1790: Support r3 instance types on EC2.
> > > >> >
> > > >> > This is the first maintenance release on the 1.0 line. We plan to
> > > >> > make additional maintenance releases as new fixes come in.
> > > >> >
> > > >> > - Patrick
> > > >> >
> > > >>
> > >
> >
>
