We're cancelling this RC in favor of rc10. There were two blockers: an
issue with Windows run scripts and an issue with the packaging for
Hadoop 1 when Hive support is bundled.

https://issues.apache.org/jira/browse/SPARK-1875
https://issues.apache.org/jira/browse/SPARK-1876

Thanks everyone for the testing. TD will be cutting rc10, since I'm
travelling this week (thanks TD!).

- Patrick

On Mon, May 19, 2014 at 7:06 PM, Nan Zhu <zhunanmcg...@gmail.com> wrote:
> just reran my test on rc5
>
> everything works
>
> built applications with sbt against the spark-*.jar compiled with Hadoop
> 2.3
>
> +1
>
> --
> Nan Zhu
>
>
> On Sunday, May 18, 2014 at 11:07 PM, witgo wrote:
>
>> How do I reproduce this bug?
>>
>>
>> ------------------ Original ------------------
>> From: "Patrick Wendell";<pwend...@gmail.com (mailto:pwend...@gmail.com)>;
>> Date: Mon, May 19, 2014 10:08 AM
>> To: "dev@spark.apache.org 
>> (mailto:dev@spark.apache.org)"<dev@spark.apache.org 
>> (mailto:dev@spark.apache.org)>;
>> Cc: "Tom Graves"<tgraves...@yahoo.com (mailto:tgraves...@yahoo.com)>;
>> Subject: Re: [VOTE] Release Apache Spark 1.0.0 (rc9)
>>
>>
>>
>> Hey Matei - the issue you found is not related to security. This patch
>> from a few days ago broke builds for Hadoop 1 with YARN support enabled.
>> The patch directly altered the way we deal with the commons-lang
>> dependency, which is at the root of this stack trace.
>>
>> https://github.com/apache/spark/pull/754
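>>
>> For anyone hitting the same conflict downstream while testing, here is a
>> minimal sketch of a build.sbt workaround that pins a single commons-lang
>> on the application classpath. The "2.6" version is an assumption for
>> illustration only, not what Spark or PR 754 actually pins; check which
>> version your Hadoop distribution expects.
>>
>> // Declare one explicit commons-lang version so the application and the
>> // bundled Hadoop/Spark jars agree. The version here is an assumed placeholder.
>> libraryDependencies += "commons-lang" % "commons-lang" % "2.6"
>>
>> // Resolve any transitive pulls of other commons-lang versions to the same one.
>> dependencyOverrides += "commons-lang" % "commons-lang" % "2.6"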
>>
>> - Patrick
>>
>> On Sun, May 18, 2014 at 5:28 PM, Matei Zaharia <matei.zaha...@gmail.com> wrote:
>> > Alright, I've opened https://github.com/apache/spark/pull/819 with the 
>> > Windows fixes. I also found one other likely bug, 
>> > https://issues.apache.org/jira/browse/SPARK-1875, in the binary packages 
>> > for Hadoop 1 built in this RC. I think this is due to Hadoop 1's security 
>> > code depending on a different version of org.apache.commons than Hadoop 2, 
>> > but it needs investigation. Tom, any thoughts on this?
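>> >
>> > One quick way to see which commons-lang actually wins on a deployed
>> > classpath is a tiny probe like the sketch below. The object name is made
>> > up, and org.apache.commons.lang.StringUtils is just a representative
>> > commons-lang 2.x class chosen for illustration:
>> >
>> > // Hedged diagnostic sketch: print which jar a commons-lang class was loaded from.
>> > object WhichCommonsLang {
>> >   def main(args: Array[String]): Unit = {
>> >     val cls = Class.forName("org.apache.commons.lang.StringUtils")
>> >     val source = Option(cls.getProtectionDomain.getCodeSource).map(_.getLocation)
>> >     println(source.getOrElse("no code source found for " + cls.getName))
>> >   }
>> > }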
>> >
>> > Matei
>> >
>> > On May 18, 2014, at 12:33 PM, Matei Zaharia <matei.zaha...@gmail.com> wrote:
>> >
>> > > I took on the always-fun task of testing it on Windows, and unfortunately, 
>> > > I found some small problems with the prebuilt packages due to recent 
>> > > changes to the launch scripts: bin/spark-class2.cmd looks in ./jars 
>> > > instead of ./lib for the assembly JAR, and bin/run-example2.cmd doesn't 
>> > > quite match the master-setting behavior of the Unix-based one. I'll send 
>> > > a pull request to fix them soon.
>> > >
>> > > Matei
>> > >
>> > >
>> > > On May 17, 2014, at 11:32 AM, Sandy Ryza <sandy.r...@cloudera.com> wrote:
>> > >
>> > > > +1
>> > > >
>> > > > Reran my tests from rc5:
>> > > >
>> > > > * Built the release from source.
>> > > > * Compiled Java and Scala apps that interact with HDFS against it.
>> > > > * Ran them in local mode.
>> > > > * Ran them against a pseudo-distributed YARN cluster in both
>> > > > yarn-client mode and yarn-cluster mode.
>> > > >
>> > > >
>> > > > On Sat, May 17, 2014 at 10:08 AM, Andrew Or <and...@databricks.com> wrote:
>> > > >
>> > > > > +1
>> > > > >
>> > > > >
>> > > > > 2014-05-17 8:53 GMT-07:00 Mark Hamstra <m...@clearstorydata.com>:
>> > > > >
>> > > > > > +1
>> > > > > >
>> > > > > >
>> > > > > > On Sat, May 17, 2014 at 12:58 AM, Patrick Wendell
>> > > > > > <pwend...@gmail.com> wrote:
>> > > > > >
>> > > > > >
>> > > > > > > I'll start the voting with a +1.
>> > > > > > >
>> > > > > > > On Sat, May 17, 2014 at 12:58 AM, Patrick Wendell
>> > > > > > > <pwend...@gmail.com> wrote:
>> > > > > > > > Please vote on releasing the following candidate as Apache
>> > > > > > > > Spark version 1.0.0!
>> > > > > > > > This has one bug fix and one minor feature on top of rc8:
>> > > > > > > > SPARK-1864: https://github.com/apache/spark/pull/808
>> > > > > > > > SPARK-1808: https://github.com/apache/spark/pull/799
>> > > > > > > >
>> > > > > > > > The tag to be voted on is v1.0.0-rc9 (commit 920f947):
>> > > > > > > > https://git-wip-us.apache.org/repos/asf?p=spark.git;a=commit;h=920f947eb5a22a679c0c3186cf69ee75f6041c75
>> > > > > > > >
>> > > > > > > > The release files, including signatures, digests, etc. can be
>> > > > > > > > found at:
>> > > > > > > > http://people.apache.org/~pwendell/spark-1.0.0-rc9/
>> > > > > > > >
>> > > > > > > > Release artifacts are signed with the following key:
>> > > > > > > > https://people.apache.org/keys/committer/pwendell.asc
>> > > > > > > >
>> > > > > > > > The staging repository for this release can be found at:
>> > > > > > > > https://repository.apache.org/content/repositories/orgapachespark-1017/
>> > > > > > > >
>> > > > > > > > The documentation corresponding to this release can be found 
>> > > > > > > > at:
>> > > > > > > > http://people.apache.org/~pwendell/spark-1.0.0-rc9-docs/
>> > > > > > > >
>> > > > > > > > Please vote on releasing this package as Apache Spark 1.0.0!
>> > > > > > > >
>> > > > > > > > The vote is open until Tuesday, May 20, at 08:56 UTC and 
>> > > > > > > > passes if
>> > > > > > > > a majority of at least 3 +1 PMC votes are cast.
>> > > > > > > >
>> > > > > > > > [ ] +1 Release this package as Apache Spark 1.0.0
>> > > > > > > > [ ] -1 Do not release this package because ...
>> > > > > > > >
>> > > > > > > > To learn more about Apache Spark, please see
>> > > > > > > > http://spark.apache.org/
>> > > > > > > >
>> > > > > > > > == API Changes ==
>> > > > > > > > We welcome users to compile Spark applications against 1.0. 
>> > > > > > > > There are
>> > > > > > > > a few API changes in this release. Here are links to the 
>> > > > > > > > associated
>> > > > > > > > upgrade guides - user facing changes have been kept as small as
>> > > > > > > > possible.
>> > > > > > > >
>> > > > > > > > changes to ML vector specification:
>> > > > > > > > http://people.apache.org/~pwendell/spark-1.0.0-rc8-docs/mllib-guide.html#from-09-to-10
>> > > > > > > >
>> > > > > > > > changes to the Java API:
>> > > > > > > > http://people.apache.org/~pwendell/spark-1.0.0-rc8-docs/java-programming-guide.html#upgrading-from-pre-10-versions-of-spark
>> > > > > > > >
>> > > > > > > > changes to the streaming API:
>> > > > > > > > http://people.apache.org/~pwendell/spark-1.0.0-rc8-docs/streaming-programming-guide.html#migration-guide-from-091-or-below-to-1x
>> > > > > > > >
>> > > > > > > > changes to the GraphX API:
>> > > > > > > > http://people.apache.org/~pwendell/spark-1.0.0-rc8-docs/graphx-programming-guide.html#upgrade-guide-from-spark-091
>> > > > > > > >
>> > > > > > > > coGroup and related functions now return Iterable[T] instead of
>> > > > > > > > Seq[T]
>> > > > > > > > ==> Call toSeq on the result to restore the old behavior
>> > > > > > > >
>> > > > > > > > SparkContext.jarOfClass returns Option[String] instead of 
>> > > > > > > > Seq[String]
>> > > > > > > > ==> Call toSeq on the result to restore old behavior
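>> > > > > > > >
>> > > > > > > > A minimal sketch of what these two migrations look like in
>> > > > > > > > application code is below; the object name, app name, and RDD
>> > > > > > > > contents are made-up examples, not taken from real code:
>> > > > > > > >
>> > > > > > > > import org.apache.spark.{SparkConf, SparkContext}
>> > > > > > > > import org.apache.spark.SparkContext._
>> > > > > > > >
>> > > > > > > > object MigrationSketch {
>> > > > > > > >   def main(args: Array[String]): Unit = {
>> > > > > > > >     val sc = new SparkContext(
>> > > > > > > >       new SparkConf().setAppName("migration-sketch").setMaster("local"))
>> > > > > > > >     val left  = sc.parallelize(Seq((1, "a"), (2, "b")))
>> > > > > > > >     val right = sc.parallelize(Seq((1, "x")))
>> > > > > > > >
>> > > > > > > >     // cogroup values are now Iterable[_]; call toSeq where Seq
>> > > > > > > >     // semantics are needed.
>> > > > > > > >     val restored = left.cogroup(right).mapValues {
>> > > > > > > >       case (ls, rs) => (ls.toSeq, rs.toSeq)
>> > > > > > > >     }
>> > > > > > > >     restored.collect().foreach(println)
>> > > > > > > >
>> > > > > > > >     // jarOfClass now returns Option[String]; toSeq restores the
>> > > > > > > >     // old Seq[String] shape.
>> > > > > > > >     val jars: Seq[String] = SparkContext.jarOfClass(getClass).toSeq
>> > > > > > > >     println(jars)
>> > > > > > > >
>> > > > > > > >     sc.stop()
>> > > > > > > >   }
>> > > > > > > > }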
>> > > > > > > >
>> > > > > > >
>> > > > > >
>> > > > >
>> > > > >
>> > > >
>> > > >
>> > >
>> > >
>> >
>> >
>>
>>
>>
>
>
