+1 (non-binding, of course)

1. Compiled on OS X 10.10 (Yosemite) OK; total time: 26:48 min
     mvn clean package -Pyarn -Phadoop-2.6 -DskipTests
2. Tested pyspark, mllib (IPython 4.0; FYI, the notebook install is now
separate: "conda install python" and then "conda install jupyter"). A
condensed sketch of these tests follows item 2.6.
2.1. statistics (min,max,mean,Pearson,Spearman) OK
2.2. Linear/Ridge/Lasso Regression OK
2.3. Decision Tree, Naive Bayes OK
2.4. KMeans OK
       Center And Scale OK
2.5. RDD operations OK
      State of the Union texts - map/reduce, filter, sortByKey (word count)
2.6. Recommendation (MovieLens medium dataset, ~1M ratings) OK
       Model evaluation/optimization (rank, numIter, lambda) with itertools OK
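
A condensed pyspark sketch of what 2.1-2.6 exercised (variable names, paths,
and parameters below are placeholders, not the exact notebook code):

    from pyspark.mllib.stat import Statistics
    from pyspark.mllib.feature import StandardScaler
    from pyspark.mllib.clustering import KMeans
    from pyspark.mllib.regression import LinearRegressionWithSGD
    from pyspark.mllib.recommendation import ALS

    # 2.1 column statistics and correlations on an RDD of feature vectors
    summary = Statistics.colStats(vec_rdd)          # min, max, mean per column
    pearson = Statistics.corr(vec_rdd, method="pearson")
    spearman = Statistics.corr(vec_rdd, method="spearman")

    # 2.2 regression on an RDD of LabeledPoint (Ridge/Lasso are analogous)
    lr_model = LinearRegressionWithSGD.train(points, iterations=100)

    # 2.4 center and scale the features, then cluster
    scaler = StandardScaler(withMean=True, withStd=True).fit(vec_rdd)
    clusters = KMeans.train(scaler.transform(vec_rdd), k=3, maxIterations=20)

    # 2.5 word count over the State of the Union texts
    counts = (sc.textFile("sotu/*.txt")
                .flatMap(lambda line: line.split())
                .map(lambda w: (w, 1))
                .reduceByKey(lambda a, b: a + b)
                .map(lambda wc: (wc[1], wc[0]))
                .sortByKey(ascending=False))

    # 2.6 ALS on MovieLens (user, movie, rating) triples; rank, iterations,
    # and lambda_ were swept with itertools.product for model selection
    rec_model = ALS.train(ratings, rank=10, iterations=10, lambda_=0.01)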
3. Scala - MLlib
3.1. statistics (min,max,mean,Pearson,Spearman) OK
3.2. LinearRegressionWithSGD OK
3.3. Decision Tree OK
3.4. KMeans OK
3.5. Recommendation (MovieLens medium dataset, ~1M ratings) OK
3.6. saveAsParquetFile OK
3.7. Read and verify the 3.6 save (above) - sqlContext.parquetFile,
registerTempTable, sql OK
3.8. result = sqlContext.sql("SELECT OrderDetails.OrderID, ShipCountry,
UnitPrice, Qty, Discount FROM Orders INNER JOIN OrderDetails ON
Orders.OrderID = OrderDetails.OrderID") OK
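
For reference, the 3.6-3.8 flow, sketched here in pyspark for brevity (the
Orders/OrderDetails tables are from my Northwind-style test data; paths are
placeholders):

    # 3.6 save as Parquet (saveAsParquetFile; df.write.parquet in the 1.4+ API)
    orders_df.write.parquet("orders.parquet")
    details_df.write.parquet("order_details.parquet")

    # 3.7 read the saved files back and register temp tables for SQL
    orders = sqlContext.read.parquet("orders.parquet")
    details = sqlContext.read.parquet("order_details.parquet")
    orders.registerTempTable("Orders")
    details.registerTempTable("OrderDetails")

    # 3.8 the inner join
    result = sqlContext.sql("""
        SELECT OrderDetails.OrderID, ShipCountry, UnitPrice, Qty, Discount
        FROM Orders INNER JOIN OrderDetails
        ON Orders.OrderID = OrderDetails.OrderID""")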
4.0. Spark SQL from Python OK
4.1. result = sqlContext.sql("SELECT * from people WHERE State = 'WA'") OK
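
(For context, people here was just a small DataFrame registered as a temp
table; the rows below are placeholders for my test data:)

    from pyspark.sql import Row

    people = sqlContext.createDataFrame([Row(name="Alice", State="WA"),
                                         Row(name="Bob", State="CA")])
    people.registerTempTable("people")
    result = sqlContext.sql("SELECT * FROM people WHERE State = 'WA'")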
5.0. Packages
5.1. com.databricks.spark.csv - read/write OK (--packages
com.databricks:spark-csv_2.10:1.2.0)
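
The read/write calls tested were the standard spark-csv entry points, roughly
(file names are placeholders):

    # shell started with: --packages com.databricks:spark-csv_2.10:1.2.0
    df = (sqlContext.read
            .format("com.databricks.spark.csv")
            .option("header", "true")
            .option("inferSchema", "true")
            .load("cars.csv"))
    (df.write
       .format("com.databricks.spark.csv")
       .option("header", "true")
       .save("cars-out"))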
6.0. DataFrames
6.1. cast, dtypes OK
6.2. groupBy, avg, crosstab, corr, isNull, na.drop OK
6.3. All joins, sql, set operations, udf OK
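
Representative calls from 6.1-6.3, reusing the Orders columns from above
(a sketch, not the exact notebook code; orders/details and df1/df2 are
placeholder DataFrames):

    from pyspark.sql.functions import udf
    from pyspark.sql.types import DoubleType

    # 6.1 cast a string column and check the schema
    df = df.withColumn("Qty", df["Qty"].cast("int"))
    df.dtypes

    # 6.2 aggregates, contingency table, correlation, null handling
    df.groupBy("ShipCountry").avg("UnitPrice").show()
    df.crosstab("ShipCountry", "Discount").show()
    df.corr("UnitPrice", "Qty")
    df.filter(df["ShipCountry"].isNull()).count()
    clean = df.na.drop()

    # 6.3 joins, set operations, and a Python UDF
    joined = orders.join(details, orders["OrderID"] == details["OrderID"],
                         "left_outer")
    u = df1.unionAll(df2); i = df1.intersect(df2); s = df1.subtract(df2)
    discounted = udf(lambda p, d: p * (1.0 - d), DoubleType())
    df.select(discounted(df["UnitPrice"], df["Discount"])).show()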
*Notes:*
1. Speed improvement in the DataFrame functions groupBy, avg, sum et al. *Good
work*. I am working on a project to reduce processing time from ~24 hrs to
...; let us see what Spark does. The speedups would help a lot.
2. FYI, the UDFs getM and getY work now (Thanks). They are slower and
saturate the CPU; a non-scientific snapshot is below. I know this really has
to be done more rigorously, on a bigger machine, with more cores, etc.
[images: CPU utilization snapshots]
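
(For the curious, the shape of those UDFs - a hypothetical reconstruction,
assuming getM/getY pull month and year out of a date string; the real ones
run on my project's data and column names:)

    from pyspark.sql.functions import udf
    from pyspark.sql.types import IntegerType

    # hypothetical: month/year from "YYYY-MM-DD"; every row round-trips
    # through a Python worker, which is why the CPUs saturate
    getM = udf(lambda d: int(d.split("-")[1]), IntegerType())
    getY = udf(lambda d: int(d.split("-")[0]), IntegerType())
    df.groupBy(getY(df["date"]).alias("year")).count().show()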

On Thu, Sep 24, 2015 at 12:27 AM, Reynold Xin <r...@databricks.com> wrote:

> Please vote on releasing the following candidate as Apache Spark version
> 1.5.1. The vote is open until Sun, Sep 27, 2015 at 10:00 UTC and passes if
> a majority of at least 3 +1 PMC votes are cast.
>
> [ ] +1 Release this package as Apache Spark 1.5.1
> [ ] -1 Do not release this package because ...
>
>
> The release fixes 81 known issues in Spark 1.5.0, listed here:
> http://s.apache.org/spark-1.5.1
>
> The tag to be voted on is v1.5.1-rc1:
>
> https://github.com/apache/spark/commit/4df97937dbf68a9868de58408b9be0bf87dbbb94
>
> The release files, including signatures, digests, etc. can be found at:
> http://people.apache.org/~pwendell/spark-releases/spark-1.5.1-rc1-bin/
>
> Release artifacts are signed with the following key:
> https://people.apache.org/keys/committer/pwendell.asc
>
> The staging repository for this release (1.5.1) can be found at:
> https://repository.apache.org/content/repositories/orgapachespark-1148/
>
> The documentation corresponding to this release can be found at:
> http://people.apache.org/~pwendell/spark-releases/spark-1.5.1-rc1-docs/
>
>
> =======================================
> How can I help test this release?
> =======================================
> If you are a Spark user, you can help us test this release by taking an
> existing Spark workload and running on this release candidate, then
> reporting any regressions.
>
> ================================================
> What justifies a -1 vote for this release?
> ================================================
> -1 vote should occur for regressions from Spark 1.5.0. Bugs already
> present in 1.5.0 will not block this release.
>
> ===============================================================
> What should happen to JIRA tickets still targeting 1.5.1?
> ===============================================================
> Please target 1.5.2 or 1.6.0.
>
