Maciej, let's fix SPARK-13283. It won't block 1.6.2, though.

On Thu, Jun 23, 2016 at 5:45 AM, Maciej Bryński <[email protected]> wrote:
> -1
>
> I need SPARK-13283 to be solved.
>
> Regards,
> Maciek Bryński
>
> 2016-06-23 0:13 GMT+02:00 Krishna Sankar <[email protected]>:
>
>> +1 (non-binding, of course)
>>
>> 1. Compiled OSX 10.10 (Yosemite) OK. Total time: 37:11 min
>>      mvn clean package -Pyarn -Phadoop-2.6 -DskipTests
>> 2. Tested pyspark, mllib (iPython 4.0)
>> 2.0. Spark version is 1.6.2
>> 2.1. statistics (min, max, mean, Pearson, Spearman) OK
>> 2.2. Linear/Ridge/Lasso Regression OK
>> 2.3. Decision Tree, Naive Bayes OK
>> 2.4. KMeans OK
>>      Center and Scale OK
>> 2.5. RDD operations OK
>>      State of the Union Texts - MapReduce, Filter, sortByKey (word count)
>> 2.6. Recommendation (MovieLens medium dataset, ~1M ratings) OK
>>      Model evaluation/optimization (rank, numIter, lambda) with itertools OK
>> 3. Scala - MLlib
>> 3.1. statistics (min, max, mean, Pearson, Spearman) OK
>> 3.2. LinearRegressionWithSGD OK
>> 3.3. Decision Tree OK
>> 3.4. KMeans OK
>> 3.5. Recommendation (MovieLens medium dataset, ~1M ratings) OK
>> 3.6. saveAsParquetFile OK
>> 3.7. Read and verify the 3.6 save (above) - sqlContext.parquetFile,
>>      registerTempTable, sql OK
>> 3.8. result = sqlContext.sql("SELECT OrderDetails.OrderID, ShipCountry,
>>      UnitPrice, Qty, Discount FROM Orders INNER JOIN OrderDetails ON
>>      Orders.OrderID = OrderDetails.OrderID") OK
>> 4.0. Spark SQL from Python OK
>> 4.1. result = sqlContext.sql("SELECT * from people WHERE State = 'WA'") OK
>> 5.0. Packages
>> 5.1. com.databricks.spark.csv - read/write OK
>>      (--packages com.databricks:spark-csv_2.10:1.4.0)
>> 6.0. DataFrames
>> 6.1. cast, dtypes OK
>> 6.2. groupBy, avg, crosstab, corr, isNull, na.drop OK
>> 6.3. All joins, sql, set operations, udf OK
>> 7.0. GraphX/Scala
>> 7.1. Create Graph (small and bigger dataset) OK
>> 7.2. Structure APIs OK
>> 7.3. Social Network/Community APIs OK
>> 7.4. Algorithms (PageRank of 2 datasets, aggregateMessages()) OK
>>
>> Cheers & Good Work, Folks
>> <k/>
>>
>> On Sun, Jun 19, 2016 at 9:24 PM, Reynold Xin <[email protected]> wrote:
>>
>>> Please vote on releasing the following candidate as Apache Spark version
>>> 1.6.2. The vote is open until Wednesday, June 22, 2016 at 22:00 PDT and
>>> passes if a majority of at least 3 +1 PMC votes are cast.
>>>
>>> [ ] +1 Release this package as Apache Spark 1.6.2
>>> [ ] -1 Do not release this package because ...
>>>
>>> The tag to be voted on is v1.6.2-rc2
>>> (54b1121f351f056d6b67d2bb4efe0d553c0f7482)
>>>
>>> The release files, including signatures, digests, etc. can be found at:
>>> http://people.apache.org/~pwendell/spark-releases/spark-1.6.2-rc2-bin/
>>>
>>> Release artifacts are signed with the following key:
>>> https://people.apache.org/keys/committer/pwendell.asc
>>>
>>> The staging repository for this release can be found at:
>>> https://repository.apache.org/content/repositories/orgapachespark-1186/
>>>
>>> The documentation corresponding to this release can be found at:
>>> http://people.apache.org/~pwendell/spark-releases/spark-1.6.2-rc2-docs/
>>>
>>> =======================================
>>> == How can I help test this release? ==
>>> =======================================
>>> If you are a Spark user, you can help us test this release by taking an
>>> existing Spark workload, running it on this release candidate, and
>>> reporting any regressions from 1.6.1.
>>>
>>> ================================================
>>> == What justifies a -1 vote for this release? ==
>>> ================================================
>>> This is a maintenance release in the 1.6.x series. Bugs already present
>>> in 1.6.1, missing features, or bugs related to new features will not
>>> necessarily block this release.
>>>
>
> --
> Maciek Bryński
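[Editor's note: for readers who want to sanity-check the INNER JOIN query from step 3.8 of the checklist above without a Spark cluster, its logic can be sketched in plain Python. The Orders/OrderDetails rows below are entirely hypothetical sample data; the actual test runs the SQL through sqlContext.sql over registered temp tables in Spark 1.6.2.]

```python
# Plain-Python sketch of the INNER JOIN in checklist step 3.8:
#   SELECT OrderDetails.OrderID, ShipCountry, UnitPrice, Qty, Discount
#   FROM Orders INNER JOIN OrderDetails
#   ON Orders.OrderID = OrderDetails.OrderID
# Rows are made up for illustration only.

orders = [
    {"OrderID": 1, "ShipCountry": "Poland"},
    {"OrderID": 2, "ShipCountry": "USA"},
]
order_details = [
    {"OrderID": 1, "UnitPrice": 9.5, "Qty": 3, "Discount": 0.0},
    {"OrderID": 1, "UnitPrice": 4.0, "Qty": 1, "Discount": 0.1},
    {"OrderID": 3, "UnitPrice": 2.5, "Qty": 7, "Discount": 0.0},  # no matching order
]

# Index the left side by the join key, then keep only detail rows
# whose OrderID has a match -- the inner-join semantics of the query.
by_id = {o["OrderID"]: o for o in orders}
result = [
    {
        "OrderID": d["OrderID"],
        "ShipCountry": by_id[d["OrderID"]]["ShipCountry"],
        "UnitPrice": d["UnitPrice"],
        "Qty": d["Qty"],
        "Discount": d["Discount"],
    }
    for d in order_details
    if d["OrderID"] in by_id
]

print(result)  # only the two OrderID=1 rows survive the join
```

The unmatched rows on each side (OrderID 2 with no details, OrderID 3 with no order) drop out, which is the behavior the checklist item verifies against the real tables.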
