+1 (non-binding)

Tested PySpark Core, DataFrame/SQL, MLlib, and Streaming on a standalone cluster.
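
For anyone who wants to reproduce the standalone-cluster checks, this is
roughly the shape of the smoke test I ran (not my exact script; the master
URL, host, and port are placeholders for your own setup):

    # Minimal PySpark core + streaming smoke test against a standalone master.
    from pyspark import SparkConf, SparkContext
    from pyspark.streaming import StreamingContext

    conf = SparkConf().setAppName("rc5-smoke").setMaster("spark://master-host:7077")
    sc = SparkContext(conf=conf)

    # Core: trivial RDD round trip
    assert sc.parallelize(range(100)).map(lambda x: x * 2).sum() == 9900

    # Streaming: 5-second batches over a socket source (run `nc -lk 9999` first)
    ssc = StreamingContext(sc, 5)
    words = ssc.socketTextStream("localhost", 9999).flatMap(lambda l: l.split())
    words.countByValue().pprint()
    ssc.start()
    ssc.awaitTerminationOrTimeout(30)
    ssc.stop(stopSparkContext=True)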

On 21 July 2016 at 05:24, Reynold Xin <r...@databricks.com> wrote:

> +1
>
>
> On Wednesday, July 20, 2016, Krishna Sankar <ksanka...@gmail.com> wrote:
>
>> +1 (non-binding, of course)
>>
>> 1. Compiled on OS X 10.11.5 (El Capitan) OK. Total time: 24:07 min
>>      mvn clean package -Pyarn -Phadoop-2.7 -DskipTests
>> 2. Tested pyspark, mllib (IPython 4.0)
>> 2.0 Spark version is 2.0.0
>> 2.1. statistics (min, max, mean, Pearson, Spearman) OK
>> 2.2. Linear/Ridge/Lasso Regression OK
>> 2.3. Classification: Decision Tree, Naive Bayes OK
>> 2.4. Clustering: KMeans OK
>>        Center and scale OK
>> 2.5. RDD operations OK
>>       State of the Union Texts - MapReduce, Filter, sortByKey (word count)
>> 2.6. Recommendation (Movielens medium dataset ~1 M ratings) OK
>>        Model evaluation/optimization (rank, numIter, lambda) with
>> itertools OK
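>>
>> A rough sketch of the 2.1-2.4 checks, for reference (run in the pyspark
>> shell, so `sc` already exists; the tiny dataset is just a stand-in):
>>
>>     # Summary statistics, correlations, center-and-scale, and KMeans
>>     # using the RDD-based pyspark.mllib API.
>>     from pyspark.mllib.stat import Statistics
>>     from pyspark.mllib.clustering import KMeans
>>     from pyspark.mllib.feature import StandardScaler
>>     from pyspark.mllib.linalg import Vectors
>>
>>     data = sc.parallelize([Vectors.dense([1.0, 2.0]),
>>                            Vectors.dense([3.0, 4.0]),
>>                            Vectors.dense([5.0, 6.0])])
>>
>>     summary = Statistics.colStats(data)             # 2.1 min/max/mean
>>     print(summary.min(), summary.max(), summary.mean())
>>     print(Statistics.corr(data, method="pearson"))  # 2.1 Pearson
>>     print(Statistics.corr(data, method="spearman")) # 2.1 Spearman
>>
>>     scaler = StandardScaler(withMean=True, withStd=True).fit(data)
>>     scaled = scaler.transform(data)                 # 2.4 center and scale
>>     model = KMeans.train(scaled, k=2, maxIterations=10)  # 2.4 KMeans
>>     print(model.clusterCenters)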
>> 3. Scala - MLlib
>> 3.1. statistics (min, max, mean, Pearson, Spearman) OK
>> 3.2. LinearRegressionWithSGD OK
>> 3.3. Decision Tree OK
>> 3.4. KMeans OK
>> 3.5. Recommendation (Movielens medium dataset ~1 M ratings) OK
>> 3.6. saveAsParquetFile OK
>> 3.7. Read and verify the 3.6 save (above) - sqlContext.parquetFile,
>> registerTempTable, sql OK
>> 3.8. result = sqlContext.sql("SELECT
>> OrderDetails.OrderID,ShipCountry,UnitPrice,Qty,Discount FROM Orders INNER
>> JOIN OrderDetails ON Orders.OrderID = OrderDetails.OrderID") OK
>> 4.0. Spark SQL from Python OK
>> 4.1. result = sqlContext.sql("SELECT * from people WHERE State = 'WA'") OK
>> 5.0. Packages
>> 5.1. com.databricks.spark.csv - read/write OK (--packages
>> com.databricks:spark-csv_2.10:1.4.0)
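>>
>> The read/write round trip looks roughly like this (file paths are
>> placeholders; pyspark launched with the --packages flag above, and
>> sqlContext provided by the shell):
>>
>>     # Read a CSV with headers and inferred types via the spark-csv
>>     # package, then write it back out.
>>     df = (sqlContext.read.format("com.databricks.spark.csv")
>>           .option("header", "true")
>>           .option("inferSchema", "true")
>>           .load("/tmp/input.csv"))
>>     df.write.format("com.databricks.spark.csv") \
>>         .option("header", "true") \
>>         .save("/tmp/output-csv")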
>> 6.0. DataFrames
>> 6.1. cast,dtypes OK
>> 6.2. groupBy, avg, crosstab, corr, isNull, na.drop OK
>> 6.3. All joins, sql, set operations, udf OK
>> [DataFrame operations have gotten very fast: from 11 secs to 3 secs, to
>> 1.8 secs, to 1.5 secs! Good work!!!]
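>>
>> For reference, the 6.x checks look roughly like this (`df` is a
>> hypothetical DataFrame with a string column "state", a numeric-looking
>> string column "age", and a numeric column "income"; joins and set
>> operations follow the same builder pattern):
>>
>>     from pyspark.sql.functions import udf
>>     from pyspark.sql.types import StringType
>>
>>     df2 = df.withColumn("age", df["age"].cast("int"))  # 6.1 cast
>>     print(df2.dtypes)                                  # 6.1 dtypes
>>     df2.groupBy("state").avg("income").show()          # 6.2 groupBy/avg
>>     df2.stat.crosstab("state", "age").show()           # 6.2 crosstab
>>     print(df2.stat.corr("age", "income"))              # 6.2 corr
>>     df2.filter(df2["income"].isNull()).count()         # 6.2 isNull
>>     clean = df2.na.drop()                              # 6.2 na.drop
>>     shout = udf(lambda s: s.upper(), StringType())     # 6.3 udf
>>     clean.select(shout(clean["state"])).show()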
>> 7.0. GraphX/Scala
>> 7.1. Create Graph (small and bigger dataset) OK
>> 7.2. Structure APIs - OK
>> 7.3. Social Network/Community APIs - OK
>> 7.4. Algorithms: PageRank on 2 datasets, aggregateMessages() - OK
>>
>> Cheers
>> <k/>
>>
>> On Tue, Jul 19, 2016 at 7:35 PM, Reynold Xin <r...@databricks.com> wrote:
>>
>>> Please vote on releasing the following candidate as Apache Spark version
>>> 2.0.0. The vote is open until Friday, July 22, 2016 at 20:00 PDT and passes
>>> if a majority of at least 3 +1 PMC votes are cast.
>>>
>>> [ ] +1 Release this package as Apache Spark 2.0.0
>>> [ ] -1 Do not release this package because ...
>>>
>>>
>>> The tag to be voted on is v2.0.0-rc5
>>> (13650fc58e1fcf2cf2a26ba11c819185ae1acc1f).
>>>
>>> This release candidate resolves ~2500 issues:
>>> https://s.apache.org/spark-2.0.0-jira
>>>
>>> The release files, including signatures, digests, etc. can be found at:
>>> http://people.apache.org/~pwendell/spark-releases/spark-2.0.0-rc5-bin/
>>>
>>> Release artifacts are signed with the following key:
>>> https://people.apache.org/keys/committer/pwendell.asc
>>>
>>> The staging repository for this release can be found at:
>>> https://repository.apache.org/content/repositories/orgapachespark-1195/
>>>
>>> The documentation corresponding to this release can be found at:
>>> http://people.apache.org/~pwendell/spark-releases/spark-2.0.0-rc5-docs/
>>>
>>>
>>> =================================
>>> How can I help test this release?
>>> =================================
>>> If you are a Spark user, you can help us test this release by taking an
>>> existing Spark workload and running it on this release candidate, then
>>> reporting any regressions from 1.x.
>>>
>>> ==========================================
>>> What justifies a -1 vote for this release?
>>> ==========================================
>>> Critical bugs impacting major functionality.
>>>
>>> Bugs already present in 1.x, missing features, or bugs related to new
>>> features will not necessarily block this release. Note that, historically,
>>> Spark documentation has been published on the website separately from the
>>> main release, so we do not need to block the release on documentation
>>> errors either.
>>>
>>>
>>
