+1

On Sun, Jun 7, 2015 at 6:02 PM, Tathagata Das <tathagata.das1...@gmail.com>
wrote:

> +1
>
> On Sun, Jun 7, 2015 at 3:01 PM, Joseph Bradley <jos...@databricks.com>
> wrote:
>
>> +1
>>
>> On Sat, Jun 6, 2015 at 7:55 PM, Guoqiang Li <wi...@qq.com> wrote:
>>
>>> +1 (non-binding)
>>>
>>>
>>> ------------------ Original ------------------
>>> *From:* "Reynold Xin" <r...@databricks.com>
>>> *Date:* Fri, Jun 5, 2015 03:18 PM
>>> *To:* "Krishna Sankar" <ksanka...@gmail.com>
>>> *Cc:* "Patrick Wendell" <pwend...@gmail.com>; "dev@spark.apache.org" <
>>> dev@spark.apache.org>
>>> *Subject:* Re: [VOTE] Release Apache Spark 1.4.0 (RC4)
>>>
>>> Enjoy your shiny new MBP.
>>>
>>> On Fri, Jun 5, 2015 at 12:10 AM, Krishna Sankar <ksanka...@gmail.com>
>>> wrote:
>>>
>>>> +1 (non-binding, of course)
>>>>
>>>> 1. Compiled on OS X 10.10 (Yosemite) OK Total time: 25:42 min (My brand new
>>>> shiny MacBookPro12,1 : 16GB. Inaugurated the machine with compile & test
>>>> 1.4.0-RC4 !)
>>>>      mvn clean package -Pyarn -Dyarn.version=2.6.0 -Phadoop-2.4
>>>> -Dhadoop.version=2.6.0 -DskipTests
>>>> 2. Tested pyspark, mllib - ran the tests and compared results with 1.3.1
>>>> 2.1. statistics (min,max,mean,Pearson,Spearman) OK
>>>> 2.2. Linear/Ridge/Lasso Regression OK
>>>> 2.3. Decision Tree, Naive Bayes OK
>>>> 2.4. KMeans OK
>>>>        Center And Scale OK
>>>> 2.5. RDD operations OK
>>>>       State of the Union Texts - MapReduce, Filter, sortByKey (word
>>>> count)
>>>> 2.6. Recommendation (Movielens medium dataset ~1 M ratings) OK
>>>>        Model evaluation/optimization (rank, numIter, lambda) with
>>>> itertools OK
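
The itertools-based model sweep in 2.6 can be sketched in plain Python; the `evaluate` stub below is hypothetical and stands in for training an ALS model and scoring it on a held-out set, so the sweep is self-contained:

```python
import itertools

# Hypothetical scoring stub: in the real test this would call
# pyspark.mllib.recommendation.ALS.train and compute RMSE on held-out data.
def evaluate(rank, num_iter, lmbda):
    return abs(rank - 8) + abs(num_iter - 10) * 0.1 + abs(lmbda - 0.1)

ranks = [4, 8, 12]
num_iters = [5, 10, 20]
lambdas = [0.01, 0.1, 1.0]

# itertools.product enumerates every (rank, numIter, lambda) combination;
# pick the combination with the best (lowest) score.
best = min(itertools.product(ranks, num_iters, lambdas),
           key=lambda params: evaluate(*params))
print(best)  # (8, 10, 0.1)
```

The same loop shape works unchanged when `evaluate` is replaced by a real train-and-score function.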
>>>> 3. Scala - MLlib
>>>> 3.1. statistics (min,max,mean,Pearson,Spearman) OK
>>>> 3.2. LinearRegressionWithSGD OK
>>>> 3.3. Decision Tree OK
>>>> 3.4. KMeans OK
>>>> 3.5. Recommendation (Movielens medium dataset ~1 M ratings) OK
>>>> 3.6. saveAsParquetFile OK
>>>> 3.7. Read and verify the 3.6 save (above) - sqlContext.parquetFile,
>>>> registerTempTable, sql OK
>>>> 3.8. result = sqlContext.sql("SELECT
>>>> OrderDetails.OrderID,ShipCountry,UnitPrice,Qty,Discount FROM Orders INNER
>>>> JOIN OrderDetails ON Orders.OrderID = OrderDetails.OrderID") OK
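
The min/max/mean/Pearson/Spearman checks in 2.1 and 3.1 can be approximated without Spark; this is an illustrative plain-Python sketch (toy data, no tie handling in the rank transform), not the MLlib code that was actually run:

```python
from math import sqrt

# Toy columns standing in for one statistics check (no Spark needed).
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.0, 5.0, 7.0, 6.0]

def pearson(a, b):
    # Pearson correlation computed from first principles.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    var_a = sum((ai - ma) ** 2 for ai in a)
    var_b = sum((bi - mb) ** 2 for bi in b)
    return cov / sqrt(var_a * var_b)

def ranks(values):
    # Rank transform; this toy data has no ties, so no tie handling.
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

print(min(x), max(x), sum(x) / len(x))   # 1.0 5.0 3.0
print(pearson(x, y))                     # Pearson on the raw values
print(pearson(ranks(x), ranks(y)))       # Spearman = Pearson of the ranks
```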
>>>> 4.0. Spark SQL from Python OK
>>>> 4.1. result = sqlContext.sql("SELECT * from people WHERE State = 'WA'")
>>>> OK
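
The filter query in 4.1 has the same shape as this self-contained SQLite sketch (the `people` rows here are invented for illustration; the original test ran through sqlContext against a registered temp table):

```python
import sqlite3

# In-memory SQLite stand-in for the Spark SQL check: create a small
# "people" table and run the same WHERE filter.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT, State TEXT)")
conn.executemany("INSERT INTO people VALUES (?, ?)",
                 [("Ann", "WA"), ("Bob", "CA"), ("Cat", "WA")])

result = conn.execute("SELECT * FROM people WHERE State = 'WA'").fetchall()
print(result)
conn.close()
```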
>>>>
>>>> Cheers
>>>> <k/>
>>>>
>>>> On Tue, Jun 2, 2015 at 8:53 PM, Patrick Wendell <pwend...@gmail.com>
>>>> wrote:
>>>>
>>>>> Please vote on releasing the following candidate as Apache Spark
>>>>> version 1.4.0!
>>>>>
>>>>> The tag to be voted on is v1.4.0-rc4 (commit 22596c5):
>>>>> https://git-wip-us.apache.org/repos/asf?p=spark.git;a=commit;h=
>>>>> 22596c534a38cfdda91aef18aa9037ab101e4251
>>>>>
>>>>> The release files, including signatures, digests, etc. can be found at:
>>>>> http://people.apache.org/~pwendell/spark-releases/spark-1.4.0-rc4-bin/
>>>>>
>>>>> Release artifacts are signed with the following key:
>>>>> https://people.apache.org/keys/committer/pwendell.asc
>>>>>
>>>>> The staging repository for this release can be found at:
>>>>> [published as version: 1.4.0]
>>>>> https://repository.apache.org/content/repositories/orgapachespark-1111/
>>>>> [published as version: 1.4.0-rc4]
>>>>> https://repository.apache.org/content/repositories/orgapachespark-1112/
>>>>>
>>>>> The documentation corresponding to this release can be found at:
>>>>> http://people.apache.org/~pwendell/spark-releases/spark-1.4.0-rc4-docs/
>>>>>
>>>>> Please vote on releasing this package as Apache Spark 1.4.0!
>>>>>
>>>>> The vote is open until Saturday, June 06, at 05:00 UTC and passes
>>>>> if a majority of at least 3 +1 PMC votes are cast.
>>>>>
>>>>> [ ] +1 Release this package as Apache Spark 1.4.0
>>>>> [ ] -1 Do not release this package because ...
>>>>>
>>>>> To learn more about Apache Spark, please see
>>>>> http://spark.apache.org/
>>>>>
>>>>> == What has changed since RC3 ==
>>>>> In addition to many smaller fixes, three blocker issues were fixed:
>>>>> 4940630 [SPARK-8020] [SQL] Spark SQL conf in spark-defaults.conf make
>>>>> metadataHive get constructed too early
>>>>> 6b0f615 [SPARK-8038] [SQL] [PYSPARK] fix Column.when() and otherwise()
>>>>> 78a6723 [SPARK-7978] [SQL] [PYSPARK] DecimalType should not be
>>>>> singleton
>>>>>
>>>>> == How can I help test this release? ==
>>>>> If you are a Spark user, you can help us test this release by
>>>>> taking a Spark 1.3 workload and running it on this release candidate,
>>>>> then reporting any regressions.
>>>>>
>>>>> == What justifies a -1 vote for this release? ==
>>>>> This vote is happening towards the end of the 1.4 QA period,
>>>>> so -1 votes should only occur for significant regressions from 1.3.1.
>>>>> Bugs already present in 1.3.X, minor regressions, or bugs related
>>>>> to new features will not block this release.
>>>>>
>>>>> ---------------------------------------------------------------------
>>>>> To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
>>>>> For additional commands, e-mail: dev-h...@spark.apache.org
>>>>>
>>>>>
>>>>
>>>
>>
>
