I'll work on resolving some of the ML QA blockers this week, but it'd be
great to get help.  *@committers & contributors who work on ML*: many of
you have helped in the past, so please pick up QA tasks wherever
possible.  (Thanks Yanbo & Felix for jumping in already.)  Anyone is
welcome to chip in, of course!
Joseph

On Thu, May 4, 2017 at 4:09 PM, Sean Owen <so...@cloudera.com> wrote:

> The tests pass, licenses are OK, sigs, etc. I'd endorse it, but we do still
> have blockers, so I assume this means there will need to be another RC at
> some point.
>
> Blocker
> SPARK-20503 ML 2.2 QA: API: Python API coverage
> SPARK-20501 ML, Graph 2.2 QA: API: New Scala APIs, docs
> SPARK-20502 ML, Graph 2.2 QA: API: Experimental, DeveloperApi, final,
> sealed audit
> SPARK-20509 SparkR 2.2 QA: New R APIs and API docs
> SPARK-20504 ML 2.2 QA: API: Java compatibility, docs
> SPARK-20500 ML, Graph 2.2 QA: API: Binary incompatible changes
>
> Critical
> SPARK-20499 Spark MLlib, GraphX 2.2 QA umbrella
> SPARK-20520 R streaming tests failed on Windows
> SPARK-18891 Support for specific collection types
> SPARK-20505 ML, Graph 2.2 QA: Update user guide for new features & APIs
> SPARK-20364 Parquet predicate pushdown on columns with dots return empty
> results
> SPARK-20508 Spark R 2.2 QA umbrella
> SPARK-20512 SparkR 2.2 QA: Programming guide, migration guide, vignettes
> updates
> SPARK-20513 Update SparkR website for 2.2
> SPARK-20510 SparkR 2.2 QA: Update user guide for new features & APIs
> SPARK-20507 Update MLlib, GraphX websites for 2.2
> SPARK-20506 ML, Graph 2.2 QA: Programming guide update and migration guide
> SPARK-19690 Join a streaming DataFrame with a batch DataFrame may not work
> SPARK-7768 Make user-defined type (UDT) API public
> SPARK-4502 Spark SQL reads unnecessary nested fields from Parquet
> SPARK-17626 TPC-DS performance improvements using star-schema heuristics
>
>
> On Thu, May 4, 2017 at 6:07 PM Michael Armbrust <mich...@databricks.com>
> wrote:
>
>> Please vote on releasing the following candidate as Apache Spark version
>> 2.2.0. The vote is open until Tues, May 9th, 2017 at 12:00 PST and
>> passes if a majority of at least 3 +1 PMC votes are cast.
>>
>> [ ] +1 Release this package as Apache Spark 2.2.0
>> [ ] -1 Do not release this package because ...
>>
>>
>> To learn more about Apache Spark, please see http://spark.apache.org/
>>
>> The tag to be voted on is v2.2.0-rc2
>> <https://github.com/apache/spark/tree/v2.2.0-rc2>
>> (1d4017b44d5e6ad156abeaae6371747f111dd1f9)
>>
>> The list of JIRA tickets resolved can be found with this filter
>> <https://issues.apache.org/jira/browse/SPARK-20134?jql=project%20%3D%20SPARK%20AND%20fixVersion%20%3D%202.2.0>.
>>
>> The release files, including signatures, digests, etc. can be found at:
>> http://home.apache.org/~pwendell/spark-releases/spark-2.2.0-rc2-bin/
>>
>> Release artifacts are signed with the following key:
>> https://people.apache.org/keys/committer/pwendell.asc
>>
>> The staging repository for this release can be found at:
>> https://repository.apache.org/content/repositories/orgapachespark-1236/
>>
>> The documentation corresponding to this release can be found at:
>> http://people.apache.org/~pwendell/spark-releases/spark-2.2.0-rc2-docs/
>>
>>
>> *FAQ*
>>
>> *How can I help test this release?*
>>
>> If you are a Spark user, you can help us test this release by taking an
>> existing Spark workload and running it on this release candidate, then
>> reporting any regressions.
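>>
>> As a rough example, here is a minimal smoke test against the staged
>> artifacts (the resolver URL is the staging repository listed above; the
>> dependency coordinates and the toy job are just a sketch to adapt to
>> your own build and workloads):
>>
>> // build.sbt -- resolve the release candidate from the staging repository
>> resolvers += "spark-2.2.0-rc2-staging" at
>>   "https://repository.apache.org/content/repositories/orgapachespark-1236/"
>> libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.2.0"
>>
>> // SmokeTest.scala -- run a trivial local workload and sanity-check it
>> import org.apache.spark.sql.SparkSession
>>
>> object SmokeTest {
>>   def main(args: Array[String]): Unit = {
>>     val spark = SparkSession.builder()
>>       .master("local[*]")
>>       .appName("spark-2.2.0-rc2-smoke-test")
>>       .getOrCreate()
>>     // ids 0..999: exactly 500 of them are even
>>     val evens = spark.range(1000).filter("id % 2 = 0").count()
>>     assert(evens == 500, s"expected 500 even ids, got $evens")
>>     spark.stop()
>>   }
>> }
>>
>> Running your real workloads matters far more than toy checks like this,
>> of course.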
>>
>> *What should happen to JIRA tickets still targeting 2.2.0?*
>>
>> Committers should look at those and triage. Extremely important bug
>> fixes, documentation, and API tweaks that impact compatibility should be
>> worked on immediately. Everything else should be retargeted to 2.3.0 or 2.2.1.
>>
>> *But my bug isn't fixed!??!*
>>
>> In order to make timely releases, we will typically not hold the release
>> unless the bug in question is a regression from 2.1.1.
>>
>


-- 

Joseph Bradley

Software Engineer - Machine Learning

Databricks, Inc.

http://databricks.com/
