Forgot to cc dev-list

---------- Forwarded message ---------
From: Wenchen Fan <cloud0...@gmail.com>
Date: Thu, Oct 11, 2018 at 10:14 AM
Subject: Re: [VOTE] SPARK 2.4.0 (RC3)
To: <heue...@gmail.com>
Cc: Sean Owen <sro...@apache.org>


Ah sorry guys, I just copy-pasted the voting email from the last RC and
forgot to update the date :P

The vote should remain open until October 13 PST.

According to the discussion in the previous RC, I'm resolving SPARK-25378
as Won't Fix. It's OK to wait one or two weeks for the TensorFlow release.

SPARK-25150 is a long-standing and known issue, I believe: the DataFrame
join API can behave confusingly for indirect self-joins, and it is
relatively hard to fix if breaking changes are not allowed. I've seen many
tickets complaining about it, and we should definitely fix it in 3.0,
which accepts necessary breaking changes.

SPARK-25588 does look like a potential issue, but there is not much we can
do if this problem is not reproducible.



On Thu, Oct 11, 2018 at 7:28 AM Michael Heuer <heue...@gmail.com> wrote:

> Hello Sean, Wenchen
>
> I could use triage on
>
> https://issues.apache.org/jira/browse/SPARK-25588
>
> I’ve struggled to report Parquet+Avro dependency issues against Spark in
> the past; I can’t seem to get any notice.
>
>    michael
>
>
> On Oct 10, 2018, at 5:00 PM, Sean Owen <sro...@apache.org> wrote:
>
> +1. I tested the source build against Scala 2.12 and common build
> profiles. License and sigs look OK.
>
> No blockers; one critical:
>
> SPARK-25378 ArrayData.toArray(StringType) assume UTF8String in 2.4
>
> I think this one is "won't fix", though? We're not trying to restore the behavior?
>
> Other items open for 2.4.0:
>
> SPARK-25347 Document image data source in doc site
> SPARK-25584 Document libsvm data source in doc site
> SPARK-25179 Document the features that require Pyarrow 0.10
> SPARK-25507 Update documents for the new features in 2.4 release
> SPARK-25346 Document Spark builtin data sources
> SPARK-24464 Unit tests for MLlib's Instrumentation
> SPARK-23197 Flaky test: spark.streaming.ReceiverSuite."receiver_life_cycle"
> SPARK-22809 pyspark is sensitive to imports with dots
> SPARK-21030 extend hint syntax to support any expression for Python and R
>
> Does anyone know enough to close or retarget them? They don't look
> critical for 2.4; SPARK-25507 itself has no content. SPARK-25179 "Document
> the features that require Pyarrow 0.10", however, sounds like it could
> have been important for 2.4, if not a blocker.
>
> PS I don't think that SPARK-25150 is an issue; see JIRA. At least
> there is some ongoing discussion there.
>
> I am evaluating
> https://github.com/apache/spark/pull/22259#discussion_r224252642 right
> now.
>
>
> On Wed, Oct 10, 2018 at 9:47 AM Wenchen Fan <cloud0...@gmail.com> wrote:
>
>
> Please vote on releasing the following candidate as Apache Spark version
> 2.4.0.
>
> The vote is open until October 1 PST and passes if a majority +1 PMC votes
> are cast, with
> a minimum of 3 +1 votes.
>
> [ ] +1 Release this package as Apache Spark 2.4.0
> [ ] -1 Do not release this package because ...
>
> To learn more about Apache Spark, please see http://spark.apache.org/
>
> The tag to be voted on is v2.4.0-rc3 (commit
> 8e4a99bd201b9204fec52580f19ae70a229ed94e):
> https://github.com/apache/spark/tree/v2.4.0-rc3
>
> The release files, including signatures, digests, etc. can be found at:
> https://dist.apache.org/repos/dist/dev/spark/v2.4.0-rc3-bin/
>
> Signatures used for Spark RCs can be found in this file:
> https://dist.apache.org/repos/dist/dev/spark/KEYS
>
> The staging repository for this release can be found at:
> https://repository.apache.org/content/repositories/orgapachespark-1289
>
> The documentation corresponding to this release can be found at:
> https://dist.apache.org/repos/dist/dev/spark/v2.4.0-rc3-docs/
>
> The list of bug fixes going into 2.4.0 can be found at the following URL:
> https://issues.apache.org/jira/projects/SPARK/versions/12342385
>
> FAQ
>
> =========================
> How can I help test this release?
> =========================
>
> If you are a Spark user, you can help us test this release by taking
> an existing Spark workload, running it on this release candidate, and
> reporting any regressions.
>
> If you're working in PySpark, you can set up a virtual env and install
> the current RC to see if anything important breaks. In Java/Scala, you
> can add the staging repository to your project's resolvers and test
> with the RC (make sure to clean up the artifact cache before and after
> so you don't end up building with an out-of-date RC going forward).
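>
> For the Java/Scala path, a minimal sketch of what the resolver setup
> might look like in an sbt build follows. The repository URL is the
> staging URL from this email; the spark-sql module and the resolver
> name are illustrative assumptions, not part of the release notes.

```scala
// build.sbt sketch (illustrative): point sbt at the RC staging repository
// so the 2.4.0 artifacts resolve, then depend on a Spark module as usual.
resolvers += "Apache Spark RC staging" at
  "https://repository.apache.org/content/repositories/orgapachespark-1289/"

libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.4.0"
```

> After testing, remove the resolver and clear the corresponding entries
> from your local ivy/coursier cache so later builds don't silently pick
> up the RC artifacts.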
>
> ===========================================
> What should happen to JIRA tickets still targeting 2.4.0?
> ===========================================
>
> The current list of open tickets targeted at 2.4.0 can be found at:
> https://issues.apache.org/jira/projects/SPARK and search for "Target
> Version/s" = 2.4.0
>
> Committers should look at those and triage. Extremely important bug
> fixes, documentation, and API tweaks that impact compatibility should
> be worked on immediately. Everything else please retarget to an
> appropriate release.
>
> ==================
> But my bug isn't fixed?
> ==================
>
> In order to make timely releases, we will typically not hold the
> release unless the bug in question is a regression from the previous
> release. That being said, if there is something which is a regression
> that has not been correctly targeted please ping me or a committer to
> help target the issue.
>
>
> ---------------------------------------------------------------------
> To unsubscribe e-mail: dev-unsubscr...@spark.apache.org
>
>
>
