Repository: spark
Updated Branches:
refs/heads/branch-2.3 5b524cc0c -> f9dcdbcef
[SPARK-22757][K8S] Enable spark.jars and spark.files in KUBERNETES mode
## What changes were proposed in this pull request?
We missed enabling `spark.files` and `spark.jars` in
Repository: spark
Updated Branches:
refs/heads/master cf0aa6557 -> 6cff7d19f
[SPARK-22757][K8S] Enable spark.jars and spark.files in KUBERNETES mode
## What changes were proposed in this pull request?
We missed enabling `spark.files` and `spark.jars` in
Repository: spark
Updated Branches:
refs/heads/branch-2.3 145820bda -> 5b524cc0c
[SPARK-22949][ML] Apply CrossValidator approach to Driver/Distributed memory
tradeoff for TrainValidationSplit
## What changes were proposed in this pull request?
Avoid holding all models in memory for
Repository: spark
Updated Branches:
refs/heads/master 52fc5c17d -> cf0aa6557
[SPARK-22949][ML] Apply CrossValidator approach to Driver/Distributed memory
tradeoff for TrainValidationSplit
## What changes were proposed in this pull request?
Avoid holding all models in memory for
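The tradeoff described above — not materializing every fitted model on the driver at once — can be sketched outside Spark as: train models one at a time and retain only the best one seen so far. All names here (`select_best`, `train_model`, `evaluate`) are hypothetical illustrations, not Spark's API.

```python
# Hypothetical sketch of the driver-memory tradeoff: instead of collecting
# every fitted model into a list (memory proportional to the grid size),
# keep only the best model seen so far while iterating the parameter grid.

def select_best(param_grid, train_model, evaluate):
    best_model, best_metric = None, float("-inf")
    for params in param_grid:
        model = train_model(params)      # fit one model at a time
        metric = evaluate(model)
        if metric > best_metric:         # retain only the current winner
            best_model, best_metric = model, metric
        # non-best models become garbage-collectable here
    return best_model, best_metric
```

With this shape, peak memory holds one candidate model plus the incumbent, rather than the whole grid's worth of models.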
Author: pwendell
Date: Fri Jan 5 06:14:49 2018
New Revision: 24033
Log:
Apache Spark 2.3.0-SNAPSHOT-2018_01_04_22_01-158f7e6 docs
[This commit notification would consist of 1439 parts,
which exceeds the limit of 50, so it was shortened to this summary.]
Repository: spark
Updated Branches:
refs/heads/branch-2.3 158f7e6a9 -> 145820bda
[SPARK-22825][SQL] Fix incorrect results of Casting Array to String
## What changes were proposed in this pull request?
This PR fixes incorrect results when casting arrays to strings;
```
scala> val df =
Repository: spark
Updated Branches:
refs/heads/master df7fc3ef3 -> 52fc5c17d
[SPARK-22825][SQL] Fix incorrect results of Casting Array to String
## What changes were proposed in this pull request?
This PR fixes incorrect results when casting arrays to strings;
```
scala> val df =
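The target formatting the fix produces can be illustrated without Spark: elements joined with ", " inside square brackets, rather than an opaque object reference. This is a hedged sketch of the intended output shape only, not Spark's cast implementation.

```python
# Sketch of the array-to-string rendering the fix aims for:
# comma-separated elements inside square brackets.

def array_to_string(arr):
    return "[" + ", ".join(str(x) for x in arr) + "]"

print(array_to_string([1, 2, 3]))  # -> [1, 2, 3]
```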
Author: pwendell
Date: Fri Jan 5 04:14:56 2018
New Revision: 24031
Log:
Apache Spark 2.3.0-SNAPSHOT-2018_01_04_20_01-df7fc3e docs
Repository: spark
Updated Branches:
refs/heads/branch-2.3 ea9da6152 -> 158f7e6a9
[SPARK-22957] ApproxQuantile breaks if the number of rows exceeds MaxInt
## What changes were proposed in this pull request?
A 32-bit Int was used for the row rank.
That overflowed in a DataFrame with more than 2B
Repository: spark
Updated Branches:
refs/heads/master 0428368c2 -> df7fc3ef3
[SPARK-22957] ApproxQuantile breaks if the number of rows exceeds MaxInt
## What changes were proposed in this pull request?
A 32-bit Int was used for the row rank.
That overflowed in a DataFrame with more than 2B rows.
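The overflow described above can be demonstrated directly: the same row count stored in a 32-bit integer wraps negative past ~2.1B, while a 64-bit integer (the fix) holds it fine. This uses Python's `ctypes` fixed-width types purely to mimic JVM `Int` vs `Long` behavior.

```python
import ctypes

# Why a 32-bit row rank breaks past ~2.1B rows: the count wraps negative
# in int32 (mimicking a JVM Int), while int64 (a JVM Long) is fine.
row_count = 3_000_000_000                    # more than 2B rows

as_int32 = ctypes.c_int32(row_count).value   # wraps around to a negative rank
as_int64 = ctypes.c_int64(row_count).value   # fix: 64-bit rank is exact

print(as_int32)  # negative: rank comparisons would go wrong
print(as_int64)  # 3000000000
```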
Author: pwendell
Date: Fri Jan 5 02:14:49 2018
New Revision: 24030
Log:
Apache Spark 2.3.0-SNAPSHOT-2018_01_04_18_01-ea9da61 docs
Repository: spark
Updated Branches:
refs/heads/master e288fc87a -> 0428368c2
[SPARK-22960][K8S] Make build-push-docker-images.sh more dev-friendly.
- Make it possible to build images from a git clone.
- Make it easy to use minikube to test things.
Also fixed what seemed like a bug: the base
Repository: spark
Updated Branches:
refs/heads/branch-2.3 84707f0c6 -> ea9da6152
[SPARK-22960][K8S] Make build-push-docker-images.sh more dev-friendly.
- Make it possible to build images from a git clone.
- Make it easy to use minikube to test things.
Also fixed what seemed like a bug: the
Author: pwendell
Date: Fri Jan 5 00:14:37 2018
New Revision: 24029
Log:
Apache Spark 2.3.0-SNAPSHOT-2018_01_04_16_01-e288fc8 docs
Repository: spark
Updated Branches:
refs/heads/branch-2.3 2ab4012ad -> 84707f0c6
[SPARK-22953][K8S] Avoids adding duplicated secret volumes when init-container
is used
## What changes were proposed in this pull request?
User-specified secrets are mounted into both the main container and
Repository: spark
Updated Branches:
refs/heads/master 95f9659ab -> e288fc87a
[SPARK-22953][K8S] Avoids adding duplicated secret volumes when init-container
is used
## What changes were proposed in this pull request?
User-specified secrets are mounted into both the main container and
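The dedup fix described above — not mounting the same secret volume twice when both the main container and the init-container request it — can be sketched as adding volumes to the pod spec keyed by name. This is a hypothetical illustration; the names (`add_secret_volumes`) are not Spark's actual helpers.

```python
# Hypothetical sketch of the fix: each requested secret volume is added to
# the pod spec at most once, keyed by volume name, even if both the main
# container and the init-container request it.

def add_secret_volumes(pod_volumes, requested):
    existing = {v["name"] for v in pod_volumes}
    for vol in requested:
        if vol["name"] not in existing:   # skip volumes already present
            pod_volumes.append(vol)
            existing.add(vol["name"])
    return pod_volumes
```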
Repository: spark
Updated Branches:
refs/heads/master d2cddc88e -> 95f9659ab
[SPARK-22948][K8S] Move SparkPodInitContainer to correct package.
Author: Marcelo Vanzin
Closes #20156 from vanzin/SPARK-22948.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Repository: spark
Updated Branches:
refs/heads/branch-2.3 bc4bef472 -> 2ab4012ad
[SPARK-22948][K8S] Move SparkPodInitContainer to correct package.
Author: Marcelo Vanzin
Closes #20156 from vanzin/SPARK-22948.
(cherry picked from commit
Repository: spark
Updated Branches:
refs/heads/branch-2.3 cd92913f3 -> bc4bef472
[SPARK-22850][CORE] Ensure queued events are delivered to all event queues.
The code in LiveListenerBus was queueing events before start in the
queues themselves; so in situations like the following:
Repository: spark
Updated Branches:
refs/heads/master 93f92c0ed -> d2cddc88e
[SPARK-22850][CORE] Ensure queued events are delivered to all event queues.
The code in LiveListenerBus was queueing events before start in the
queues themselves; so in situations like the following:
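The fix's idea — buffer pre-start events centrally and replay them into every queue on start, instead of queueing them in the queues themselves — can be sketched as follows. This is a simplified illustration, not Spark's `LiveListenerBus` code.

```python
# Hedged sketch of the queued-event fix: events posted before start() are
# buffered centrally, then replayed into every queue on start, so a queue
# added before start still sees events posted before it existed.

class ListenerBus:
    def __init__(self):
        self._buffer = []      # events posted before start
        self._queues = {}      # queue name -> delivered events
        self._started = False

    def add_queue(self, name):
        self._queues[name] = []

    def post(self, event):
        if not self._started:
            self._buffer.append(event)   # hold centrally, not per-queue
        else:
            for q in self._queues.values():
                q.append(event)

    def start(self):
        self._started = True
        for event in self._buffer:       # replay to all queues
            for q in self._queues.values():
                q.append(event)
        self._buffer.clear()
```

A queue registered after the first `post` but before `start` still receives the earlier event on replay, which is the situation the original per-queue buffering got wrong.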
Author: pwendell
Date: Thu Jan 4 22:16:06 2018
New Revision: 24025
Log:
Apache Spark 2.3.0-SNAPSHOT-2018_01_04_14_01-cd92913 docs
Author: pwendell
Date: Thu Jan 4 20:17:26 2018
New Revision: 24018
Log:
Apache Spark 2.3.0-SNAPSHOT-2018_01_04_12_01-93f92c0 docs
Repository: spark
Updated Branches:
refs/heads/master 6f68316e9 -> 93f92c0ed
[SPARK-21475][CORE][2ND ATTEMPT] Change to use NIO's Files API for external
shuffle service
## What changes were proposed in this pull request?
This PR is the second attempt at #18684; NIO's Files API doesn't
Repository: spark
Updated Branches:
refs/heads/branch-2.3 bcfeef5a9 -> cd92913f3
[SPARK-21475][CORE][2ND ATTEMPT] Change to use NIO's Files API for external
shuffle service
## What changes were proposed in this pull request?
This PR is the second attempt at #18684; NIO's Files API doesn't
Author: pwendell
Date: Thu Jan 4 16:19:45 2018
New Revision: 24013
Log:
Apache Spark 2.3.0-SNAPSHOT-2018_01_04_08_01-6f68316 docs
Repository: spark-website
Updated Branches:
refs/heads/asf-site 13d8bd58a -> 8c5354f29
Change company from Yahoo to Oath
Project: http://git-wip-us.apache.org/repos/asf/spark-website/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark-website/commit/8c5354f2
Tree:
Author: pwendell
Date: Thu Jan 4 14:14:49 2018
New Revision: 24012
Log:
Apache Spark 2.3.0-SNAPSHOT-2018_01_04_06_01-bcfeef5 docs
Repository: spark
Updated Branches:
refs/heads/branch-2.3 1f5e3540c -> bcfeef5a9
[SPARK-22771][SQL] Add a missing return statement in Concat.checkInputDataTypes
## What changes were proposed in this pull request?
This PR is a follow-up that fixes a bug left in #19977.
## How was this patch
Repository: spark
Updated Branches:
refs/heads/master 5aadbc929 -> 6f68316e9
[SPARK-22771][SQL] Add a missing return statement in Concat.checkInputDataTypes
## What changes were proposed in this pull request?
This PR is a follow-up that fixes a bug left in #19977.
## How was this patch tested?
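The bug class behind a missing `return` in a check method can be illustrated generically: a failure result is computed but, without `return`, the method falls through and reports success anyway. This is a hypothetical illustration of the pattern, not the actual `Concat.checkInputDataTypes` code.

```python
# Illustration of the missing-return bug class: the failure result is
# built but dropped, so the buggy check falls through to success.

def check_input_types_buggy(types):
    if len(set(types)) > 1:
        ("failure", "inputs must share one type")   # computed, never returned
    return ("success", None)

def check_input_types_fixed(types):
    if len(set(types)) > 1:
        return ("failure", "inputs must share one type")  # early return added
    return ("success", None)
```

The buggy variant accepts mismatched input types silently; the fix makes the failure path actually propagate.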
Repository: spark
Updated Branches:
refs/heads/branch-2.3 eb99b8ade -> 1f5e3540c
[SPARK-22939][PYSPARK] Support Spark UDF in registerFunction
## What changes were proposed in this pull request?
```Python
import random
from pyspark.sql.functions import udf
from pyspark.sql.types import
Repository: spark
Updated Branches:
refs/heads/master d5861aba9 -> 5aadbc929
[SPARK-22939][PYSPARK] Support Spark UDF in registerFunction
## What changes were proposed in this pull request?
```Python
import random
from pyspark.sql.functions import udf
from pyspark.sql.types import
Author: pwendell
Date: Thu Jan 4 12:19:15 2018
New Revision: 24008
Log:
Apache Spark 2.3.0-SNAPSHOT-2018_01_04_04_01-d5861ab docs
Repository: spark
Updated Branches:
refs/heads/branch-2.3 a7cfd6bea -> eb99b8ade
[SPARK-22945][SQL] add java UDF APIs in the functions object
## What changes were proposed in this pull request?
Currently Scala users can use UDF like
```
val foo = udf((i: Int) => Math.random() +
Repository: spark
Updated Branches:
refs/heads/master 9fa703e89 -> d5861aba9
[SPARK-22945][SQL] add java UDF APIs in the functions object
## What changes were proposed in this pull request?
Currently Scala users can use UDF like
```
val foo = udf((i: Int) => Math.random() +
Repository: spark
Updated Branches:
refs/heads/branch-2.3 1860a43e9 -> a7cfd6bea
[SPARK-22950][SQL] Handle ChildFirstURLClassLoader's parent
## What changes were proposed in this pull request?
ChildFirstURLClassLoader's parent is set to null, so we can't get jars from its
parent. This will
Repository: spark
Updated Branches:
refs/heads/master df95a908b -> 9fa703e89
[SPARK-22950][SQL] Handle ChildFirstURLClassLoader's parent
## What changes were proposed in this pull request?
ChildFirstURLClassLoader's parent is set to null, so we can't get jars from its
parent. This will cause
Author: pwendell
Date: Thu Jan 4 08:19:58 2018
New Revision: 24006
Log:
Apache Spark 2.3.0-SNAPSHOT-2018_01_04_00_01-df95a90 docs