Repository: spark
Updated Branches:
refs/heads/branch-1.1 2381e90dc -> e7672f196
[SPARK-3167] Handle special driver configs in Windows (Branch 1.1)
This is an effort to bring the Windows scripts up to speed after recent
splashing changes in #1845.
Author: Andrew Or andrewo...@gmail.com
Repository: spark
Updated Branches:
refs/heads/branch-1.1 e7672f196 -> 6f82a4b13
HOTFIX: Minor typo in conf template
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/6f82a4b1
Tree:
Repository: spark
Updated Branches:
refs/heads/master 7557c4cfe -> 9d65f2712
HOTFIX: Minor typo in conf template
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/9d65f271
Tree:
Repository: spark
Updated Branches:
refs/heads/master 9d65f2712 -> 3e2864e40
[SPARK-3139] Made ContextCleaner to not block on shuffles
As a workaround for SPARK-3015, the ContextCleaner was made blocking, that
is, it cleaned items one-by-one. But shuffles can take a long time to be
deleted.
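The non-blocking idea above can be sketched in miniature: hand slow shuffle deletions to a background thread instead of cleaning them inline. This is a hypothetical illustration in Python, not Spark's actual ContextCleaner API; all names here (`make_cleaner`, `delete_shuffle`, `blocking_for_shuffles`) are invented for the sketch.

```python
import queue
import threading

def make_cleaner(delete_shuffle, blocking_for_shuffles=False):
    # Hypothetical sketch of the SPARK-3139 idea: shuffle deletions
    # are queued to a background thread unless the caller explicitly
    # opts into the old blocking, one-by-one behavior.
    tasks = queue.Queue()

    def worker():
        while True:
            shuffle_id = tasks.get()
            if shuffle_id is None:
                break
            delete_shuffle(shuffle_id)
            tasks.task_done()

    threading.Thread(target=worker, daemon=True).start()

    def clean_shuffle(shuffle_id):
        if blocking_for_shuffles:
            delete_shuffle(shuffle_id)  # old behavior: cleaner blocks
        else:
            tasks.put(shuffle_id)       # new behavior: fire and forget

    def drain():
        tasks.join()  # wait for queued deletions (demo only)

    return clean_shuffle, drain

deleted = []
clean, drain = make_cleaner(deleted.append)
clean(1)
clean(2)
drain()
print(deleted)  # [1, 2]
```

The single worker thread preserves deletion order while keeping the caller unblocked, which is the trade-off the commit describes.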
Repository: spark
Updated Branches:
refs/heads/master 3e2864e40 -> e1139dd60
[SPARK-3237][SQL] Fix parquet filters with UDFs
Author: Michael Armbrust mich...@databricks.com
Closes #2153 from marmbrus/parquetFilters and squashes the following commits:
712731a [Michael Armbrust] Use closure
Repository: spark
Updated Branches:
refs/heads/branch-1.1 5cf1e4401 -> ca01de1b9
[SPARK-3237][SQL] Fix parquet filters with UDFs
Author: Michael Armbrust mich...@databricks.com
Closes #2153 from marmbrus/parquetFilters and squashes the following commits:
712731a [Michael Armbrust] Use
Repository: spark
Updated Branches:
refs/heads/master e1139dd60 -> 43dfc84f8
[SPARK-2830][MLLIB] doc update for 1.1
1. renamed mllib-basics to mllib-data-types
2. renamed mllib-stats to mllib-statistics
3. moved random data generation to the bottom of mllib-stats
4. updated the toc accordingly
Repository: spark
Updated Branches:
refs/heads/branch-1.1 74012475b -> 7286d5707
[SPARK-3227] [mllib] Added migration guide for v1.0 to v1.1
The only updates are in DecisionTree.
CC: mengxr
Author: Joseph K. Bradley joseph.kurata.brad...@gmail.com
Closes #2146 from jkbradley/mllib-migration
Repository: spark
Updated Branches:
refs/heads/master 6f671d04f -> b92d823ad
http://git-wip-us.apache.org/repos/asf/spark/blob/b92d823a/yarn/common/src/main/scala/org/apache/spark/scheduler/cluster/YarnClientClusterScheduler.scala
[SPARK-2933] [yarn] Refactor and cleanup Yarn AM code.
This change modifies the Yarn module so that all the logic related
to running the ApplicationMaster is localized. Where previously there were
4 different classes with mostly identical code, now we have:
- A single, shared ApplicationMaster
Repository: spark
Updated Branches:
refs/heads/master b92d823ad -> d8298c46b
[SPARK-3170][CORE][BUG]: RDD info loss in StorageTab and ExecutorTab
A completed stage only needs to remove its own partitions that are no longer
cached. However, StorageTab may lose some RDDs that are actually still cached.
Repository: spark
Updated Branches:
refs/heads/branch-1.1 1d468df33 -> 8f8e2a4ee
[SPARK-3170][CORE][BUG]: RDD info loss in StorageTab and ExecutorTab
A completed stage only needs to remove its own partitions that are no longer
cached. However, StorageTab may lose some RDDs that are cached
Repository: spark
Updated Branches:
refs/heads/branch-1.1 8f8e2a4ee -> 092121e47
[SPARK-3239] [PySpark] randomize the dirs for each process
This can avoid the IO contention during spilling, when you have multiple disks.
Author: Davies Liu davies@gmail.com
Closes #2152 from
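The trick in SPARK-3239 can be sketched without PySpark: give each process a random starting offset into the list of configured local directories so that spills don't all land on the first disk. This is an illustrative stdlib-only sketch; `local_dir_order` is an invented name, not PySpark's actual function.

```python
import os
import random

def local_dir_order(local_dirs, seed=None):
    # Hypothetical sketch of the SPARK-3239 idea: if every worker
    # process tried the configured local dirs in the same order, the
    # first disk would absorb most of the spill IO. A per-process
    # random starting offset spreads spills across all disks.
    rnd = random.Random(os.getpid() if seed is None else seed)
    n = len(local_dirs)
    offset = rnd.randrange(n)
    return [local_dirs[(offset + i) % n] for i in range(n)]

dirs = ["/mnt/disk1", "/mnt/disk2", "/mnt/disk3"]
order = local_dir_order(dirs, seed=7)
assert sorted(order) == sorted(dirs)  # same disks, rotated per process
```

Seeding from the PID (here overridable for determinism) means two workers on the same box usually begin spilling to different disks.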
Repository: spark
Updated Branches:
refs/heads/branch-1.1 092121e47 -> 935bffe3b
[SPARK-2608][Core] Fixed command line option passing issue over Mesos via
SPARK_EXECUTOR_OPTS
This is another try after #2145 to fix
[SPARK-2608](https://issues.apache.org/jira/browse/SPARK-2608).
### Basic
Repository: spark
Updated Branches:
refs/heads/branch-1.1 935bffe3b -> 0c94a5b2a
SPARK-3259 - User data should be given to the master
Author: Allan Douglas R. de Oliveira al...@chaordicsystems.com
Closes #2162 from douglaz/user_data_master and squashes the following commits:
10d15f6 [Allan
Repository: spark
Updated Branches:
refs/heads/master d8298c46b -> 5ac4093c9
SPARK-3259 - User data should be given to the master
Author: Allan Douglas R. de Oliveira al...@chaordicsystems.com
Closes #2162 from douglaz/user_data_master and squashes the following commits:
10d15f6 [Allan
Repository: spark
Updated Branches:
refs/heads/master 5ac4093c9 -> 3b5eb7083
[SPARK-3118][SQL]add SHOW TBLPROPERTIES tblname; and SHOW COLUMNS (FROM|IN)
table_name [(FROM|IN) db_name] support
JIRA issue: [SPARK-3118] https://issues.apache.org/jira/browse/SPARK-3118
e.g.:
SHOW TBLPROPERTIES
Repository: spark
Updated Branches:
refs/heads/branch-1.1 0c94a5b2a -> 19cda0788
[SPARK-3118][SQL]add SHOW TBLPROPERTIES tblname; and SHOW COLUMNS (FROM|IN)
table_name [(FROM|IN) db_name] support
JIRA issue: [SPARK-3118] https://issues.apache.org/jira/browse/SPARK-3118
e.g.:
SHOW
Repository: spark
Updated Branches:
refs/heads/master 3b5eb7083 -> 4238c17dc
[SPARK-3197] [SQL] Reduce the Expression tree object creations for aggregation
function (min/max)
The aggregation functions min/max in Catalyst create an expression tree for
each single row; however, the expression
Repository: spark
Updated Branches:
refs/heads/branch-1.1 19cda0788 -> 4c7f082c6
[SPARK-3197] [SQL] Reduce the Expression tree object creations for aggregation
function (min/max)
The aggregation functions min/max in Catalyst create an expression tree for
each single row; however, the expression
Repository: spark
Updated Branches:
refs/heads/master 4238c17dc -> 191d7cf2a
[SPARK-3256] Added support for :cp jar that was broken in Scala 2.10.x for
REPL
As seen with [SI-6502](https://issues.scala-lang.org/browse/SI-6502) of Scala,
the _:cp_ command was broken in Scala 2.10.x. As the
Repository: spark
Updated Branches:
refs/heads/master 48f42781d -> 4fa2fda88
[SPARK-2871] [PySpark] add RDD.lookup(key)
RDD.lookup(key)
Return the list of values in the RDD for key `key`. This operation
is done efficiently if the RDD has a known partitioner by only
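The partitioner optimization described above can be sketched in plain Python: with a known partitioner, only the one partition that can hold the key is scanned. This is a hypothetical illustration of the idea in SPARK-2871, not PySpark's internals; `lookup`, `parts`, and `part_of` are invented names.

```python
def lookup(partitions, key, partitioner=None):
    # Sketch of RDD.lookup(key): `partitions` is a list of lists of
    # (k, v) pairs, and `partitioner` maps a key to the index of the
    # partition that holds it. With a known partitioner only that one
    # partition is scanned; otherwise every partition is.
    if partitioner is not None:
        candidates = [partitions[partitioner(key)]]
    else:
        candidates = partitions
    return [v for part in candidates for (k, v) in part if k == key]

# Two partitions, keys placed by a toy deterministic partitioner.
part_of = lambda k: (ord(k) - ord("a")) % 2
parts = [[("a", 1), ("c", 3)], [("b", 2), ("b", 4)]]
print(lookup(parts, "b", part_of))  # [2, 4]
```

Without a partitioner the result is the same, but every partition must be scanned, which is the full-RDD cost the optimization avoids.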
Repository: spark
Updated Branches:
refs/heads/master 4fa2fda88 -> 7faf755ae
Spark-3213 Fixes issue with spark-ec2 not detecting slaves created with Launch
More Like This
... copy the spark_cluster_tag from spot instance requests over to the
instances.
Author: Vida Ha v...@databricks.com
Repository: spark
Updated Branches:
refs/heads/master 63a053ab1 -> 28d41d627
[SPARK-3252][SQL] Add missing condition for test
According to the text message, both relations should be tested. So add the
missing condition.
Author: viirya vii...@gmail.com
Closes #2159 from viirya/fix_test and
Repository: spark
Updated Branches:
refs/heads/branch-1.1 c1ffa3e4c -> b3d763b0b
[SPARK-3252][SQL] Add missing condition for test
According to the text message, both relations should be tested. So add the
missing condition.
Author: viirya vii...@gmail.com
Closes #2159 from viirya/fix_test
Repository: spark
Updated Branches:
refs/heads/master 28d41d627 -> cc275f4b7
[SQL] [SPARK-3236] Reading Parquet tables from Metastore mangles location
Currently we do `relation.hiveQlTable.getDataLocation.getPath`, which returns
the path-part of the URI (e.g., s3n://my-bucket/my-path =
Repository: spark
Updated Branches:
refs/heads/master cc275f4b7 -> 65253502b
[SPARK-3065][SQL] Add locale setting to fix results do not match for
udf_unix_timestamp format MMM dd h:mm:ss a run with not
America/Los_Angeles TimeZone in HiveCompatibilitySuite
When run the
Repository: spark
Updated Branches:
refs/heads/branch-1.1 77116875f -> 5ea260ebd
[SPARK-3065][SQL] Add locale setting to fix results do not match for
udf_unix_timestamp format MMM dd h:mm:ss a run with not
America/Los_Angeles TimeZone in HiveCompatibilitySuite
When run the
Repository: spark
Updated Branches:
refs/heads/master 65253502b -> 7d2a7a91f
[SPARK-3235][SQL] Ensure in-memory tables don't always broadcast.
Author: Michael Armbrust mich...@databricks.com
Closes #2147 from marmbrus/inMemDefaultSize and squashes the following commits:
5390360 [Michael
Repository: spark
Updated Branches:
refs/heads/branch-1.1 5ea260ebd -> 9a62cf365
[SPARK-3235][SQL] Ensure in-memory tables don't always broadcast.
Author: Michael Armbrust mich...@databricks.com
Closes #2147 from marmbrus/inMemDefaultSize and squashes the following commits:
5390360 [Michael
Repository: spark
Updated Branches:
refs/heads/master 7d2a7a91f -> 8712653f1
HOTFIX: Don't build with YARN support for Mapr3
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/8712653f
Tree:
Repository: spark
Updated Branches:
refs/heads/branch-1.1 9a62cf365 -> 0b17c7d4f
Revert "[maven-release-plugin] prepare for next development iteration"
This reverts commit 9af3fb7385d1f9f221962f1d2d725ff79bd82033.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit:
Revert "[maven-release-plugin] prepare release v1.1.0-snapshot2"
This reverts commit e1535ad3c6f7400f2b7915ea91da9c60510557ba.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/0b17c7d4
Tree:
Repository: spark
Updated Branches:
refs/heads/master 8712653f1 -> 64d8ecbbe
Add line continuation for script to work w/ py2.7.5
Error was -
$ SPARK_HOME=$PWD/dist ./dev/create-release/generate-changelist.py
File ./dev/create-release/generate-changelist.py, line 128
if day
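The truncated error above is a condition split across lines without a continuation marker, which is a SyntaxError on any Python, including 2.7.5. A hedged reconstruction of the fix follows; the real condition in generate-changelist.py is elided above, so the names here are illustrative only.

```python
# Broken form (any Python version rejects this):
#
#   if day
#           and month:
#
# Either a trailing backslash or enclosing parentheses fixes it.

def same_day(day, month, prev_day, prev_month):
    # Parenthesized continuation: no backslash needed, and safe
    # against invisible trailing whitespace after a backslash.
    return (day == prev_day
            and month == prev_month)

print(same_day(1, 9, 1, 9))  # True
```

The parenthesized form is generally preferred over the backslash, since a stray space after `\` silently breaks the continuation.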
Repository: spark
Updated Branches:
refs/heads/branch-1.1 0b17c7d4f -> d4cf7a068
Add line continuation for script to work w/ py2.7.5
Error was -
$ SPARK_HOME=$PWD/dist ./dev/create-release/generate-changelist.py
File ./dev/create-release/generate-changelist.py, line 128
if day
Repository: spark
Updated Branches:
refs/heads/branch-1.1 d4cf7a068 -> 8597e9cf3
http://git-wip-us.apache.org/repos/asf/spark/blob/8597e9cf/dev/create-release/generate-changelist.py
--
diff --git
http://git-wip-us.apache.org/repos/asf/spark/blob/8597e9cf/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
new file mode 100644
index 000..6efb022
--- /dev/null
+++ b/CHANGES.txt
@@ -0,0 +1,14470 @@
+Spark Change Log
BUILD: Updating CHANGES.txt for Spark 1.1
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/8597e9cf
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/8597e9cf
Diff:
Repository: spark
Updated Branches:
refs/heads/branch-1.1 8597e9cf3 -> 58b0be6a2
[maven-release-plugin] prepare release v1.1.0-rc1
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/58b0be6a
Tree:
Repository: spark
Updated Branches:
refs/heads/branch-1.1 58b0be6a2 -> 78e3c036e
[maven-release-plugin] prepare for next development iteration
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/78e3c036
Tree:
Repository: spark
Updated Tags: refs/tags/v1.1.0-rc1 [created] 1dc825d90
Repository: spark
Updated Branches:
refs/heads/master 64d8ecbbe -> b86277c13
[SPARK-3271] delete unused methods in Utils
Delete unused methods in Utils.
Author: scwf wangf...@huawei.com
Closes #2160 from scwf/delete-no-use-method and squashes the following commits:
d8f6b0d [scwf] delete no
Repository: spark
Updated Branches:
refs/heads/master b86277c13 -> f38fab97c
SPARK-3265 Allow using custom ipython executable with pyspark
Although you can make pyspark use ipython with `IPYTHON=1`, and also change the
python executable with `PYSPARK_PYTHON=...`, you can't use both at the