[SPARK-23445] ColumnStat refactoring
## What changes were proposed in this pull request?
Refactor ColumnStat to be more flexible.
* Split `ColumnStat` and `CatalogColumnStat` just like `CatalogStatistics` is
split from `Statistics`. This detaches how the statistics are stored from how
they are
Repository: spark
Updated Branches:
refs/heads/master 7ec83658f -> 8077bb04f
http://git-wip-us.apache.org/repos/asf/spark/blob/8077bb04/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala
http://git-wip-us.apache.org/repos/asf/spark/blob/8077bb04/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/statsEstimation/FilterEstimationSuite.scala
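The split described in SPARK-23445 above can be sketched as follows. This is a hypothetical, simplified illustration of the pattern (the names `toPlanStat` and `fromPlanStat` and the two-field shape are assumptions for illustration, not the actual Spark definitions): the catalog-side class stores values in an external, type-agnostic form, while the planning-side class keeps typed values, with conversion at the boundary.

```scala
// Planning-side statistics: typed values used by the optimizer.
case class ColumnStat(distinctCount: Option[BigInt], nullCount: Option[BigInt])

// Catalog-side statistics: values serialized as strings for storage,
// detached from how the planner consumes them.
case class CatalogColumnStat(distinctCount: Option[String], nullCount: Option[String]) {
  // Deserialize the stored strings back into the typed planning form.
  def toPlanStat: ColumnStat =
    ColumnStat(distinctCount.map(BigInt(_)), nullCount.map(BigInt(_)))
}

object CatalogColumnStat {
  // Serialize the typed form for storage in the external catalog.
  def fromPlanStat(s: ColumnStat): CatalogColumnStat =
    CatalogColumnStat(s.distinctCount.map(_.toString), s.nullCount.map(_.toString))
}
```

With this separation, changing the on-disk encoding only touches the catalog-side class, and the optimizer keeps working against the typed form.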
Author: pwendell
Date: Mon Feb 26 22:15:38 2018
New Revision: 25288
Log:
Apache Spark 2.3.1-SNAPSHOT-2018_02_26_14_01-6eee545 docs
[This commit notification would consist of 1443 parts,
which exceeds the limit of 50, so it was shortened to this summary.]
---
Author: pwendell
Date: Mon Feb 26 20:15:58 2018
New Revision: 25286
Log:
Apache Spark 2.4.0-SNAPSHOT-2018_02_26_12_01-7ec8365 docs
[This commit notification would consist of 1444 parts,
which exceeds the limit of 50, so it was shortened to this summary.]
---
Repository: spark
Updated Branches:
refs/heads/master 185f5bc7d -> 7ec83658f
[SPARK-23491][SS] Remove explicit job cancellation from ContinuousExecution reconfiguring
## What changes were proposed in this pull request?
Remove queryExecutionThread.interrupt() from ContinuousExecution. As deta
Repository: spark
Updated Branches:
refs/heads/branch-2.3 1f180cd12 -> 6eee545f9
[SPARK-23449][K8S] Preserve extraJavaOptions ordering
For some JVM options, like `-XX:+UnlockExperimentalVMOptions`, ordering is
necessary.
## What changes were proposed in this pull request?
Keep original `extr
Repository: spark
Updated Branches:
refs/heads/master b308182f2 -> 185f5bc7d
[SPARK-23449][K8S] Preserve extraJavaOptions ordering
For some JVM options, like `-XX:+UnlockExperimentalVMOptions`, ordering is
necessary.
## What changes were proposed in this pull request?
Keep original `extraJav
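The ordering requirement behind SPARK-23449 can be sketched as follows. This is a hedged, minimal illustration (the `parseJavaOptions` helper is hypothetical, not the actual K8s entrypoint code): `-XX:+UnlockExperimentalVMOptions` must appear before any experimental flag it unlocks, so options must be kept in an ordered sequence rather than an unordered `Set`.

```scala
// Split a space-separated option string into an ordered sequence.
// A Seq preserves the user's ordering; a Set would not.
def parseJavaOptions(extraJavaOptions: String): Seq[String] =
  extraJavaOptions.split("\\s+").filter(_.nonEmpty).toSeq

val opts = parseJavaOptions(
  "-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap")
// The unlock flag stays first, as the JVM requires.
```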
Author: pwendell
Date: Mon Feb 26 18:16:06 2018
New Revision: 25283
Log:
Apache Spark 2.3.1-SNAPSHOT-2018_02_26_10_01-1f180cd docs
[This commit notification would consist of 1443 parts,
which exceeds the limit of 50, so it was shortened to this summary.]
---
Repository: spark
Updated Branches:
refs/heads/branch-2.0 076c2f6a1 -> d51c6aaeb
[SPARK-23438][DSTREAMS] Fix DStreams data loss with WAL when driver crashes
There is a race condition introduced in SPARK-11141 which could cause data loss.
The problem is that ReceivedBlockTracker.insertAllocated
Repository: spark
Updated Branches:
refs/heads/branch-2.1 24fe6eb0f -> 2d751fbf6
[SPARK-23438][DSTREAMS] Fix DStreams data loss with WAL when driver crashes
There is a race condition introduced in SPARK-11141 which could cause data loss.
The problem is that ReceivedBlockTracker.insertAllocated
Repository: spark
Updated Branches:
refs/heads/branch-2.2 1cc34f3e5 -> fa3667ece
[SPARK-23438][DSTREAMS] Fix DStreams data loss with WAL when driver crashes
There is a race condition introduced in SPARK-11141 which could cause data loss.
The problem is that ReceivedBlockTracker.insertAllocated
Repository: spark
Updated Branches:
refs/heads/branch-2.3 578607b30 -> 1f180cd12
[SPARK-23438][DSTREAMS] Fix DStreams data loss with WAL when driver crashes
There is a race condition introduced in SPARK-11141 which could cause data loss.
The problem is that ReceivedBlockTracker.insertAllocated
Repository: spark
Updated Branches:
refs/heads/master 3ca9a2c56 -> b308182f2
[SPARK-23438][DSTREAMS] Fix DStreams data loss with WAL when driver crashes
## What changes were proposed in this pull request?
There is a race condition introduced in SPARK-11141 which could cause data loss.
The pro
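The fix pattern for a WAL race of this kind can be sketched as follows. This is an illustrative, hypothetical simplification (the `Tracker` class and its members are invented for the sketch, not the actual `ReceivedBlockTracker` code): a state change must be persisted to the write-ahead log *before* the in-memory state is mutated, so a driver crash between the two steps cannot lose an acknowledged block.

```scala
import scala.collection.mutable

class Tracker {
  private val wal = mutable.ArrayBuffer.empty[String]       // stands in for the real WAL
  private val allocated = mutable.ArrayBuffer.empty[String] // in-memory state

  def allocate(block: String): Unit = synchronized {
    wal += s"allocate:$block" // 1. persist the event first
    allocated += block        // 2. only then update in-memory state
  }

  // On recovery, replaying the WAL reconstructs every allocation
  // that was acknowledged before the crash.
  def recoveredFromWal: Seq[String] = wal.toSeq
}
```

Reversing the two steps inside `allocate` is exactly the kind of window in which a crash loses data: the in-memory state says the block was handled, but the log that survives the crash does not.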