Github user gczsjdy commented on the issue:
https://github.com/apache/spark/pull/16476
@HyukjinKwon Done, thanks : )
Ping @maropu
---
-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
Github user gczsjdy commented on the issue:
https://github.com/apache/spark/pull/16476
@maropu Sure, I will update it this week.
Github user gczsjdy closed the pull request at:
https://github.com/apache/spark/pull/19755
Github user gczsjdy commented on the issue:
https://github.com/apache/spark/pull/20809
@vanzin Sorry for the late reply. According to the call stack, this is the
first place that calls `getScalaVersion`; `isTest` is true, so we can go down
that path.
This happens on Travis
Github user gczsjdy commented on the issue:
https://github.com/apache/spark/pull/20809
@vanzin Sorry, but I will update it next week, thanks.
Github user gczsjdy closed the pull request at:
https://github.com/apache/spark/pull/21022
Github user gczsjdy commented on the issue:
https://github.com/apache/spark/pull/21022
@HyukjinKwon Sorry, it is.
GitHub user gczsjdy opened a pull request:
https://github.com/apache/spark/pull/21022
Fpga acc
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/gczsjdy/spark fpga_acc
Alternatively you can review and apply these changes
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/20844#discussion_r178437551
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/ConfigBehaviorSuite.scala ---
@@ -39,7 +39,7 @@ class ConfigBehaviorSuite extends QueryTest
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/20844#discussion_r177311524
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/ConfigBehaviorSuite.scala ---
@@ -39,7 +39,7 @@ class ConfigBehaviorSuite extends QueryTest
Github user gczsjdy commented on the issue:
https://github.com/apache/spark/pull/20809
@vanzin Thanks. : )
I am testing using [OAP](https://github.com/Intel-bigdata/OAP) with a
pre-built Spark in `LocalClusterMode`.
This is on Travis and no SPARK_HOME is set.
The `mvn test
Github user gczsjdy commented on the issue:
https://github.com/apache/spark/pull/20809
@viirya Yes, but this is only for people who investigate the Spark
code, and it also requires manual effort. Isn't it better if we get this
automatically
Github user gczsjdy commented on the issue:
https://github.com/apache/spark/pull/20809
cc @cloud-fan @viirya
GitHub user gczsjdy opened a pull request:
https://github.com/apache/spark/pull/20809
[CORE] Better scala version check
## What changes were proposed in this pull request?
In some cases, when an outer project uses a pre-built Spark as a dependency,
`getScalaVersion` will fail due
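A more robust version check could fall back to the runtime's own Scala version when the launcher cannot detect it from the distribution layout. This is a hypothetical sketch, not the PR's actual code; `detectFromLayout` stands in for whatever directory-based detection the launcher performs:

```scala
// Hypothetical sketch: prefer detecting the Scala version from the
// distribution layout, but fall back to the running Scala library's
// own version string instead of failing (e.g. when an outer project
// depends on a pre-built Spark and the expected directories are absent).
object ScalaVersionCheck {
  def getScalaVersion(detectFromLayout: () => Option[String]): String =
    detectFromLayout().getOrElse {
      // scala.util.Properties.versionNumberString is e.g. "2.12.18";
      // keep only the binary version ("2.12")
      scala.util.Properties.versionNumberString.split('.').take(2).mkString(".")
    }
}
```

The fallback never throws, so a pre-built Spark used as a dependency would still resolve a usable binary version.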
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/20303#discussion_r164089610
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/QueryStage.scala
---
@@ -0,0 +1,222 @@
+/*
+ * Licensed to the Apache
Github user gczsjdy closed the pull request at:
https://github.com/apache/spark/pull/19862
Github user gczsjdy commented on the issue:
https://github.com/apache/spark/pull/19862
@cloud-fan OK, thanks for your time. I will close this.
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/20135#discussion_r159353652
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringExpressions.scala
---
@@ -271,33 +271,45 @@ case class ConcatWs
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/20135#discussion_r159235432
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -684,6 +685,34 @@ object TypeCoercion
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/20135#discussion_r159234455
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringExpressions.scala
---
@@ -271,33 +271,45 @@ case class ConcatWs
Github user gczsjdy commented on the issue:
https://github.com/apache/spark/pull/20010
This doesn't seem like a regular error?
@bdrillard Maybe you can push a commit and trigger the test again.
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/20099#discussion_r158961184
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkStrategies.scala ---
@@ -158,45 +158,65 @@ abstract class SparkStrategies extends
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/20099#discussion_r158961453
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkStrategies.scala ---
@@ -158,45 +158,65 @@ abstract class SparkStrategies extends
Github user gczsjdy commented on the issue:
https://github.com/apache/spark/pull/20043
@viirya Thanks very much. Actually, do local variables correspond to `VariableValue`
and `StatementValue`? IIUC `VariableValue` is a value that depends on something
else, but what is `StatementValue`? Maybe
Github user gczsjdy commented on the issue:
https://github.com/apache/spark/pull/20067
LGTM
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/20010#discussion_r158440114
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -158,11 +169,6 @@ object TypeCoercion
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/20010#discussion_r158440005
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -158,11 +213,8 @@ object TypeCoercion
Github user gczsjdy commented on the issue:
https://github.com/apache/spark/pull/20043
@viirya Sorry, I didn't quite understand: how do we easily know the value by
adding wrappers? Could you explain a little bit
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/20039#discussion_r158424211
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/LiveListenerBus.scala ---
@@ -124,13 +127,19 @@ private[spark] class LiveListenerBus(conf
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/20039#discussion_r158309818
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/LiveListenerBus.scala ---
@@ -124,13 +127,19 @@ private[spark] class LiveListenerBus(conf
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/19977#discussion_r157939820
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringExpressions.scala
---
@@ -48,17 +48,26 @@ import
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/20010#discussion_r157928910
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -99,6 +102,33 @@ object TypeCoercion
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/20010#discussion_r157929494
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -158,11 +169,6 @@ object TypeCoercion
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/20010#discussion_r157926754
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -99,6 +99,17 @@ object TypeCoercion {
case
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/20010#discussion_r157926722
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -158,11 +169,6 @@ object TypeCoercion
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/19977#discussion_r157819483
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringExpressions.scala
---
@@ -48,17 +48,26 @@ import
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/19977#discussion_r157793004
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -1035,6 +1035,12 @@ object SQLConf {
.booleanConf
Github user gczsjdy commented on the issue:
https://github.com/apache/spark/pull/19977
You mean the answers from MySQL are unexpected? I think it's common for these
DBs to behave differently, while Spark mainly follows Hive
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/20010#discussion_r157780706
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -99,6 +99,17 @@ object TypeCoercion {
case
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/20010#discussion_r157775323
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -99,6 +99,17 @@ object TypeCoercion {
case
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/20010#discussion_r157697044
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -99,6 +99,17 @@ object TypeCoercion {
case
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/20010#discussion_r157696626
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -99,6 +99,17 @@ object TypeCoercion {
case
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/20010#discussion_r157695599
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -99,6 +99,17 @@ object TypeCoercion {
case
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/20015#discussion_r157686437
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
---
@@ -1295,87 +1295,184 @@ case class
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/20015#discussion_r157676669
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
---
@@ -1295,87 +1295,184 @@ case class
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/20015#discussion_r157678588
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
---
@@ -1295,87 +1295,184 @@ case class
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/20015#discussion_r157680290
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeUtils.scala
---
@@ -944,9 +954,16 @@ object DateTimeUtils {
date
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/20015#discussion_r157674840
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
---
@@ -1295,87 +1295,184 @@ case class
Github user gczsjdy commented on the issue:
https://github.com/apache/spark/pull/19862
cc @cloud-fan @hvanhovell @viirya
Github user gczsjdy commented on the issue:
https://github.com/apache/spark/pull/19862
This is actually a small change, but it can provide a significant optimization
for users who don't use `WholeStageCodegen`; for example, there are still some
users on Spark versions below 2.0
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/19862#discussion_r156581645
--- Diff:
sql/catalyst/src/main/java/org/apache/spark/sql/execution/UnsafeExternalRowSorter.java
---
@@ -159,6 +154,12 @@ public boolean hasNext
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/19862#discussion_r154635850
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/joins/SortMergeJoinExec.scala
---
@@ -699,39 +700,44 @@ private[joins] class
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/19862#discussion_r154581844
--- Diff:
sql/catalyst/src/main/java/org/apache/spark/sql/execution/UnsafeExternalRowSorter.java
---
@@ -159,6 +154,12 @@ public boolean hasNext
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/19862#discussion_r154563897
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/joins/SortMergeJoinExec.scala
---
@@ -674,8 +674,9 @@ private[joins] class
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/19862#discussion_r154564327
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/joins/SortMergeJoinExec.scala
---
@@ -699,39 +700,44 @@ private[joins] class
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/19862#discussion_r154564488
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/joins/SortMergeJoinExec.scala
---
@@ -699,39 +700,44 @@ private[joins] class
Github user gczsjdy commented on the issue:
https://github.com/apache/spark/pull/19862
cc @cloud-fan @viirya @ConeyLiu
GitHub user gczsjdy opened a pull request:
https://github.com/apache/spark/pull/19862
Make SortMergeJoin read less data when wholeStageCodegen is off
## What changes were proposed in this pull request?
In SortMergeJoin (with wholeStageCodegen), an optimization already exists
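The core sort-merge idea the PR title refers to can be sketched as follows. This is a simplified, hypothetical model (plain sorted sequences, inner join on an `Int` key), not Spark's `SortMergeJoinExec`: because both sides are sorted on the join key, the streamed side only needs to advance the buffered side past smaller keys and read the matching run, rather than reading ahead unnecessarily:

```scala
// Hypothetical sketch of the sort-merge join idea: both inputs are
// sorted by key; for each streamed row, skip buffered rows with smaller
// keys, then emit only the run of equal keys.
object SortMergeSketch {
  def join(left: Seq[(Int, String)],
           right: Seq[(Int, String)]): Seq[(Int, String, String)] = {
    val out = scala.collection.mutable.ArrayBuffer[(Int, String, String)]()
    var j = 0
    for ((k, lv) <- left) {
      while (j < right.length && right(j)._1 < k) j += 1   // skip smaller keys
      var jj = j
      while (jj < right.length && right(jj)._1 == k) {     // matching run only
        out += ((k, lv, right(jj)._2))
        jj += 1
      }
    }
    out.toSeq
  }
}
```

Reading only the matching run is what limits how much data the buffered side has to materialize, which is the behavior the non-codegen path was missing.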
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/19823#discussion_r153131202
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -341,6 +341,12 @@ case class LoadDataCommand
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/19823#discussion_r153127637
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -2624,7 +2624,13 @@ class SQLQuerySuite extends QueryTest
Github user gczsjdy commented on the issue:
https://github.com/apache/spark/pull/19788
Can we just add the `ContinuousShuffleBlockId` without adding the new conf
`spark.shuffle.continuousFetch`? In classes related to shuffle read, like
`ShuffleBlockFetcherIterator`, we also pattern
Github user gczsjdy commented on the issue:
https://github.com/apache/spark/pull/19764
@caneGuy Can you give a specific example to illustrate your change? Maybe
former partition result & later partition re
Github user gczsjdy commented on the issue:
https://github.com/apache/spark/pull/19788
What is the `external shuffle service` here? Can you explain a little bit?
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/19788#discussion_r153117088
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockId.scala ---
@@ -116,8 +117,8 @@ object BlockId {
def apply(name: String): BlockId = name
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/19763#discussion_r152921091
--- Diff: core/src/main/scala/org/apache/spark/MapOutputTracker.scala ---
@@ -472,15 +475,66 @@ private[spark] class MapOutputTrackerMaster
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/19763#discussion_r152920483
--- Diff:
core/src/main/scala/org/apache/spark/internal/config/package.scala ---
@@ -485,4 +485,13 @@ package object config {
"
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/19763#discussion_r152912084
--- Diff:
core/src/main/scala/org/apache/spark/internal/config/package.scala ---
@@ -485,4 +485,13 @@ package object config {
"
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/19763#discussion_r152911325
--- Diff:
core/src/main/scala/org/apache/spark/internal/config/package.scala ---
@@ -485,4 +485,13 @@ package object config {
"
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/19763#discussion_r152907079
--- Diff:
core/src/main/scala/org/apache/spark/internal/config/package.scala ---
@@ -485,4 +485,13 @@ package object config {
"
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/19763#discussion_r152906960
--- Diff: core/src/main/scala/org/apache/spark/MapOutputTracker.scala ---
@@ -472,15 +475,66 @@ private[spark] class MapOutputTrackerMaster
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/19763#discussion_r152888380
--- Diff: core/src/main/scala/org/apache/spark/MapOutputTracker.scala ---
@@ -472,15 +475,66 @@ private[spark] class MapOutputTrackerMaster
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/19763#discussion_r152888257
--- Diff: core/src/main/scala/org/apache/spark/MapOutputTracker.scala ---
@@ -472,15 +475,66 @@ private[spark] class MapOutputTrackerMaster
Github user gczsjdy commented on the issue:
https://github.com/apache/spark/pull/19763
@cloud-fan Seems Jenkins hasn't started?
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/19763#discussion_r152827467
--- Diff: core/src/main/scala/org/apache/spark/MapOutputTracker.scala ---
@@ -472,15 +475,66 @@ private[spark] class MapOutputTrackerMaster
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/19763#discussion_r152493779
--- Diff: core/src/main/scala/org/apache/spark/MapOutputTracker.scala ---
@@ -472,15 +475,66 @@ private[spark] class MapOutputTrackerMaster
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/19788#discussion_r152193203
--- Diff: core/src/main/scala/org/apache/spark/MapOutputTracker.scala ---
@@ -812,10 +812,14 @@ private[spark] object MapOutputTracker extends
Logging
Github user gczsjdy commented on the issue:
https://github.com/apache/spark/pull/19755
I can't find a way to distinguish a `reused` from an `unreused` subquery. For
example, in the `ReuseSubquery` rule, after seeing the 1st SubqueryExec (with
`unreused` in its name), it's buffered. When
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/19763#discussion_r152185531
--- Diff:
core/src/main/scala/org/apache/spark/internal/config/package.scala ---
@@ -485,4 +485,13 @@ package object config {
"
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/19763#discussion_r152021708
--- Diff: core/src/main/scala/org/apache/spark/MapOutputTracker.scala ---
@@ -472,16 +475,45 @@ private[spark] class MapOutputTrackerMaster
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/19763#discussion_r152017736
--- Diff:
core/src/main/scala/org/apache/spark/internal/config/package.scala ---
@@ -485,4 +485,13 @@ package object config {
"
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/19763#discussion_r152016240
--- Diff: core/src/main/scala/org/apache/spark/MapOutputTracker.scala ---
@@ -472,16 +475,45 @@ private[spark] class MapOutputTrackerMaster
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/19763#discussion_r152008181
--- Diff:
core/src/main/scala/org/apache/spark/internal/config/package.scala ---
@@ -485,4 +485,13 @@ package object config {
"
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/19763#discussion_r152006310
--- Diff:
core/src/main/scala/org/apache/spark/internal/config/package.scala ---
@@ -485,4 +485,13 @@ package object config {
"
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/19763#discussion_r152005905
--- Diff:
core/src/main/scala/org/apache/spark/internal/config/package.scala ---
@@ -485,4 +485,13 @@ package object config {
"
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/19763#discussion_r152002860
--- Diff:
core/src/main/scala/org/apache/spark/internal/config/package.scala ---
@@ -485,4 +485,13 @@ package object config {
"
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/19763#discussion_r152002262
--- Diff:
core/src/main/scala/org/apache/spark/internal/config/package.scala ---
@@ -485,4 +485,13 @@ package object config {
"
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/19763#discussion_r151921740
--- Diff:
core/src/main/scala/org/apache/spark/internal/config/package.scala ---
@@ -485,4 +485,13 @@ package object config {
"
Github user gczsjdy commented on the issue:
https://github.com/apache/spark/pull/19763
Actually, the time gap is O(number of mappers * shuffle partitions). In
this case, the number of mappers is not very large, while users are more likely
to get slowed down when they run on big data
Github user gczsjdy commented on the issue:
https://github.com/apache/spark/pull/19763
This happens a lot in our TPC-DS 100TB test. We have an Intel Xeon
E5-2699 v4 @ 2.2GHz CPU as the master; this will influence the driver's
performance. And we set `spark.sql.shuffle.partitions
Github user gczsjdy commented on the issue:
https://github.com/apache/spark/pull/19755
This targets subqueries that are not reused; reused subqueries are
correctly shown in the UI now. @cloud-fan
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/19763#discussion_r151339166
--- Diff: core/src/main/scala/org/apache/spark/MapOutputTracker.scala ---
@@ -473,16 +477,41 @@ private[spark] class MapOutputTrackerMaster
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/19763#discussion_r151332369
--- Diff: core/src/main/scala/org/apache/spark/MapOutputTracker.scala ---
@@ -473,16 +477,41 @@ private[spark] class MapOutputTrackerMaster
Github user gczsjdy commented on the issue:
https://github.com/apache/spark/pull/19763
cc @cloud-fan @viirya @gatorsmile @chenghao-intel
GitHub user gczsjdy opened a pull request:
https://github.com/apache/spark/pull/19763
[SPARK-22537] Aggregation of map output statistics on driver faces single
point bottleneck
## What changes were proposed in this pull request?
In adaptive execution, the map output
Github user gczsjdy commented on the issue:
https://github.com/apache/spark/pull/19755
But it might confuse users; I think what is shown on the UI is supposed to
be exactly what gets executed. Maybe accuracy is more important than
clarity
Github user gczsjdy commented on the issue:
https://github.com/apache/spark/pull/19755
@cloud-fan @viirya @carsonwang @gatorsmile @yucai Could you please help me
review this?
GitHub user gczsjdy opened a pull request:
https://github.com/apache/spark/pull/19755
[SPARK-22524] Subquery shows reused on UI SQL tab even if it's not reused
After manually disabling the `reuseSubquery` rule, the subqueries won't be
reused. But on the SQL graph, there is only one
Github user gczsjdy commented on the issue:
https://github.com/apache/spark/pull/11403
@davies Hi, what do you mean by "Since all the planner only work with tree,
so this rule should be the last one for the entire planning."?
Thanks if you
Github user gczsjdy closed the pull request at:
https://github.com/apache/spark/pull/17359
Github user gczsjdy commented on the issue:
https://github.com/apache/spark/pull/17359
Sorry, but I think this is inactive. Thanks for your attention. @wzhfy
@viirya @gatorsmile