Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21847#discussion_r206353416
--- Diff:
external/avro/src/main/scala/org/apache/spark/sql/avro/AvroSerializer.scala ---
@@ -120,7 +133,7 @@ class AvroSerializer(rootCatalystType
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21847#discussion_r206350423
--- Diff:
external/avro/src/main/scala/org/apache/spark/sql/avro/AvroSerializer.scala ---
@@ -87,17 +87,30 @@ class AvroSerializer(rootCatalystType
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21904#discussion_r206333426
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala
---
@@ -416,6 +450,12 @@ object SimplifyConditionals
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21852#discussion_r206271589
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala
---
@@ -416,6 +416,23 @@ object SimplifyConditionals
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21852#discussion_r206266243
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala
---
@@ -416,6 +416,23 @@ object SimplifyConditionals
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21904#discussion_r205963712
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala
---
@@ -416,6 +416,29 @@ object SimplifyConditionals
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21852#discussion_r205946975
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala
---
@@ -416,6 +416,23 @@ object SimplifyConditionals
GitHub user dbtsai opened a pull request:
https://github.com/apache/spark/pull/21904
[SPARK-24953] [SQL] Prune a branch in `CaseWhen` if previously seen
## What changes were proposed in this pull request?
If a condition in a branch is previously seen, this branch can be
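The idea in this (truncated) description can be sketched with a simplified stand-in for Catalyst's `CaseWhen`. The classes and helper below are hypothetical miniatures for illustration, not Spark's actual implementation; in particular, Catalyst compares conditions by semantic equality, while plain structural equality stands in here.

```scala
// Hypothetical miniature of Catalyst's CaseWhen, for illustration only.
sealed trait Expr
case class Literal(value: Any) extends Expr
case class CaseWhen(branches: Seq[(Expr, Expr)], elseValue: Option[Expr]) extends Expr

// Sketch of the SPARK-24953 idea: a branch whose condition already appeared
// in an earlier branch can never fire (the earlier branch matches first),
// so it can be pruned.
def pruneSeenBranches(cw: CaseWhen): CaseWhen = {
  val seen = scala.collection.mutable.Set.empty[Expr]
  val kept = cw.branches.filter { case (cond, _) =>
    if (seen.contains(cond)) false
    else { seen += cond; true }
  }
  cw.copy(branches = kept)
}
```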
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21852
+cc @cloud-fan and @gatorsmile
---
-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21850#discussion_r205830257
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala
---
@@ -414,6 +414,9 @@ object SimplifyConditionals extends
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21847#discussion_r205692946
--- Diff:
external/avro/src/main/scala/org/apache/spark/sql/avro/AvroSerializer.scala ---
@@ -165,16 +183,112 @@ class AvroSerializer(rootCatalystType
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21847#discussion_r205692778
--- Diff:
external/avro/src/main/scala/org/apache/spark/sql/avro/AvroSerializer.scala ---
@@ -165,16 +183,112 @@ class AvroSerializer(rootCatalystType
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21847#discussion_r205684257
--- Diff:
external/avro/src/main/scala/org/apache/spark/sql/avro/AvroSerializer.scala ---
@@ -165,16 +183,112 @@ class AvroSerializer(rootCatalystType
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21847#discussion_r205685728
--- Diff:
external/avro/src/main/scala/org/apache/spark/sql/avro/AvroSerializer.scala ---
@@ -165,16 +183,112 @@ class AvroSerializer(rootCatalystType
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21847#discussion_r205683257
--- Diff:
external/avro/src/main/scala/org/apache/spark/sql/avro/AvroSerializer.scala ---
@@ -148,7 +165,8 @@ class AvroSerializer(rootCatalystType
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21847#discussion_r205648911
--- Diff:
external/avro/src/main/scala/org/apache/spark/sql/avro/AvroSerializer.scala ---
@@ -87,17 +88,33 @@ class AvroSerializer(rootCatalystType
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21852#discussion_r205599224
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala
---
@@ -416,6 +416,29 @@ object SimplifyConditionals
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21850#discussion_r205556780
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala
---
@@ -414,6 +414,9 @@ object SimplifyConditionals extends
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21847
+cc @MaxGekk and @gengliangwang, who worked on this part of the codebase.
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21852#discussion_r205306098
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala
---
@@ -416,6 +416,22 @@ object SimplifyConditionals
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21852#discussion_r205305691
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/SimplifyConditionalSuite.scala
---
@@ -122,4 +126,25 @@ class
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21850
@gatorsmile All the new rules added for `If` should always have a `CaseWhen`
version.
But there will be times when we only add the `If` version, or it only makes
sense to have the `If` version
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21850#discussion_r205187664
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala
---
@@ -414,6 +414,16 @@ object SimplifyConditionals
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21850
retest this please
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21850#discussion_r204953356
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala
---
@@ -414,6 +414,16 @@ object SimplifyConditionals
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21850#discussion_r204953202
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala
---
@@ -414,6 +414,16 @@ object SimplifyConditionals
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21848
Here is a followup PR for making `AssertTrue` and `AssertNotNull`
`non-deterministic` https://issues.apache.org/jira/browse/SPARK-24913
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21864
LGTM. Merged into master.
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21848
@kiszk `trait Stateful extends Nondeterministic`, and this rule will not be
invoked when an expression is nondeterministic
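The trait relationship being referenced can be mirrored in a tiny sketch. Only the trait names `Stateful` and `Nondeterministic` match Spark's; the concrete classes are hypothetical.

```scala
// Illustrative miniature of the Catalyst trait relationship mentioned above.
trait Expression { def deterministic: Boolean = true }
trait Nondeterministic extends Expression { override def deterministic: Boolean = false }
trait Stateful extends Nondeterministic

case class CurrentRandom() extends Stateful  // hypothetical stateful expression
case class Add() extends Expression          // hypothetical deterministic expression

// A rule guarded on determinism automatically skips Stateful expressions,
// because Stateful extends Nondeterministic.
def eligibleForRule(e: Expression): Boolean = e.deterministic
```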
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21848#discussion_r204938763
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects/objects.scala
---
@@ -1627,6 +1627,8 @@ case class
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21850#discussion_r204933164
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala
---
@@ -414,6 +414,9 @@ object SimplifyConditionals extends
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21850#discussion_r204933531
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala
---
@@ -414,6 +414,9 @@ object SimplifyConditionals extends
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21850
retest this please
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21848
This will simplify the scope of this PR a lot by just having both
`AssertTrue` and `AssertNotNull` as `non-deterministic` expressions. My concern
is that the more `non-deterministic` expressions we have
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21850
retest this please
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21850
@cloud-fan and @gatorsmile
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21850#discussion_r204560250
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala
---
@@ -414,6 +414,12 @@ object SimplifyConditionals
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21848
@gatorsmile this can remove some of the expensive condition expressions, so
I would like to find a way to properly implement this.
Thank you all for chiming in with many good points. Let me
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21848#discussion_r204546087
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala
---
@@ -390,6 +390,7 @@ object SimplifyConditionals extends
GitHub user dbtsai opened a pull request:
https://github.com/apache/spark/pull/21852
[SPARK-24893] [SQL] Remove the entire CaseWhen if all the outputs are
semantically equivalent
## What changes were proposed in this pull request?
Similar to SPARK-24890, if all the outputs of
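A minimal sketch of this rule, using hypothetical stand-ins rather than Spark's classes: structural equality stands in for Catalyst's semantic equivalence, and the real rule must also ensure the discarded conditions have no side effects.

```scala
// Hypothetical miniature of Catalyst's CaseWhen, for illustration only.
sealed trait Expr
case class Literal(value: Any) extends Expr
case class CaseWhen(branches: Seq[(Expr, Expr)], elseValue: Option[Expr]) extends Expr

// Sketch of the SPARK-24893 idea: if every branch output and the else value
// are the same, the whole CaseWhen collapses to that value.
def collapseIfAllSame(cw: CaseWhen): Expr = cw.elseValue match {
  case Some(e) if cw.branches.forall(_._2 == e) => e
  case _ => cw // without an else, an unmatched row yields null, so keep it
}
```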
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21848#discussion_r204516658
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala
---
@@ -390,6 +390,7 @@ object SimplifyConditionals extends
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21848#discussion_r204515560
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala
---
@@ -390,6 +390,7 @@ object SimplifyConditionals extends
GitHub user dbtsai opened a pull request:
https://github.com/apache/spark/pull/21850
[SPARK-24892] [SQL] Simplify `CaseWhen` to `If` when there is only one
branch
## What changes were proposed in this pull request?
After the rule of removing the unreachable branches, it
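A sketch of the transformation on hypothetical miniature classes (not Spark's actual `CaseWhen`/`If`):

```scala
// Hypothetical miniatures of Catalyst's conditional expressions.
sealed trait Expr
case class Literal(value: Any) extends Expr
case class If(pred: Expr, trueValue: Expr, falseValue: Expr) extends Expr
case class CaseWhen(branches: Seq[(Expr, Expr)], elseValue: Option[Expr]) extends Expr

// Sketch of the SPARK-24892 idea: a single-branch CaseWhen is just an If,
// which the If-specific rules can then optimize further.
def caseWhenToIf(expr: Expr): Expr = expr match {
  case CaseWhen(Seq((cond, value)), elseValue) =>
    If(cond, value, elseValue.getOrElse(Literal(null)))
  case other => other
}
```

Rewriting to `If` lets the optimizer reuse all the `If`-specific rules discussed elsewhere in this thread.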
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21848#discussion_r204509378
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala
---
@@ -403,14 +404,14 @@ object SimplifyConditionals
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21848#discussion_r204508986
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala
---
@@ -651,6 +652,7 @@ object
GitHub user dbtsai opened a pull request:
https://github.com/apache/spark/pull/21848
[SPARK-24890] [SQL] Short circuiting the `if` condition when `trueValue` and
`falseValue` are the same
## What changes were proposed in this pull request?
When `trueValue` and `falseValue
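The (truncated) idea can be sketched as follows, on hypothetical miniature classes. The guard on determinism reflects the concern raised later in this thread: collapsing is only safe when evaluating the predicate has no observable effect.

```scala
// Hypothetical miniature of Catalyst's If expression.
sealed trait Expr { def deterministic: Boolean = true }
case class Literal(value: Any) extends Expr
case class If(pred: Expr, trueValue: Expr, falseValue: Expr) extends Expr

// Sketch of the SPARK-24890 idea: when both outcomes are the same, the If
// collapses to that value, skipping evaluation of the condition entirely.
def shortCircuitIf(e: If): Expr =
  if (e.trueValue == e.falseValue && e.pred.deterministic) e.trueValue else e
```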
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21847
add to whitelist
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21797
Merged into master as it passes the build now.
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21797
Test it again.
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21797#discussion_r203169950
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala
---
@@ -218,15 +218,24 @@ object ReorderAssociativeOperator
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21442
I opened a new PR at https://github.com/apache/spark/pull/21797/files Will
work on the test issue there. Thanks.
GitHub user dbtsai opened a pull request:
https://github.com/apache/spark/pull/21797
[SPARK-24402] [SQL] Optimize `In` expression when only one element in the
collection or collection is empty
## What changes were proposed in this pull request?
Two new rules in the logical
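The two rules can be sketched on hypothetical miniature classes (not Spark's actual `In`/`EqualTo`); the sketch glosses over SQL null semantics, which the real rules must respect.

```scala
// Hypothetical miniatures of Catalyst's predicate expressions.
sealed trait Expr
case class Literal(value: Any) extends Expr
case class Attr(name: String) extends Expr
case class EqualTo(left: Expr, right: Expr) extends Expr
case class In(value: Expr, list: Seq[Expr]) extends Expr

// Sketch of the SPARK-24402 idea: `x IN (a)` becomes `x = a`, which data
// sources can push down as an equality predicate, and `x IN ()` is
// constant false.
def optimizeIn(in: In): Expr = in.list match {
  case Seq()       => Literal(false)
  case Seq(single) => EqualTo(in.value, single)
  case _           => in
}
```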
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21442
@HyukjinKwon thanks for bringing this to my attention. @gatorsmile I
thought the bug was found by this PR, not introduced in this PR. This PR is blocked
until SPARK-24443 is addressed. I'll unblock th
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21756
Jenkins, ok to test
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21756
add to whitelist
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21749#discussion_r201793790
--- Diff:
repl/scala-2.11/src/main/scala/org/apache/spark/repl/SparkILoop.scala ---
@@ -116,6 +124,132 @@ class SparkILoop(in0: Option[BufferedReader], out
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21495
JIRA and PR are created to make sure the messages are printed in the right
order.
https://issues.apache.org/jira/browse/SPARK-24785
https://github.com/apache/spark/pull/21749
GitHub user dbtsai opened a pull request:
https://github.com/apache/spark/pull/21749
[SPARK-24785] [SHELL] Making sure REPL prints Spark UI info and then
Welcome message
## What changes were proposed in this pull request?
After https://github.com/apache/spark/pull/21495
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21459
Thanks. Merged into master.
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21459
There are three approvals from the committers, and the changes are pretty
trivial to revert if we see any performance regression, which is unlikely. To
move things forward, if there is no further
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21692
@viirya thanks for this PR. I thought SBT always uses the pom for dependencies,
and I wonder why there is a discrepancy such that we need to manually override it
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21495
I was on family leave for a couple of weeks. Thank you all for helping out and
merging it.
The only change with this PR is that the welcome message will be printed
first, and then the Spark
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21495
retest this please.
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21495
I decided to remove the hack I put in to keep the Spark UI output consistent,
since this hack will bring in more problems.
@som-snytt Is it possible to move the `printWelcome` and `splash.start
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21519
Merged into master as it's only documentation changes. Congratulations on your
first PR! Thanks!
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21519
add to whitelist
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21495#discussion_r194176806
--- Diff:
repl/scala-2.11/src/main/scala/org/apache/spark/repl/SparkILoopInterpreter.scala
---
@@ -21,8 +21,22 @@ import scala.collection.mutable
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21495#discussion_r193981694
--- Diff:
repl/scala-2.11/src/main/scala/org/apache/spark/repl/SparkILoopInterpreter.scala
---
@@ -21,8 +21,22 @@ import scala.collection.mutable
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21495
@som-snytt initializing it in `printWelcome` will not work since in older
versions of Scala, `printWelcome` is the last one to be executed
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21495#discussion_r193927042
--- Diff:
repl/scala-2.11/src/main/scala/org/apache/spark/repl/SparkILoopInterpreter.scala
---
@@ -21,8 +21,22 @@ import scala.collection.mutable
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21495
retest this please
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21453
I opened a PR for Scala 2.11.12 with Scala API change fix.
https://github.com/apache/spark/pull/21495 Thanks.
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21495#discussion_r192961905
--- Diff:
repl/scala-2.11/src/main/scala/org/apache/spark/repl/SparkILoopInterpreter.scala
---
@@ -21,8 +21,22 @@ import scala.collection.mutable
GitHub user dbtsai opened a pull request:
https://github.com/apache/spark/pull/21495
[SPARK-24418][Build] Upgrade Scala to 2.11.12 and 2.12.6
## What changes were proposed in this pull request?
(Please fill in changes proposed in this fix)
## How was this patch
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21453
I filed an issue with the Scala community about the interface changes, and they
said those REPL APIs are intended to be private.
https://github.com/scala/bug/issues/10913
That being said, they
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21459
@rxin Mostly for Java 9+.
[ASM 6.x](https://mvnrepository.com/artifact/org.ow2.asm/asm/6.0/usages)
has been proven in many projects such as FB Presto, Google Guice Core Library,
CGLIB
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21459
@srowen From the release notes, http://asm.ow2.io/versions.html , the
differences between 5.2 and 6.0 are
> Codebase migrated to gitlab (feature requests 317617, 317619, 317542)
Supp
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21459#discussion_r191846953
--- Diff: pom.xml ---
@@ -313,13 +313,13 @@
chill-java
${chill.version}
-
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21459
cc @srowen and @rxin for more eyes.
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21459
@HyukjinKwon Once Spark can be built against JDK9+, we'll need to figure
out how we want to set it up in Jenkins. We can do two builds, one for JDK8
and another for JDK9+ for each PR
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21458
Merged into master. Thanks.
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21459
@felixcheung Yes. All tests are passed with JDK8.
GitHub user dbtsai opened a pull request:
https://github.com/apache/spark/pull/21459
[SPARK-24420][Build] Upgrade ASM to 6.1 to support JDK9+
## What changes were proposed in this pull request?
Upgrade ASM to 6.1 to support JDK9+
## How was this patch tested
GitHub user dbtsai opened a pull request:
https://github.com/apache/spark/pull/21458
[SPARK-24419] [Build] Upgrade SBT to 0.13.17 with Scala 2.10.7 for JDK9+
## What changes were proposed in this pull request?
Upgrade SBT to 0.13.17 with Scala 2.10.7 for JDK9
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21453
Here is the issue in Scala side. https://github.com/scala/bug/issues/10913
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21453
I'm also looking at this issue. The challenge is that one of the hacks we
use to initialize Spark
before the REPL sees any files was removed in Scala 2.11.12.
https://github.com/a
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21416
Merged into master. Thank you everyone for reviewing.
A followup PR will be created for:
1. Adding tests in Java.
2. Adding docs about automagical type casting
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21416#discussion_r191506417
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Column.scala ---
@@ -786,6 +787,24 @@ class Column(val expr: Expression) extends Logging
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21416#discussion_r191505830
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/ColumnExpressionSuite.scala ---
@@ -390,11 +394,67 @@ class ColumnExpressionSuite extends QueryTest
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21416
retest this please
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21442
@maropu I didn't do a full performance benchmark. I believe the performance
gain can come from predicate pushdown when there is only one element in the
set. This can be a lot.
I forgot which o
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21442#discussion_r191320828
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala
---
@@ -219,7 +219,14 @@ object ReorderAssociativeOperator
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21416
@rxin I simplified the test cases as you suggested. Thanks.
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21416#discussion_r191317978
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/ColumnExpressionSuite.scala ---
@@ -392,9 +396,97 @@ class ColumnExpressionSuite extends QueryTest
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21416#discussion_r191317980
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/ColumnExpressionSuite.scala ---
@@ -392,9 +396,97 @@ class ColumnExpressionSuite extends QueryTest
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21416
retest this please
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21442
+@cloud-fan
GitHub user dbtsai opened a pull request:
https://github.com/apache/spark/pull/21442
[SPARK-24402] [SQL] Optimize `In` expression when only one element in the
collection or collection is empty
## What changes were proposed in this pull request?
Two new rules in the logical
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21416
retest this please
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/21416
@cloud-fan unfortunately, Scala varargs cannot be overloaded, and Scala
will return the following error:
```scala
Error:(410, 32) ambiguous reference to overloaded definition,
both
```