Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12902#discussion_r62087720
--- Diff: python/pyspark/sql/session.py ---
@@ -71,9 +71,6 @@ class SparkSession(object):
.config("spark.some.config.option&qu
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12809#issuecomment-217014407
Oh, thank you, @andrewor14 .
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/12911
[SPARK-15134][EXAMPLE] Indent SparkSession builder patterns and update
binary_classification_metrics_example.py
## What changes were proposed in this pull request?
This issue
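The `SparkSession` builder pattern that this PR re-indents can be sketched as a fluent builder. The class below is a minimal plain-Python toy, not pyspark itself — the class name and option keys are hypothetical; the point is only the chained-call style the PR standardizes:

```python
# Toy fluent builder (hypothetical, NOT pyspark) illustrating the indented
# chained-call style, e.g.:
#     spark = SparkSession \
#         .builder \
#         .appName("example") \
#         .getOrCreate()

class SessionBuilder:
    """Stand-in for SparkSession.builder; just collects config key/value pairs."""
    def __init__(self):
        self._options = {}

    def appName(self, name):
        self._options["spark.app.name"] = name
        return self  # returning self is what enables chaining

    def config(self, key, value):
        self._options[key] = value
        return self

    def getOrCreate(self):
        # The real builder returns a SparkSession; here we just return the options.
        return dict(self._options)

session = (SessionBuilder()
           .appName("BinaryClassificationMetricsExample")
           .config("spark.some.config.option", "some-value")
           .getOrCreate())
print(session["spark.app.name"])
```

Returning `self` from every setter is what makes the multi-line, indented chain possible.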
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12911#issuecomment-217022724
Hi, @andrewor14 .
To finish the `SparkSession builder pattern` work completely, I addressed your
remaining comments in this PR.
Jenkins will run a full test since
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12809#discussion_r62114250
--- Diff:
examples/src/main/java/org/apache/spark/examples/ml/JavaAFTSurvivalRegressionExample.java
---
@@ -21,23 +21,19 @@
import
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12809#discussion_r62116077
--- Diff:
examples/src/main/scala/org/apache/spark/examples/mllib/RegressionMetricsExample.scala
---
@@ -18,22 +18,22 @@
package
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12809#discussion_r62114371
--- Diff:
examples/src/main/java/org/apache/spark/examples/ml/JavaAFTSurvivalRegressionExample.java
---
@@ -21,23 +21,19 @@
import
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12809#discussion_r62115755
--- Diff:
examples/src/main/python/mllib/binary_classification_metrics_example.py ---
@@ -27,7 +27,7 @@
if __name__ == "__m
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12809#discussion_r62115862
--- Diff:
examples/src/main/java/org/apache/spark/examples/ml/JavaAFTSurvivalRegressionExample.java
---
@@ -21,23 +21,19 @@
import
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12809#issuecomment-217005321
Thank you for the review, again!
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12809#discussion_r62116382
--- Diff: python/pyspark/context.py ---
@@ -952,6 +952,11 @@ def dump_profiles(self, path
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/12831
Fix lint errors on hive module.
## What changes were proposed in this pull request?
This issue fixes or hides 181 Java linter errors introduced by SPARK-14987
which copied hive
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12830#issuecomment-216102730
Yes, right. And this can remove the `import` statements for `SparkConf` and
`SparkContext` for those people. It becomes much simpler. Cool. I will update my
PR
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12831#issuecomment-216108755
The PySpark error is irrelevant to this PR. I'll rebase to retrigger Jenkins.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12840#issuecomment-216401816
Hi, @jkbradley .
Is there any problem in this PR?
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12860#issuecomment-216601861
Thank you for the review, @davies . I'll update soon.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12831#issuecomment-216593697
Thank you, @srowen .
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12839#issuecomment-216593777
Thank you, @srowen .
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12719#discussion_r62450948
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -90,6 +90,8 @@ abstract class Optimizer
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12719#discussion_r62458835
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -617,6 +618,46 @@ object NullPropagation extends
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12980#issuecomment-217908304
Yep, I asked the same question before.
The answer was that Jenkins also does a clean build, as you mentioned for Travis
CI. In other words, in the AMP cluster
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12980#issuecomment-217958114
You're right. Maybe I didn't remember the exact phrase I read before.
By the way, 'this wastes a lot time for sbt building' is effectively different
from
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12980#issuecomment-218029962
Hi, @zsxwing .
Following your advice, I updated the PR: run `lint-java` after `mvn
install`. Thank you for the advice!
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12980#issuecomment-218038498
Travis CI finished with JDK7 (30 min) and JDK8 (31 min) in parallel.
https://travis-ci.org/dongjoon-hyun/spark/builds/129008329
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12980#issuecomment-218044302
I found the previous similar discussion on Maven Checkstyle overhead
with @zsxwing and @srowen here: https://github.com/apache/spark/pull/11438
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12376#issuecomment-210550372
The above is my opinion about your second question.
For the first question, we already got three +1 for adding `bround`.
(including yours, thank you
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12376#issuecomment-210549260
Hi, @markhamstra .
If we choose one of the `bround` variants, I think we had better choose the
one in Hive.
What do you think about that?
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12376#issuecomment-210550801
By the way, is Spark heading toward some SQL standard?
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12312#discussion_r59764270
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -519,22 +519,28 @@ object LikeSimplification
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12376#issuecomment-210086456
Hi, @davies .
Could you review this PR, please?
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12340#issuecomment-210087678
Hi, @rxin .
This is a PR to fix `HiveTypeCoercion.IfCoercion` behavior.
I think you're the best person to review this since it was originally
written
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12376#discussion_r59776011
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
@@ -1777,6 +1777,23 @@ object functions {
def round(e: Column, scale
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12376#discussion_r59776934
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
@@ -1777,6 +1777,23 @@ object functions {
def round(e: Column, scale
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12376#discussion_r59774065
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
@@ -1777,6 +1777,23 @@ object functions {
def round(e: Column, scale
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12376#issuecomment-210109615
Thank you, @marmbrus , @davies , @markhamstra .
@markhamstra . I really appreciate your attention and understand your
concern here correctly, but don't
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12312#issuecomment-210144411
Thank you for merging, @rxin !
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11605#issuecomment-210568744
Hi, @cenyuhai .
If you don't mind, could you close this PR? It's already merged.
Currently, only PRs against the master branch seem to be closed automatically
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12340#issuecomment-210610347
In Spark,
```
scala> sql("select not(0)").head
org.apache.spark.sql.AnalysisException: cannot resolve '(NOT 0)' due to
data type mismatc
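The error above can be illustrated with a toy type check. This is a sketch of the idea only — `AnalysisException` and `resolve_not` below are stand-ins, not Spark's analyzer code: `NOT` is defined only for boolean inputs, so `NOT(0)` fails analysis rather than being implicitly cast the way Hive would.

```python
# Illustrative-only type check mirroring why sql("select not(0)") fails:
# NOT accepts only boolean operands.

class AnalysisException(Exception):
    pass

def resolve_not(operand_type):
    """Return the result type of NOT(x), or raise like Spark's analyzer would."""
    if operand_type != "boolean":
        raise AnalysisException(
            "cannot resolve '(NOT %s)' due to data type mismatch" % operand_type)
    return "boolean"

resolve_not("boolean")   # fine
try:
    resolve_not("int")   # mirrors sql("select not(0)")
except AnalysisException as e:
    print(e)
```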
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12340#issuecomment-210609296
It's because that is the real behavior of `assert_true`. When I used an
implicit type cast yesterday, it worked differently from Hive's `assert_true`. For
example
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12353#issuecomment-210580549
Rebased to resolve conflicts.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12376#issuecomment-210558519
Sure. This year, Spark seems able to remove the Hive code. I agree that
Spark is better than Hive and we can do more. But, in terms of Hive
compatibility
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/12421
[SPARK-14664][SQL] Fix DecimalAggregates optimizer not to break Window
queries
## What changes were proposed in this pull request?
Historically, `DecimalAggregates` optimizer
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12421#issuecomment-210578353
Oh, thank you for the quick review. Actually, I tried that first.
There occur type-mismatch exceptions due to the difference from the input
schema. So
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12340#issuecomment-210592040
Thanks to you, I have changed the following so far.
* Move testcase into `HiveTypeCoercionSuite` in sql module.
* Create [SPARK-14655](https
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12434#issuecomment-210685474
Hi, @rxin.
May I prepare to change my previous PR #12353 about `maxCaseBranches`
according to this PR?
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12340#issuecomment-210682394
Hi, @rxin .
Do you think we need to implement `assert_true` with different behavior in
Spark?
If so, please let me know. I'll change that.
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12340#discussion_r60098395
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/misc.scala
---
@@ -487,6 +487,44 @@ case class PrintToStderr(child
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12353#discussion_r60096003
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/WholeStageCodegen.scala
---
@@ -305,7 +307,7 @@ case class WholeStageCodegen(child
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12340#discussion_r60096468
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/misc.scala
---
@@ -487,6 +487,44 @@ case class PrintToStderr(child
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12376#issuecomment-21142
Until now, I cannot find any reason to pursue another `bround`
implementation.
In addition, I think the current implementation of this PR is good
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12340#issuecomment-211540230
Thank you so much for your reviewing and merging, @rxin !
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12376#issuecomment-211508085
Thank you so much, @davies !
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12340#issuecomment-211550949
Oops. Sorry again.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11868#issuecomment-212020852
Hi, @JoshRosen .
Could you take a look at this PR about Row(Double.NaN) == Row(Float.NaN)
for a second when you have some time today? I'm going to close
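For context on the NaN question in that PR: under IEEE-754, `NaN != NaN`, so plain element-wise `==` on rows containing NaN disagrees with any NaN-normalizing row semantics. A minimal plain-Python sketch (an illustration, not Spark's `Row` implementation) of a NaN-aware comparison:

```python
import math

def rows_equal(a, b):
    """NaN-aware row comparison: treats NaN == NaN, unlike plain ==.
    Illustration of the question in the PR, not Spark's actual Row code."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        both_nan = (isinstance(x, float) and isinstance(y, float)
                    and math.isnan(x) and math.isnan(y))
        if not both_nan and x != y:
            return False
    return True

nan_row_1 = (float("nan"),)
nan_row_2 = (float("nan"),)
print(nan_row_1 == nan_row_2)            # False: IEEE-754 says NaN != NaN
print(rows_equal(nan_row_1, nan_row_2))  # True under NaN-normalizing semantics
```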
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12353#issuecomment-212013116
Thank you for merging, @cloud-fan ! :)
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12353#issuecomment-212013573
Also, thank you so much for your direct guidance, @rxin .
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12469#issuecomment-212023124
Hi, @jaceklaskowski .
What about using `[MINOR][DOCS]` prefix for this PR?
I just leave a note since I know you are a frequent contributor
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12353#issuecomment-210875400
Actually, it's not needed.
If there are some missing things to do, could you give me some advice?
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12340#issuecomment-210877367
Hi, @rxin
Does the latest code fit your intention?
If I missed something again, please let me know.
Thank you for review so far. I learned a little
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11868#issuecomment-212247762
Thank you for your kind reviews during last month, @rxin and @srowen .
I'm happily closing this PR. It's my bad to take so much time to close this
PR
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12509#issuecomment-212248772
I mean #12329 . I want to close that one in order to reduce committers'
review & merge cost.
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12509#discussion_r60347141
--- Diff: R/pkg/R/functions.R ---
@@ -1008,6 +1008,24 @@ setMethod("round",
column(jc)
})
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12509#issuecomment-212248562
Thank you, @davies !
Or, may I add the minor PR here? I remember that last time you preferred
that way.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12329#issuecomment-212257351
@davies . Thank you!
Github user dongjoon-hyun closed the pull request at:
https://github.com/apache/spark/pull/11868
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11868#issuecomment-212147570
Sure. As I mentioned today, I'm going to close this PR since Spark doesn't
want this.
I'm wondering what the intention of those test cases was.
Don't worry
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12509#discussion_r60327886
--- Diff: python/pyspark/sql/functions.py ---
@@ -467,16 +467,29 @@ def randn(seed=None):
@since(1.5)
def round(col, scale=0
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11868#issuecomment-212150790
It's just for a record.
In case there is no comment from @JoshRosen , I'll close this PR tonight.
Silence means many things and seems enough for this PR
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/12509
[WIP][SPARK-14639] Add `bround` function in Python/R.
## What changes were proposed in this pull request?
This issue aims to expose Scala `bround` function in Python/R API
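`bround` is "banker's rounding" (round half to even), matching Hive's BROUND semantics. A minimal Python sketch of those semantics, assuming HALF_EVEN ties as in Hive — this uses the standard-library `decimal` module for illustration, not the pyspark API this PR adds:

```python
from decimal import Decimal, ROUND_HALF_EVEN

def bround(value, scale=0):
    """HALF_EVEN ('banker's') rounding sketch of the `bround` semantics.
    HALF_UP would round 2.5 to 3; HALF_EVEN rounds to the even neighbor, 2."""
    q = Decimal(10) ** -scale                 # e.g. scale=1 -> Decimal('0.1')
    return float(Decimal(str(value)).quantize(q, rounding=ROUND_HALF_EVEN))

print(bround(2.5))      # 2.0: tie goes to the even neighbor
print(bround(3.5))      # 4.0
print(bround(2.25, 1))  # 2.2
```

Going through `Decimal(str(value))` avoids binary-float artifacts (e.g. 2.25 is exact, but many decimals are not) when deciding the tie.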
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12509#issuecomment-212179141
Hi, @davies .
Could you review this PR? After your comments, I studied and finally
learned how to expose a function in Python and R.
And, if you don't
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12509#discussion_r60333919
--- Diff: R/pkg/R/functions.R ---
@@ -1008,6 +1008,24 @@ setMethod("round",
column(jc)
})
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12340#issuecomment-21028
It's Hive built-in function. The following is function description.
```
hive> desc function assert_true;
OK
assert_true(condition) - Th
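Hive's `assert_true(condition)` throws an exception when the condition does not hold and otherwise returns NULL. A plain-Python sketch of those semantics, based on the Hive UDF description quoted above (an illustration, not Spark's or Hive's code):

```python
# Sketch of Hive assert_true semantics: raise on a false condition,
# otherwise return None (the UDF's result is void/NULL).

def assert_true(condition):
    if not condition:
        raise RuntimeError("ASSERT_TRUE(): assertion failed.")
    return None

print(assert_true(1 < 2))   # None: condition holds
try:
    assert_true(1 > 2)
except RuntimeError as e:
    print(e)
```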
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12340#discussion_r59826908
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HivePlanTest.scala
---
@@ -49,4 +49,12 @@ class HivePlanTest extends QueryTest
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12340#discussion_r59826449
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HivePlanTest.scala
---
@@ -49,4 +49,12 @@ class HivePlanTest extends QueryTest
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12340#issuecomment-210282267
Hmm. The following is
[GenericUDFAssertTrue](https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/udf/generic
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12353#issuecomment-209836535
Rebased to resolve conflicts.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12312#issuecomment-209837404
Hi, @rxin .
For `LikeSimplification`, is there something to do more?
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12376#issuecomment-209828163
Hi, @markhamstra ! Thank you for commenting.
I agree with your viewpoint. So, this PR has a meaning to add just a
function, `bround`; not a HQL language
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12340#issuecomment-210291837
I added the test cases into `HiveTypeCoercionSuite.test("type coercion for
If")`.
Thank you, @rxin. It should be there.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12340#issuecomment-210346484
Thank you for the fast review. I'll update and create the JIRA.
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12340#discussion_r59840175
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/misc.scala
---
@@ -487,6 +487,50 @@ case class PrintToStderr(child
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12340#discussion_r59828732
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/HiveTypeCoercionSuite.scala
---
@@ -348,6 +350,20 @@ class
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12340#discussion_r59828693
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/HiveTypeCoercionSuite.scala
---
@@ -348,6 +350,20 @@ class
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12353#discussion_r59618775
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
---
@@ -620,7 +622,7 @@ abstract class
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12547#discussion_r60509109
--- Diff: R/pkg/R/context.R ---
@@ -225,3 +225,17 @@ broadcast <- function(sc, object) {
setCheckpointDir <- function(sc, d
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/12547
[SPARK-14780][R] Add `setLogLevel` and `version` to SparkR
## What changes were proposed in this pull request?
This PR aims to add `setLogLevel` and `version` functions to SparkR
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12547#issuecomment-212672853
Sure~ I usually want to switch log levels for debugging purposes.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12547#issuecomment-212671826
Hi, @shivaram and @davies .
Could you review this PR when you have some time?
Whenever I use SparkR, I feel `setLogLevel` is really needed for me
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12562#issuecomment-212768175
Thank you for the review, @rxin .
I'll add an `OptimizeInSuite.scala` for this.
For the name, could you give me some advice?
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12547#discussion_r60524553
--- Diff: R/pkg/R/context.R ---
@@ -225,3 +225,17 @@ broadcast <- function(sc, object) {
setCheckpointDir <- function(sc, d
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/12562
[SPARK-14796][SQL] Add spark.sql.optimizer.minSetSize config option.
## What changes were proposed in this pull request?
Currently, `OptimizeIn` optimizer replaces `In` expression
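The `OptimizeIn` idea behind this PR — swap a linear `In` value list for a hash-based `InSet` once the list is large enough to repay the set-build cost — can be sketched in plain Python. The threshold constant below is hypothetical and only mirrors the intent of the proposed `spark.sql.optimizer.minSetSize` option; this is not Spark's optimizer code:

```python
# Toy version of the OptimizeIn rule (illustrative only).

MIN_SET_SIZE = 10  # hypothetical stand-in for spark.sql.optimizer.minSetSize

def optimize_in(values):
    """Return a membership predicate: a list scan for small lists (In),
    a prebuilt hash set for large ones (InSet)."""
    if len(values) < MIN_SET_SIZE:
        return lambda x: x in values      # In: O(n) scan, no build cost
    value_set = set(values)               # build once at "optimization" time
    return lambda x: x in value_set       # InSet: O(1) hash lookup per probe

small = optimize_in([1, 2, 3])
large = optimize_in(list(range(100)))
print(small(2), large(99), large(1000))
```

The trade-off the config option exposes is exactly this: below the threshold, building a set costs more than the few comparisons it saves.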
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12562#issuecomment-212768841
Oh, sorry. `OptimizeInSuite.scala` already exists. I'll fix it soon.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12353#issuecomment-211726421
Hi, @rxin . Now, the PR is updated in the following ways.
1. `CaseWhen` split into three classes:
- `CaseWhenBase`: abstract base class
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12353#discussion_r60179727
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/CatalystConf.scala ---
@@ -29,6 +29,7 @@ trait CatalystConf {
def
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12353#discussion_r60179863
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/conditionalExpressions.scala
---
@@ -142,16 +139,54 @@ case class
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12353#discussion_r60180011
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/OptimizeCodegenSuite.scala
---
@@ -0,0 +1,50
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12353#issuecomment-211600358
Thanks. I will create `OptimizeCodegen` then.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12353#issuecomment-211592882
I see. Thank you for the solution!
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12353#issuecomment-211599434
For the optimizer, may I implement this in `SimplifyConditionals`?
Or, should I create another one like `CaseWhenCodegen`?
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12353#issuecomment-211803070
Thank you for the review, @cloud-fan . It's fixed.