Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12840#issuecomment-218995870
Hi, @MLnick .
Could you review this PR when you have some time?
This is just about enabling the commented-out test code according to the
TODO comments.
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/13097
[SPARK-15244][PYSPARK] Type of column name created with createDataFrame is
not consistent.
## What changes were proposed in this pull request?
**createDataFrame** returns
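The PR description above is cut off. As an editor's illustration only — a hypothetical sketch, not Spark's actual code — the kind of fix SPARK-15244 describes is normalizing column names so the schema exposes one consistent string type regardless of how the caller spelled them:

```python
# Hypothetical sketch (not Spark's implementation) of column-name
# normalization: coerce every name to str so downstream code sees one type.
# bytes here stands in for Python 2's str/unicode split.

def normalize_names(names):
    """Coerce every column name to str so the schema exposes a single type."""
    out = []
    for name in names:
        if isinstance(name, bytes):
            name = name.decode("utf-8")  # decode byte strings first
        out.append(str(name))
    return out

print(normalize_names([b"id", u"value"]))  # ['id', 'value']
```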
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13125#issuecomment-219477839
Thank you for review, @MLnick .
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13097#issuecomment-219501873
Thank you for review, @andrewor14 .
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13097#discussion_r63401665
--- Diff: python/pyspark/sql/session.py ---
@@ -393,6 +393,8 @@ def createDataFrame(self, data, schema=None,
samplingRatio=None):
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13097#discussion_r63400593
--- Diff: python/pyspark/sql/session.py ---
@@ -393,6 +393,8 @@ def createDataFrame(self, data, schema=None,
samplingRatio=None):
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13097#discussion_r63401357
--- Diff: python/pyspark/sql/session.py ---
@@ -393,6 +393,8 @@ def createDataFrame(self, data, schema=None,
samplingRatio=None):
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13097#issuecomment-219510310
Thank you for review, @davies . I'll fix in an hour.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13097#issuecomment-219539089
Thank you, @andrewor14 and @davies .
I moved the test case into `sql/tests.py` and it passed the tests again.
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/13125
[MINOR][DOCS] Replace remaining 'sqlContext' in ScalaDoc/JavaDoc.
## What changes were proposed in this pull request?
According to the recent change, this PR replaces all
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13097#issuecomment-219005626
Hi, @andrewor14 .
Could you review this PR?
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12719#issuecomment-218950477
Rebase to resolve conflicts.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13087#issuecomment-218949073
Hi, @liancheng and @cloud-fan .
This PR is similar to your commit, `[SPARK-13473][SQL] Don't push
predicate through project with nondeterministic field(s
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13087#issuecomment-218939725
There are several cases that assume a UDF is deterministic. It would be a
big change for users. I'll revert the change on ScalaUDF, and update this PR to
change
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13100#issuecomment-219110619
Hi, @techaddict . I already made a PR for this.
https://github.com/apache/spark/pull/12840
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13100#issuecomment-219110810
The JIRA issue is SPARK-15058.
You'd better remove the overlapping parts.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13158#issuecomment-219977931
@srowen . I updated the PR to use `TimeZone.getDefault()`. Thank you again.
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13158#discussion_r63675265
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/AllJobsPage.scala ---
@@ -172,6 +172,7 @@ private[ui] class AllJobsPage(parent: JobsTab) extends
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13158#discussion_r63674102
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/AllJobsPage.scala ---
@@ -172,6 +172,8 @@ private[ui] class AllJobsPage(parent: JobsTab) extends
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13164#issuecomment-219952488
Hi, @zhengruifeng .
Could you run `lint-java`, too?
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13158#discussion_r63671930
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/AllJobsPage.scala ---
@@ -172,6 +172,8 @@ private[ui] class AllJobsPage(parent: JobsTab) extends
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13158#discussion_r63675768
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/AllJobsPage.scala ---
@@ -172,6 +172,7 @@ private[ui] class AllJobsPage(parent: JobsTab) extends
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13164#issuecomment-219956297
It takes less than 30 minutes in a clean build. :)
```
build/mvn -T 4 -q -DskipTests -Pyarn -Phadoop-2.3 -Pkinesis-asl -Phive
-Phive-thriftserver install
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13158#discussion_r63673599
--- Diff:
core/src/main/resources/org/apache/spark/ui/static/timeline-view.js ---
@@ -26,7 +26,10 @@ function drawApplicationTimeline(groupArray
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13158#issuecomment-220075551
retest this please
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13145#issuecomment-220079321
Thank you, @srowen.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12719#issuecomment-220100881
Sure! I'll investigate more and let you know.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13158#issuecomment-220105230
retest this please
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13158#issuecomment-220106059
Oh, did I do that? Sorry, but in which file did I do that?
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13158#issuecomment-220106807
I see. The `js` files. I'll fix them. Thank you.
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13158#discussion_r63751712
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/AllJobsPage.scala ---
@@ -172,6 +172,7 @@ private[ui] class AllJobsPage(parent: JobsTab) extends
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13158#issuecomment-220109621
Oh, wait a moment. I'm testing the refactored method. I'll update the PR
very soon.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13158#issuecomment-220110453
I'm done. You can test locally now with the up-to-date code.
Thank you for review and local testing, @zsxwing !
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12719#issuecomment-218364963
Anyway, I made a long detour for this PR. Sorry for that, @cloud-fan , and
thank you again.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13043#issuecomment-218365410
Thank you, @rxin .
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/13087
[SPARK-15282][SQL] UDF function is not always deterministic and needs to be
evaluated once.
## What changes were proposed in this pull request?
UDF functions might be non
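The description is truncated here, but the motivation can be sketched in plain Python (an editor's toy example, not Spark code): if an optimizer duplicates a non-deterministic UDF call, the copies can produce diverging values, so such a call should be evaluated only once and its result reused.

```python
import random

# Toy sketch (not Spark code): why an optimizer must not duplicate a
# non-deterministic UDF. Re-evaluating the "same" call per use -- e.g. once
# in a filter and once in a projection -- gives independent random values.

def udf():
    return random.randint(0, 10**9)

# Naive duplication: two independent evaluations of the same expression.
a, b = udf(), udf()

# Evaluating once and reusing the cached result keeps the value stable.
v = udf()
cached_consistent = (v == v)

print(cached_consistent)  # True
```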
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12719#discussion_r62800830
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -618,6 +619,48 @@ object NullPropagation extends
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12719#discussion_r62801060
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -618,6 +619,48 @@ object NullPropagation extends
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12719#discussion_r62803194
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -618,6 +619,48 @@ object NullPropagation extends
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12719#issuecomment-218388787
I addressed your comments. Thank you for lots of advice.
If there is something to fix again, please let me know.
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12719#discussion_r62800947
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -618,6 +619,48 @@ object NullPropagation extends
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12719#discussion_r62801321
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -618,6 +619,48 @@ object NullPropagation extends
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12719#discussion_r62802207
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -618,6 +619,48 @@ object NullPropagation extends
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12719#discussion_r62802115
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -618,6 +619,48 @@ object NullPropagation extends
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12719#discussion_r62805230
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -618,6 +619,48 @@ object NullPropagation extends
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12719#issuecomment-218560535
Hi, @cloud-fan . Now, it passed again.
For the remaining comments, I can do that, but the reason is not clear to me.
> I prefer to handle one Project e
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12719#discussion_r62808356
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -618,6 +619,48 @@ object NullPropagation extends
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12719#discussion_r62808854
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -618,6 +619,48 @@ object NullPropagation extends
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12719#discussion_r62811803
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -618,6 +619,48 @@ object NullPropagation extends
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12719#issuecomment-215893453
Hi, @cloud-fan .
Now, it's ready for review as `FoldablePropagation` optimizer.
I hope the current direction is right. Any comments are welcome. Thanks
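As a rough illustration of what a `FoldablePropagation` rule does — a toy sketch with hypothetical names, not the actual `Optimizer.scala` code — an alias bound to a literal can be substituted into later expressions, enabling further constant folding:

```python
# Toy sketch of foldable propagation (hypothetical helper, not Spark's rule):
# if a projected column is an alias of a literal, later references to that
# column can be replaced by the literal itself.

def propagate_foldables(aliases, expr):
    """expr is a token list; replace tokens that name a foldable alias."""
    return [aliases.get(tok, tok) for tok in expr]

# SELECT 3 AS now ... ORDER BY now + x   =>   ORDER BY 3 + x
aliases = {"now": 3}
print(propagate_foldables(aliases, ["now", "+", "x"]))  # [3, '+', 'x']
```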
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12860#issuecomment-216623388
Thank you, @davies and @andrewor14 .
Ya, it's still evolving! No problem. After merging #12873 , I'll update
accordingly again.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12860#issuecomment-216614939
@davies . I addressed two comments, but I'm not sure about the first one.
We need to change `ScalaSession.Builder` first if we want to change it.
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12860#discussion_r61837000
--- Diff: python/pyspark/sql/session.py ---
@@ -445,6 +446,77 @@ def read(self):
"""
return DataFrameReade
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/12860
[SPARK-15084][PYTHON][SQL] Use builder pattern to create SparkSession in
PySpark.
## What changes were proposed in this pull request?
This is a python port of corresponding Scala
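A minimal sketch of the builder pattern being ported here (hypothetical names modeled on `SparkSession.builder`; not the real PySpark implementation): each setter records an option and returns `self` so calls chain, and a final `getOrCreate` produces the session.

```python
# Minimal, hypothetical builder-pattern sketch (names only mimic
# SparkSession.builder; this is not PySpark's actual code).

class Session:
    def __init__(self, options):
        self.options = options

class Builder:
    def __init__(self):
        self._options = {}

    def appName(self, name):
        self._options["appName"] = name
        return self                      # return self to allow chaining

    def config(self, key, value):
        self._options[key] = value
        return self

    def getOrCreate(self):
        return Session(dict(self._options))

session = Builder().appName("demo").config("spark.ui.enabled", "false").getOrCreate()
print(session.options["appName"])        # demo
```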
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12719#issuecomment-216446767
Ah, I didn't expect that you were on vacation. Sorry for asking too much.
As for the generalization, I got it. Shame on me; I did it in too narrow a
way.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12860#issuecomment-216429487
@rxin .
This is the initial commit to confirm the direction. Could you give me some
advice?
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12860#issuecomment-21678
Great! Thank you, @rxin
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12860#issuecomment-216706125
Hi, @davies and @andrewor14 . Now, it's updated.
- Add `stop` in `SparkSession`
- Update builder pattern according to the Scala versions.
- One
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12860#discussion_r61970538
--- Diff: python/pyspark/sql/session.py ---
@@ -58,10 +59,16 @@ def toDF(self, schema=None, sampleRatio=None):
class SparkSession
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12860#discussion_r61968765
--- Diff: python/pyspark/sql/session.py ---
@@ -58,10 +59,16 @@ def toDF(self, schema=None, sampleRatio=None):
class SparkSession
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12860#issuecomment-216724708
Thank you, @andrewor14 !
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12719#discussion_r62269823
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects.scala
---
@@ -662,7 +662,7 @@ case class AssertNotNull(child
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12911#issuecomment-217295694
Thank you, @andrewor14 !
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12719#issuecomment-217472671
Hi, @cloud-fan .
Now, it's ready for review.
Could you review this when you have some time?
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12839#issuecomment-216290747
Hi, @srowen . Thank you for review!
Then, may I just update this PR by simply removing the `TODO` comments?
```
// TODO(crankshaw) turn result
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12831#issuecomment-216291461
Thank you for review, @hvanhovell and @srowen .
For `PasswdAuthenticationProvider`, I didn't change that due to the same
reason mentioned by @srowen
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12809#issuecomment-216080409
Thank you for feedback, @rxin . Sure!
Also, congratulations on branching 2.0. :)
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12809#issuecomment-216057809
Hi, @rxin .
Could you merge this PR please?
Or, please let me know if there is something to do more.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12719#issuecomment-216057898
Hi, @cloud-fan .
Please let me know if there is something to do more.
Thank you.
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12590#discussion_r61770577
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -1440,6 +1441,18 @@ object
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12719#issuecomment-216323704
Hi, @marmbrus .
Could you review this PR about `FoldablePropagation` optimization, too?
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12839#issuecomment-216332953
Sure. I just thought the `enum` improves the `return` and `match`
exhaustiveness, but I agree with your opinion, too.
I'll update this PR to just remove the TODO comments.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12590#issuecomment-216339477
@marmbrus . Now, it's ready for review again.
This PR turned out much better than I expected.
Thank you so much.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12839#issuecomment-216331726
Oh, actually, the ABCD notation comes from the description of that function.
https://github.com/dongjoon-hyun/spark/blob/SPARK-15057/graphx/src/main/scala/org
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12839#issuecomment-216334582
Thank you for making the decision on this PR, @srowen .
I updated the PR and JIRA accordingly. Now, this just removes the comment.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12839#issuecomment-216336054
Could you merge this PR, @srowen ?
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12590#issuecomment-216341181
Thank you, @marmbrus !
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12830#discussion_r61702208
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala
---
@@ -635,6 +642,122 @@ class SparkSession private(
object
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12830#issuecomment-216102294
Thank you for notifying me. It looks good to me. Then, the three-line
pattern will be replaced with one factory statement, right?
**Spark 1.x
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12830#discussion_r61706770
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala
---
@@ -635,6 +642,122 @@ class SparkSession private(
object
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12830#discussion_r61707542
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala
---
@@ -635,6 +642,122 @@ class SparkSession private(
object
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/12840
[SPARK-15058][ML][TEST] Enable Java DecisionTree Save/Load test codes.
## What changes were proposed in this pull request?
This issue enables six Java DecisionTree save/load tests
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/12839
[SPARK-15057][GRAPHX] Use enum Quadrant in GraphGenerators
## What changes were proposed in this pull request?
Currently, `chooseCell` and `pickQuadrant` of `GraphGenerators` uses
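The description is truncated; as an illustrative sketch only — in Python rather than the PR's Scala, with hypothetical helper names — replacing magic quadrant numbers in R-MAT cell selection with a named enum could look like this:

```python
from enum import Enum

# Hypothetical sketch (not GraphGenerators' code): named quadrant members
# instead of magic numbers, so matches over quadrants are self-describing.

class Quadrant(Enum):
    A = 0  # top-left
    B = 1  # top-right
    C = 2  # bottom-left
    D = 3  # bottom-right

def pick_quadrant(p, a, b, c):
    """Pick a quadrant from cumulative probabilities (a + b + c + d = 1)."""
    if p < a:
        return Quadrant.A
    if p < a + b:
        return Quadrant.B
    if p < a + b + c:
        return Quadrant.C
    return Quadrant.D

print(pick_quadrant(0.95, 0.57, 0.19, 0.19))  # Quadrant.D
```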
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12809#issuecomment-216386899
Sure! I'm waiting for it. :) Thank you!
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12809#issuecomment-216390963
Oh! I misunderstood. :)
Thanks. I'll take that issue happily.
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/12850
[SPARK-15076][SQL] Improve ConstantFolding optimizer by using integral
associative property
## What changes were proposed in this pull request?
This issue improves `ConstantFolding
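A toy sketch of the associativity idea (hypothetical helper, not Spark's actual `ConstantFolding` rule): for integral types addition is associative, so a chain like `(x + 1) + 2` can be reassociated and its constants folded into `x + 3` even though `x` is unknown at optimization time.

```python
# Toy sketch of reassociating an integer addition chain and folding the
# constant part (hypothetical; not Spark's ConstantFolding implementation).

def fold_add_chain(terms):
    """terms: column names (str) and int literals joined by '+'."""
    consts = sum(t for t in terms if isinstance(t, int))
    cols = [t for t in terms if not isinstance(t, int)]
    # Keep the folded constant unless it is a redundant zero.
    return cols + ([consts] if consts or not cols else [])

# (x + 1) + 2   =>   x + 3
print(fold_add_chain(["x", 1, 2]))  # ['x', 3]
```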
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12809#issuecomment-216389243
Okay, I'll remove the Python-related stuff here first and keep it for later.
When do you expect to merge the new builder API for Python?
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12719#issuecomment-217315618
Rebased.
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12911#discussion_r62127104
--- Diff:
common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/ExternalShuffleBlockHandler.java
---
@@ -87,8 +87,11 @@ protected
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12980#issuecomment-218056112
Hi, @srowen .
I removed `RAT` and `lint-python` in order to focus on **lint-java**.
Also, `lint-scala` is removed since it's already done by `mvn install
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12980#issuecomment-218062547
@srowen
I added comments, too.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12719#issuecomment-218080532
Hi, @cloud-fan .
Now, it handles subplans of `Union` or `Command` queries, too.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12719#issuecomment-218080773
Locally, it passed the `catalyst`, `sql`, and `hive` test suites, which had
failures before.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12980#issuecomment-218229355
@srowen , I updated the comments, explaining the purpose of this test
harness clearly. I didn't describe the Jenkins-related stuff because
that's
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12850#issuecomment-218231334
Rebased to see the result with the re-enabled Hive queries.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12719#issuecomment-218231143
I rebased this to test re-enabled hive tests at
https://github.com/apache/spark/commit/2646265368aab0f0b800d3052e557dea7c40c2d6.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12809#issuecomment-216952309
Rebased to resolve conflicts.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12809#issuecomment-216953628
Hi, @rxin and @andrewor14 .
Now, Scala/Java/Python examples use new style.
Could you review this PR?
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12840#issuecomment-216954666
Rebased.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12840#issuecomment-216959841
Hi, @mengxr .
Could you review this PR when you have some time?