Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13087#issuecomment-220394299
Hi, @rxin .
Could you review this PR?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12719#issuecomment-220233909
@cloud-fan .
Now, it's ready again.
Could you merge this PR?
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13188#discussion_r63826284
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/tpcds/TPCDSQueryBenchmark.scala
---
@@ -0,0 +1,106
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13245#issuecomment-220880599
Sorry, extending `SparkSession` for RDD wasn't a good idea. I'm closing
this PR.
---
Github user dongjoon-hyun closed the pull request at:
https://github.com/apache/spark/pull/13245
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13087#issuecomment-220857498
I added it for Scaladoc/Javadoc/Pydoc, but I cannot find a proper place
exposed in the SparkR docs.
Hi, @shivaram , @davies , @felixcheung .
Should we need
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13087#issuecomment-220858268
Thank you, @felixcheung ! Then, this PR is enough for the current master
branch. :)
If SparkR has something related to this in the future, we can add a note
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13192#issuecomment-220851707
Hi, @davies and @rxin .
Could you review this PR again when you have some time?
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13087#issuecomment-220855166
All issues are resolved. I'll update this PR now.
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13239#discussion_r64154982
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkStrategies.scala ---
@@ -359,8 +359,8 @@ private[sql] abstract class
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13087#issuecomment-220855050
Thank you for sharing this, @linbojin .
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13260#discussion_r64193997
--- Diff: core/src/test/scala/org/apache/spark/rdd/RDDSuite.scala ---
@@ -678,27 +678,26 @@ class RDDSuite extends SparkFunSuite
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13260#discussion_r64194767
--- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
@@ -550,17 +550,19 @@ abstract class RDD[T: ClassTag](
} else
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13260#discussion_r64194968
--- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
@@ -550,17 +550,19 @@ abstract class RDD[T: ClassTag](
} else
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13260#discussion_r64194892
--- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
@@ -550,17 +550,19 @@ abstract class RDD[T: ClassTag](
} else
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/13260
[SPARK-15481][CORE] Prevent `takeSample` from calling `collect` multiple
times
## What changes were proposed in this pull request?
`takeSample` might call `collect` multiple times
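To make the reported problem concrete, here is a simplified, pure-Python model of the rejection-resampling loop (illustrative only — the names `take_sample` and `sample_pass` are hypothetical, not Spark's actual implementation). Each `sample_pass` call stands in for one `sample().collect()` job, i.e. one full pass over the RDD:

```python
import random

def take_sample(data, num, seed=42):
    """Simplified model of an RDD takeSample-style rejection loop.

    sample_pass() stands in for one full sample().collect() job; the
    while-loop shows how an unlucky (too small) sample triggers another
    complete pass over the data.
    """
    rng = random.Random(seed)
    fraction = min(1.0, num / len(data) * 1.2)  # oversample a little

    def sample_pass():
        # One full scan of the data (one `collect` in Spark terms).
        return [x for x in data if rng.random() < fraction]

    samples = sample_pass()
    passes = 1
    while len(samples) < num:   # too few kept -> re-sample everything
        samples = sample_pass()
        passes += 1
    rng.shuffle(samples)
    return samples[:num], passes

sample, n_passes = take_sample(list(range(1000)), 50)
print(len(sample), n_passes)
```

On an RDD, every extra iteration of that loop is another distributed job, which is why capping the loop or oversampling more aggressively up front avoids repeated `collect` calls.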
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13260#discussion_r64335944
--- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
@@ -550,17 +550,19 @@ abstract class RDD[T: ClassTag](
} else
Github user dongjoon-hyun closed the pull request at:
https://github.com/apache/spark/pull/13260
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13260#discussion_r64338520
--- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
@@ -550,17 +550,19 @@ abstract class RDD[T: ClassTag](
} else
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13260#discussion_r64343091
--- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
@@ -550,17 +550,19 @@ abstract class RDD[T: ClassTag](
} else
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13260#discussion_r64339447
--- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
@@ -550,17 +550,19 @@ abstract class RDD[T: ClassTag](
} else
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13260#discussion_r64334393
--- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
@@ -550,17 +550,19 @@ abstract class RDD[T: ClassTag](
} else
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13260#discussion_r64335663
--- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
@@ -550,17 +550,19 @@ abstract class RDD[T: ClassTag](
} else
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13260#issuecomment-221193011
@andrewor14 .
Now it's just about fixing the `takeSample` test case.
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13192#discussion_r64333811
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeFormatter.scala
---
@@ -49,6 +49,24 @@ object
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13260#discussion_r64335756
--- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
@@ -550,17 +550,19 @@ abstract class RDD[T: ClassTag](
} else
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13260#discussion_r64342963
--- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
@@ -550,17 +550,19 @@ abstract class RDD[T: ClassTag](
} else
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13260#discussion_r64344334
--- Diff: core/src/test/scala/org/apache/spark/rdd/RDDSuite.scala ---
@@ -678,27 +678,26 @@ class RDDSuite extends SparkFunSuite
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13260#discussion_r64337410
--- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
@@ -550,17 +550,19 @@ abstract class RDD[T: ClassTag](
} else
GitHub user dongjoon-hyun reopened a pull request:
https://github.com/apache/spark/pull/13260
[SPARK-15481][CORE] Prevent `takeSample` from calling `collect` multiple
times
## What changes were proposed in this pull request?
`takeSample` might call `collect` multiple times
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13260#issuecomment-221223390
Hi, @andrewor14 .
Could you review this PR again?
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13192#discussion_r64354339
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeFormatter.scala
---
@@ -49,6 +49,24 @@ object
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13192#discussion_r64446968
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeFormatterSuite.scala
---
@@ -36,6 +36,22 @@ class
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13260#issuecomment-221359274
Thank you, @andrewor14 !
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12850#issuecomment-221334943
Hi, @marmbrus and @rxin .
Could you review this PR when you have some time?
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13192#issuecomment-221344268
Thank you, @davies !
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13241#issuecomment-220795489
Thank you, @cloud-fan !
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13087#issuecomment-220796055
Thank you, @cloud-fan .
By the way, to be clear, should we revert the change on
`PushDownPredicate`, too?
I think that's a separate issue
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13239#discussion_r64139787
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkStrategies.scala ---
@@ -359,8 +359,8 @@ private[sql] abstract class
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13245#issuecomment-220801877
Hi, @rxin .
I'm wondering what your opinion is about this PR.
---
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/13245
[SPARK-15466][SQL] Make `SparkSession` as the entry point to programming
with RDD too
## What changes were proposed in this pull request?
`SparkSession` greatly reduces the number
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13087#issuecomment-220661795
Thank you, @thunterdb .
By the way, do you mean we should resolve the JIRA issue as WONTFIX?
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13087#issuecomment-220721052
For the documentation, sure, I will. From now on, we had better declare
that UDFs should be deterministic, as @thunterdb and you said.
For the runtime, I don't have
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13245#issuecomment-220814644
Unfortunately, `Dataset` (or `DataFrame`) does not seem suitable for
achieving the goal in Python.
```python
>>> spark.parallelize(range(1,
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13245#issuecomment-220813421
I see. Thank you!
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13192#issuecomment-220740772
Hi, @davies and @rxin .
I updated the code and description again according to the current master.
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13164#issuecomment-220643825
Thank you, @zhengruifeng !
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13087#issuecomment-220643135
The PySpark failure has been fixed by a HOTFIX.
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13192#issuecomment-220642836
The PySpark failure has been fixed by a HOTFIX.
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12850#issuecomment-220416079
Rebased to trigger Jenkins test again.
---
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/13192
[SPARK-13135][SQL] Don't print expressions recursively in generated code
## What changes were proposed in this pull request?
This PR is an up-to-date and slightly improved
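The underlying problem can be illustrated with a toy expression tree (these classes are hypothetical stand-ins, not Catalyst's): when every operator's generated-code comment prints its whole subtree, the total printed output grows with the square of the tree depth; printing only one level keeps it linear.

```python
class Expr:
    """Toy expression node, standing in for a Catalyst expression."""

    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

    def to_string_recursive(self):
        # Every node re-prints its entire subtree, so a comment emitted
        # for each operator repeats all of its children's text.
        if not self.children:
            return self.name
        args = ", ".join(c.to_string_recursive() for c in self.children)
        return f"{self.name}({args})"

    def to_string_flat(self):
        # Print only this node plus the names of its direct inputs.
        if not self.children:
            return self.name
        args = ", ".join(c.name for c in self.children)
        return f"{self.name}({args})"

add = Expr("add", [Expr("mul", [Expr("a"), Expr("b")]), Expr("c")])
print(add.to_string_recursive())  # add(mul(a, b), c)
print(add.to_string_flat())       # add(mul, c)
```

With deep trees, the flat form is what keeps generated-code comments readable instead of dumping the same subexpressions over and over.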
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12719#issuecomment-220256852
Thank you so much, @cloud-fan !
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13030#issuecomment-218252778
Oh, there are more. I'll add more commits.
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12980#issuecomment-218280054
Hi, @nchammas . Thank you so much for the fast attention!
First of all, I don't know the history well, but the Apache
policy seems to have changed
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12980#issuecomment-218279426
Thank you, @srowen !
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13030#issuecomment-218253767
@andrewor14 . I saw you fixed the first one. Please fix the second one,
too.
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13030#issuecomment-218254006
I rebased.
Right, it was a lot of work. Great job, @techaddict !
---
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/13030
[HOTFIX] Replace `sparkSession` with `spark`.
## What changes were proposed in this pull request?
This replaces `sparkSession` with `spark` in CatalogSuite.scala.
## How
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13030#issuecomment-218252641
Hi, @andrewor14 @techaddict
Typos seem to be breaking the build.
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13030#issuecomment-218254687
Thank you. I think that's all. This passed the following, locally.
```
build/mvn -T 4 -q -DskipTests -Pyarn -Phadoop-2.3 -Pkinesis-asl -Phive
-Phive
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12719#discussion_r62869275
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -618,6 +619,48 @@ object NullPropagation extends
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12719#issuecomment-218499656
I updated it again as you suggested. It seems I was thinking about it the wrong way last night.
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12980#issuecomment-218286685
Yep. Thank you for giving it a test drive, @srowen , and for understanding,
@nchammas .
In terms of computing resources, we can now take advantage
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12980#issuecomment-218291417
@nchammas . I see what you mean now. For INFRA-7367, Jenkins wanted to use
the Travis CI API. Yes, it's not possible, as you said. We're able to see just
`pass/fail
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12719#issuecomment-218301571
Rebased to resolve conflicts.
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12716#issuecomment-218518068
Hi, @xwu0226 and @liancheng .
Build errors are occurring. I made a hotfix for this; please merge it:
#13053
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13053#issuecomment-218518240
cc @andrewor14 , @cloud-fan
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13053#issuecomment-218519414
Hi, @mengxr . Could you merge this?
---
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/13053
[HOTFIX] Replace sqlContext with spark.
## What changes were proposed in this pull request?
This fixes compile errors.
## How was this patch tested?
Pass
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12719#issuecomment-218515870
Interesting. It's a compile error.
```
[error]
/home/jenkins/workspace/SparkPullRequestBuilder/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12719#issuecomment-218518949
There was a build break on the master branch. I made a hotfix for it. After
it is merged, I'll retry this.
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13053#issuecomment-218524316
Thank you!
---
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/13043
[SPARK-15265][SQL] Fix Union query error message indentation
## What changes were proposed in this pull request?
This PR makes the error message indentation consistent
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12719#discussion_r62787780
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -618,6 +619,52 @@ object NullPropagation extends
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12719#discussion_r62789853
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -650,15 +646,15 @@ object FoldablePropagation
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12719#discussion_r62792130
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -650,15 +646,15 @@ object FoldablePropagation
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12719#discussion_r62792327
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -650,15 +646,15 @@ object FoldablePropagation
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12719#discussion_r62790052
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -650,15 +646,15 @@ object FoldablePropagation
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12719#discussion_r62793156
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -650,15 +646,15 @@ object FoldablePropagation
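The diff comments above all target Catalyst's `FoldablePropagation` rule. Its core idea — substituting references to aliased constant ("foldable") expressions with the constants themselves — can be sketched in a toy form (the `propagate_foldables` helper and its list-of-pairs "plan" are hypothetical, not the Catalyst implementation):

```python
def propagate_foldables(plan):
    """Toy foldable propagation over a flattened 'plan'.

    plan is a list of (output_name, expr) pairs, where expr is either a
    literal value (foldable) or a string referring to an earlier output.
    References to foldable outputs are replaced by the literal itself.
    """
    env = {}   # output name -> known constant value
    out = []
    for name, expr in plan:
        if isinstance(expr, str) and expr in env:
            expr = env[expr]            # substitute the folded constant
        if not isinstance(expr, str):   # literal: record it as foldable
            env[name] = expr
        out.append((name, expr))
    return out

plan = [("x", 3), ("y", "x"), ("z", "unknown_col")]
print(propagate_foldables(plan))
```

Here `y` collapses to the constant `3` while `z`, which refers to something non-foldable, is left untouched — the same shape of decision the Catalyst rule makes over real expression trees.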
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12719#issuecomment-218825375
Hi, @cloud-fan .
For `AggregateOptimizeSuite.scala`, we need to use `caseSensitiveAnalysis`.
In particular, `test("remove repetition in grouping expre
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12719#discussion_r63060729
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/FoldablePropagationSuite.scala
---
@@ -0,0 +1,138
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12719#discussion_r63063384
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -618,6 +619,48 @@ object NullPropagation extends
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/12719#discussion_r63060451
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -618,6 +619,48 @@ object NullPropagation extends
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/13145
[MINOR][SQL] Remove unused pattern matching variables in Optimizers.
## What changes were proposed in this pull request?
This PR removes unused pattern matching variable
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12719#issuecomment-219670004
Thank YOU, @cloud-fan . I'll fix them soon.
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12719#issuecomment-219682307
Oh, I think the MiMa failure is unrelated to this PR.
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13145#issuecomment-219687641
Thank you for the review, @srowen .
There probably exist similar cases in other modules, too.
But for the other modules, I guess the patterns are simple
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12980#issuecomment-219690860
Hi, All.
I'm reporting on the recent Travis CI failures here.
Since this PR was committed a week ago, I've been monitoring Travis CI on
my
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12719#issuecomment-219683326
Hmm. @cloud-fan .
Unfortunately, for `resolved`, we cannot pass the tests without that.
For example, `StringExpressionsSuite.format_number` has
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12719#issuecomment-219683646
retest this please.
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12719#issuecomment-219688169
This time, the MiMa test passed. Let's wait and see the result.
By the way, please note that I only replaced `transformAllExpressions` in
this final commit
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12719#issuecomment-219782448
Yeah, I was really frustrated, too. When you read the body of the
`checkEvaluationWithOptimization` function above, it has the following lines.
It adds `Alias
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13125#issuecomment-219820760
Thank you, @MLnick !
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13097#issuecomment-219835499
Hi, @davies .
Could you merge this PR?
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13097#issuecomment-219837638
Thank you, @davies !
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12719#issuecomment-219864350
Rebased to resolve conflicts.
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/12719#issuecomment-219881850
retest this please.
---
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/13158
[SPARK-15373][WEB UI] Spark UI should show consistent timezones.
## What changes were proposed in this pull request?
Currently, SparkUI shows two timezones in a single page when
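A minimal sketch of the intended behavior — every timestamp on a page rendered through one shared formatter in one timezone — in plain Python (illustrative only; `fmt_utc` is a hypothetical helper, not the actual Spark UI code):

```python
from datetime import datetime, timezone

def fmt_utc(millis):
    """Render an epoch-milliseconds timestamp in a single fixed timezone.

    Using one formatter with an explicit tz for every column avoids the
    mixed local-time/UTC display the PR title describes.
    """
    dt = datetime.fromtimestamp(millis / 1000, tz=timezone.utc)
    return dt.strftime("%Y/%m/%d %H:%M:%S")

# Example: a mid-May-2016 event timestamp, as epoch millis.
print(fmt_utc(1463500000000))
```

The key design point is that the timezone is chosen once (here UTC) and threaded through every formatter, rather than letting some columns fall back to the server's or browser's local zone.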
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13158#issuecomment-219893875
cc @davies
---
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/13087#issuecomment-218948335
The reported error scenario is the following.
```scala
scala> val df = sc.parallelize(Seq(("a", "b"), ("a1",
"b1