Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13751#discussion_r67604883
--- Diff: docs/sparkr.md ---
@@ -263,19 +256,19 @@ head(df)
## Running SQL Queries from SparkR
-A SparkR DataFrame can also
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13872
Thank you, @rxin . I hope so, too.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13872
Thank you, @mengxr , @liancheng , and @rxin .
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13876#discussion_r68328083
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -793,6 +794,20 @@ object ConstantFolding extends
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13876#discussion_r68329169
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -793,6 +794,20 @@ object ConstantFolding extends
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13870
Thank you for merging, @liancheng and @davies .
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13884
Hi, @cloud-fan .
LGTM.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13765
Hi, @cloud-fan .
Could you review this optimizer?
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13930#discussion_r68682430
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveSessionCatalog.scala ---
@@ -174,6 +175,18 @@ private[sql] class HiveSessionCatalog
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13914#discussion_r68681583
--- Diff: project/SparkBuild.scala ---
@@ -720,6 +720,7 @@ object Unidoc {
// Skip class names containing $ and some internal packages
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13930
Hi, @hvanhovell .
Could you review this PR again?
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/13906
[SPARK-16208][SQL] Add `CollapseEmptyPlan` optimizer
## What changes were proposed in this pull request?
This PR adds a new logical optimizer, `CollapseEmptyPlan`, to collapse
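The description above is cut off, but the optimizer's name indicates the idea: replace operators whose input is provably empty with an empty relation. A minimal, self-contained sketch in plain Scala (a toy plan ADT with illustrative names, not the actual Catalyst rule):

```scala
// Illustrative sketch only: a miniature plan ADT and a rule that collapses
// operators over a provably empty child, in the spirit of the
// `CollapseEmptyPlan` idea. Not the actual Catalyst code or names.
sealed trait Plan { def maxRows: Option[Long] }
case class LocalRelation(rows: Seq[Int]) extends Plan {
  def maxRows: Option[Long] = Some(rows.size.toLong)
}
case class Filter(child: Plan) extends Plan { def maxRows = child.maxRows }
case class Project(child: Plan) extends Plan { def maxRows = child.maxRows }

object CollapseEmptyPlanSketch {
  private def isEmpty(p: Plan): Boolean = p.maxRows.contains(0L)

  // Replace a unary node over an empty child with an empty relation.
  def apply(plan: Plan): Plan = plan match {
    case Filter(child) if isEmpty(child)  => LocalRelation(Nil)
    case Project(child) if isEmpty(child) => LocalRelation(Nil)
    case other                            => other
  }
}
```

For example, `CollapseEmptyPlanSketch(Filter(LocalRelation(Nil)))` collapses to `LocalRelation(Nil)`, while plans with non-empty inputs pass through unchanged.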
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13906#discussion_r68495438
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -1053,6 +1055,34 @@ object PruneFilters extends
Github user dongjoon-hyun closed the pull request at:
https://github.com/apache/spark/pull/13905
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13906#discussion_r68495422
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -1053,6 +1055,34 @@ object PruneFilters extends
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13913
Thank you for merging, @liancheng !
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13765
Hi, @cloud-fan .
Now, this PR can handle all combinations of Repartition and RepartitionBy.
I updated the PR and JIRA descriptions, too.
Thank you so much for making this PR much
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13918
Hi, @liancheng .
Now, it passes Jenkins again.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13915
Hi, @rxin and @srowen .
I also worry about merge conflicts. It's really annoying for committers.
So, what about the stepwise approach? We have 2395 files and found 90 files
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13918
Thank you, @liancheng ! I fixed it.
By the way, you had already finished all of this before. :)
It looks like some ParquetWriter updates were made after that.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13730
Ping @tdas
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13765#discussion_r68610179
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -547,6 +548,16 @@ object CollapseRepartition
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13918
Thank you for merging, @liancheng ! :)
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13914
Thank you for merging, @rxin .
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13906#discussion_r68701793
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/CollapseEmptyPlanSuite.scala
---
@@ -0,0 +1,133
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13930#discussion_r68700034
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveSessionCatalog.scala ---
@@ -174,6 +175,18 @@ private[sql] class HiveSessionCatalog
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13930#discussion_r68700984
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveSessionCatalog.scala ---
@@ -174,6 +175,18 @@ private[sql] class HiveSessionCatalog
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13930#discussion_r68697699
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveSessionCatalog.scala ---
@@ -174,6 +175,18 @@ private[sql] class HiveSessionCatalog
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13906
Hi, @rxin .
I just remembered this PR while looking at your whitelist PR. :)
Any advice for this PR?
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13930#discussion_r68700695
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveSessionCatalog.scala ---
@@ -174,6 +175,18 @@ private[sql] class HiveSessionCatalog
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13930#discussion_r68702737
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveSessionCatalog.scala ---
@@ -174,6 +175,18 @@ private[sql] class HiveSessionCatalog
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13915
@mengxr 's idea sounds good to me, too.
May I update this PR, @rxin ?
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13930#discussion_r68700193
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveSessionCatalog.scala ---
@@ -174,6 +175,18 @@ private[sql] class HiveSessionCatalog
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13906#discussion_r68703430
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/CollapseEmptyPlanSuite.scala
---
@@ -0,0 +1,133
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13930#discussion_r68700636
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveSessionCatalog.scala ---
@@ -174,6 +175,18 @@ private[sql] class HiveSessionCatalog
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13939#discussion_r68701105
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveSessionCatalog.scala ---
@@ -221,4 +214,18 @@ private[sql] class HiveSessionCatalog
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13906#discussion_r68701978
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -1053,6 +1055,41 @@ object PruneFilters extends
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13906
Anyway, thank you for review again, @rxin !
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/13930
[SPARK-16228][SQL] Support a fallback lookup for external functions with
`double`-type parameter only
## What changes were proposed in this pull request?
This PR supports a fallback
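The description is truncated, but the title suggests the shape of the change: if an exact-signature lookup fails, retry against a `double`-parameter variant. A hedged, self-contained sketch (the registry, type names, and helper are hypothetical, not Spark's actual catalog API):

```scala
// Illustrative sketch only: look up a function by exact argument types, then
// fall back to a signature with every parameter widened to double, echoing
// the fallback lookup the PR title describes. Everything here is hypothetical.
type Sig = (String, Seq[String]) // (function name, parameter type names)

def lookupWithDoubleFallback(
    registry: Map[Sig, String],
    name: String,
    argTypes: Seq[String]): Option[String] =
  registry.get((name, argTypes)).orElse {
    // Fallback: retry as if every argument were widened to double.
    registry.get((name, argTypes.map(_ => "double")))
  }
```

With a hypothetical entry `("percentile", Seq("double")) -> "hive-percentile"`, a lookup with `Seq("int")` misses on the exact signature but succeeds via the double fallback.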
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13914
Hi, @mengxr .
Could you review this PR when you have some time?
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13930#discussion_r68648985
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -881,7 +881,16 @@ class Analyzer
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13930#discussion_r68649440
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -881,7 +881,16 @@ class Analyzer
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/13914
[SPARK-16111][DOC] Hide SparkOrcNewRecordReader in API docs
## What changes were proposed in this pull request?
This PR hides `SparkOrcNewRecordReader` from API docs.
## How
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13914
Hi, @mengxr .
Could you review this PR?
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13914#discussion_r68502088
--- Diff: project/SparkBuild.scala ---
@@ -733,7 +734,8 @@ object Unidoc {
unidocSourceBase :=
s"https://github.com/apache/spark/t
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13913
Hi, @liancheng .
When I checked
[SPARK-10591](https://issues.apache.org/jira/browse/SPARK-10591) today, it was
already handled correctly by `Row.equals`.
I just made this PR to ensure
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13914#discussion_r68502066
--- Diff: project/SparkBuild.scala ---
@@ -720,6 +720,7 @@ object Unidoc {
// Skip class names containing $ and some internal packages
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/13913
[SPARK-10591][TEST] Add a testcase to ensure if `checkAnswer` handles map
correctly
## What changes were proposed in this pull request?
This PR adds a testcase to ensure
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13900
cc @davies
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/13900
[SPARK-16173][SQL] Can't join describe() of DataFrame in Scala 2.10
## What changes were proposed in this pull request?
This PR fixes `DataFrame.describe()` by forcing
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13900
@davies . I added the comment.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13887
Okay. Let's summarize before updating the PR.
1. In general, `a IN (expression)` will become `a = expression`. The `OptimizeIn`
optimizer will take care of this.
2. In general, `a IN (2001, 2002
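The single-element case in point 1 can be sketched in plain Scala. This is a toy expression ADT, not Catalyst's `Expression` hierarchy; it only illustrates the `IN`-to-equality rewrite the comment describes:

```scala
// Illustrative sketch only: rewriting a single-element IN predicate to an
// equality comparison. Toy ADT with hypothetical names, not Catalyst code.
sealed trait Expr
case class Attr(name: String) extends Expr
case class Lit(value: Int) extends Expr
case class In(value: Expr, list: Seq[Expr]) extends Expr
case class EqualTo(left: Expr, right: Expr) extends Expr

object OptimizeInSketch {
  def apply(e: Expr): Expr = e match {
    case In(v, Seq(single)) => EqualTo(v, single) // a IN (x)  =>  a = x
    case other              => other              // multi-element lists untouched
  }
}
```

So `a IN (2001)` becomes `a = 2001`, while `a IN (2001, 2002)` is left for other handling.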
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13887
Anyway, thank you in advance, @davies . :)
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13887
Oh, you meant adding additional constraints by using **min** and **max**. I
see.
By the way, I have one question. If there are many predicates, does Spark
use the predicate
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13887
Oh, right. It's pending Jenkins.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13887
Some of the frequent TPC-DS usages were STATE, ZIP, and Color strings. The
min/max of these values doesn't have much meaning.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13887
Hi, @davies .
I removed the option-related stuff from the code, PR description, and JIRA
description according to your advice.
Thank you for the review!
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13887
Maybe you are thinking of https://github.com/apache/spark/pull/13900 .
That one passed.
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13900#discussion_r68469199
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -1908,7 +1908,7 @@ class Dataset[T] private[sql](
// All columns
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13900
Thank you, @davies !
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13887
I'm not sure, but it's just my hope. :)
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13887
Hmm. The general idea is good. But I still think this PR and that idea are
complementary to each other.
Sorry, but, if possible, can we pursue the general idea in another PR
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13887#discussion_r68475255
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/InMemoryTableScanExec.scala
---
@@ -79,6 +79,11 @@ private[sql] case class
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13900
Thank you for merging.
Sure. I will make a patch for 1.6.
Should I make a patch for 1.5, too?
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/13915
[SPARK-16081] Disallow using `l` as variable name
## What changes were proposed in this pull request?
This PR adds a ScalaStyle custom rule, `DisallowMisleadingVariableName
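The rule name is cut off above, but the intent is clear from the title: reject `l` as a variable name because it is easily misread as the digit `1`. A minimal sketch of the kind of check such a rule might perform (a plain Scala regex; the real rule would be packaged as a ScalaStyle checker class):

```scala
// Illustrative sketch only: the core check such a style rule might perform.
// A lone lowercase `l` is easily confused with the digit `1`, so flag
// declarations like `val l = ...`. Not the actual ScalaStyle rule.
val misleadingL = raw"\b(?:val|var)\s+l\s*=".r

def violatesRule(line: String): Boolean =
  misleadingL.findFirstIn(line).isDefined
```

`violatesRule("val l = list.length")` is flagged, while `val length = list.size` is not.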
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/13902
[SPARK-16173] [SQL] Can't join describe() of DataFrame in Scala 2.10
## What changes were proposed in this pull request?
This PR fixes `DataFrame.describe()` by forcing
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13902
cc @davies .
This is a PR for branch 1.6.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13887
Hi, @davies .
Now, it passed.
If there is anything for me to do, please let me know.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13902
Thank you, @davies !
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13900
Oh, great! I didn't notice that it is mergeable to the 1.5 branch.
Thank you!
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13876
Hi, @rxin .
For this `OptimizeIn` PR, please let me know if we need further
optimization.
Thank you always.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13915
Thank you, @HyukjinKwon .
Github user dongjoon-hyun closed the pull request at:
https://github.com/apache/spark/pull/13902
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13902
Oh, sure. I forgot that it does not close automatically.
Thank you for pinging me, @rxin .
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/13905
[SPARK-16208][SQL] Add `CollapseEmptyPlan` optimizer
## What changes were proposed in this pull request?
This PR adds a new logical optimizer, `CollapseEmptyPlan`, to collapse
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13918#discussion_r68522055
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala
---
@@ -915,15 +917,14 @@ private[sql
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/13918
[SPARK-16221][SQL] Redirect Parquet JUL logger via SLF4J for WRITE
operations
## What changes were proposed in this pull request?
[SPARK-8118](https://github.com/apache/spark/pull
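The description is truncated, but the title describes redirecting Parquet's java.util.logging (JUL) output. As a hedged, self-contained sketch of the underlying mechanism, a custom JUL handler can capture records the way an SLF4J bridge would (the logger name and message here are hypothetical):

```scala
import java.util.logging.{Handler, Level, LogRecord, Logger}

// Illustrative sketch only: forwarding JUL records through a custom handler,
// analogous in spirit to routing Parquet's JUL logger via an SLF4J bridge.
// The logger name and message are hypothetical.
val forwarded = scala.collection.mutable.Buffer.empty[String]

val bridge = new Handler {
  override def publish(r: LogRecord): Unit = forwarded += r.getMessage
  override def flush(): Unit = ()
  override def close(): Unit = ()
}

val julLogger = Logger.getLogger("org.apache.parquet.example")
julLogger.setUseParentHandlers(false) // stop JUL's default console output
julLogger.addHandler(bridge)
julLogger.log(Level.INFO, "write completed")
```

After the `log` call, the record's message is available in `forwarded` instead of going to JUL's default console handler.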
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13765
Hi, @cloud-fan .
Could you review this `CollapseRepartitionBy` optimizer when you have some
time?
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13915
cc @rxin and @srowen
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13930
Hi, @hvanhovell .
I updated this PR according to your comments.
Definitely, this issue was only about `HiveSessionCatalog`.
Thank you!
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13876
Hi, @rxin .
Now, variable `l` is replaced with `list`.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13918
Hi, @liancheng .
Could you review this PR?
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13721
Hi, @shivaram . The following items are updated, and this is ready for review
again.
- The param description is improved.
- The size and ratio of the returned list are compared with those
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/13730
[SPARK-16006][SQL] Attempting to write empty DataFrame with no fields throws
non-intuitive exception
## What changes were proposed in this pull request?
This PR fixes the error
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13730#discussion_r67462488
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PartitioningUtils.scala
---
@@ -339,6 +339,9 @@ private[sql] object
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13730
Hi, @tdas .
This is the PR to handle the reported corner case.
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13684#discussion_r67280096
--- Diff: R/pkg/R/DataFrame.R ---
@@ -1869,14 +1871,22 @@ setMethod("where",
#' path <- "path/to/file.json"
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13684
Thank you, @sun-rui .
Now, this PR checks all parameters' types correctly.
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/13734
[SPARK-14995][R] Add `since` tag in Roxygen documentation for SparkR API
methods
## What changes were proposed in this pull request?
This PR adds `since` tags to Roxygen
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13734
Hi, @shivaram , @felixcheung , @sun-rui .
It's the first draft. There is a little ambiguity like the following.
- `SparkDataFrame` is marked as `@note since 2.0.0` because
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13730
Hi, @tdas .
At first glance, I thought this corner case should throw an exception.
But, after considering it more carefully, I want to allow
`emptyDataFrame.write`.
That is a more natural way
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/13967
[SPARK-16278][SPARK-16279][SQL] Implement map_keys/map_values SQL functions
## What changes were proposed in this pull request?
This PR adds `map_keys` and `map_values` SQL functions
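The description is truncated, but the intended semantics follow from the names: `map_keys` returns the keys of a map and `map_values` returns its values. A minimal sketch on a plain Scala `Map` (not the actual Spark implementation, which operates on `MapType` columns):

```scala
// Illustrative sketch only: the semantics the PR describes for the new
// map_keys and map_values SQL functions, shown on a plain Scala Map.
// In SQL it would look like: SELECT map_keys(m), map_values(m) FROM t
def mapKeys[K, V](m: Map[K, V]): Seq[K] = m.keys.toSeq
def mapValues[K, V](m: Map[K, V]): Seq[V] = m.values.toSeq
```

For `Map(1 -> "a", 2 -> "b")`, `mapKeys` yields the keys `1, 2` and `mapValues` yields `"a", "b"` (order follows the map's iteration order).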
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13967
cc @rxin and @cloud-fan .
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13956
Could you review this PR, @srowen ?
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11320#issuecomment-187953829
Test build 51808 is running now. Let's see the result. :)
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11320#issuecomment-187953113
I've done it!
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11320#issuecomment-187959828
According to Jenkins, other PRs also suffer from this. I think
`retriggering` is not helpful at this time.
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11320#issuecomment-187958660
Hmm, it fails again due to GitHub.
```
ERROR: Timeout after 15 minutes
ERROR: Error fetching remote repo 'origin'
```
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11320#issuecomment-187821324
Thank you for reviewing, @noprom . This PR is similar to #11053
(merged yesterday).
Hi, @yinxusen, @mengxr .
Could you review this PR
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11320#issuecomment-188002229
Finally! Now, it's ready to be reviewed again. :)
Thank you, all.