[GitHub] [spark] AmplabJenkins removed a comment on issue #24025: [SPARK-27106][SQL] merge CaseInsensitiveStringMap and DataSourceOptions

2019-03-09 Thread GitBox
AmplabJenkins removed a comment on issue #24025: [SPARK-27106][SQL] merge 
CaseInsensitiveStringMap and DataSourceOptions
URL: https://github.com/apache/spark/pull/24025#issuecomment-471255080
 
 
   Test FAILed.
   Refer to this link for build results (access rights to CI server needed): 
   https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/103272/
   Test FAILed.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] [spark] shivusondur commented on a change in pull request #24026: [SPARK-27090][CORE] Removing old LEGACY_DRIVER_IDENTIFIER ("&lt;driver&gt;")

2019-03-09 Thread GitBox
shivusondur commented on a change in pull request #24026: [SPARK-27090][CORE] 
Removing old LEGACY_DRIVER_IDENTIFIER ("&lt;driver&gt;")
URL: https://github.com/apache/spark/pull/24026#discussion_r264024208
 
 

 ##
 File path: core/src/test/scala/org/apache/spark/ui/UIUtilsSuite.scala
 ##
 @@ -122,8 +122,8 @@ class UIUtilsSuite extends SparkFunSuite {
   test("decodeURLParameter (SPARK-12708: Sorting task error in Stages Page 
when yarn mode.)") {
     val encoded1 = "%252F"
     val decoded1 = "/"
-    val encoded2 = "%253Cdriver%253E"
-    val decoded2 = "&lt;driver&gt;"
+    val encoded2 = "driver"
 
 Review comment:
   @srowen 
   Changes done. Removed unnecessary encode.
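The test above exercises double percent-decoding of URL parameters. As an illustration (in Python rather than the suite's Scala; `decode_url_parameter` is a hypothetical helper, not Spark's `UIUtils.decodeURLParameter`), two `unquote` passes recover the original strings:

```python
from urllib.parse import unquote

def decode_url_parameter(encoded: str) -> str:
    """Apply percent-decoding twice, mirroring the double-encoded test inputs."""
    return unquote(unquote(encoded))

# "%252F" decodes to "%2F" and then to "/"
print(decode_url_parameter("%252F"))            # -> /
# "%253Cdriver%253E" decodes to "%3Cdriver%3E" and then to "<driver>"
print(decode_url_parameter("%253Cdriver%253E"))
```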





[GitHub] [spark] AmplabJenkins removed a comment on issue #24025: [SPARK-27106][SQL] merge CaseInsensitiveStringMap and DataSourceOptions

2019-03-09 Thread GitBox
AmplabJenkins removed a comment on issue #24025: [SPARK-27106][SQL] merge 
CaseInsensitiveStringMap and DataSourceOptions
URL: https://github.com/apache/spark/pull/24025#issuecomment-471255077
 
 
   Merged build finished. Test FAILed.





[GitHub] [spark] AmplabJenkins commented on issue #24025: [SPARK-27106][SQL] merge CaseInsensitiveStringMap and DataSourceOptions

2019-03-09 Thread GitBox
AmplabJenkins commented on issue #24025: [SPARK-27106][SQL] merge 
CaseInsensitiveStringMap and DataSourceOptions
URL: https://github.com/apache/spark/pull/24025#issuecomment-471255080
 
 
   Test FAILed.
   Refer to this link for build results (access rights to CI server needed): 
   https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/103272/
   Test FAILed.





[GitHub] [spark] SparkQA removed a comment on issue #24025: [SPARK-27106][SQL] merge CaseInsensitiveStringMap and DataSourceOptions

2019-03-09 Thread GitBox
SparkQA removed a comment on issue #24025: [SPARK-27106][SQL] merge 
CaseInsensitiveStringMap and DataSourceOptions
URL: https://github.com/apache/spark/pull/24025#issuecomment-471236306
 
 
   **[Test build #103272 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/103272/testReport)**
 for PR 24025 at commit 
[`c60e2bf`](https://github.com/apache/spark/commit/c60e2bfc64e77b82764ed5191b55b0630a7f15d2).





[GitHub] [spark] SparkQA commented on issue #24025: [SPARK-27106][SQL] merge CaseInsensitiveStringMap and DataSourceOptions

2019-03-09 Thread GitBox
SparkQA commented on issue #24025: [SPARK-27106][SQL] merge 
CaseInsensitiveStringMap and DataSourceOptions
URL: https://github.com/apache/spark/pull/24025#issuecomment-471255045
 
 
   **[Test build #103272 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/103272/testReport)**
 for PR 24025 at commit 
[`c60e2bf`](https://github.com/apache/spark/commit/c60e2bfc64e77b82764ed5191b55b0630a7f15d2).
* This patch **fails from timeout after a configured wait of `400m`**.
* This patch merges cleanly.
* This patch adds no public classes.





[GitHub] [spark] AmplabJenkins commented on issue #24025: [SPARK-27106][SQL] merge CaseInsensitiveStringMap and DataSourceOptions

2019-03-09 Thread GitBox
AmplabJenkins commented on issue #24025: [SPARK-27106][SQL] merge 
CaseInsensitiveStringMap and DataSourceOptions
URL: https://github.com/apache/spark/pull/24025#issuecomment-471255077
 
 
   Merged build finished. Test FAILed.





[GitHub] [spark] WangGuangxin commented on issue #23942: [SPARK-27033][SQL]Add Optimize rule RewriteArithmeticFiltersOnIntegralColumn

2019-03-09 Thread GitBox
WangGuangxin commented on issue #23942: [SPARK-27033][SQL]Add Optimize rule 
RewriteArithmeticFiltersOnIntegralColumn
URL: https://github.com/apache/spark/pull/23942#issuecomment-471254770
 
 
   > How do you handle this behaviour change?
   > 
   > ```
   > // v2.4.0
   > scala> Seq(0, Int.MaxValue).toDF("v").write.saveAsTable("t")
   > scala> sql("select * from t").show
   > +--+
   > | v|
   > +--+
   > | 0|
   > |2147483647|
   > +--+
   > 
   > scala> sql("select * from t where v + 1 > 0").show
   > +---+
   > |  v|
   > +---+
   > |  0|
   > +---+
   > 
   > // this pr
   > scala> sql("select * from t where v + 1 > 0").show
   > +--+
   > | v|
   > +--+
   > | 0|
   > |2147483647|
   > +--+
   > ```
   
   This is a bad case I didn't think about before. I found there are four 
kinds of cases:
   
   - `v + 1 > 0`  =>  `v > -1 and v <= Int.MAX - 1`
   - `v - 1 > 0`  =>  `v > 1 or (v < Int.MIN + 1 && v > 0 - 1 + Int.MIN - Int.MAX)`
   - `v + 1 < 0`  =>  `v < -1 or (v > Int.MAX - 1 && v < 0 - 1 + Int.MAX - Int.MIN)`
   - `v - 1 < 0`  =>  `v < 1 and v >= Int.MIN + 1`
   
   After the rewrite, a single inequality may need two or three inequalities, 
which makes the expressions much more complex. So I don't think it is worth 
converting inequalities; we may only handle `=` or `!=` here. What do you think?
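The boundary behaviour behind the cases above can be demonstrated with 32-bit wraparound arithmetic. This is a Python sketch of Java/Scala `Int` semantics, not Spark code; `add_i32` is an illustrative helper:

```python
INT_MIN, INT_MAX = -2**31, 2**31 - 1

def add_i32(a: int, b: int) -> int:
    """32-bit two's-complement addition with wraparound (Java/Scala Int semantics)."""
    return (a + b + 2**31) % 2**32 - 2**31

v = INT_MAX
# A naive rewrite "v + 1 > 0" -> "v > -1" changes results at the overflow boundary:
print(add_i32(v, 1) > 0)  # False: Int.MaxValue + 1 wraps around to Int.MinValue
print(v > -1)             # True
```

This is exactly the behaviour change shown in the quoted 2.4.0-vs-PR example: the row with `v = 2147483647` is filtered out by `v + 1 > 0` (the sum overflows to a negative value) but kept by the rewritten predicate.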





[GitHub] [spark] AmplabJenkins removed a comment on issue #23969: [SPARK-26920][R] Deduplicate type checking across Arrow optimization in SparkR

2019-03-09 Thread GitBox
AmplabJenkins removed a comment on issue #23969: [SPARK-26920][R] Deduplicate 
type checking across Arrow optimization in SparkR
URL: https://github.com/apache/spark/pull/23969#issuecomment-471253191
 
 
   Merged build finished. Test PASSed.





[GitHub] [spark] AmplabJenkins removed a comment on issue #23969: [SPARK-26920][R] Deduplicate type checking across Arrow optimization in SparkR

2019-03-09 Thread GitBox
AmplabJenkins removed a comment on issue #23969: [SPARK-26920][R] Deduplicate 
type checking across Arrow optimization in SparkR
URL: https://github.com/apache/spark/pull/23969#issuecomment-471253192
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/103276/
   Test PASSed.





[GitHub] [spark] SparkQA removed a comment on issue #23969: [SPARK-26920][R] Deduplicate type checking across Arrow optimization in SparkR

2019-03-09 Thread GitBox
SparkQA removed a comment on issue #23969: [SPARK-26920][R] Deduplicate type 
checking across Arrow optimization in SparkR
URL: https://github.com/apache/spark/pull/23969#issuecomment-471251131
 
 
   **[Test build #103276 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/103276/testReport)**
 for PR 23969 at commit 
[`a58c86d`](https://github.com/apache/spark/commit/a58c86dfeff23a0c0f333b205ad182a4bf5af332).





[GitHub] [spark] AmplabJenkins commented on issue #23969: [SPARK-26920][R] Deduplicate type checking across Arrow optimization in SparkR

2019-03-09 Thread GitBox
AmplabJenkins commented on issue #23969: [SPARK-26920][R] Deduplicate type 
checking across Arrow optimization in SparkR
URL: https://github.com/apache/spark/pull/23969#issuecomment-471253192
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/103276/
   Test PASSed.





[GitHub] [spark] AmplabJenkins commented on issue #23969: [SPARK-26920][R] Deduplicate type checking across Arrow optimization in SparkR

2019-03-09 Thread GitBox
AmplabJenkins commented on issue #23969: [SPARK-26920][R] Deduplicate type 
checking across Arrow optimization in SparkR
URL: https://github.com/apache/spark/pull/23969#issuecomment-471253191
 
 
   Merged build finished. Test PASSed.





[GitHub] [spark] SparkQA commented on issue #23969: [SPARK-26920][R] Deduplicate type checking across Arrow optimization in SparkR

2019-03-09 Thread GitBox
SparkQA commented on issue #23969: [SPARK-26920][R] Deduplicate type checking 
across Arrow optimization in SparkR
URL: https://github.com/apache/spark/pull/23969#issuecomment-471253175
 
 
   **[Test build #103276 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/103276/testReport)**
 for PR 23969 at commit 
[`a58c86d`](https://github.com/apache/spark/commit/a58c86dfeff23a0c0f333b205ad182a4bf5af332).
* This patch passes all tests.
* This patch merges cleanly.
* This patch adds no public classes.





[GitHub] [spark] rxin commented on issue #23882: [SPARK-26979][PYTHON] Add missing string column name support for some SQL functions

2019-03-09 Thread GitBox
rxin commented on issue #23882: [SPARK-26979][PYTHON] Add missing string column 
name support for some SQL functions
URL: https://github.com/apache/spark/pull/23882#issuecomment-471251512
 
 
   What do we do for something like split, regexp_extract?





[GitHub] [spark] AmplabJenkins removed a comment on issue #24042: [SPARK-27120][BUILD][TEST] Upgrade scalatest version to 3.0.5

2019-03-09 Thread GitBox
AmplabJenkins removed a comment on issue #24042: [SPARK-27120][BUILD][TEST] 
Upgrade scalatest version to 3.0.5
URL: https://github.com/apache/spark/pull/24042#issuecomment-471251272
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/103274/
   Test PASSed.





[GitHub] [spark] AmplabJenkins removed a comment on issue #24042: [SPARK-27120][BUILD][TEST] Upgrade scalatest version to 3.0.5

2019-03-09 Thread GitBox
AmplabJenkins removed a comment on issue #24042: [SPARK-27120][BUILD][TEST] 
Upgrade scalatest version to 3.0.5
URL: https://github.com/apache/spark/pull/24042#issuecomment-471251271
 
 
   Merged build finished. Test PASSed.





[GitHub] [spark] AmplabJenkins commented on issue #24042: [SPARK-27120][BUILD][TEST] Upgrade scalatest version to 3.0.5

2019-03-09 Thread GitBox
AmplabJenkins commented on issue #24042: [SPARK-27120][BUILD][TEST] Upgrade 
scalatest version to 3.0.5
URL: https://github.com/apache/spark/pull/24042#issuecomment-471251272
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/103274/
   Test PASSed.





[GitHub] [spark] AmplabJenkins commented on issue #24042: [SPARK-27120][BUILD][TEST] Upgrade scalatest version to 3.0.5

2019-03-09 Thread GitBox
AmplabJenkins commented on issue #24042: [SPARK-27120][BUILD][TEST] Upgrade 
scalatest version to 3.0.5
URL: https://github.com/apache/spark/pull/24042#issuecomment-471251271
 
 
   Merged build finished. Test PASSed.





[GitHub] [spark] SparkQA removed a comment on issue #24042: [SPARK-27120][BUILD][TEST] Upgrade scalatest version to 3.0.5

2019-03-09 Thread GitBox
SparkQA removed a comment on issue #24042: [SPARK-27120][BUILD][TEST] Upgrade 
scalatest version to 3.0.5
URL: https://github.com/apache/spark/pull/24042#issuecomment-471238517
 
 
   **[Test build #103274 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/103274/testReport)**
 for PR 24042 at commit 
[`bd787da`](https://github.com/apache/spark/commit/bd787dab8c76df66063e24fcb26de0e609b0c350).





[GitHub] [spark] SparkQA commented on issue #24042: [SPARK-27120][BUILD][TEST] Upgrade scalatest version to 3.0.5

2019-03-09 Thread GitBox
SparkQA commented on issue #24042: [SPARK-27120][BUILD][TEST] Upgrade scalatest 
version to 3.0.5
URL: https://github.com/apache/spark/pull/24042#issuecomment-471251153
 
 
   **[Test build #103274 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/103274/testReport)**
 for PR 24042 at commit 
[`bd787da`](https://github.com/apache/spark/commit/bd787dab8c76df66063e24fcb26de0e609b0c350).
* This patch passes all tests.
* This patch merges cleanly.
* This patch adds no public classes.





[GitHub] [spark] SparkQA commented on issue #23969: [SPARK-26920][R] Deduplicate type checking across Arrow optimization in SparkR

2019-03-09 Thread GitBox
SparkQA commented on issue #23969: [SPARK-26920][R] Deduplicate type checking 
across Arrow optimization in SparkR
URL: https://github.com/apache/spark/pull/23969#issuecomment-471251131
 
 
   **[Test build #103276 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/103276/testReport)**
 for PR 23969 at commit 
[`a58c86d`](https://github.com/apache/spark/commit/a58c86dfeff23a0c0f333b205ad182a4bf5af332).





[GitHub] [spark] AmplabJenkins commented on issue #23969: [SPARK-26920][R] Deduplicate type checking across Arrow optimization in SparkR

2019-03-09 Thread GitBox
AmplabJenkins commented on issue #23969: [SPARK-26920][R] Deduplicate type 
checking across Arrow optimization in SparkR
URL: https://github.com/apache/spark/pull/23969#issuecomment-471251055
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   
https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/8722/
   Test PASSed.





[GitHub] [spark] AmplabJenkins removed a comment on issue #23969: [SPARK-26920][R] Deduplicate type checking across Arrow optimization in SparkR

2019-03-09 Thread GitBox
AmplabJenkins removed a comment on issue #23969: [SPARK-26920][R] Deduplicate 
type checking across Arrow optimization in SparkR
URL: https://github.com/apache/spark/pull/23969#issuecomment-471251055
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   
https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/8722/
   Test PASSed.





[GitHub] [spark] AmplabJenkins removed a comment on issue #23969: [SPARK-26920][R] Deduplicate type checking across Arrow optimization in SparkR

2019-03-09 Thread GitBox
AmplabJenkins removed a comment on issue #23969: [SPARK-26920][R] Deduplicate 
type checking across Arrow optimization in SparkR
URL: https://github.com/apache/spark/pull/23969#issuecomment-471251053
 
 
   Merged build finished. Test PASSed.





[GitHub] [spark] AmplabJenkins commented on issue #23969: [SPARK-26920][R] Deduplicate type checking across Arrow optimization in SparkR

2019-03-09 Thread GitBox
AmplabJenkins commented on issue #23969: [SPARK-26920][R] Deduplicate type 
checking across Arrow optimization in SparkR
URL: https://github.com/apache/spark/pull/23969#issuecomment-471251053
 
 
   Merged build finished. Test PASSed.





[GitHub] [spark] HyukjinKwon commented on a change in pull request #23969: [SPARK-26920][R] Deduplicate type checking across Arrow optimization in SparkR

2019-03-09 Thread GitBox
HyukjinKwon commented on a change in pull request #23969: [SPARK-26920][R] 
Deduplicate type checking across Arrow optimization in SparkR
URL: https://github.com/apache/spark/pull/23969#discussion_r264024876
 
 

 ##
 File path: R/pkg/R/SQLContext.R
 ##
 @@ -197,17 +197,40 @@ writeToFileInArrow <- function(fileName, rdf, 
numPartitions) {
   }
 }
 
-checkTypeRequirementForArrow <- function(dataHead, schema) {
-  # Currenty Arrow optimization does not support raw for now.
-  # Also, it does not support explicit float type set by users. It leads to
-  # incorrect conversion. We will fall back to the path without Arrow 
optimization.
-  if (any(sapply(dataHead, is.raw))) {
-stop("Arrow optimization with R DataFrame does not support raw type yet.")
-  }
-  if (inherits(schema, "structType")) {
-if (any(sapply(schema$fields(), function(x) x$dataType.toString() == 
"FloatType"))) {
-  stop("Arrow optimization with R DataFrame does not support FloatType 
type yet.")
+getSchema <- function(schema, firstRow = NULL, rdd = NULL) {
+  if (is.null(schema) || (!inherits(schema, "structType") && 
is.null(names(schema)))) {
+if (is.null(firstRow)) {
+  stopifnot(!is.null(rdd))
+  firstRow <- firstRDD(rdd)
+}
+names <- if (is.null(schema)) {
+  names(firstRow)
+} else {
+  as.list(schema)
+}
+if (is.null(names)) {
+  names <- lapply(1:length(firstRow), function(x) {
+paste("_", as.character(x), sep = "")
+  })
 }
+
+# Spark SQL does not support '.' in column name, so replace it with '_'
+# TODO(davies): remove this once SPARK-2775 is fixed
+names <- lapply(names, function(n) {
+  nn <- gsub("[.]", "_", n)
+  if (nn != n) {
+warning(paste("Use", nn, "instead of", n, " as column name"))
 
 Review comment:
   Oh, right.
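The renaming logic in the quoted diff (each `.` in a column name replaced by `_`, with a warning per rename) can be sketched in Python; `sanitize_column_names` is an illustrative name, not part of SparkR:

```python
import re
import warnings

def sanitize_column_names(names):
    """Replace '.' with '_' in column names, warning on each rename --
    a Python sketch of the R logic above (gsub("[.]", "_", n))."""
    out = []
    for n in names:
        nn = re.sub(r"\.", "_", n)
        if nn != n:
            warnings.warn(f"Use {nn} instead of {n} as column name")
        out.append(nn)
    return out

print(sanitize_column_names(["a.b", "c"]))  # -> ['a_b', 'c']
```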





[GitHub] [spark] HyukjinKwon commented on a change in pull request #23969: [SPARK-26920][R] Deduplicate type checking across Arrow optimization in SparkR

2019-03-09 Thread GitBox
HyukjinKwon commented on a change in pull request #23969: [SPARK-26920][R] 
Deduplicate type checking across Arrow optimization in SparkR
URL: https://github.com/apache/spark/pull/23969#discussion_r264024872
 
 

 ##
 File path: R/pkg/R/SQLContext.R
 ##
 @@ -197,17 +197,40 @@ writeToFileInArrow <- function(fileName, rdf, 
numPartitions) {
   }
 }
 
-checkTypeRequirementForArrow <- function(dataHead, schema) {
-  # Currently Arrow optimization does not support raw for now.
-  # Also, it does not support explicit float type set by users. It leads to
-  # incorrect conversion. We will fall back to the path without Arrow 
optimization.
-  if (any(sapply(dataHead, is.raw))) {
-stop("Arrow optimization with R DataFrame does not support raw type yet.")
 
 Review comment:
   I tried to merge that code path by checking `BinaryType`. Previously I had 
to check whether the first row of the R data frame contained `raw`, because I 
could not get the schema at that point.
   
   After tweaking the Arrow path to obtain the schema first, I can now check 
it via `structType` instead of the first row of the R data frame.





[GitHub] [spark] HyukjinKwon closed pull request #24023: [SPARK-27102][R][PYTHON][CORE] Remove the references to Python's Scala codes in R's Scala codes

2019-03-09 Thread GitBox
HyukjinKwon closed pull request #24023: [SPARK-27102][R][PYTHON][CORE] Remove 
the references to Python's Scala codes in R's Scala codes
URL: https://github.com/apache/spark/pull/24023
 
 
   





[GitHub] [spark] HyukjinKwon commented on issue #24023: [SPARK-27102][R][PYTHON][CORE] Remove the references to Python's Scala codes in R's Scala codes

2019-03-09 Thread GitBox
HyukjinKwon commented on issue #24023: [SPARK-27102][R][PYTHON][CORE] Remove 
the references to Python's Scala codes in R's Scala codes
URL: https://github.com/apache/spark/pull/24023#issuecomment-471250363
 
 
   Merged to master.
   
   Thanks, guys.





[GitHub] [spark] dbtsai commented on issue #24032: [SPARK-27097] [CHERRY-PICK 2.4] Avoid embedding platform-dependent offsets literally in whole-stage generated code

2019-03-09 Thread GitBox
dbtsai commented on issue #24032: [SPARK-27097] [CHERRY-PICK 2.4] Avoid 
embedding platform-dependent offsets literally in whole-stage generated code
URL: https://github.com/apache/spark/pull/24032#issuecomment-471250069
 
 
   Merged into branch-2.4. Thanks!





[GitHub] [spark] hehuiyuan commented on a change in pull request #23999: [docs]Add additional explanation for "Setting the max receiving rate" in streaming-programming-guide.md

2019-03-09 Thread GitBox
hehuiyuan commented on a change in pull request #23999: [docs]Add additional 
explanation for "Setting the max receiving rate" in 
streaming-programming-guide.md
URL: https://github.com/apache/spark/pull/23999#discussion_r264024237
 
 

 ##
 File path: docs/streaming-programming-guide.md
 ##
 @@ -2036,7 +2036,7 @@ To run a Spark Streaming applications, you need to have the following.
   `spark.streaming.receiver.maxRate` for receivers and `spark.streaming.kafka.maxRatePerPartition`
   for Direct Kafka approach. In Spark 1.5, we have introduced a feature called *backpressure* that
   eliminate the need to set this rate limit, as Spark Streaming automatically figures out the
-  rate limits and dynamically adjusts them if the processing conditions change. This backpressure
+  rate limits and dynamically adjusts them if the processing conditions change.If the first batch of data is very large which causes the first batch is processing all the time and the task can not work normally , using a maximum rate limit can solve the problem .This backpressure
 
 Review comment:
   First of all, thank you for your reply. Maybe I didn't express it very accurately.
   
   The original document says that enabling backpressure removes the need to set this rate limit. However, in real usage scenarios, such as Spark Streaming consuming Kafka, the first batch of data is often very large, so that first batch keeps processing and the tasks cannot run normally. Even when the first batch does finish, it costs much more time than the batch interval, and processing the subsequent batches is then less efficient than if the first batch had been processed within the batch interval before moving on to the subsequent batches, especially for Spark Streaming on Kubernetes.
   
   In a word, I want to point out that it is not rigorous to say that enabling backpressure removes the need to set a rate limit.
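   For readers following this thread, a minimal configuration sketch of the workaround being discussed: enable backpressure while still capping the very first batch, which has no processing history for the rate estimator to learn from. The numeric values below are illustrative assumptions, not recommendations.

```properties
# Let Spark Streaming adapt ingestion rates once it has processing history
spark.streaming.backpressure.enabled=true
# Cap the rate of the very first batch, before backpressure has any feedback
spark.streaming.backpressure.initialRate=1000
# For the direct Kafka approach, cap records/sec read from each partition
spark.streaming.kafka.maxRatePerPartition=500
```

   With these set, backpressure governs steady-state throughput while the explicit caps only matter for the initial batches.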
   








[GitHub] [spark] shivusondur commented on a change in pull request #24026: [SPARK-27090][CORE] Removing old LEGACY_DRIVER_IDENTIFIER ("&lt;driver&gt;")

2019-03-09 Thread GitBox
shivusondur commented on a change in pull request #24026: [SPARK-27090][CORE] 
Removing old LEGACY_DRIVER_IDENTIFIER ("&lt;driver&gt;")
URL: https://github.com/apache/spark/pull/24026#discussion_r264024208
 
 

 ##
 File path: core/src/test/scala/org/apache/spark/ui/UIUtilsSuite.scala
 ##
 @@ -122,8 +122,8 @@ class UIUtilsSuite extends SparkFunSuite {
   test("decodeURLParameter (SPARK-12708: Sorting task error in Stages Page when yarn mode.)") {
     val encoded1 = "%252F"
     val decoded1 = "/"
-    val encoded2 = "%253Cdriver%253E"
-    val decoded2 = "&lt;driver&gt;"
+    val encoded2 = "driver"
 
 Review comment:
   done. 
   Removed unnecessary encode.
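   As context for why this test kept double-encoded inputs: Spark's UI parameters can arrive double-encoded (e.g. via YARN's proxy), so the decode helper under test applies URL decoding twice. A small standalone sketch of that behavior using plain java.net.URLDecoder (this is an illustration, not Spark's actual UIUtils helper):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;

public class DoubleDecodeSketch {
    // Decode twice, mirroring the double-encoded parameters the test
    // exercises: "%252F" -> "%2F" -> "/".
    static String decodeTwice(String s) throws UnsupportedEncodingException {
        return URLDecoder.decode(URLDecoder.decode(s, "UTF-8"), "UTF-8");
    }

    public static void main(String[] args) throws UnsupportedEncodingException {
        System.out.println(decodeTwice("%252F"));            // prints "/"
        System.out.println(decodeTwice("%253Cdriver%253E")); // prints "<driver>"
    }
}
```

   Once the legacy "&lt;driver&gt;" identifier is gone, the plain string "driver" survives both decode passes unchanged, which is why the extra encoded fixture becomes unnecessary.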





[GitHub] [spark] fangshil commented on a change in pull request #20303: [SPARK-23128][SQL] A new approach to do adaptive execution in Spark SQL

2019-03-09 Thread GitBox
fangshil commented on a change in pull request #20303: [SPARK-23128][SQL] A new 
approach to do adaptive execution in Spark SQL
URL: https://github.com/apache/spark/pull/20303#discussion_r264023809
 
 

 ##
 File path: sql/core/src/main/scala/org/apache/spark/sql/execution/exchange/EnsureRequirements.scala
 ##
 @@ -36,107 +35,12 @@ import org.apache.spark.sql.internal.SQLConf
  * the input partition ordering requirements are met.
  */
 case class EnsureRequirements(conf: SQLConf) extends Rule[SparkPlan] {
-  private def defaultNumPreShufflePartitions: Int = conf.numShufflePartitions
-
-  private def targetPostShuffleInputSize: Long = conf.targetPostShuffleInputSize
-
-  private def adaptiveExecutionEnabled: Boolean = conf.adaptiveExecutionEnabled
-
-  private def minNumPostShufflePartitions: Option[Int] = {
-    val minNumPostShufflePartitions = conf.minNumPostShufflePartitions
-    if (minNumPostShufflePartitions > 0) Some(minNumPostShufflePartitions) else None
-  }
-
-  /**
-   * Adds [[ExchangeCoordinator]] to [[ShuffleExchangeExec]]s if adaptive query execution is
-   * enabled and partitioning schemes of these [[ShuffleExchangeExec]]s support
-   * [[ExchangeCoordinator]].
-   */
-  private def withExchangeCoordinator(
-      children: Seq[SparkPlan],
-      requiredChildDistributions: Seq[Distribution]): Seq[SparkPlan] = {
-    val supportsCoordinator =
-      if (children.exists(_.isInstanceOf[ShuffleExchangeExec])) {
-        // Right now, ExchangeCoordinator only support HashPartitionings.
-        children.forall {
-          case e @ ShuffleExchangeExec(hash: HashPartitioning, _, _) => true
-          case child =>
-            child.outputPartitioning match {
-              case hash: HashPartitioning => true
-              case collection: PartitioningCollection =>
-                collection.partitionings.forall(_.isInstanceOf[HashPartitioning])
-              case _ => false
-            }
-        }
-      } else {
-        // In this case, although we do not have Exchange operators, we may still need to
-        // shuffle data when we have more than one children because data generated by
-        // these children may not be partitioned in the same way.
-        // Please see the comment in withCoordinator for more details.
-        val supportsDistribution = requiredChildDistributions.forall { dist =>
-          dist.isInstanceOf[ClusteredDistribution] || dist.isInstanceOf[HashClusteredDistribution]
-        }
-        children.length > 1 && supportsDistribution
-      }
-
-    val withCoordinator =
-      if (adaptiveExecutionEnabled && supportsCoordinator) {
-        val coordinator =
-          new ExchangeCoordinator(
-            targetPostShuffleInputSize,
-            minNumPostShufflePartitions)
-        children.zip(requiredChildDistributions).map {
-          case (e: ShuffleExchangeExec, _) =>
-            // This child is an Exchange, we need to add the coordinator.
-            e.copy(coordinator = Some(coordinator))
-          case (child, distribution) =>
-            // If this child is not an Exchange, we need to add an Exchange for now.
-            // Ideally, we can try to avoid this Exchange. However, when we reach here,
-            // there are at least two children operators (because if there is a single child
-            // and we can avoid Exchange, supportsCoordinator will be false and we
-            // will not reach here.). Although we can make two children have the same number of
-            // post-shuffle partitions. Their numbers of pre-shuffle partitions may be different.
-            // For example, let's say we have the following plan
-            //         Join
-            //         /  \
-            //       Agg  Exchange
-            //       /      \
-            //    Exchange  t2
-            //      /
-            //     t1
-            // In this case, because a post-shuffle partition can include multiple pre-shuffle
-            // partitions, a HashPartitioning will not be strictly partitioned by the hashcodes
-            // after shuffle. So, even we can use the child Exchange operator of the Join to
-            // have a number of post-shuffle partitions that matches the number of partitions of
-            // Agg, we cannot say these two children are partitioned in the same way.
-            // Here is another case
-            //         Join
-            //         /  \
-            //       Agg1  Agg2
-            //       /      \
-            //   Exchange1  Exchange2
-            //      /          \
-            //     t1          t2
-            // In this case, two Aggs shuffle data with the same column of the join condition.
-            // After we use ExchangeCoordinator, these two Aggs may not be partitioned in the same
-            // way. Let's say that Agg1 and Agg2 both have 5 pre-shuffle partitions and 2
-            // post-shuffle partitions. It is possible that Agg1 fetches those p


[GitHub] [spark] AmplabJenkins removed a comment on issue #24004: [SPARK-27084][SQL] Add function alias for bitand/bitnot/bitor/bitxor

2019-03-09 Thread GitBox
AmplabJenkins removed a comment on issue #24004: [SPARK-27084][SQL] Add 
function alias for bitand/bitnot/bitor/bitxor
URL: https://github.com/apache/spark/pull/24004#issuecomment-471248111
 
 
   Merged build finished. Test PASSed.





[GitHub] [spark] AmplabJenkins commented on issue #24004: [SPARK-27084][SQL] Add function alias for bitand/bitnot/bitor/bitxor

2019-03-09 Thread GitBox
AmplabJenkins commented on issue #24004: [SPARK-27084][SQL] Add function alias 
for bitand/bitnot/bitor/bitxor
URL: https://github.com/apache/spark/pull/24004#issuecomment-471248111
 
 
   Merged build finished. Test PASSed.





[GitHub] [spark] AmplabJenkins removed a comment on issue #24004: [SPARK-27084][SQL] Add function alias for bitand/bitnot/bitor/bitxor

2019-03-09 Thread GitBox
AmplabJenkins removed a comment on issue #24004: [SPARK-27084][SQL] Add 
function alias for bitand/bitnot/bitor/bitxor
URL: https://github.com/apache/spark/pull/24004#issuecomment-471248112
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/103273/
   Test PASSed.





[GitHub] [spark] AmplabJenkins commented on issue #24004: [SPARK-27084][SQL] Add function alias for bitand/bitnot/bitor/bitxor

2019-03-09 Thread GitBox
AmplabJenkins commented on issue #24004: [SPARK-27084][SQL] Add function alias 
for bitand/bitnot/bitor/bitxor
URL: https://github.com/apache/spark/pull/24004#issuecomment-471248112
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/103273/
   Test PASSed.





[GitHub] [spark] SparkQA removed a comment on issue #24004: [SPARK-27084][SQL] Add function alias for bitand/bitnot/bitor/bitxor

2019-03-09 Thread GitBox
SparkQA removed a comment on issue #24004: [SPARK-27084][SQL] Add function 
alias for bitand/bitnot/bitor/bitxor
URL: https://github.com/apache/spark/pull/24004#issuecomment-471237131
 
 
   **[Test build #103273 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/103273/testReport)**
 for PR 24004 at commit 
[`4ae6916`](https://github.com/apache/spark/commit/4ae691650452efc2fc3a7c190655bca6215cc8c9).





[GitHub] [spark] SparkQA commented on issue #24004: [SPARK-27084][SQL] Add function alias for bitand/bitnot/bitor/bitxor

2019-03-09 Thread GitBox
SparkQA commented on issue #24004: [SPARK-27084][SQL] Add function alias for 
bitand/bitnot/bitor/bitxor
URL: https://github.com/apache/spark/pull/24004#issuecomment-471248033
 
 
   **[Test build #103273 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/103273/testReport)**
 for PR 24004 at commit 
[`4ae6916`](https://github.com/apache/spark/commit/4ae691650452efc2fc3a7c190655bca6215cc8c9).
* This patch passes all tests.
* This patch merges cleanly.
* This patch adds no public classes.





[GitHub] [spark] fangshil edited a comment on issue #20303: [SPARK-23128][SQL] A new approach to do adaptive execution in Spark SQL

2019-03-09 Thread GitBox
fangshil edited a comment on issue #20303: [SPARK-23128][SQL] A new approach to 
do adaptive execution in Spark SQL
URL: https://github.com/apache/spark/pull/20303#issuecomment-469151637
 
 
   Excited to see AE making progress upstream :) We have used the new AE framework to add SQL optimization rules, and the results look very promising. We have a few comments on this patch in general:
   
   1. The current patch handles shuffle parallelism on the reducer side: it starts with a relatively large number of mapper partitions (500) and merges them into fewer reducer partitions by allowing each reducer to read multiple mappers. At large data scale, setting spark.sql.shuffle.partitions to 10K without AE vs. maxNumPostShufflePartitions with AE should give the same result, since the reducer count will not change when data is large. With this patch we have not yet reached optimal performance, because we only save the overhead of launching a certain number of reduce tasks. A better approach would be to dynamically estimate the initial/mapper parallelism between 0 and maxNumPostShufflePartitions. That should be possible with AE as well, and this patch is a solid foundation for such future improvements. Hope we can merge it soon!
   
   2. This patch uses the submitMapStage API, which submits each stage as a new job, so AE breaks Spark's vanilla definition of a job. This is an issue we inherit from the original AE, not one originating from this new AE.
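   For readers trying the feature under discussion, a hedged sketch of the knobs this thread refers to. The first two properties exist in Spark 2.x; maxNumPostShufflePartitions is the name discussed in this patch, so treat it and all values as provisional assumptions:

```properties
# Turn on adaptive query execution
spark.sql.adaptive.enabled=true
# Target bytes per post-shuffle partition when coalescing reducers (64 MiB here)
spark.sql.adaptive.shuffle.targetPostShuffleInputSize=67108864
# Upper bound on post-shuffle partitions (property name proposed by this patch)
spark.sql.adaptive.maxNumPostShufflePartitions=500
```

   As the comment above notes, the upper bound plays the role that a hand-tuned spark.sql.shuffle.partitions plays without AE.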
   





[GitHub] [spark] fangshil commented on a change in pull request #20303: [SPARK-23128][SQL] A new approach to do adaptive execution in Spark SQL

2019-03-09 Thread GitBox
fangshil commented on a change in pull request #20303: [SPARK-23128][SQL] A new 
approach to do adaptive execution in Spark SQL
URL: https://github.com/apache/spark/pull/20303#discussion_r264023809
 
 

 ##
 File path: sql/core/src/main/scala/org/apache/spark/sql/execution/exchange/EnsureRequirements.scala
 ##
 @@ -36,107 +35,12 @@ import org.apache.spark.sql.internal.SQLConf
  * the input partition ordering requirements are met.
  */
 case class EnsureRequirements(conf: SQLConf) extends Rule[SparkPlan] {
-  private def defaultNumPreShufflePartitions: Int = conf.numShufflePartitions
-
-  private def targetPostShuffleInputSize: Long = conf.targetPostShuffleInputSize
-
-  private def adaptiveExecutionEnabled: Boolean = conf.adaptiveExecutionEnabled
-
-  private def minNumPostShufflePartitions: Option[Int] = {
-    val minNumPostShufflePartitions = conf.minNumPostShufflePartitions
-    if (minNumPostShufflePartitions > 0) Some(minNumPostShufflePartitions) else None
-  }
-
-  /**
-   * Adds [[ExchangeCoordinator]] to [[ShuffleExchangeExec]]s if adaptive query execution is enabled
-   * and partitioning schemes of these [[ShuffleExchangeExec]]s support [[ExchangeCoordinator]].
-   */
-  private def withExchangeCoordinator(
-      children: Seq[SparkPlan],
-      requiredChildDistributions: Seq[Distribution]): Seq[SparkPlan] = {
-    val supportsCoordinator =
-      if (children.exists(_.isInstanceOf[ShuffleExchangeExec])) {
-        // Right now, ExchangeCoordinator only support HashPartitionings.
-        children.forall {
-          case e @ ShuffleExchangeExec(hash: HashPartitioning, _, _) => true
-          case child =>
-            child.outputPartitioning match {
-              case hash: HashPartitioning => true
-              case collection: PartitioningCollection =>
-                collection.partitionings.forall(_.isInstanceOf[HashPartitioning])
-              case _ => false
-            }
-        }
-      } else {
-        // In this case, although we do not have Exchange operators, we may still need to
-        // shuffle data when we have more than one children because data generated by
-        // these children may not be partitioned in the same way.
-        // Please see the comment in withCoordinator for more details.
-        val supportsDistribution = requiredChildDistributions.forall { dist =>
-          dist.isInstanceOf[ClusteredDistribution] || dist.isInstanceOf[HashClusteredDistribution]
-        }
-        children.length > 1 && supportsDistribution
-      }
-
-    val withCoordinator =
-      if (adaptiveExecutionEnabled && supportsCoordinator) {
-        val coordinator =
-          new ExchangeCoordinator(
-            targetPostShuffleInputSize,
-            minNumPostShufflePartitions)
-        children.zip(requiredChildDistributions).map {
-          case (e: ShuffleExchangeExec, _) =>
-            // This child is an Exchange, we need to add the coordinator.
-            e.copy(coordinator = Some(coordinator))
-          case (child, distribution) =>
-            // If this child is not an Exchange, we need to add an Exchange for now.
-            // Ideally, we can try to avoid this Exchange. However, when we reach here,
-            // there are at least two children operators (because if there is a single child
-            // and we can avoid Exchange, supportsCoordinator will be false and we
-            // will not reach here.). Although we can make two children have the same number of
-            // post-shuffle partitions. Their numbers of pre-shuffle partitions may be different.
-            // For example, let's say we have the following plan
-            //         Join
-            //         /  \
-            //       Agg  Exchange
-            //       /      \
-            //    Exchange  t2
-            //      /
-            //     t1
-            // In this case, because a post-shuffle partition can include multiple pre-shuffle
-            // partitions, a HashPartitioning will not be strictly partitioned by the hashcodes
-            // after shuffle. So, even we can use the child Exchange operator of the Join to
-            // have a number of post-shuffle partitions that matches the number of partitions of
-            // Agg, we cannot say these two children are partitioned in the same way.
-            // Here is another case
-            //         Join
-            //         /  \
-            //       Agg1  Agg2
-            //       /      \
-            //   Exchange1  Exchange2
-            //      /          \
-            //     t1           t2
-            // In this case, two Aggs shuffle data with the same column of the join condition.
-            // After we use ExchangeCoordinator, these two Aggs may not be partitioned in the same
-            // way. Let's say that Agg1 and Agg2 both have 5 pre-shuffle partitions and 2
-            // post-shuffle partitions. It is possible that Agg1 fetches those p
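The excerpt above removes the `ExchangeCoordinator` wiring; the coordinator's core idea is to pack small pre-shuffle partitions into post-shuffle partitions of roughly a target byte size (`targetPostShuffleInputSize`). A minimal Python model of that packing step, for illustration only (this is not Spark's implementation, and the function name and numbers are made up):

```python
def coalesce_partitions(pre_shuffle_sizes, target_size):
    """Greedily pack contiguous pre-shuffle partitions into
    post-shuffle partitions of roughly `target_size` bytes.
    A sketch of the idea behind ExchangeCoordinator, not its code."""
    groups, current, current_size = [], [], 0
    for i, size in enumerate(pre_shuffle_sizes):
        # Start a new post-shuffle partition once adding this
        # pre-shuffle partition would exceed the target size.
        if current and current_size + size > target_size:
            groups.append(current)
            current, current_size = [], 0
        current.append(i)
        current_size += size
    if current:
        groups.append(current)
    return groups

# Five pre-shuffle partitions packed toward a 100-byte target.
print(coalesce_partitions([60, 20, 70, 30, 10], 100))  # -> [[0, 1], [2, 3], [4]]
```

This also shows why the two `Agg` children in the comment can end up partitioned differently: each side's grouping depends on its own pre-shuffle partition sizes, not just on the number of post-shuffle partitions.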

[GitHub] [spark] SparkQA commented on issue #23986: [SPARK-27070] Fix performance bug in DefaultPartitionCoalescer

2019-03-09 Thread GitBox
SparkQA commented on issue #23986: [SPARK-27070] Fix performance bug in 
DefaultPartitionCoalescer
URL: https://github.com/apache/spark/pull/23986#issuecomment-471246386
 
 
   **[Test build #103275 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/103275/testReport)**
 for PR 23986 at commit 
[`2566639`](https://github.com/apache/spark/commit/2566639e81a227c3161cdd02a56ac5f57065dd43).





[GitHub] [spark] AmplabJenkins removed a comment on issue #23986: [SPARK-27070] Fix performance bug in DefaultPartitionCoalescer

2019-03-09 Thread GitBox
AmplabJenkins removed a comment on issue #23986: [SPARK-27070] Fix performance 
bug in DefaultPartitionCoalescer
URL: https://github.com/apache/spark/pull/23986#issuecomment-471246295
 
 
   Merged build finished. Test PASSed.





[GitHub] [spark] AmplabJenkins removed a comment on issue #23986: [SPARK-27070] Fix performance bug in DefaultPartitionCoalescer

2019-03-09 Thread GitBox
AmplabJenkins removed a comment on issue #23986: [SPARK-27070] Fix performance 
bug in DefaultPartitionCoalescer
URL: https://github.com/apache/spark/pull/23986#issuecomment-471246296
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   
https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/8721/
   Test PASSed.





[GitHub] [spark] AmplabJenkins commented on issue #23986: [SPARK-27070] Fix performance bug in DefaultPartitionCoalescer

2019-03-09 Thread GitBox
AmplabJenkins commented on issue #23986: [SPARK-27070] Fix performance bug in 
DefaultPartitionCoalescer
URL: https://github.com/apache/spark/pull/23986#issuecomment-471246295
 
 
   Merged build finished. Test PASSed.





[GitHub] [spark] AmplabJenkins commented on issue #23986: [SPARK-27070] Fix performance bug in DefaultPartitionCoalescer

2019-03-09 Thread GitBox
AmplabJenkins commented on issue #23986: [SPARK-27070] Fix performance bug in 
DefaultPartitionCoalescer
URL: https://github.com/apache/spark/pull/23986#issuecomment-471246296
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   
https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/8721/
   Test PASSed.





[GitHub] [spark] felixcheung commented on issue #23377: [SPARK-26439][CORE][WIP] Introduce WorkerOffer reservation mechanism for Barrier TaskSet

2019-03-09 Thread GitBox
felixcheung commented on issue #23377: [SPARK-26439][CORE][WIP] Introduce 
WorkerOffer reservation mechanism for Barrier TaskSet
URL: https://github.com/apache/spark/pull/23377#issuecomment-471246065
 
 
   thanks @Ngone51 - I think you bring up some very good points in your description, which are precisely what we would need to make barrier scheduling actually useful for its intended use cases. would you mind thinking about a way this could be done with smaller changes? you're also welcome to bring the design discussion to d...@spark.apache.org





[GitHub] [spark] AmplabJenkins removed a comment on issue #23986: [SPARK-27070] Fix performance bug in DefaultPartitionCoalescer

2019-03-09 Thread GitBox
AmplabJenkins removed a comment on issue #23986: [SPARK-27070] Fix performance 
bug in DefaultPartitionCoalescer
URL: https://github.com/apache/spark/pull/23986#issuecomment-471245837
 
 
   Test FAILed.
   Refer to this link for build results (access rights to CI server needed): 
   https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/103271/
   Test FAILed.





[GitHub] [spark] AmplabJenkins removed a comment on issue #23986: [SPARK-27070] Fix performance bug in DefaultPartitionCoalescer

2019-03-09 Thread GitBox
AmplabJenkins removed a comment on issue #23986: [SPARK-27070] Fix performance 
bug in DefaultPartitionCoalescer
URL: https://github.com/apache/spark/pull/23986#issuecomment-471245835
 
 
   Merged build finished. Test FAILed.





[GitHub] [spark] AmplabJenkins commented on issue #23986: [SPARK-27070] Fix performance bug in DefaultPartitionCoalescer

2019-03-09 Thread GitBox
AmplabJenkins commented on issue #23986: [SPARK-27070] Fix performance bug in 
DefaultPartitionCoalescer
URL: https://github.com/apache/spark/pull/23986#issuecomment-471245837
 
 
   Test FAILed.
   Refer to this link for build results (access rights to CI server needed): 
   https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/103271/
   Test FAILed.





[GitHub] [spark] AmplabJenkins commented on issue #23986: [SPARK-27070] Fix performance bug in DefaultPartitionCoalescer

2019-03-09 Thread GitBox
AmplabJenkins commented on issue #23986: [SPARK-27070] Fix performance bug in 
DefaultPartitionCoalescer
URL: https://github.com/apache/spark/pull/23986#issuecomment-471245835
 
 
   Merged build finished. Test FAILed.





[GitHub] [spark] SparkQA removed a comment on issue #23986: [SPARK-27070] Fix performance bug in DefaultPartitionCoalescer

2019-03-09 Thread GitBox
SparkQA removed a comment on issue #23986: [SPARK-27070] Fix performance bug in 
DefaultPartitionCoalescer
URL: https://github.com/apache/spark/pull/23986#issuecomment-471234931
 
 
   **[Test build #103271 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/103271/testReport)**
 for PR 23986 at commit 
[`c8424af`](https://github.com/apache/spark/commit/c8424af002d870da54f767a25c62a4579daeef2c).





[GitHub] [spark] felixcheung commented on issue #24023: [SPARK-27102][R][PYTHON][CORE] Remove the references to Python's Scala codes in R's Scala codes

2019-03-09 Thread GitBox
felixcheung commented on issue #24023: [SPARK-27102][R][PYTHON][CORE] Remove 
the references to Python's Scala codes in R's Scala codes
URL: https://github.com/apache/spark/pull/24023#issuecomment-471245842
 
 
   LGTM





[GitHub] [spark] felixcheung commented on a change in pull request #24023: [SPARK-27102][R][PYTHON][CORE] Remove the references to Python's Scala codes in R's Scala codes

2019-03-09 Thread GitBox
felixcheung commented on a change in pull request #24023: 
[SPARK-27102][R][PYTHON][CORE] Remove the references to Python's Scala codes in 
R's Scala codes
URL: https://github.com/apache/spark/pull/24023#discussion_r264023028
 
 

 ##
 File path: core/src/main/scala/org/apache/spark/api/r/RRDD.scala
 ##
 @@ -177,23 +175,11 @@ private[spark] object RRDD {
  * over a socket. This is used in preference to writing data to a file when encryption is enabled.
  */
 private[spark] class RParallelizeServer(sc: JavaSparkContext, parallelism: Int)
-    extends PythonServer[JavaRDD[Array[Byte]]](
-      new RSocketAuthHelper(), "sparkr-parallelize-server") {
+    extends SocketAuthServer[JavaRDD[Array[Byte]]](
+      new RAuthHelper(SparkEnv.get.conf), "sparkr-parallelize-server") {
 
   override def handleConnection(sock: Socket): JavaRDD[Array[Byte]] = {
     val in = sock.getInputStream()
-    PythonRDD.readRDDFromInputStream(sc.sc, in, parallelism)
-  }
-}
-
-private[spark] class RSocketAuthHelper extends SocketAuthHelper(SparkEnv.get.conf) {
-  override protected def readUtf8(s: Socket): String = {
 
 Review comment:
   ok





[GitHub] [spark] SparkQA commented on issue #23986: [SPARK-27070] Fix performance bug in DefaultPartitionCoalescer

2019-03-09 Thread GitBox
SparkQA commented on issue #23986: [SPARK-27070] Fix performance bug in 
DefaultPartitionCoalescer
URL: https://github.com/apache/spark/pull/23986#issuecomment-471245791
 
 
   **[Test build #103271 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/103271/testReport)**
 for PR 23986 at commit 
[`c8424af`](https://github.com/apache/spark/commit/c8424af002d870da54f767a25c62a4579daeef2c).
* This patch **fails Spark unit tests**.
* This patch merges cleanly.
* This patch adds no public classes.





[GitHub] [spark] felixcheung commented on issue #18784: [SPARK-21559][Mesos] remove mesos fine-grained mode

2019-03-09 Thread GitBox
felixcheung commented on issue #18784: [SPARK-21559][Mesos] remove mesos 
fine-grained mode
URL: https://github.com/apache/spark/pull/18784#issuecomment-471245732
 
 
   let's do it!
   it's only been open since 2017 though. maybe we should wait until July 2019, its 2nd anniversary, to merge this? (ok, I'm joking)





[GitHub] [spark] gatorsmile commented on issue #24032: [SPARK-27097] [CHERRY-PICK 2.4] Avoid embedding platform-dependent offsets literally in whole-stage generated code

2019-03-09 Thread GitBox
gatorsmile commented on issue #24032: [SPARK-27097] [CHERRY-PICK 2.4] Avoid 
embedding platform-dependent offsets literally in whole-stage generated code
URL: https://github.com/apache/spark/pull/24032#issuecomment-471239960
 
 
   cc @dbtsai @rednaxelafx @cloud-fan @kiszk 





[GitHub] [spark] ajithme commented on a change in pull request #23918: [SPARK-27011][SQL] reset command fails with cache

2019-03-09 Thread GitBox
ajithme commented on a change in pull request #23918: [SPARK-27011][SQL] reset 
command fails with cache
URL: https://github.com/apache/spark/pull/23918#discussion_r264020674
 
 

 ##
 File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/IgnoreCachedData.scala
 ##
 @@ -0,0 +1,23 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.catalyst.plans.logical
+
+/**
+ * A [[LogicalPlan]] operator that does not use the cached results stored in 
CacheManager
+ */
 
 Review comment:
   @maropu even a non-case logical plan can, if needed, be forced not to use the cache, right? So is the comment really needed?
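The `IgnoreCachedData` trait under review is a marker: any logical plan mixing it in bypasses the cached results in `CacheManager`. The pattern can be modeled outside Spark; a hedged Python sketch (all names except `IgnoreCachedData` are hypothetical, and this stands in for Scala traits and Spark's actual cache lookup):

```python
class LogicalPlan:
    """Base class standing in for Spark's LogicalPlan."""

class IgnoreCachedData:
    """Marker mixin: plans carrying it must not use cached results."""

class ResetCommand(LogicalPlan, IgnoreCachedData):
    """E.g. a command that should never be answered from the cache."""

class Scan(LogicalPlan):
    """An ordinary plan that may be replaced by a cached result."""

def use_cached_plan(plan, cache):
    # Skip the cache lookup entirely for plans that opt out via the marker.
    if isinstance(plan, IgnoreCachedData):
        return plan
    return cache.get(type(plan).__name__, plan)

cache = {"Scan": "cached-scan"}
scan, reset = Scan(), ResetCommand()
assert use_cached_plan(scan, cache) == "cached-scan"  # normal plan hits the cache
assert use_cached_plan(reset, cache) is reset         # marker plan bypasses it
```

The marker answers the reviewer's question: without it, every plan (case class or not) would need an explicit flag or special-casing in the cache manager, whereas the trait makes the opt-out part of the plan's type.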





[GitHub] [spark] AmplabJenkins commented on issue #24042: [SPARK-27120][TEST] Upgrade scalatest version to 3.0.5

2019-03-09 Thread GitBox
AmplabJenkins commented on issue #24042: [SPARK-27120][TEST] Upgrade scalatest 
version to 3.0.5
URL: https://github.com/apache/spark/pull/24042#issuecomment-471238775
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   
https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/8720/
   Test PASSed.





[GitHub] [spark] AmplabJenkins removed a comment on issue #24042: [SPARK-27120][TEST] Upgrade scalatest version to 3.0.5

2019-03-09 Thread GitBox
AmplabJenkins removed a comment on issue #24042: [SPARK-27120][TEST] Upgrade 
scalatest version to 3.0.5
URL: https://github.com/apache/spark/pull/24042#issuecomment-471238775
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   
https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/8720/
   Test PASSed.





[GitHub] [spark] AmplabJenkins removed a comment on issue #24042: [SPARK-27120][TEST] Upgrade scalatest version to 3.0.5

2019-03-09 Thread GitBox
AmplabJenkins removed a comment on issue #24042: [SPARK-27120][TEST] Upgrade 
scalatest version to 3.0.5
URL: https://github.com/apache/spark/pull/24042#issuecomment-471238773
 
 
   Merged build finished. Test PASSed.





[GitHub] [spark] AmplabJenkins commented on issue #24042: [SPARK-27120][TEST] Upgrade scalatest version to 3.0.5

2019-03-09 Thread GitBox
AmplabJenkins commented on issue #24042: [SPARK-27120][TEST] Upgrade scalatest 
version to 3.0.5
URL: https://github.com/apache/spark/pull/24042#issuecomment-471238773
 
 
   Merged build finished. Test PASSed.





[GitHub] [spark] SparkQA commented on issue #24042: [SPARK-27120][TEST] Upgrade scalatest version to 3.0.5

2019-03-09 Thread GitBox
SparkQA commented on issue #24042: [SPARK-27120][TEST] Upgrade scalatest 
version to 3.0.5
URL: https://github.com/apache/spark/pull/24042#issuecomment-471238517
 
 
   **[Test build #103274 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/103274/testReport)**
 for PR 24042 at commit 
[`bd787da`](https://github.com/apache/spark/commit/bd787dab8c76df66063e24fcb26de0e609b0c350).





[GitHub] [spark] wangyum opened a new pull request #24042: [SPARK-27120][TEST] Upgrade scalatest version to 3.0.5

2019-03-09 Thread GitBox
wangyum opened a new pull request #24042: [SPARK-27120][TEST] Upgrade scalatest 
version to 3.0.5
URL: https://github.com/apache/spark/pull/24042
 
 
   ## What changes were proposed in this pull request?
   
   **ScalaTest 3.0.5 Release Notes**
   
   **Bug Fixes**
   
   - Fixed the implicit view not available problem when used with compile macro.
   - Fixed a stack depth problem in RefSpecLike and fixture.SpecLike under 
Scala 2.13.
   - Changed Framework and ScalaTestFramework to set spanScaleFactor for Runner 
object instances for different Runners using different class loaders. This 
fixed a problem whereby an incorrect Runner.spanScaleFactor could be used when 
the tests for multiple sbt projects were run concurrently.
   - Fixed a bug in endsWith regex matcher.
   
   **Improvements**
   - Removed duplicated parsing code for -C in ArgsParser.
   - Improved performance in WebBrowser.
   - Documentation typo rectification.
   - Improved validity of JUnit XML reports.
   - Improved performance by replacing all .size == 0 and .length == 0 to 
.isEmpty.
   
   **Enhancements**
   - Added 'C' option to -P, which will tell -P to use cached thread pool.
   - External Dependencies Update
   - Bumped up scala-js version to 0.6.22.
   - Changed to depend on mockito-core, not mockito-all.
   - Bumped up jmock version to 2.8.3.
   - Bumped up junit version to 4.12.
   - Removed dependency to scala-parser-combinators.
   
   More details:
   http://www.scalatest.org/release_notes/3.0.5
   
   ## How was this patch tested?
   
   manual tests on local machine:
   ```
   nohup build/sbt clean -Djline.terminal=jline.UnsupportedTerminal -Phadoop-2.7 -Pkubernetes -Phive-thriftserver -Pyarn -Pspark-ganglia-lgpl -Phive -Pkinesis-asl -Pmesos test > run.scalatest.log &
   ```
   





[GitHub] [spark] AmplabJenkins removed a comment on issue #23599: [SPARK-24793][K8s] Enhance spark-submit for app management

2019-03-09 Thread GitBox
AmplabJenkins removed a comment on issue #23599: [SPARK-24793][K8s] Enhance 
spark-submit for app management
URL: https://github.com/apache/spark/pull/23599#issuecomment-471238394
 
 
   Merged build finished. Test PASSed.





[GitHub] [spark] AmplabJenkins removed a comment on issue #23599: [SPARK-24793][K8s] Enhance spark-submit for app management

2019-03-09 Thread GitBox
AmplabJenkins removed a comment on issue #23599: [SPARK-24793][K8s] Enhance 
spark-submit for app management
URL: https://github.com/apache/spark/pull/23599#issuecomment-471238395
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/103270/
   Test PASSed.





[GitHub] [spark] AmplabJenkins commented on issue #23599: [SPARK-24793][K8s] Enhance spark-submit for app management

2019-03-09 Thread GitBox
AmplabJenkins commented on issue #23599: [SPARK-24793][K8s] Enhance 
spark-submit for app management
URL: https://github.com/apache/spark/pull/23599#issuecomment-471238395
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/103270/
   Test PASSed.





[GitHub] [spark] AmplabJenkins commented on issue #23599: [SPARK-24793][K8s] Enhance spark-submit for app management

2019-03-09 Thread GitBox
AmplabJenkins commented on issue #23599: [SPARK-24793][K8s] Enhance 
spark-submit for app management
URL: https://github.com/apache/spark/pull/23599#issuecomment-471238394
 
 
   Merged build finished. Test PASSed.





[GitHub] [spark] SparkQA removed a comment on issue #23599: [SPARK-24793][K8s] Enhance spark-submit for app management

2019-03-09 Thread GitBox
SparkQA removed a comment on issue #23599: [SPARK-24793][K8s] Enhance 
spark-submit for app management
URL: https://github.com/apache/spark/pull/23599#issuecomment-471221455
 
 
   **[Test build #103270 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/103270/testReport)**
 for PR 23599 at commit 
[`8a09f67`](https://github.com/apache/spark/commit/8a09f67c07da327193e13e10a5f5574fd54db3d0).





[GitHub] [spark] SparkQA commented on issue #23599: [SPARK-24793][K8s] Enhance spark-submit for app management

2019-03-09 Thread GitBox
SparkQA commented on issue #23599: [SPARK-24793][K8s] Enhance spark-submit for 
app management
URL: https://github.com/apache/spark/pull/23599#issuecomment-471238304
 
 
   **[Test build #103270 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/103270/testReport)**
 for PR 23599 at commit 
[`8a09f67`](https://github.com/apache/spark/commit/8a09f67c07da327193e13e10a5f5574fd54db3d0).
* This patch passes all tests.
* This patch merges cleanly.
* This patch adds the following public classes _(experimental)_:
 * `trait CommandLineLoggingUtils `





[GitHub] [spark] AmplabJenkins removed a comment on issue #23599: [SPARK-24793][K8s] Enhance spark-submit for app management

2019-03-09 Thread GitBox
AmplabJenkins removed a comment on issue #23599: [SPARK-24793][K8s] Enhance 
spark-submit for app management
URL: https://github.com/apache/spark/pull/23599#issuecomment-471237869
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/103269/
   Test PASSed.





[GitHub] [spark] dongjoon-hyun edited a comment on issue #24018: [SPARK-23749][SQL] Workaround built-in Hive api changes (phase 1)

2019-03-09 Thread GitBox
dongjoon-hyun edited a comment on issue #24018: [SPARK-23749][SQL] Workaround 
built-in Hive api changes (phase 1)
URL: https://github.com/apache/spark/pull/24018#issuecomment-471237830
 
 
   Thank you for requesting, @felixcheung . I'll take a look, too.





[GitHub] [spark] AmplabJenkins commented on issue #23599: [SPARK-24793][K8s] Enhance spark-submit for app management

2019-03-09 Thread GitBox
AmplabJenkins commented on issue #23599: [SPARK-24793][K8s] Enhance 
spark-submit for app management
URL: https://github.com/apache/spark/pull/23599#issuecomment-471237866
 
 
   Merged build finished. Test PASSed.





[GitHub] [spark] AmplabJenkins removed a comment on issue #23599: [SPARK-24793][K8s] Enhance spark-submit for app management

2019-03-09 Thread GitBox
AmplabJenkins removed a comment on issue #23599: [SPARK-24793][K8s] Enhance 
spark-submit for app management
URL: https://github.com/apache/spark/pull/23599#issuecomment-471237866
 
 
   Merged build finished. Test PASSed.





[GitHub] [spark] dongjoon-hyun edited a comment on issue #24018: [SPARK-23749][SQL] Workaround built-in Hive api changes (phase 1)

2019-03-09 Thread GitBox
dongjoon-hyun edited a comment on issue #24018: [SPARK-23749][SQL] Workaround 
built-in Hive api changes (phase 1)
URL: https://github.com/apache/spark/pull/24018#issuecomment-471237830
 
 
   Thank you for requesting, @felixcheung . I'll take a look.





[GitHub] [spark] AmplabJenkins commented on issue #23599: [SPARK-24793][K8s] Enhance spark-submit for app management

2019-03-09 Thread GitBox
AmplabJenkins commented on issue #23599: [SPARK-24793][K8s] Enhance 
spark-submit for app management
URL: https://github.com/apache/spark/pull/23599#issuecomment-471237869
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/103269/
   Test PASSed.





[GitHub] [spark] dongjoon-hyun commented on issue #24018: [SPARK-23749][SQL] Workaround built-in Hive api changes (phase 1)

2019-03-09 Thread GitBox
dongjoon-hyun commented on issue #24018: [SPARK-23749][SQL] Workaround built-in 
Hive api changes (phase 1)
URL: https://github.com/apache/spark/pull/24018#issuecomment-471237830
 
 
   Thank you for requesting me, @felixcheung . I'll take a look, too.





[GitHub] [spark] SparkQA removed a comment on issue #23599: [SPARK-24793][K8s] Enhance spark-submit for app management

2019-03-09 Thread GitBox
SparkQA removed a comment on issue #23599: [SPARK-24793][K8s] Enhance 
spark-submit for app management
URL: https://github.com/apache/spark/pull/23599#issuecomment-471220904
 
 
   **[Test build #103269 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/103269/testReport)**
 for PR 23599 at commit 
[`27a531a`](https://github.com/apache/spark/commit/27a531a12c29131c65b6bab8afd5a399fdd8e6d2).





[GitHub] [spark] SparkQA commented on issue #23599: [SPARK-24793][K8s] Enhance spark-submit for app management

2019-03-09 Thread GitBox
SparkQA commented on issue #23599: [SPARK-24793][K8s] Enhance spark-submit for 
app management
URL: https://github.com/apache/spark/pull/23599#issuecomment-471237766
 
 
   **[Test build #103269 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/103269/testReport)**
 for PR 23599 at commit 
[`27a531a`](https://github.com/apache/spark/commit/27a531a12c29131c65b6bab8afd5a399fdd8e6d2).
* This patch passes all tests.
* This patch merges cleanly.
* This patch adds the following public classes _(experimental)_:
 * `trait CommandLineLoggingUtils `





[GitHub] [spark] dongjoon-hyun commented on issue #23508: [SPARK-21351][SQL] Remove the UpdateAttributeNullability rule from the optimizer

2019-03-09 Thread GitBox
dongjoon-hyun commented on issue #23508: [SPARK-21351][SQL] Remove the 
UpdateAttributeNullability rule from the optimizer
URL: https://github.com/apache/spark/pull/23508#issuecomment-471237605
 
 
   Hi, @maropu .
   Based on our discussion history and the recent nullability fixes, we 
decided to remove this rule from the optimizer phase.
   
   As the final piece of this removal, could you remove the following 
`UpdateAttributeNullability` optimizer description? Otherwise, someone may try 
to add it back.
   - 
https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/UpdateAttributeNullability.scala#L32-L34
   ```
*
* This rule should be executed again at the end of optimization phase, as 
optimizer may change
* some expressions and their nullabilities as well. See SPARK-21351 for 
more details.
   ```
   





[GitHub] [spark] wangyum commented on issue #24040: [SPARK-27118][SQL] Upgrade Hive Metastore Client to the latest versions for Hive 1.0.x/1.1.x

2019-03-09 Thread GitBox
wangyum commented on issue #24040: [SPARK-27118][SQL] Upgrade Hive Metastore 
Client to the latest versions for Hive 1.0.x/1.1.x
URL: https://github.com/apache/spark/pull/24040#issuecomment-471237317
 
 
Got it. Thank you @srowen @dongjoon-hyun





[GitHub] [spark] SparkQA commented on issue #24004: [SPARK-27084][SQL] Add function alias for bitand/bitnot/bitor/bitxor

2019-03-09 Thread GitBox
SparkQA commented on issue #24004: [SPARK-27084][SQL] Add function alias for 
bitand/bitnot/bitor/bitxor
URL: https://github.com/apache/spark/pull/24004#issuecomment-471237131
 
 
   **[Test build #103273 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/103273/testReport)**
 for PR 24004 at commit 
[`4ae6916`](https://github.com/apache/spark/commit/4ae691650452efc2fc3a7c190655bca6215cc8c9).





[GitHub] [spark] AmplabJenkins removed a comment on issue #24004: [SPARK-27084][SQL] Add function alias for bitand/bitnot/bitor/bitxor

2019-03-09 Thread GitBox
AmplabJenkins removed a comment on issue #24004: [SPARK-27084][SQL] Add 
function alias for bitand/bitnot/bitor/bitxor
URL: https://github.com/apache/spark/pull/24004#issuecomment-471237024
 
 
   Merged build finished. Test PASSed.





[GitHub] [spark] AmplabJenkins commented on issue #24004: [SPARK-27084][SQL] Add function alias for bitand/bitnot/bitor/bitxor

2019-03-09 Thread GitBox
AmplabJenkins commented on issue #24004: [SPARK-27084][SQL] Add function alias 
for bitand/bitnot/bitor/bitxor
URL: https://github.com/apache/spark/pull/24004#issuecomment-471237024
 
 
   Merged build finished. Test PASSed.





[GitHub] [spark] srowen commented on a change in pull request #24028: [SPARK-26917][SQL] Further reduce locks in CacheManager

2019-03-09 Thread GitBox
srowen commented on a change in pull request #24028: [SPARK-26917][SQL] Further 
reduce locks in CacheManager
URL: https://github.com/apache/spark/pull/24028#discussion_r264020050
 
 

 ##
 File path: 
sql/core/src/main/scala/org/apache/spark/sql/execution/CacheManager.scala
 ##
 @@ -144,16 +144,10 @@ class CacheManager extends Logging {
   } else {
 _.sameResult(plan)
   }
-val plansToUncache = mutable.Buffer[CachedData]()
-readLock {
-  val it = cachedData.iterator()
-  while (it.hasNext) {
-val cd = it.next()
-if (shouldRemove(cd.plan)) {
-  plansToUncache += cd
-}
-  }
+val cachedDataCopy = readLock {
+  cachedData.asScala.clone()
 }
+val plansToUncache = cachedDataCopy.filter(cd => shouldRemove(cd.plan))
 
 Review comment:
   I suspect that it's the logic in `shouldRemove` that takes the time here, 
and can be done without the lock.
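
   For context, a minimal, self-contained sketch of the copy-then-filter
   pattern the diff applies (a hypothetical class, not Spark's actual
   `CacheManager`): hold the read lock only long enough to snapshot the
   shared buffer, then evaluate the potentially expensive predicate on the
   copy without blocking writers.

   ```scala
   import java.util.concurrent.locks.ReentrantReadWriteLock
   import scala.collection.mutable.ArrayBuffer

   class SnapshotFilter[A] {
     private val lock = new ReentrantReadWriteLock()
     private val shared = ArrayBuffer.empty[A]

     def add(a: A): Unit = {
       lock.writeLock().lock()
       try shared += a
       finally lock.writeLock().unlock()
     }

     def collectMatching(p: A => Boolean): Seq[A] = {
       // Snapshot under the read lock; this is cheap and bounded.
       lock.readLock().lock()
       val snapshot = try shared.clone() finally lock.readLock().unlock()
       // The possibly slow predicate runs outside the lock, as in the PR.
       snapshot.filter(p).toSeq
     }
   }
   ```

   The trade-off is the usual one for snapshot reads: entries added after the
   copy is taken are not seen by that call, which is acceptable when the
   caller tolerates slightly stale results.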





[GitHub] [spark] AmplabJenkins removed a comment on issue #24004: [SPARK-27084][SQL] Add function alias for bitand/bitnot/bitor/bitxor

2019-03-09 Thread GitBox
AmplabJenkins removed a comment on issue #24004: [SPARK-27084][SQL] Add 
function alias for bitand/bitnot/bitor/bitxor
URL: https://github.com/apache/spark/pull/24004#issuecomment-471237026
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   
https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/8719/
   Test PASSed.





[GitHub] [spark] AmplabJenkins commented on issue #24004: [SPARK-27084][SQL] Add function alias for bitand/bitnot/bitor/bitxor

2019-03-09 Thread GitBox
AmplabJenkins commented on issue #24004: [SPARK-27084][SQL] Add function alias 
for bitand/bitnot/bitor/bitxor
URL: https://github.com/apache/spark/pull/24004#issuecomment-471237026
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   
https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/8719/
   Test PASSed.





[GitHub] [spark] srowen commented on issue #24004: [SPARK-27084][SQL] Add function alias for bitand/bitnot/bitor/bitxor

2019-03-09 Thread GitBox
srowen commented on issue #24004: [SPARK-27084][SQL] Add function alias for 
bitand/bitnot/bitor/bitxor
URL: https://github.com/apache/spark/pull/24004#issuecomment-471236940
 
 
   I think we would generally not copy non-standard language features from 
other DBs, so I am not sure this is worth it. I could be convinced if there 
were a common need for these bitwise operations, but UDFs are so easy in Spark 
that they remove most of the need for these functions as builtins.
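
   To illustrate the UDF route: the bitwise operations can be written as
   plain functions and registered per application. This is a sketch; the
   commented `spark.udf.register` calls assume an active `SparkSession`
   named `spark`, which is not part of this thread.

   ```scala
   // Plain Scala bitwise helpers. Registering each as a Spark UDF is a
   // one-liner (sketch, assumes an active SparkSession `spark`):
   //   spark.udf.register("bitand", BitwiseUdfs.bitand _)
   //   spark.udf.register("bitor",  BitwiseUdfs.bitor _)
   //   spark.udf.register("bitxor", BitwiseUdfs.bitxor _)
   //   spark.udf.register("bitnot", BitwiseUdfs.bitnot _)
   object BitwiseUdfs {
     def bitand(a: Long, b: Long): Long = a & b
     def bitor(a: Long, b: Long): Long  = a | b
     def bitxor(a: Long, b: Long): Long = a ^ b
     def bitnot(a: Long): Long          = ~a

     def main(args: Array[String]): Unit = {
       // 12 = 1100, 10 = 1010 in binary.
       assert(bitand(12L, 10L) == 8L)
       assert(bitor(12L, 10L) == 14L)
       assert(bitxor(12L, 10L) == 6L)
       assert(bitnot(0L) == -1L)
     }
   }
   ```

   Once registered, the names would be usable from SQL, e.g. 
`SELECT bitand(col_a, col_b) FROM t`, without any change to the builtin 
function registry.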





[GitHub] [spark] dongjoon-hyun commented on a change in pull request #24004: [SPARK-27084][SQL] Add function alias for bitand/bitnot/bitor/bitxor

2019-03-09 Thread GitBox
dongjoon-hyun commented on a change in pull request #24004: [SPARK-27084][SQL] 
Add function alias for bitand/bitnot/bitor/bitxor
URL: https://github.com/apache/spark/pull/24004#discussion_r264020007
 
 

 ##
 File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/FunctionRegistry.scala
 ##
 @@ -511,6 +511,11 @@ object FunctionRegistry {
 expression[BitwiseOr]("|"),
 expression[BitwiseXor]("^"),
 
+expression[BitwiseAnd]("bitand"),
+expression[BitwiseNot]("bitnot"),
+expression[BitwiseOr]("bitor"),
+expression[BitwiseXor]("bitxor"),
 
 Review comment:
   Hi, @lipzhu . Thank you for the contribution. For the aliases, we put them 
together like the following:
   ```scala
   expression[BitwiseAnd]("&"),
   expression[BitwiseAnd]("bitand"),
   expression[BitwiseNot]("~"),
   expression[BitwiseNot]("bitnot"),
   expression[BitwiseOr]("|"),
   expression[BitwiseOr]("bitor"),
   expression[BitwiseXor]("^"),
   expression[BitwiseXor]("bitxor"),
   ```
   
   





[GitHub] [spark] dongjoon-hyun commented on issue #24004: [SPARK-27084][SQL] Add function alias for bitand/bitnot/bitor/bitxor

2019-03-09 Thread GitBox
dongjoon-hyun commented on issue #24004: [SPARK-27084][SQL] Add function alias 
for bitand/bitnot/bitor/bitxor
URL: https://github.com/apache/spark/pull/24004#issuecomment-471236826
 
 
   ok to test





[GitHub] [spark] AmplabJenkins removed a comment on issue #24004: [SPARK-27084][SQL] Add function alias for bitand/bitnot/bitor/bitxor

2019-03-09 Thread GitBox
AmplabJenkins removed a comment on issue #24004: [SPARK-27084][SQL] Add 
function alias for bitand/bitnot/bitor/bitxor
URL: https://github.com/apache/spark/pull/24004#issuecomment-470500924
 
 
   Can one of the admins verify this patch?





[GitHub] [spark] AmplabJenkins removed a comment on issue #24025: [SPARK-27106][SQL] merge CaseInsensitiveStringMap and DataSourceOptions

2019-03-09 Thread GitBox
AmplabJenkins removed a comment on issue #24025: [SPARK-27106][SQL] merge 
CaseInsensitiveStringMap and DataSourceOptions
URL: https://github.com/apache/spark/pull/24025#issuecomment-471236644
 
 
   Merged build finished. Test PASSed.





[GitHub] [spark] AmplabJenkins removed a comment on issue #24025: [SPARK-27106][SQL] merge CaseInsensitiveStringMap and DataSourceOptions

2019-03-09 Thread GitBox
AmplabJenkins removed a comment on issue #24025: [SPARK-27106][SQL] merge 
CaseInsensitiveStringMap and DataSourceOptions
URL: https://github.com/apache/spark/pull/24025#issuecomment-471236645
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   
https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/8718/
   Test PASSed.




