[GitHub] spark issue #22402: [SPARK-25414][SS][TEST] make it clear that the numRows m...

2018-09-13 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/22402
  
**[Test build #96056 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/96056/testReport)**
 for PR 22402 at commit 
[`0c661a0`](https://github.com/apache/spark/commit/0c661a08e74fea90b025ad21fb9da6113ef70d4c).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.


---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark issue #17400: [SPARK-19981][SQL] Respect aliases in output partitionin...

2018-09-13 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/17400
  
**[Test build #96061 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/96061/testReport)**
 for PR 17400 at commit 
[`5482b1b`](https://github.com/apache/spark/commit/5482b1be6308ddf7e77dc25c0bdfca3ede2d61a7).


---




[GitHub] spark issue #20433: [SPARK-23264][SQL] Make INTERVAL keyword optional in INT...

2018-09-13 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/20433
  
Merged build finished. Test PASSed.


---




[GitHub] spark issue #22204: [SPARK-25196][SQL] Extends Analyze commands for cached t...

2018-09-13 Thread maropu
Github user maropu commented on the issue:

https://github.com/apache/spark/pull/22204
  
retest this please


---




[GitHub] spark issue #20433: [SPARK-23264][SQL] Make INTERVAL keyword optional in INT...

2018-09-13 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/20433
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 

https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/3103/
Test PASSed.


---




[GitHub] spark issue #17400: [SPARK-19981][SQL] Respect aliases in output partitionin...

2018-09-13 Thread maropu
Github user maropu commented on the issue:

https://github.com/apache/spark/pull/17400
  
retest this please


---




[GitHub] spark issue #20433: [SPARK-23264][SQL] Make INTERVAL keyword optional in INT...

2018-09-13 Thread maropu
Github user maropu commented on the issue:

https://github.com/apache/spark/pull/20433
  
I'll update the migration guide when bumping the master branch version to the 
next one.


---




[GitHub] spark issue #20433: [SPARK-23264][SQL] Make INTERVAL keyword optional in INT...

2018-09-13 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/20433
  
**[Test build #96060 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/96060/testReport)**
 for PR 20433 at commit 
[`452566a`](https://github.com/apache/spark/commit/452566a9f3d1e87ba079c6c428aeb70cbd909607).


---




[GitHub] spark issue #22418: [SPARK-25427][SQL][TEST] Add BloomFilter creation test c...

2018-09-13 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/22418
  
**[Test build #96059 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/96059/testReport)**
 for PR 22418 at commit 
[`2e3aca5`](https://github.com/apache/spark/commit/2e3aca5f0fd0e2ec274d24774b57f7c84b28e454).


---




[GitHub] spark issue #22418: [SPARK-25427][SQL][TEST] Add BloomFilter creation test c...

2018-09-13 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22418
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 

https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/3102/
Test PASSed.


---




[GitHub] spark issue #22418: [SPARK-25427][SQL][TEST] Add BloomFilter creation test c...

2018-09-13 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22418
  
Merged build finished. Test PASSed.


---




[GitHub] spark pull request #22418: [SPARK-25427][SQL][TEST] Add BloomFilter creation...

2018-09-13 Thread dongjoon-hyun
GitHub user dongjoon-hyun opened a pull request:

https://github.com/apache/spark/pull/22418

[SPARK-25427][SQL][TEST] Add BloomFilter creation test cases

## What changes were proposed in this pull request?

Spark supports BloomFilter creation for ORC files. This PR aims to add test 
coverages to prevent accidental regressions like SPARK-12417.

## How was this patch tested?

Pass the Jenkins with newly added test cases.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dongjoon-hyun/spark SPARK-25427

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/22418.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #22418


commit 2e3aca5f0fd0e2ec274d24774b57f7c84b28e454
Author: Dongjoon Hyun 
Date:   2018-09-14T05:11:08Z

[SPARK-25427][SQL][TEST] Add BloomFilter creation test cases




---




[GitHub] spark issue #22408: [SPARK-25417][SQL] ArrayContains function may return inc...

2018-09-13 Thread dilipbiswal
Github user dilipbiswal commented on the issue:

https://github.com/apache/spark/pull/22408
  
@ueshin @gatorsmile Here are the results from Presto. Please let me know if 
you want me to try any case in particular. One thing to note is that Presto allows 
comparison between int and decimal. In our `findTightestCommonType` we don't do 
the promotion.

``` SQL
presto:default> select contains(array[1,2,3], '1');
Query 20180914_053612_6_pru6h failed: line 1:8: Unexpected parameters 
(array(integer), varchar(1)) for function contains. Expected: 
contains(array(T), T) T:comparable
select contains(array[1,2,3], '1')

presto:default> select contains(array[1,2,3], 'foo');
Query 20180914_053729_7_pru6h failed: line 1:8: Unexpected parameters 
(array(integer), varchar(3)) for function contains. Expected: 
contains(array(T), T) T:comparable
select contains(array[1,2,3], 'foo')

presto:default> select contains(array['1','2','3'], 1);
Query 20180914_053850_8_pru6h failed: line 1:8: Unexpected parameters 
(array(varchar(1)), integer) for function contains. Expected: 
contains(array(T), T) T:comparable
select contains(array['1','2','3'], 1)

presto:default> select contains(array[1,2,3], cast(1.0 as decimal(10,2)));
 _col0 
---
 true  
(1 row)

presto:default> select contains(array[1,2,3], cast(1.0 as double));
 _col0 
---
 true  
(1 row)
```
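To make the discussion concrete, here is a heavily simplified, hypothetical model of a "tightest" promotion rule in plain Scala. The type names and the function are illustrative stand-ins; Spark's actual rules live in `TypeCoercion`. The point mirrored here is that only lossless widenings succeed, and the int/decimal pair deliberately has no tightest common type:

```scala
sealed trait SimpleType
case object IntT     extends SimpleType
case object LongT    extends SimpleType
case object DoubleT  extends SimpleType
case object DecimalT extends SimpleType

// Only lossless widenings are allowed; int <-> decimal is left out,
// mirroring the limitation noted in the comment above.
def findTightestCommonType(a: SimpleType, b: SimpleType): Option[SimpleType] =
  (a, b) match {
    case (x, y) if x == y                  => Some(x)
    case (IntT, LongT) | (LongT, IntT)     => Some(LongT)
    case (IntT, DoubleT) | (DoubleT, IntT) => Some(DoubleT)
    case _                                 => None
  }

val promoted    = findTightestCommonType(IntT, LongT)    // Some(LongT)
val notPromoted = findTightestCommonType(IntT, DecimalT) // None
```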


---




[GitHub] spark issue #22408: [SPARK-25417][SQL] ArrayContains function may return inc...

2018-09-13 Thread gatorsmile
Github user gatorsmile commented on the issue:

https://github.com/apache/spark/pull/22408
  
My general idea is to avoid risky implicit type casting at the beginning. 
We can relax it in the future if needed. After all, users can manually cast 
the types after seeing a reasonable error message; this should not be a big 
deal. However, returning a confusing result due to implicit type casting is not 
good in general. 
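The data-dependent surprise from a risky implicit narrowing cast can be illustrated with plain Scala collections (an analogy only, not Spark's actual cast path):

```scala
val xs = Seq(1, 2, 3)

// Narrowing the value to the element type silently truncates 1.23 to 1,
// so the membership test "succeeds" even though 1.23 is not in the sequence.
val narrowed = xs.contains(1.23.toInt)          // true, which is misleading

// Widening the elements instead keeps the comparison exact.
val widened = xs.map(_.toDouble).contains(1.23) // false, the correct answer
```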


---




[GitHub] spark issue #22408: [SPARK-25417][SQL] ArrayContains function may return inc...

2018-09-13 Thread gatorsmile
Github user gatorsmile commented on the issue:

https://github.com/apache/spark/pull/22408
  
What is the corresponding behavior in Presto? 


---




[GitHub] spark pull request #22410: [SPARK-25418][SQL] The metadata of DataSource tab...

2018-09-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/spark/pull/22410


---




[GitHub] spark pull request #22408: [SPARK-25417][SQL] ArrayContains function may ret...

2018-09-13 Thread dilipbiswal
Github user dilipbiswal commented on a diff in the pull request:

https://github.com/apache/spark/pull/22408#discussion_r217603789
  
--- Diff: 
sql/core/src/test/scala/org/apache/spark/sql/DataFrameFunctionsSuite.scala ---
@@ -735,6 +735,44 @@ class DataFrameFunctionsSuite extends QueryTest with 
SharedSQLContext {
   df.selectExpr("array_contains(array(1, null), array(1, null)[0])"),
   Seq(Row(true), Row(true))
 )
+
+checkAnswer(
+  df.selectExpr("array_contains(array(1), 1.23D)"),
+  Seq(Row(false), Row(false))
+)
+
+checkAnswer(
+  df.selectExpr("array_contains(array(1), 1.0D)"),
+  Seq(Row(true), Row(true))
+)
+
+checkAnswer(
+  df.selectExpr("array_contains(array(1.0D), 1)"),
+  Seq(Row(true), Row(true))
+)
+
+checkAnswer(
+  df.selectExpr("array_contains(array(1.23D), 1)"),
+  Seq(Row(false), Row(false))
+)
+
+checkAnswer(
+  df.selectExpr("array_contains(array(array(1)), array(1.0D))"),
+  Seq(Row(true), Row(true))
+)
+
+checkAnswer(
+  df.selectExpr("array_contains(array(array(1)), array(1.23D))"),
+  Seq(Row(false), Row(false))
+)
+
+intercept[AnalysisException] {
+  df.selectExpr("array_contains(array(1), 1.23)")
--- End diff --

@ueshin Sure. We could use `findWiderCommonType`. My thinking was, since we 
are injecting this cast implicitly, we should pick the safest cast so we don't 
see data-dependent surprises. Users could always specify an explicit cast and 
take the responsibility for the result :-)

However, I don't have a strong opinion. I will change it to use 
`findWiderCommonType`.


---




[GitHub] spark pull request #22038: [SPARK-25056][SQL] Unify the InConversion and Bin...

2018-09-13 Thread maropu
Github user maropu commented on a diff in the pull request:

https://github.com/apache/spark/pull/22038#discussion_r217603577
  
--- Diff: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
 ---
@@ -485,8 +494,8 @@ object TypeCoercion {
   i
 }
 
-  case i @ In(a, b) if b.exists(_.dataType != a.dataType) =>
-findWiderCommonType(i.children.map(_.dataType)) match {
+  case i @ In(value, list) if list.exists(_.dataType != 
value.dataType) =>
+findInCommonType(value.dataType, list.map(_.dataType), conf) match 
{
--- End diff --

If `findInCommonType` is used only for this case, can we inline 
`findInCommonType` here?


---




[GitHub] spark issue #22038: [SPARK-25056][SQL] Unify the InConversion and BinaryComp...

2018-09-13 Thread maropu
Github user maropu commented on the issue:

https://github.com/apache/spark/pull/22038
  
@wangyum Can you put a summary of the other databases' behaviours in the 
PR description?


---




[GitHub] spark issue #22410: [SPARK-25418][SQL] The metadata of DataSource table shou...

2018-09-13 Thread gatorsmile
Github user gatorsmile commented on the issue:

https://github.com/apache/spark/pull/22410
  
LGTM

Thanks! Merged to master.


---




[GitHub] spark pull request #22410: [SPARK-25418][SQL] The metadata of DataSource tab...

2018-09-13 Thread gatorsmile
Github user gatorsmile commented on a diff in the pull request:

https://github.com/apache/spark/pull/22410#discussion_r217603253
  
--- Diff: 
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -1309,6 +1312,8 @@ object HiveExternalCatalog {
 
   val CREATED_SPARK_VERSION = SPARK_SQL_PREFIX + "create.version"
 
+  val HIVE_GENERATED_STORAGE_PROPERTIES = Set(SERIALIZATION_FORMAT)
--- End diff --

We can add more in the future. Basically, these properties are useless to 
Spark data source tables. 


---




[GitHub] spark issue #22408: [SPARK-25417][SQL] ArrayContains function may return inc...

2018-09-13 Thread dilipbiswal
Github user dilipbiswal commented on the issue:

https://github.com/apache/spark/pull/22408
  
@maropu This is the case we were discussing, for which @ueshin suggested 
using `findWiderTypeWithoutStringPromotion`. Let's see what @cloud-fan and 
@gatorsmile think and we will do accordingly. We are picking a more restrictive 
cast (since we are injecting this implicitly). 


---




[GitHub] spark issue #22408: [SPARK-25417][SQL] ArrayContains function may return inc...

2018-09-13 Thread maropu
Github user maropu commented on the issue:

https://github.com/apache/spark/pull/22408
  
How about this decimal case?
```
// v2.3.1
scala> spark.range(10).selectExpr("cast(id AS decimal(9, 0)) as 
value").selectExpr("array_contains(array(1, 2, 3), value)").show
+--------------------------------------------------+
|array_contains(array(1, 2, 3), CAST(value AS INT))|
+--------------------------------------------------+
|                                             false|
|                                              true|
|                                              true|
|                                              true|
|                                             false|
|                                             false|
|                                             false|
|                                             false|
|                                             false|
|                                             false|
+--------------------------------------------------+

// this patch
scala> spark.range(10).selectExpr("cast(id AS decimal(9, 0)) as 
value").selectExpr("array_contains(array(1, 2, 3), value)").show
org.apache.spark.sql.AnalysisException: cannot resolve 
'array_contains(array(1, 2, 3), `value`)' due to data type mismatch: Input to 
function array_contains should have been array followed by a value with same 
element type, but it's [array&lt;int&gt;, decimal(9,0)].; line 1 pos 0;
'Project [unresolvedalias(array_contains(array(1, 2, 3), value#2), 
Some())]
+- Project [cast(id#0L as decimal(9,0)) AS value#2]
   +- Range (0, 10, step=1, splits=Some(4))
```


---




[GitHub] spark pull request #22408: [SPARK-25417][SQL] ArrayContains function may ret...

2018-09-13 Thread ueshin
Github user ueshin commented on a diff in the pull request:

https://github.com/apache/spark/pull/22408#discussion_r217601459
  
--- Diff: 
sql/core/src/test/scala/org/apache/spark/sql/DataFrameFunctionsSuite.scala ---
@@ -735,6 +735,44 @@ class DataFrameFunctionsSuite extends QueryTest with 
SharedSQLContext {
   df.selectExpr("array_contains(array(1, null), array(1, null)[0])"),
   Seq(Row(true), Row(true))
 )
+
+checkAnswer(
+  df.selectExpr("array_contains(array(1), 1.23D)"),
+  Seq(Row(false), Row(false))
+)
+
+checkAnswer(
+  df.selectExpr("array_contains(array(1), 1.0D)"),
+  Seq(Row(true), Row(true))
+)
+
+checkAnswer(
+  df.selectExpr("array_contains(array(1.0D), 1)"),
+  Seq(Row(true), Row(true))
+)
+
+checkAnswer(
+  df.selectExpr("array_contains(array(1.23D), 1)"),
+  Seq(Row(false), Row(false))
+)
+
+checkAnswer(
+  df.selectExpr("array_contains(array(array(1)), array(1.0D))"),
+  Seq(Row(true), Row(true))
+)
+
+checkAnswer(
+  df.selectExpr("array_contains(array(array(1)), array(1.23D))"),
+  Seq(Row(false), Row(false))
+)
+
+intercept[AnalysisException] {
+  df.selectExpr("array_contains(array(1), 1.23)")
--- End diff --

Hmm, if `array_contains(array(1), '1')` works in 2.3 as shown in 
https://github.com/apache/spark/pull/22408#issuecomment-421223501, shouldn't we 
use `findWiderCommonType`?


---




[GitHub] spark issue #22408: [SPARK-25417][SQL] ArrayContains function may return inc...

2018-09-13 Thread dilipbiswal
Github user dilipbiswal commented on the issue:

https://github.com/apache/spark/pull/22408
  
@maropu I thought we added this function in 2.4 :-). Yeah, I will update 
the migration guide. 


---




[GitHub] spark issue #22408: [SPARK-25417][SQL] ArrayContains function may return inc...

2018-09-13 Thread maropu
Github user maropu commented on the issue:

https://github.com/apache/spark/pull/22408
  
Please update the migration guide.


---




[GitHub] spark pull request #22408: [SPARK-25417][SQL] ArrayContains function may ret...

2018-09-13 Thread ueshin
Github user ueshin commented on a diff in the pull request:

https://github.com/apache/spark/pull/22408#discussion_r217593872
  
--- Diff: 
sql/core/src/test/scala/org/apache/spark/sql/DataFrameFunctionsSuite.scala ---
@@ -735,6 +735,44 @@ class DataFrameFunctionsSuite extends QueryTest with 
SharedSQLContext {
   df.selectExpr("array_contains(array(1, null), array(1, null)[0])"),
   Seq(Row(true), Row(true))
 )
+
+checkAnswer(
+  df.selectExpr("array_contains(array(1), 1.23D)"),
+  Seq(Row(false), Row(false))
+)
+
+checkAnswer(
+  df.selectExpr("array_contains(array(1), 1.0D)"),
+  Seq(Row(true), Row(true))
+)
+
+checkAnswer(
+  df.selectExpr("array_contains(array(1.0D), 1)"),
+  Seq(Row(true), Row(true))
+)
+
+checkAnswer(
+  df.selectExpr("array_contains(array(1.23D), 1)"),
+  Seq(Row(false), Row(false))
+)
+
+checkAnswer(
+  df.selectExpr("array_contains(array(array(1)), array(1.0D))"),
+  Seq(Row(true), Row(true))
+)
+
+checkAnswer(
+  df.selectExpr("array_contains(array(array(1)), array(1.23D))"),
+  Seq(Row(false), Row(false))
+)
+
+intercept[AnalysisException] {
+  df.selectExpr("array_contains(array(1), 1.23)")
--- End diff --

Good point. Yes, that can be a lossy conversion.
It seems `BinaryComparison` uses the wider `DecimalType`, so we could follow 
that behavior and use `findWiderTypeWithoutStringPromotion`.
cc @gatorsmile @cloud-fan 
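As a sketch of why comparing in the wider decimal type avoids the loss (plain Scala `BigDecimal`, purely illustrative and not Spark's `DecimalType` machinery):

```scala
// In the wider decimal type the comparison is exact:
val widerEqual = BigDecimal(1) == BigDecimal("1.00")  // true, no precision lost

// Narrowing the decimal side to int first is lossy:
val lossyEqual = BigDecimal("1.23").toInt == 1        // true, but 1.23 != 1
```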


---




[GitHub] spark pull request #22410: [SPARK-25418][SQL] The metadata of DataSource tab...

2018-09-13 Thread ueshin
Github user ueshin commented on a diff in the pull request:

https://github.com/apache/spark/pull/22410#discussion_r217592782
  
--- Diff: 
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -1309,6 +1312,8 @@ object HiveExternalCatalog {
 
   val CREATED_SPARK_VERSION = SPARK_SQL_PREFIX + "create.version"
 
+  val HIVE_GENERATED_STORAGE_PROPERTIES = Set(SERIALIZATION_FORMAT)
--- End diff --

Actually, the only Hive-generated storage property I think we should exclude 
for now is this one, but we might have more in the future, so I'd keep the plural 
"properties" and add them to this set as needed. WDYT?
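A minimal sketch of the pruning being discussed. Only `serialization.format` (the value of `SERIALIZATION_FORMAT`) is taken from the PR; `pruneHiveGeneratedProps` and the sample map are hypothetical stand-ins for the catalog logic:

```scala
val HIVE_GENERATED_STORAGE_PROPERTIES = Set("serialization.format")

// Drop Hive-generated storage properties, which are useless to Spark
// data source tables, before exposing the table metadata.
def pruneHiveGeneratedProps(props: Map[String, String]): Map[String, String] =
  props.filterNot { case (key, _) =>
    HIVE_GENERATED_STORAGE_PROPERTIES.contains(key)
  }

val stored = Map("serialization.format" -> "1", "path" -> "/tmp/t")
val pruned = pruneHiveGeneratedProps(stored)  // Map("path" -> "/tmp/t")
```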


---




[GitHub] spark issue #18106: [SPARK-20754][SQL] Support TRUNC (number)

2018-09-13 Thread dongjoon-hyun
Github user dongjoon-hyun commented on the issue:

https://github.com/apache/spark/pull/18106
  
+100, @wangyum . Thanks. :)


---




[GitHub] spark pull request #22410: [SPARK-25418][SQL] The metadata of DataSource tab...

2018-09-13 Thread dongjoon-hyun
Github user dongjoon-hyun commented on a diff in the pull request:

https://github.com/apache/spark/pull/22410#discussion_r217590735
  
--- Diff: 
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -1309,6 +1312,8 @@ object HiveExternalCatalog {
 
   val CREATED_SPARK_VERSION = SPARK_SQL_PREFIX + "create.version"
 
+  val HIVE_GENERATED_STORAGE_PROPERTIES = Set(SERIALIZATION_FORMAT)
--- End diff --

@ueshin The title says `Hive-generated storage properties`, but this PR 
excludes only this one. Could you add more? Otherwise, can we make this a 
SQLConf in order to be configurable?


---




[GitHub] spark issue #18106: [SPARK-20754][SQL] Support TRUNC (number)

2018-09-13 Thread wangyum
Github user wangyum commented on the issue:

https://github.com/apache/spark/pull/18106
  
@dongjoon-hyun Actually, `TRUNC (number)` is not resolved yet. I will fix it soon.
https://issues.apache.org/jira/browse/SPARK-23906


---




[GitHub] spark issue #22364: [SPARK-25379][SQL] Improve AttributeSet and ColumnPrunin...

2018-09-13 Thread maropu
Github user maropu commented on the issue:

https://github.com/apache/spark/pull/22364
  
Can we replace the syntax (`(outputSetA -- outputSetB).nonEmpty`) in other 
places, too? e.g., 

https://github.com/apache/spark/blob/9deddbb13edebfefb3fd03f063679ed12e73c575/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/CostBasedJoinReorder.scala#L294
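With plain Scala sets (`AttributeSet` behaves analogously), the two forms are equivalent, but `subsetOf` avoids materializing the difference just to test emptiness:

```scala
val outputSetA = Set("a", "b")
val outputSetB = Set("a", "b", "c")

val viaDifference = (outputSetA -- outputSetB).isEmpty  // builds a throwaway set
val viaSubsetOf   = outputSetA.subsetOf(outputSetB)     // short-circuits instead

// (x -- y).nonEmpty is then just !x.subsetOf(y)
```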


---




[GitHub] spark issue #22417: [SPARK-25426][SQL] Handles subexpression elimination con...

2018-09-13 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/22417
  
**[Test build #96058 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/96058/testReport)**
 for PR 22417 at commit 
[`35e7911`](https://github.com/apache/spark/commit/35e7911958d5be8d7b66803ba7e11cd22bc4bbe5).


---




[GitHub] spark issue #22417: [SPARK-25426][SQL] Handles subexpression elimination con...

2018-09-13 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22417
  
Merged build finished. Test PASSed.


---




[GitHub] spark issue #22417: [SPARK-25426][SQL] Handles subexpression elimination con...

2018-09-13 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22417
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 

https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/3101/
Test PASSed.


---




[GitHub] spark pull request #22417: [SPARK-25426][SQL] Handles subexpression eliminat...

2018-09-13 Thread maropu
GitHub user maropu opened a pull request:

https://github.com/apache/spark/pull/22417

[SPARK-25426][SQL] Handles subexpression elimination config inside 
CodeGeneratorWithInterpretedFallback

## What changes were proposed in this pull request?
This PR handles the subexpression elimination config inside 
`CodeGeneratorWithInterpretedFallback`, and then removes the duplicate fallback 
logic in `UnsafeProjection`.


This PR comes from #22355.
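A stripped-down, hypothetical sketch of the codegen-with-interpreted-fallback pattern this PR centralizes. All names here are illustrative, not Spark's API; the point is that the try/catch fallback lives in one factory instead of being duplicated per caller:

```scala
trait Projection { def apply(x: Int): Int }

// The "compiled" path, which may fail (e.g. generated code too large).
def createCodeGeneratedObject(failCodegen: Boolean): Projection = {
  if (failCodegen) throw new RuntimeException("codegen failed")
  (x: Int) => x * 2
}

// The interpreted path, always available.
def createInterpretedObject(): Projection = (x: Int) => x * 2

// The fallback logic in one place.
def createObject(failCodegen: Boolean): Projection =
  try createCodeGeneratedObject(failCodegen)
  catch { case scala.util.control.NonFatal(_) => createInterpretedObject() }

val viaFallback = createObject(failCodegen = true)(21)  // 42, interpreted
val viaCodegen  = createObject(failCodegen = false)(21) // 42, "compiled"
```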

## How was this patch tested?
Added tests in `CodeGeneratorWithInterpretedFallbackSuite`.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/maropu/spark SPARK-25426

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/22417.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #22417


commit 35e7911958d5be8d7b66803ba7e11cd22bc4bbe5
Author: Takeshi Yamamuro 
Date:   2018-09-14T02:23:30Z

Fix




---




[GitHub] spark issue #22413: [SPARK-25425][SQL] Extra options overwrite session optio...

2018-09-13 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/22413
  
**[Test build #96057 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/96057/testReport)**
 for PR 22413 at commit 
[`a443054`](https://github.com/apache/spark/commit/a4430544d51a35c1c662494e53157f77a737f252).


---




[GitHub] spark issue #22413: [SPARK-25425][SQL] Extra options overwrite session optio...

2018-09-13 Thread maropu
Github user maropu commented on the issue:

https://github.com/apache/spark/pull/22413
  
retest this please


---




[GitHub] spark issue #22402: [SPARK-25414][SS][TEST] make it clear that the numRows m...

2018-09-13 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/22402
  
**[Test build #96056 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/96056/testReport)**
 for PR 22402 at commit 
[`0c661a0`](https://github.com/apache/spark/commit/0c661a08e74fea90b025ad21fb9da6113ef70d4c).


---




[GitHub] spark issue #22402: [SPARK-25414][SS] The numInputRows metrics can be incorr...

2018-09-13 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22402
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 

https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/3100/
Test PASSed.


---




[GitHub] spark issue #22402: [SPARK-25414][SS] The numInputRows metrics can be incorr...

2018-09-13 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22402
  
Merged build finished. Test PASSed.


---




[GitHub] spark pull request #22411: [SPARK-25421][SQL] Abstract an output path field ...

2018-09-13 Thread LantaoJin
Github user LantaoJin commented on a diff in the pull request:

https://github.com/apache/spark/pull/22411#discussion_r217584439
  
--- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlanInfo.scala ---
@@ -18,6 +18,7 @@
 package org.apache.spark.sql.execution
 
 import org.apache.spark.annotation.DeveloperApi
+import org.apache.spark.sql.execution.command.{DataWritingCommand, 
DataWritingCommandExec}
--- End diff --

Will remove it


---




[GitHub] spark pull request #22411: [SPARK-25421][SQL] Abstract an output path field ...

2018-09-13 Thread LantaoJin
Github user LantaoJin commented on a diff in the pull request:

https://github.com/apache/spark/pull/22411#discussion_r217584359
  
--- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
 ---
@@ -440,7 +440,7 @@ case class DataSource(
 // ordering of data.logicalPlan (partition columns are all moved after 
data column).  This
 // will be adjusted within InsertIntoHadoopFsRelation.
 InsertIntoHadoopFsRelationCommand(
-  outputPath = outputPath,
+  outputFsPath = outputPath,
--- End diff --

This field overrides `outputPath` in `DataWritingCommand`, and the return 
type is different (`Path` vs `Option[Path]`), so I renamed it.
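A sketch of the clash described above, with the path types simplified to `String` and the class names made up for illustration: the parent trait exposes an `Option`-typed member, so the concrete field takes a distinct name and implements the parent member in terms of it:

```scala
trait DataWritingCommandLike { def outputPath: Option[String] }

// The concrete command stores a plain path under a different name and
// satisfies the Option-typed parent member by wrapping it.
case class InsertCommandSketch(outputFsPath: String) extends DataWritingCommandLike {
  override def outputPath: Option[String] = Some(outputFsPath)
}

val cmd      = InsertCommandSketch("/data/out")
val resolved = cmd.outputPath  // Some("/data/out")
```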


---




[GitHub] spark pull request #22415: [SPARK-25291][K8S] Fixing Flakiness of Executor P...

2018-09-13 Thread liyinan926
Github user liyinan926 commented on a diff in the pull request:

https://github.com/apache/spark/pull/22415#discussion_r217583729
  
--- Diff: 
resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/KubernetesSuite.scala
 ---
@@ -218,17 +223,25 @@ private[spark] class KubernetesSuite extends 
SparkFunSuite
   .getItems
   .get(0)
 driverPodChecker(driverPod)
-
-val executorPods = kubernetesTestComponents.kubernetesClient
+val execPods = scala.collection.mutable.Stack[Pod]()
+val execWatcher = kubernetesTestComponents.kubernetesClient
   .pods()
   .withLabel("spark-app-locator", appLocator)
   .withLabel("spark-role", "executor")
-  .list()
-  .getItems
-executorPods.asScala.foreach { pod =>
-  executorPodChecker(pod)
-}
-
+  .watch(new Watcher[Pod] {
--- End diff --

Why you chose to use a watch instead of listing?


---




[GitHub] spark issue #22414: [SPARK-25424][SQL] Window duration and slide duration wi...

2018-09-13 Thread maropu
Github user maropu commented on the issue:

https://github.com/apache/spark/pull/22414
  
btw, what's the current behaviour of Spark in case of a negative value?
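For context, a minimal sketch of the positivity check this PR's tests exercise (not Spark's actual `TimeWindow` code; the function name and message wording are illustrative). `require` prefixes its message with `"requirement failed: "`, which is why the tests match on that prefix:

```scala
// Reject non-positive durations up front.
def checkDuration(name: String, micros: Long): Unit =
  require(micros > 0, s"The $name duration must be a positive integer.")

val caught =
  try { checkDuration("window", -2L); None }
  catch { case e: IllegalArgumentException => Some(e.getMessage) }
// caught: Some("requirement failed: The window duration must be a positive integer.")
```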


---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request #22414: [SPARK-25424][SQL] Window duration and slide dura...

2018-09-13 Thread maropu
Github user maropu commented on a diff in the pull request:

https://github.com/apache/spark/pull/22414#discussion_r217576901
  
--- Diff: 
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/TimeWindowSuite.scala
 ---
@@ -122,11 +122,57 @@ class TimeWindowSuite extends SparkFunSuite with 
ExpressionEvalHelper with Priva
 }
   }
 
+  test("windowDuration and slideDuration should be positive.") {
+import org.scalatest.prop.TableDrivenPropertyChecks.{Table, forAll => 
forAllRows}
+val fractions = Table(
+  ("windowDuration", "slideDuration"), // First tuple defines column 
names
+  ("-2 seconds", "1 seconds"),
+  ("1 seconds", "-2 seconds"),
+  ("0 seconds", "1 seconds"),
+  ("1 seconds", "0 seconds"),
+  ("-2 seconds", "-2 seconds"),
+  ("-2 seconds", "-2 hours"),
+  ("0 seconds", "0 seconds"),
+  (-2L, 2L),
+  (2L, -2L),
+  (-2, 2),
+  (2, -2)
+)
+forAllRows(fractions) { (windowDuration: Any, slideDuration: Any) =>
+  logInfo(s"windowDuration = $windowDuration slideDuration = 
$slideDuration")
+
+  val thrown = intercept[IllegalArgumentException] {
+(windowDuration, slideDuration) match {
+  case (wd: String, sd: String) => TimeWindow(Literal(10L), wd, 
sd, "0 seconds")
+  case (wd: Long, sd: Long) => TimeWindow(Literal(10L), wd, sd, 0)
+  case (wd: Int, sd: Int) => TimeWindow(Literal(10L), wd, sd, 0)
+}
+
+  }
+  def isNonPositive(s: Any): Boolean = {
+val trimmed = s.toString.trim
+trimmed.startsWith("-") || trimmed.startsWith("0")
+  }
+  val expectedMsg =
+if (isNonPositive(windowDuration)) {
+  s"requirement failed: The window duration must be a " +
--- End diff --

remove `s`.


---




[GitHub] spark pull request #22414: [SPARK-25424][SQL] Window duration and slide dura...

2018-09-13 Thread maropu
Github user maropu commented on a diff in the pull request:

https://github.com/apache/spark/pull/22414#discussion_r217576920
  
--- Diff: 
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/TimeWindowSuite.scala
 ---
@@ -122,11 +122,57 @@ class TimeWindowSuite extends SparkFunSuite with 
ExpressionEvalHelper with Priva
 }
   }
 
+  test("windowDuration and slideDuration should be positive.") {
+import org.scalatest.prop.TableDrivenPropertyChecks.{Table, forAll => 
forAllRows}
+val fractions = Table(
+  ("windowDuration", "slideDuration"), // First tuple defines column 
names
+  ("-2 seconds", "1 seconds"),
+  ("1 seconds", "-2 seconds"),
+  ("0 seconds", "1 seconds"),
+  ("1 seconds", "0 seconds"),
+  ("-2 seconds", "-2 seconds"),
+  ("-2 seconds", "-2 hours"),
+  ("0 seconds", "0 seconds"),
+  (-2L, 2L),
+  (2L, -2L),
+  (-2, 2),
+  (2, -2)
+)
+forAllRows(fractions) { (windowDuration: Any, slideDuration: Any) =>
+  logInfo(s"windowDuration = $windowDuration slideDuration = 
$slideDuration")
+
+  val thrown = intercept[IllegalArgumentException] {
+(windowDuration, slideDuration) match {
+  case (wd: String, sd: String) => TimeWindow(Literal(10L), wd, 
sd, "0 seconds")
+  case (wd: Long, sd: Long) => TimeWindow(Literal(10L), wd, sd, 0)
+  case (wd: Int, sd: Int) => TimeWindow(Literal(10L), wd, sd, 0)
+}
+
+  }
+  def isNonPositive(s: Any): Boolean = {
+val trimmed = s.toString.trim
+trimmed.startsWith("-") || trimmed.startsWith("0")
+  }
+  val expectedMsg =
+if (isNonPositive(windowDuration)) {
+  s"requirement failed: The window duration must be a " +
+s"positive integer, long or string literal, found: 
${windowDuration}"
+} else if (isNonPositive(slideDuration)) {
+  s"requirement failed: The slide duration must be a " +
--- End diff --

ditto


---




[GitHub] spark pull request #22402: [SPARK-25414][SS] The numInputRows metrics can be...

2018-09-13 Thread cloud-fan
Github user cloud-fan commented on a diff in the pull request:

https://github.com/apache/spark/pull/22402#discussion_r217576877
  
--- Diff: 
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQuerySuite.scala
 ---
@@ -460,9 +460,9 @@ class StreamingQuerySuite extends StreamTest with 
BeforeAndAfter with Logging wi
 val streamingInputDF = 
createSingleTriggerStreamingDF(streamingTriggerDF).toDF("value")
 
 val progress = 
getFirstProgress(streamingInputDF.join(streamingInputDF, "value"))
-assert(progress.numInputRows === 20) // data is read multiple times in 
self-joins
--- End diff --

The exchange reuse is not triggered here, because the project on one side
is eliminated. In the Kafka test, we have a cast in the project, so Spark
doesn't eliminate the project on either side, and that triggers exchange reuse.


---




[GitHub] spark pull request #22414: [SPARK-25424][SQL] Window duration and slide dura...

2018-09-13 Thread maropu
Github user maropu commented on a diff in the pull request:

https://github.com/apache/spark/pull/22414#discussion_r217576774
  
--- Diff: 
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/TimeWindowSuite.scala
 ---
@@ -122,11 +122,57 @@ class TimeWindowSuite extends SparkFunSuite with 
ExpressionEvalHelper with Priva
 }
   }
 
+  test("windowDuration and slideDuration should be positive.") {
+import org.scalatest.prop.TableDrivenPropertyChecks.{Table, forAll => 
forAllRows}
--- End diff --

Please move this import to the top of the file.


---




[GitHub] spark pull request #22395: [SPARK-16323][SQL] Add IntegralDivide expression

2018-09-13 Thread cloud-fan
Github user cloud-fan commented on a diff in the pull request:

https://github.com/apache/spark/pull/22395#discussion_r217576598
  
--- Diff: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/arithmetic.scala
 ---
@@ -314,6 +314,27 @@ case class Divide(left: Expression, right: Expression) 
extends DivModLike {
   override def evalOperation(left: Any, right: Any): Any = div(left, right)
 }
 
+@ExpressionDescription(
+  usage = "expr1 _FUNC_ expr2 - Returns `expr1`/`expr2`. It performs 
integral division.",
+  examples = """
+Examples:
+  > SELECT 3 _FUNC_ 2;
+   1
+  """,
+  since = "3.0.0")
+case class IntegralDivide(left: Expression, right: Expression) extends 
DivModLike {
+
+  override def inputType: AbstractDataType = IntegralType
+
+  override def symbol: String = "/"
+  override def sqlOperator: String = "div"
+
+  private lazy val div: (Any, Any) => Any = dataType match {
+case i: IntegralType => i.integral.asInstanceOf[Integral[Any]].quot
+  }
+  override def evalOperation(left: Any, right: Any): Any = div(left, right)
--- End diff --

Then I'd prefer always returning long, since it was the behavior before. We 
can consider changing the behavior in another PR.
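For reference, `Integral.quot` is truncating division (it rounds toward zero), which is what makes it match Java/SQL integral division. A plain-Python sketch of those semantics (hypothetical helper, not Spark code):

```python
def integral_divide(a: int, b: int) -> int:
    # Truncate toward zero, like Scala's Integral.quot and Java's integer `/`.
    # Python's // floors instead, so compute on magnitudes and fix the sign.
    q = abs(a) // abs(b)
    return q if (a >= 0) == (b >= 0) else -q

print(integral_divide(3, 2))   # 1, matching the `SELECT 3 div 2` example
print(integral_divide(-7, 2))  # -3 (Python's -7 // 2 would give -4)
```

Whether the result type should follow the inputs or always be long is exactly the open question in the comment above.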


---




[GitHub] spark pull request #22414: [SPARK-25424][SQL] Window duration and slide dura...

2018-09-13 Thread maropu
Github user maropu commented on a diff in the pull request:

https://github.com/apache/spark/pull/22414#discussion_r217576554
  
--- Diff: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/TimeWindow.scala
 ---
@@ -35,29 +35,15 @@ case class TimeWindow(
   with ImplicitCastInputTypes
   with Unevaluable
   with NonSQLExpression {
+  require(windowDuration > 0, "The window duration must be " +
+s"a positive integer, long or string literal, found: $windowDuration")
+  require(slideDuration > 0, "The slide duration must be " +
+s"a positive integer, long or string literal, found: $slideDuration")
 
   //
   // SQL Constructors
   //
 
-  def this(
-  timeColumn: Expression,
-  windowDuration: Expression,
-  slideDuration: Expression,
-  startTime: Expression) = {
-this(timeColumn, TimeWindow.parseExpression(windowDuration),
-  TimeWindow.parseExpression(slideDuration), 
TimeWindow.parseExpression(startTime))
-  }
-
-  def this(timeColumn: Expression, windowDuration: Expression, 
slideDuration: Expression) = {
-this(timeColumn, TimeWindow.parseExpression(windowDuration),
-  TimeWindow.parseExpression(slideDuration), 0)
-  }
-
-  def this(timeColumn: Expression, windowDuration: Expression) = {
-this(timeColumn, windowDuration, windowDuration)
-  }
--- End diff --

You cannot remove these constructors.
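As a sketch of how the `require(...)` guards in the diff behave (plain Python standing in for Scala's `require`, which throws `IllegalArgumentException`; names are illustrative, not Spark code):

```python
def time_window(window_duration: int, slide_duration: int, start_time: int = 0):
    # Mirror the two require(...) checks from the diff above.
    if window_duration <= 0:
        raise ValueError("requirement failed: The window duration must be a "
                         f"positive integer, long or string literal, found: {window_duration}")
    if slide_duration <= 0:
        raise ValueError("requirement failed: The slide duration must be a "
                         f"positive integer, long or string literal, found: {slide_duration}")
    return (window_duration, slide_duration, start_time)

time_window(10, 5)  # OK
# time_window(-2, 5) would raise with "...window duration must be a positive..."
```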


---




[GitHub] spark issue #21217: [SPARK-24151][SQL] Fix CURRENT_DATE, CURRENT_TIMESTAMP t...

2018-09-13 Thread viirya
Github user viirya commented on the issue:

https://github.com/apache/spark/pull/21217
  
@HyukjinKwon thanks for pinging me. I'd wait for others to take this over first; if no one does, I can do it later.


---




[GitHub] spark issue #22414: [SPARK-25424][SQL] Window duration and slide duration wi...

2018-09-13 Thread raghavgautam
Github user raghavgautam commented on the issue:

https://github.com/apache/spark/pull/22414
  
@tdas Can you please take a look ?


---




[GitHub] spark pull request #18106: [SPARK-20754][SQL] Support TRUNC (number)

2018-09-13 Thread wangyum
Github user wangyum closed the pull request at:

https://github.com/apache/spark/pull/18106


---




[GitHub] spark issue #22392: [SPARK-23200] Reset Kubernetes-specific config on Checkp...

2018-09-13 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22392
  
Merged build finished. Test PASSed.


---




[GitHub] spark issue #22392: [SPARK-23200] Reset Kubernetes-specific config on Checkp...

2018-09-13 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22392
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/96055/
Test PASSed.


---




[GitHub] spark issue #22392: [SPARK-23200] Reset Kubernetes-specific config on Checkp...

2018-09-13 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/22392
  
**[Test build #96055 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/96055/testReport)**
 for PR 22392 at commit 
[`f457d02`](https://github.com/apache/spark/commit/f457d023e8e488b89b97f5b3b9936831d7fc9bb6).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.


---




[GitHub] spark issue #21217: [SPARK-24151][SQL] Fix CURRENT_DATE, CURRENT_TIMESTAMP t...

2018-09-13 Thread HyukjinKwon
Github user HyukjinKwon commented on the issue:

https://github.com/apache/spark/pull/21217
  
Can anyone take over this then?

cc @kiszk, @mgaido91 and @viirya as well FYI.


---




[GitHub] spark issue #19538: [SPARK-20393][WEBU UI][BACKPORT-2.0] Strengthen Spark to...

2018-09-13 Thread ambauma
Github user ambauma commented on the issue:

https://github.com/apache/spark/pull/19538
  
No argument.

On Thu, Sep 13, 2018, 12:25 PM Dongjoon Hyun wrote:

> @ambauma  Unfortunately, it seems to be too
> old and the PR on 1.6 also is closed. Can we close this, too?
>
> My goal is to get the fix into the official branch 1.6 to reduce the
> number of forks necessary and so that if CVE-2018- comes and I've moved
> on my replacement doesn't have to apply this plus that.
>



---




[GitHub] spark issue #22227: [SPARK-25202] [SQL] Implements split with limit sql func...

2018-09-13 Thread HyukjinKwon
Github user HyukjinKwon commented on the issue:

https://github.com/apache/spark/pull/7
  
Seems fine otherwise.


---




[GitHub] spark pull request #22227: [SPARK-25202] [SQL] Implements split with limit s...

2018-09-13 Thread HyukjinKwon
Github user HyukjinKwon commented on a diff in the pull request:

https://github.com/apache/spark/pull/7#discussion_r217563904
  
--- Diff: python/pyspark/sql/functions.py ---
@@ -1671,18 +1671,32 @@ def repeat(col, n):
 
 @since(1.5)
 @ignore_unicode_prefix
-def split(str, pattern):
+def split(str, pattern, limit=-1):
 """
-Splits str around pattern (pattern is a regular expression).
+Splits str around matches of the given pattern.
 
-.. note:: pattern is a string represent the regular expression.
+:param str: a string expression to split
+:param pattern: a string representing a regular expression. The regex 
string should be
+  a Java regular expression.
+:param limit: an integer which controls the number of times `pattern` 
is applied.
 
->>> df = spark.createDataFrame([('ab12cd',)], ['s',])
->>> df.select(split(df.s, '[0-9]+').alias('s')).collect()
-[Row(s=[u'ab', u'cd'])]
+* ``limit > 0``: The resulting array's length will not be more 
than `limit`, and the
+ resulting array's last entry will contain all 
input beyond the last
+ matched pattern.
+* ``limit <= 0``: `pattern` will be applied as many times as 
possible, and the resulting
+  array can be of any size.
+
+.. versionchanged:: 3.0
+   `split` now takes an optional `limit` field. If not provided, 
default limit value is -1.
+
+>>> df = spark.createDataFrame([('oneAtwoBthreeC',)], ['s',])
+>>> df.select(split(df.s, '[ABC]', 2).alias('s')).collect()
+[Row(s=[u'one', u'twoBthreeC'])]
+>>> df.select(split(df.s, '[ABC]', -1).alias('s')).collect()
--- End diff --

Let's turn this into an example without the limit argument.
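The `limit` semantics documented in the diff follow Java's `String.split`. A rough plain-Python equivalent using `re.split` (an approximation: Python's regex dialect differs from Java's, and Java's `limit == 0` trailing-empty-string trimming is not reproduced here):

```python
import re

def split_with_limit(s, pattern, limit=-1):
    if limit > 0:
        # At most `limit` parts; the last part keeps all remaining input.
        return re.split(pattern, s, maxsplit=limit - 1)
    # limit <= 0: apply the pattern as many times as possible.
    return re.split(pattern, s)

print(split_with_limit('oneAtwoBthreeC', '[ABC]', 2))   # ['one', 'twoBthreeC']
print(split_with_limit('oneAtwoBthreeC', '[ABC]', -1))  # ['one', 'two', 'three', '']
```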


---




[GitHub] spark pull request #22227: [SPARK-25202] [SQL] Implements split with limit s...

2018-09-13 Thread HyukjinKwon
Github user HyukjinKwon commented on a diff in the pull request:

https://github.com/apache/spark/pull/7#discussion_r217563726
  
--- Diff: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/regexpExpressions.scala
 ---
@@ -229,33 +229,53 @@ case class RLike(left: Expression, right: Expression) 
extends StringRegexExpress
 
 
 /**
- * Splits str around pat (pattern is a regular expression).
+ * Splits str around matches of the given regex.
  */
 @ExpressionDescription(
-  usage = "_FUNC_(str, regex) - Splits `str` around occurrences that match 
`regex`.",
+  usage = "_FUNC_(str, regex, limit) - Splits `str` around occurrences 
that match `regex`" +
+" and returns an array with a length of at most `limit`",
+  arguments = """
+Arguments:
+  * str - a string expression to split.
+  * regex - a string representing a regular expression. The regex 
string should be a
+Java regular expression.
+  * limit - an integer expression which controls the number of times 
the regex is applied.
+  * limit > 0: The resulting array's length will not be more than 
`limit`,
+ and the resulting array's last entry will contain all 
input
+ beyond the last matched regex.
--- End diff --

indentation:

```
* limit > 0: The resulting array's length will not be more than `limit`,
  and the resulting array's last entry will contain all input
  beyond the last matched regex.
```


---




[GitHub] spark pull request #22227: [SPARK-25202] [SQL] Implements split with limit s...

2018-09-13 Thread HyukjinKwon
Github user HyukjinKwon commented on a diff in the pull request:

https://github.com/apache/spark/pull/7#discussion_r217563572
  
--- Diff: python/pyspark/sql/functions.py ---
@@ -1671,18 +1671,32 @@ def repeat(col, n):
 
 @since(1.5)
 @ignore_unicode_prefix
-def split(str, pattern):
+def split(str, pattern, limit=-1):
 """
-Splits str around pattern (pattern is a regular expression).
+Splits str around matches of the given pattern.
 
-.. note:: pattern is a string represent the regular expression.
+:param str: a string expression to split
+:param pattern: a string representing a regular expression. The regex 
string should be
+  a Java regular expression.
+:param limit: an integer which controls the number of times `pattern` 
is applied.
 
->>> df = spark.createDataFrame([('ab12cd',)], ['s',])
->>> df.select(split(df.s, '[0-9]+').alias('s')).collect()
-[Row(s=[u'ab', u'cd'])]
+* ``limit > 0``: The resulting array's length will not be more 
than `limit`, and the
--- End diff --

Let's make it 4-spaced too.


---




[GitHub] spark pull request #22227: [SPARK-25202] [SQL] Implements split with limit s...

2018-09-13 Thread HyukjinKwon
Github user HyukjinKwon commented on a diff in the pull request:

https://github.com/apache/spark/pull/7#discussion_r217563366
  
--- Diff: python/pyspark/sql/functions.py ---
@@ -1671,18 +1671,32 @@ def repeat(col, n):
 
 @since(1.5)
 @ignore_unicode_prefix
-def split(str, pattern):
+def split(str, pattern, limit=-1):
 """
-Splits str around pattern (pattern is a regular expression).
+Splits str around matches of the given pattern.
 
-.. note:: pattern is a string represent the regular expression.
+:param str: a string expression to split
+:param pattern: a string representing a regular expression. The regex 
string should be
+  a Java regular expression.
--- End diff --

Shall we make it four-spaced?


---




[GitHub] spark pull request #22415: [SPARK-25291][K8S] Fixing Flakiness of Executor P...

2018-09-13 Thread ifilonenko
Github user ifilonenko commented on a diff in the pull request:

https://github.com/apache/spark/pull/22415#discussion_r217561045
  
--- Diff: 
resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/KubernetesSuite.scala
 ---
@@ -218,17 +223,25 @@ private[spark] class KubernetesSuite extends 
SparkFunSuite
   .getItems
   .get(0)
 driverPodChecker(driverPod)
-
-val executorPods = kubernetesTestComponents.kubernetesClient
+val execPods = scala.collection.mutable.Stack[Pod]()
+val execWatcher = kubernetesTestComponents.kubernetesClient
   .pods()
   .withLabel("spark-app-locator", appLocator)
   .withLabel("spark-role", "executor")
-  .list()
-  .getItems
-executorPods.asScala.foreach { pod =>
-  executorPodChecker(pod)
-}
-
+  .watch(new Watcher[Pod] {
+logInfo("Beginning watch of executors")
+override def onClose(cause: KubernetesClientException): Unit =
+  logInfo("Ending watch of executors")
+override def eventReceived(action: Watcher.Action, resource: Pod): 
Unit = {
+  action match {
+case Action.ADDED | Action.MODIFIED =>
+  execPods.push(resource)
+  }
+}
+  })
+Eventually.eventually(TIMEOUT, INTERVAL) { execPods.nonEmpty should be 
(true) }
+execWatcher.close()
+executorPodChecker(execPods.pop())
--- End diff --

Well they’d all be identical. If it is necessary we can do a .foreach{} 
but I think it might be extraneous, no? 
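The watch-then-poll pattern in the diff — push pods from a watcher, wait with `Eventually` until the stack is non-empty, then check only the top — can be sketched generically in Python (hypothetical names, no Kubernetes client involved):

```python
import threading
import time

def wait_for_first_resource(event_source, timeout=5.0, interval=0.05):
    resources = []

    def watcher():
        # Stands in for Watcher[Pod].eventReceived pushing ADDED/MODIFIED pods.
        for resource in event_source:
            resources.append(resource)

    threading.Thread(target=watcher, daemon=True).start()
    # Poll like Eventually.eventually(TIMEOUT, INTERVAL) until something arrives.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if resources:
            return resources[-1]  # like execPods.pop(): check only the newest
        time.sleep(interval)
    raise TimeoutError("no executor pod event observed before the timeout")

print(wait_for_first_resource(iter(["exec-pod-1"])))  # exec-pod-1
```

Presumably this is why a watch is less flaky than a one-shot `list()`: the list can race executor creation, while the watch plus `Eventually` waits for a pod to actually appear.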


---




[GitHub] spark issue #22231: [SPARK-25238][PYTHON] lint-python: Upgrade pycodestyle t...

2018-09-13 Thread HyukjinKwon
Github user HyukjinKwon commented on the issue:

https://github.com/apache/spark/pull/22231
  
> OK... # noqa works within the flake8 wrapper around PyFlakes but does not 
work when PyFlakes is called outside of flake8. Also added r"comment" as you 
suggested.

Is this a bug in flake8? `# noqa` should work since I fixed one by `# noqa` 
- 
https://github.com/apache/spark/blob/5cdb8a23df6f269d6be0bf3536e9af9e29c4a05f/python/setup.py#L37


---




[GitHub] spark pull request #22415: [SPARK-25291][K8S] Fixing Flakiness of Executor P...

2018-09-13 Thread liyinan926
Github user liyinan926 commented on a diff in the pull request:

https://github.com/apache/spark/pull/22415#discussion_r217560548
  
--- Diff: 
resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/KubernetesSuite.scala
 ---
@@ -218,17 +223,25 @@ private[spark] class KubernetesSuite extends 
SparkFunSuite
   .getItems
   .get(0)
 driverPodChecker(driverPod)
-
-val executorPods = kubernetesTestComponents.kubernetesClient
+val execPods = scala.collection.mutable.Stack[Pod]()
+val execWatcher = kubernetesTestComponents.kubernetesClient
   .pods()
   .withLabel("spark-app-locator", appLocator)
   .withLabel("spark-role", "executor")
-  .list()
-  .getItems
-executorPods.asScala.foreach { pod =>
-  executorPodChecker(pod)
-}
-
+  .watch(new Watcher[Pod] {
+logInfo("Beginning watch of executors")
+override def onClose(cause: KubernetesClientException): Unit =
+  logInfo("Ending watch of executors")
+override def eventReceived(action: Watcher.Action, resource: Pod): 
Unit = {
+  action match {
+case Action.ADDED | Action.MODIFIED =>
+  execPods.push(resource)
+  }
+}
+  })
+Eventually.eventually(TIMEOUT, INTERVAL) { execPods.nonEmpty should be 
(true) }
+execWatcher.close()
+executorPodChecker(execPods.pop())
--- End diff --

Why? 


---




[GitHub] spark issue #22416: [SPARK-25291][K8S][BACKPORT] Fixing Flakiness of Executo...

2018-09-13 Thread liyinan926
Github user liyinan926 commented on the issue:

https://github.com/apache/spark/pull/22416
  
OK, so we can merge the other one to both master and branch-2.4.


---




[GitHub] spark pull request #22415: [SPARK-25291][K8S] Fixing Flakiness of Executor P...

2018-09-13 Thread ifilonenko
Github user ifilonenko commented on a diff in the pull request:

https://github.com/apache/spark/pull/22415#discussion_r217560220
  
--- Diff: 
resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/KubernetesSuite.scala
 ---
@@ -218,17 +223,25 @@ private[spark] class KubernetesSuite extends 
SparkFunSuite
   .getItems
   .get(0)
 driverPodChecker(driverPod)
-
-val executorPods = kubernetesTestComponents.kubernetesClient
+val execPods = scala.collection.mutable.Stack[Pod]()
+val execWatcher = kubernetesTestComponents.kubernetesClient
   .pods()
   .withLabel("spark-app-locator", appLocator)
   .withLabel("spark-role", "executor")
-  .list()
-  .getItems
-executorPods.asScala.foreach { pod =>
-  executorPodChecker(pod)
-}
-
+  .watch(new Watcher[Pod] {
+logInfo("Beginning watch of executors")
+override def onClose(cause: KubernetesClientException): Unit =
+  logInfo("Ending watch of executors")
+override def eventReceived(action: Watcher.Action, resource: Pod): 
Unit = {
+  action match {
+case Action.ADDED | Action.MODIFIED =>
+  execPods.push(resource)
+  }
+}
+  })
+Eventually.eventually(TIMEOUT, INTERVAL) { execPods.nonEmpty should be 
(true) }
+execWatcher.close()
+executorPodChecker(execPods.pop())
--- End diff --

Yeah, no need to check other executors. 


---




[GitHub] spark issue #22416: [SPARK-25291][K8S][BACKPORT] Fixing Flakiness of Executo...

2018-09-13 Thread ifilonenko
Github user ifilonenko commented on the issue:

https://github.com/apache/spark/pull/22416
  
Yes @liyinan926


---




[GitHub] spark issue #22392: [SPARK-23200] Reset Kubernetes-specific config on Checkp...

2018-09-13 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22392
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 

https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/3099/
Test PASSed.


---




[GitHub] spark issue #22392: [SPARK-23200] Reset Kubernetes-specific config on Checkp...

2018-09-13 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/22392
  
Kubernetes integration test status success
URL: 
https://amplab.cs.berkeley.edu/jenkins/job/testing-k8s-prb-make-spark-distribution-unified/3099/



---




[GitHub] spark issue #22392: [SPARK-23200] Reset Kubernetes-specific config on Checkp...

2018-09-13 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22392
  
Merged build finished. Test PASSed.


---




[GitHub] spark issue #22413: [SPARK-25425][SQL] Extra options overwrite session optio...

2018-09-13 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22413
  
Test FAILed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/96051/
Test FAILed.


---




[GitHub] spark issue #22392: [SPARK-23200] Reset Kubernetes-specific config on Checkp...

2018-09-13 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/22392
  
Kubernetes integration test starting
URL: 
https://amplab.cs.berkeley.edu/jenkins/job/testing-k8s-prb-make-spark-distribution-unified/3099/



---




[GitHub] spark issue #22413: [SPARK-25425][SQL] Extra options overwrite session optio...

2018-09-13 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22413
  
Merged build finished. Test FAILed.


---




[GitHub] spark issue #22413: [SPARK-25425][SQL] Extra options overwrite session optio...

2018-09-13 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/22413
  
**[Test build #96051 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/96051/testReport)**
 for PR 22413 at commit 
[`a443054`](https://github.com/apache/spark/commit/a4430544d51a35c1c662494e53157f77a737f252).
 * This patch **fails Spark unit tests**.
 * This patch merges cleanly.
 * This patch adds no public classes.


---




[GitHub] spark issue #22392: [SPARK-23200] Reset Kubernetes-specific config on Checkp...

2018-09-13 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/22392
  
**[Test build #96055 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/96055/testReport)**
 for PR 22392 at commit 
[`f457d02`](https://github.com/apache/spark/commit/f457d023e8e488b89b97f5b3b9936831d7fc9bb6).


---




[GitHub] spark issue #22392: [SPARK-23200] Reset Kubernetes-specific config on Checkp...

2018-09-13 Thread foxish
Github user foxish commented on the issue:

https://github.com/apache/spark/pull/22392
  
Jenkins, test this please


---




[GitHub] spark issue #22231: [SPARK-25238][PYTHON] lint-python: Upgrade pycodestyle t...

2018-09-13 Thread srowen
Github user srowen commented on the issue:

https://github.com/apache/spark/pull/22231
  
Oh, that line isn't the error, although the warning says it is! It's really 
line 134, which actually has escaped back-ticks. I think you can honestly 
remove all of these back-ticks. 


---




[GitHub] spark issue #22415: [SPARK-25291][K8S] Fixing Flakiness of Executor Pod test...

2018-09-13 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22415
  
Merged build finished. Test PASSed.


---




[GitHub] spark issue #22415: [SPARK-25291][K8S] Fixing Flakiness of Executor Pod test...

2018-09-13 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22415
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 

https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/3098/
Test PASSed.


---




[GitHub] spark issue #22415: [SPARK-25291][K8S] Fixing Flakiness of Executor Pod test...

2018-09-13 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/22415
  
Kubernetes integration test status success
URL: 
https://amplab.cs.berkeley.edu/jenkins/job/testing-k8s-prb-make-spark-distribution-unified/3098/



---




[GitHub] spark issue #22415: [SPARK-25291][K8S] Fixing Flakiness of Executor Pod test...

2018-09-13 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/22415
  
Kubernetes integration test starting
URL: 
https://amplab.cs.berkeley.edu/jenkins/job/testing-k8s-prb-make-spark-distribution-unified/3098/



---




[GitHub] spark pull request #22415: [SPARK-25291][K8S] Fixing Flakiness of Executor P...

2018-09-13 Thread liyinan926
Github user liyinan926 commented on a diff in the pull request:

https://github.com/apache/spark/pull/22415#discussion_r217543984
  
--- Diff: resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/KubernetesSuite.scala ---
@@ -218,17 +223,25 @@ private[spark] class KubernetesSuite extends SparkFunSuite
       .getItems
       .get(0)
     driverPodChecker(driverPod)
-
-    val executorPods = kubernetesTestComponents.kubernetesClient
+    val execPods = scala.collection.mutable.Stack[Pod]()
+    val execWatcher = kubernetesTestComponents.kubernetesClient
       .pods()
       .withLabel("spark-app-locator", appLocator)
       .withLabel("spark-role", "executor")
-      .list()
-      .getItems
-    executorPods.asScala.foreach { pod =>
-      executorPodChecker(pod)
-    }
-
+      .watch(new Watcher[Pod] {
+        logInfo("Beginning watch of executors")
+        override def onClose(cause: KubernetesClientException): Unit =
+          logInfo("Ending watch of executors")
+        override def eventReceived(action: Watcher.Action, resource: Pod): Unit = {
+          action match {
+            case Action.ADDED | Action.MODIFIED =>
+              execPods.push(resource)
+          }
+        }
+      })
+    Eventually.eventually(TIMEOUT, INTERVAL) { execPods.nonEmpty should be (true) }
+    execWatcher.close()
+    executorPodChecker(execPods.pop())
--- End diff --

So this only checks the executor pod at the top of the stack?
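For context on the reviewer's question: the `Stack` collects every pod the watcher observes, but a single `execPods.pop()` verifies only the most recently pushed pod. A minimal sketch (plain Java, with a hypothetical `checkAll` standing in for running the suite's `executorPodChecker` over each pod) of draining the whole stack so every observed pod is verified:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class WatchAllPods {
    // Drain the stack so every observed pod name is checked, newest first.
    static List<String> checkAll(Deque<String> execPods) {
        List<String> checked = new ArrayList<>();
        while (!execPods.isEmpty()) {
            checked.add(execPods.pop()); // stand-in for executorPodChecker(pod)
        }
        return checked;
    }

    public static void main(String[] args) {
        // Pod names as a hypothetical watcher would push them on ADDED/MODIFIED events.
        Deque<String> execPods = new ArrayDeque<>();
        execPods.push("exec-1");
        execPods.push("exec-2");

        // Popping once, as in the diff, would verify only "exec-2";
        // draining the stack verifies both pods, newest first.
        System.out.println(checkAll(execPods)); // prints [exec-2, exec-1]
    }
}
```

In the Scala test itself the equivalent would be `execPods.foreach(executorPodChecker)`, which checks every collected pod rather than only the top of the stack.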


---




[GitHub] spark issue #19267: [WIP][SPARK-20628][CORE] Blacklist nodes when they trans...

2018-09-13 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/19267
  
Can one of the admins verify this patch?


---




[GitHub] spark issue #22416: [SPARK-25291][K8S][BACKPORT] Fixing Flakiness of Executo...

2018-09-13 Thread liyinan926
Github user liyinan926 commented on the issue:

https://github.com/apache/spark/pull/22416
  
@ifilonenko so this is the same as #22415 except that it's for 
`branch-2.4`, right?


---




[GitHub] spark pull request #21386: [SPARK-23928][SQL][WIP] Add shuffle collection fu...

2018-09-13 Thread pkuwm
Github user pkuwm closed the pull request at:

https://github.com/apache/spark/pull/21386


---




[GitHub] spark issue #21386: [SPARK-23928][SQL][WIP] Add shuffle collection function.

2018-09-13 Thread pkuwm
Github user pkuwm commented on the issue:

https://github.com/apache/spark/pull/21386
  
Thanks for the reminder, @dongjoon-hyun 


---




[GitHub] spark issue #22416: [SPARK-25291][K8S][BACKPORT] Fixing Flakiness of Executo...

2018-09-13 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22416
  
Merged build finished. Test PASSed.


---




[GitHub] spark issue #22416: [SPARK-25291][K8S][BACKPORT] Fixing Flakiness of Executo...

2018-09-13 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22416
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 

https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/3097/
Test PASSed.


---




[GitHub] spark issue #22416: [SPARK-25291][K8S][BACKPORT] Fixing Flakiness of Executo...

2018-09-13 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/22416
  
Kubernetes integration test status success
URL: 
https://amplab.cs.berkeley.edu/jenkins/job/testing-k8s-prb-make-spark-distribution-unified/3097/



---




[GitHub] spark issue #22416: [SPARK-25291][K8S][BACKPORT] Fixing Flakiness of Executo...

2018-09-13 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/22416
  
Kubernetes integration test starting
URL: 
https://amplab.cs.berkeley.edu/jenkins/job/testing-k8s-prb-make-spark-distribution-unified/3097/



---




[GitHub] spark issue #22415: [SPARK-25291][K8S] Fixing Flakiness of Executor Pod test...

2018-09-13 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22415
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 

https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/3096/
Test PASSed.


---




[GitHub] spark issue #22415: [SPARK-25291][K8S] Fixing Flakiness of Executor Pod test...

2018-09-13 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22415
  
Merged build finished. Test PASSed.


---




[GitHub] spark issue #22415: [SPARK-25291][K8S] Fixing Flakiness of Executor Pod test...

2018-09-13 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/22415
  
Kubernetes integration test status success
URL: 
https://amplab.cs.berkeley.edu/jenkins/job/testing-k8s-prb-make-spark-distribution-unified/3096/



---




[GitHub] spark issue #22385: [SPARK-25400][CORE][TEST] Increase test timeouts

2018-09-13 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22385
  
Merged build finished. Test PASSed.


---




[GitHub] spark issue #22385: [SPARK-25400][CORE][TEST] Increase test timeouts

2018-09-13 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22385
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/96046/
Test PASSed.


---




[GitHub] spark issue #22385: [SPARK-25400][CORE][TEST] Increase test timeouts

2018-09-13 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/22385
  
**[Test build #96046 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/96046/testReport)**
 for PR 22385 at commit 
[`daf76ed`](https://github.com/apache/spark/commit/daf76ed592ed82aa4b390b444c4669ae65c9b355).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.


---




[GitHub] spark issue #22415: [SPARK-25291][K8S] Fixing Flakiness of Executor Pod test...

2018-09-13 Thread ifilonenko
Github user ifilonenko commented on the issue:

https://github.com/apache/spark/pull/22415
  
@holdenk @liyinan926 @felixcheung for merge since @mccheah is out 


---



