[GitHub] spark issue #18852: [SPARK-21588][SQL] SQLContext.getConf(key, null) should ...

2017-08-05 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/18852
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80292/
Test PASSed.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark issue #18852: [SPARK-21588][SQL] SQLContext.getConf(key, null) should ...

2017-08-05 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/18852
  
Merged build finished. Test PASSed.





[GitHub] spark pull request #18769: [SPARK-21574][SQL] Point out user to set hive con...

2017-08-05 Thread wangyum
Github user wangyum commented on a diff in the pull request:

https://github.com/apache/spark/pull/18769#discussion_r131534966
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/SetCommand.scala ---
@@ -87,6 +88,13 @@ case class SetCommand(kv: Option[(String, Option[String])]) extends RunnableComm
 // Configures a single property.
 case Some((key, Some(value))) =>
   val runFunc = (sparkSession: SparkSession) => {
+if (sparkSession.conf.get(CATALOG_IMPLEMENTATION.key).equals("hive")
+  && key.startsWith("hive.")) {
--- End diff --

Line length exceeds 100:
https://user-images.githubusercontent.com/5399861/29000987-f7ee1d1c-7aae-11e7-9185-e59b4fd5ea4e.png





[GitHub] spark issue #18852: [SPARK-21588][SQL] SQLContext.getConf(key, null) should ...

2017-08-05 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/18852
  
**[Test build #80292 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80292/testReport)** for PR 18852 at commit [`4295cd3`](https://github.com/apache/spark/commit/4295cd3342178285c2465c080cafa8dd8b68ba16).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.





[GitHub] spark pull request #18769: [SPARK-21574][SQL] Point out user to set hive con...

2017-08-05 Thread gatorsmile
Github user gatorsmile commented on a diff in the pull request:

https://github.com/apache/spark/pull/18769#discussion_r131534846
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/SetCommand.scala ---
@@ -87,6 +88,13 @@ case class SetCommand(kv: Option[(String, Option[String])]) extends RunnableComm
 // Configures a single property.
 case Some((key, Some(value))) =>
   val runFunc = (sparkSession: SparkSession) => {
+if (sparkSession.conf.get(CATALOG_IMPLEMENTATION.key).equals("hive")
+  && key.startsWith("hive.")) {
--- End diff --

Nit: Could you move this `&&` to line 91?





[GitHub] spark issue #18856: [SPARKR][BUILD] AppVeyor change to latest R version

2017-08-05 Thread HyukjinKwon
Github user HyukjinKwon commented on the issue:

https://github.com/apache/spark/pull/18856
  
To cut this short, it was ...

- The 3.3.2 release was for Linux only
- https://rversions.r-pkg.org/r-release gives the latest as 3.3.2, and the download link for 3.3.1 becomes `windows/base/old`
- There was no 3.3.2 release for Windows yet
- 3.3.1 was still not in `windows/base/old` but in `windows/base`
- So the download failed





[GitHub] spark issue #18856: [SPARKR][BUILD] AppVeyor change to latest R version

2017-08-05 Thread HyukjinKwon
Github user HyukjinKwon commented on the issue:

https://github.com/apache/spark/pull/18856
  
Ah, I meant, the PR I linked actually describes a case where we were using 3.3.1 but it broke after the 3.3.2 release. The reason was that 3.3.2 was released, but 3.3.2 for Windows was not synced yet (it was synced a few hours later).

The script we are using checks whether the given version is the latest and adds the `../old/..` part to build the download link correctly. So:

- Check whether the given version is the latest via https://rversions.r-pkg.org/r-release
  - If so, don't add `../old/..`, e.g., https://cran.r-project.org/bin/windows/base/R-3.3.1-win.exe
  - If not, add `../old/..`, e.g., https://cran.r-project.org/bin/windows/base/old/3.3.1/R-3.3.1-win.exe

But the issue was that https://rversions.r-pkg.org/r-release reported 3.3.2 right after the release, while 3.3.1 for Windows was not yet in the old repo.
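The branching above can be sketched as a small helper (an illustrative model only — `windows_r_url` is a made-up name, and the real AppVeyor script fetches the latest version from https://rversions.r-pkg.org/r-release rather than taking it as an argument):

```python
# Simplified sketch of the version-dependent CRAN download link described above.
# The real CI script reads `latest` from the r-release endpoint; here it is a
# parameter so the logic stays self-contained.

def windows_r_url(version: str, latest: str) -> str:
    """Build the Windows R installer URL, adding old/ for non-latest versions."""
    base = "https://cran.r-project.org/bin/windows/base"
    if version == latest:
        return f"{base}/R-{version}-win.exe"
    return f"{base}/old/{version}/R-{version}-win.exe"
```

The failure window described above is exactly when the endpoint already reports 3.3.2, so 3.3.1 resolves to the `old/` path, which does not exist until the Windows mirror syncs.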





[GitHub] spark issue #18856: [SPARKR][BUILD] AppVeyor change to latest R version

2017-08-05 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/18856
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80291/
Test PASSed.





[GitHub] spark issue #18856: [SPARKR][BUILD] AppVeyor change to latest R version

2017-08-05 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/18856
  
Merged build finished. Test PASSed.





[GitHub] spark issue #18856: [SPARKR][BUILD] AppVeyor change to latest R version

2017-08-05 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/18856
  
**[Test build #80291 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80291/testReport)** for PR 18856 at commit [`2449e7e`](https://github.com/apache/spark/commit/2449e7e5488e8f59860123b6681c90b9415b9a3c).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.





[GitHub] spark issue #18856: [SPARKR][BUILD] AppVeyor change to latest R version

2017-08-05 Thread felixcheung
Github user felixcheung commented on the issue:

https://github.com/apache/spark/pull/18856
  
It's the latest but it's not new - 3.4.1 was released a month ago. I think there shouldn't be a sync problem.








[GitHub] spark issue #18576: [SPARK-21351][SQL] Update nullability based on children'...

2017-08-05 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/18576
  
Test FAILed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80293/
Test FAILed.





[GitHub] spark issue #18576: [SPARK-21351][SQL] Update nullability based on children'...

2017-08-05 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/18576
  
Merged build finished. Test FAILed.





[GitHub] spark issue #18576: [SPARK-21351][SQL] Update nullability based on children'...

2017-08-05 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/18576
  
**[Test build #80293 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80293/testReport)** for PR 18576 at commit [`aba4eba`](https://github.com/apache/spark/commit/aba4ebac5bf945e87a39004ebb5569b991debbf7).
 * This patch **fails Spark unit tests**.
 * This patch merges cleanly.
 * This patch adds no public classes.





[GitHub] spark issue #18856: [SPARKR][BUILD] AppVeyor change to latest R version

2017-08-05 Thread HyukjinKwon
Github user HyukjinKwon commented on the issue:

https://github.com/apache/spark/pull/18856
  
@felixcheung, BTW, I think we could optionally also consider setting it to 3.4.0, a bit more conservatively, for now. There was a rather minor problem with using the latest version (see https://github.com/apache/spark/pull/15709) due to a syncing issue of the R version, which was fixed within a few hours outside Spark IIRC. Either way is fine with me.





[GitHub] spark issue #18852: [SPARK-21588][SQL] SQLContext.getConf(key, null) should ...

2017-08-05 Thread viirya
Github user viirya commented on the issue:

https://github.com/apache/spark/pull/18852
  
LGTM





[GitHub] spark pull request #18852: [SPARK-21588][SQL] SQLContext.getConf(key, null) ...

2017-08-05 Thread viirya
Github user viirya commented on a diff in the pull request:

https://github.com/apache/spark/pull/18852#discussion_r131534323
  
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/internal/SQLConfSuite.scala ---
@@ -270,4 +270,15 @@ class SQLConfSuite extends QueryTest with SharedSQLContext {
 val e2 = intercept[AnalysisException](spark.conf.unset(SCHEMA_STRING_LENGTH_THRESHOLD.key))
 assert(e2.message.contains("Cannot modify the value of a static config"))
   }
+
+  test("SPARK-21588 SQLContext.getConf(key, null) should return null") {
+    withSQLConf(SQLConf.SHUFFLE_PARTITIONS.key -> "1") {
+      assert("1" == spark.conf.get(SQLConf.SHUFFLE_PARTITIONS.key, null))
+      assert("1" == spark.conf.get(SQLConf.SHUFFLE_PARTITIONS.key, ""))
+    }
+
+    assert(spark.conf.getOption("spark.sql.nonexistent").isEmpty)
+    assert(null == spark.conf.get("spark.sql.nonexistent", null))
--- End diff --

Aha, right. Sorry, I missed it.





[GitHub] spark pull request #18852: [SPARK-21588][SQL] SQLContext.getConf(key, null) ...

2017-08-05 Thread gatorsmile
Github user gatorsmile commented on a diff in the pull request:

https://github.com/apache/spark/pull/18852#discussion_r131534296
  
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/internal/SQLConfSuite.scala ---
@@ -270,4 +270,15 @@ class SQLConfSuite extends QueryTest with SharedSQLContext {
 val e2 = intercept[AnalysisException](spark.conf.unset(SCHEMA_STRING_LENGTH_THRESHOLD.key))
 assert(e2.message.contains("Cannot modify the value of a static config"))
   }
+
+  test("SPARK-21588 SQLContext.getConf(key, null) should return null") {
+    withSQLConf(SQLConf.SHUFFLE_PARTITIONS.key -> "1") {
+      assert("1" == spark.conf.get(SQLConf.SHUFFLE_PARTITIONS.key, null))
+      assert("1" == spark.conf.get(SQLConf.SHUFFLE_PARTITIONS.key, ""))
+    }
+
+    assert(spark.conf.getOption("spark.sql.nonexistent").isEmpty)
+    assert(null == spark.conf.get("spark.sql.nonexistent", null))
--- End diff --

Doesn't line 276 verify the fix? Why do we still need to add another test case?





[GitHub] spark pull request #18852: [SPARK-21588][SQL] SQLContext.getConf(key, null) ...

2017-08-05 Thread viirya
Github user viirya commented on a diff in the pull request:

https://github.com/apache/spark/pull/18852#discussion_r131534275
  
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/internal/SQLConfSuite.scala ---
@@ -270,4 +270,15 @@ class SQLConfSuite extends QueryTest with SharedSQLContext {
 val e2 = intercept[AnalysisException](spark.conf.unset(SCHEMA_STRING_LENGTH_THRESHOLD.key))
 assert(e2.message.contains("Cannot modify the value of a static config"))
   }
+
+  test("SPARK-21588 SQLContext.getConf(key, null) should return null") {
+    withSQLConf(SQLConf.SHUFFLE_PARTITIONS.key -> "1") {
+      assert("1" == spark.conf.get(SQLConf.SHUFFLE_PARTITIONS.key, null))
+      assert("1" == spark.conf.get(SQLConf.SHUFFLE_PARTITIONS.key, ""))
+    }
+
+    assert(spark.conf.getOption("spark.sql.nonexistent").isEmpty)
+    assert(null == spark.conf.get("spark.sql.nonexistent", null))
--- End diff --

Yeah, those tests are good. We should add another test for the NPE case too.





[GitHub] spark issue #18724: [SPARK-21519][SQL] Add an option to the JDBC data source...

2017-08-05 Thread gatorsmile
Github user gatorsmile commented on the issue:

https://github.com/apache/spark/pull/18724
  
ping @LucaCanali 





[GitHub] spark pull request #18852: [SPARK-21588][SQL] SQLContext.getConf(key, null) ...

2017-08-05 Thread gatorsmile
Github user gatorsmile commented on a diff in the pull request:

https://github.com/apache/spark/pull/18852#discussion_r131534246
  
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/internal/SQLConfSuite.scala ---
@@ -270,4 +270,15 @@ class SQLConfSuite extends QueryTest with SharedSQLContext {
 val e2 = intercept[AnalysisException](spark.conf.unset(SCHEMA_STRING_LENGTH_THRESHOLD.key))
 assert(e2.message.contains("Cannot modify the value of a static config"))
   }
+
+  test("SPARK-21588 SQLContext.getConf(key, null) should return null") {
+    withSQLConf(SQLConf.SHUFFLE_PARTITIONS.key -> "1") {
+      assert("1" == spark.conf.get(SQLConf.SHUFFLE_PARTITIONS.key, null))
+      assert("1" == spark.conf.get(SQLConf.SHUFFLE_PARTITIONS.key, ""))
+    }
+
+    assert(spark.conf.getOption("spark.sql.nonexistent").isEmpty)
+    assert(null == spark.conf.get("spark.sql.nonexistent", null))
--- End diff --

I knew it, but I just do not want to introduce any regression. Thus, I just want to cover both scenarios.





[GitHub] spark pull request #18852: [SPARK-21588][SQL] SQLContext.getConf(key, null) ...

2017-08-05 Thread viirya
Github user viirya commented on a diff in the pull request:

https://github.com/apache/spark/pull/18852#discussion_r131534235
  
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/internal/SQLConfSuite.scala ---
@@ -270,4 +270,15 @@ class SQLConfSuite extends QueryTest with SharedSQLContext {
 val e2 = intercept[AnalysisException](spark.conf.unset(SCHEMA_STRING_LENGTH_THRESHOLD.key))
 assert(e2.message.contains("Cannot modify the value of a static config"))
   }
+
+  test("SPARK-21588 SQLContext.getConf(key, null) should return null") {
+    withSQLConf(SQLConf.SHUFFLE_PARTITIONS.key -> "1") {
+      assert("1" == spark.conf.get(SQLConf.SHUFFLE_PARTITIONS.key, null))
+      assert("1" == spark.conf.get(SQLConf.SHUFFLE_PARTITIONS.key, ""))
+    }
+
+    assert(spark.conf.getOption("spark.sql.nonexistent").isEmpty)
+    assert(null == spark.conf.get("spark.sql.nonexistent", null))
--- End diff --

That's good. But with the current test, we don't actually test against the NPE case, which is the main issue.





[GitHub] spark pull request #18852: [SPARK-21588][SQL] SQLContext.getConf(key, null) ...

2017-08-05 Thread gatorsmile
Github user gatorsmile commented on a diff in the pull request:

https://github.com/apache/spark/pull/18852#discussion_r131534222
  
--- Diff: 
sql/core/src/test/scala/org/apache/spark/sql/internal/SQLConfSuite.scala ---
@@ -270,4 +270,15 @@ class SQLConfSuite extends QueryTest with 
SharedSQLContext {
 val e2 = 
intercept[AnalysisException](spark.conf.unset(SCHEMA_STRING_LENGTH_THRESHOLD.key))
 assert(e2.message.contains("Cannot modify the value of a static 
config"))
   }
+
+  test("SPARK-21588 SQLContext.getConf(key, null) should return null") {
+withSQLConf(SQLConf.SHUFFLE_PARTITIONS.key -> "1") {
+  assert("1" == spark.conf.get(SQLConf.SHUFFLE_PARTITIONS.key, null))
+  assert("1" == spark.conf.get(SQLConf.SHUFFLE_PARTITIONS.key, 
""))
+}
+
+assert(spark.conf.getOption("spark.sql.nonexistent").isEmpty)
+assert(null == spark.conf.get("spark.sql.nonexistent", null))
--- End diff --

This is to improve the test case coverage.





[GitHub] spark pull request #12646: [SPARK-14878][SQL] Trim characters string functio...

2017-08-05 Thread gatorsmile
Github user gatorsmile commented on a diff in the pull request:

https://github.com/apache/spark/pull/12646#discussion_r131534214
  
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringExpressions.scala ---
@@ -502,69 +503,311 @@ case class FindInSet(left: Expression, right: Expression) extends BinaryExpressi
   override def prettyName: String = "find_in_set"
 }
 
+trait String2TrimExpression extends Expression with ImplicitCastInputTypes {
+
+  override def dataType: DataType = StringType
+  override def inputTypes: Seq[AbstractDataType] = Seq.fill(children.size)(StringType)
+
+  override def nullable: Boolean = children.exists(_.nullable)
+  override def foldable: Boolean = children.forall(_.foldable)
+
+  override def sql: String = {
+    if (children.size == 1) {
+      val childrenSQL = children.map(_.sql).mkString(", ")
+      s"$prettyName($childrenSQL)"
+    } else {
+      val trimSQL = children(0).map(_.sql).mkString(", ")
+      val tarSQL = children(1).map(_.sql).mkString(", ")
+      s"$prettyName($trimSQL, $tarSQL)"
+    }
+  }
+}
+
+object StringTrim {
+  def apply(str: Expression, trimStr: Expression) : StringTrim = StringTrim(str, Some(trimStr))
+  def apply(str: Expression) : StringTrim = StringTrim(str, None)
+}
+
 /**
- * A function that trim the spaces from both ends for the specified string.
- */
+ * A function that takes a character string, removes the leading and trailing characters matching with the characters
+ * in the trim string, returns the new string.
+ * If BOTH and trimStr keywords are not specified, it defaults to remove space character from both ends. The trim
+ * function will have one argument, which contains the source string.
+ * If BOTH and trimStr keywords are specified, it trims the characters from both ends, and the trim function will have
+ * two arguments, the first argument contains trimStr, the second argument contains the source string.
+ * trimStr: A character string to be trimmed from the source string, if it has multiple characters, the function
+ * searches for each character in the source string, removes the characters from the source string until it
+ * encounters the first non-match character.
+ * BOTH: removes any characters from both ends of the source string that matches characters in the trim string.
+  */
 @ExpressionDescription(
-  usage = "_FUNC_(str) - Removes the leading and trailing space characters from `str`.",
+  usage = """
+    _FUNC_(str) - Removes the leading and trailing space characters from `str`.
+    _FUNC_(BOTH trimStr FROM str) - Remove the leading and trailing trimString from `str`
+  """,
   extended = """
+    Arguments:
+      str - a string expression
+      trimString - the trim string
+      BOTH, FROM - these are keyword to specify for trim string from both ends of the string
     Examples:
       > SELECT _FUNC_('SparkSQL   ');
        SparkSQL
+      > SELECT _FUNC_(BOTH 'SL' FROM 'SSparkSQLS');
+       parkSQ
   """)
-case class StringTrim(child: Expression)
-  extends UnaryExpression with String2StringExpression {
+case class StringTrim(
+    srcStr: Expression,
+    trimStr: Option[Expression] = None)
+  extends String2TrimExpression {
+
+  def this (trimStr: Expression, srcStr: Expression) = this(srcStr, Option(trimStr))
 
-  def convert(v: UTF8String): UTF8String = v.trim()
+  def this(srcStr: Expression) = this(srcStr, None)
 
   override def prettyName: String = "trim"
 
+  override def children: Seq[Expression] = if (trimStr.isDefined) {
+    srcStr :: trimStr.get :: Nil
+  } else {
+    srcStr :: Nil
+  }
+  override def eval(input: InternalRow): Any = {
+    val srcString = srcStr.eval(input).asInstanceOf[UTF8String]
+    if (srcString != null) {
+      if (trimStr.isDefined) {
+        return srcString.trim(trimStr.get.eval(input).asInstanceOf[UTF8String])
+      } else {
+        return srcString.trim()
+      }
+    }
+    null
+  }
+
   override def doGenCode(ctx: CodegenContext, ev: ExprCode): ExprCode = {
-    defineCodeGen(ctx, ev, c => s"($c).trim()")
+    val evals = children.map(_.genCode(ctx))
+    val srcString = evals(0)
+
+    if (evals.length == 1) {
+      ev.copy(evals.map(_.code).mkString("\n") + s"""
+        boolean ${ev.isNull} = false;
+        UTF8String ${ev.value} = null;
+        if (${srcString.isNull}) {
+          ${ev.isNull} = true;
+        } else {
+          ${ev.value} = ${srcString.value}.trim();
+        }
+      """.stripMargin)
+    } else {
+      val trimString = evals(1)
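As a hedged cross-check of the trim-string semantics in the quoted doc comment (plain Python, not Spark's `UTF8String.trim`; `trim_both` is an illustrative name), Python's `str.strip` applies the same "remove any characters in the trim set from both ends until the first non-matching character" rule:

```python
def trim_both(src: str, trim_chars: str) -> str:
    # str.strip treats trim_chars as a set of characters and removes matching
    # characters from both ends, stopping at the first non-match on each side,
    # mirroring TRIM(BOTH trimStr FROM str) as described in the doc comment.
    return src.strip(trim_chars)
```

For the documented example, `trim_both("SSparkSQLS", "SL")` yields `parkSQ`, matching the `SELECT TRIM(BOTH 'SL' FROM 'SSparkSQLS')` output shown in the diff.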

[GitHub] spark pull request #18852: [SPARK-21588][SQL] SQLContext.getConf(key, null) ...

2017-08-05 Thread viirya
Github user viirya commented on a diff in the pull request:

https://github.com/apache/spark/pull/18852#discussion_r131534217
  
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/internal/SQLConfSuite.scala ---
@@ -270,4 +270,15 @@ class SQLConfSuite extends QueryTest with SharedSQLContext {
 val e2 = intercept[AnalysisException](spark.conf.unset(SCHEMA_STRING_LENGTH_THRESHOLD.key))
 assert(e2.message.contains("Cannot modify the value of a static config"))
   }
+
+  test("SPARK-21588 SQLContext.getConf(key, null) should return null") {
+    withSQLConf(SQLConf.SHUFFLE_PARTITIONS.key -> "1") {
+      assert("1" == spark.conf.get(SQLConf.SHUFFLE_PARTITIONS.key, null))
+      assert("1" == spark.conf.get(SQLConf.SHUFFLE_PARTITIONS.key, ""))
+    }
+
+    assert(spark.conf.getOption("spark.sql.nonexistent").isEmpty)
+    assert(null == spark.conf.get("spark.sql.nonexistent", null))
--- End diff --

The NPE only happened for an existing entry like the above `SHUFFLE_PARTITIONS` or the previous `spark.sql.thriftServer.incrementalCollect`.
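A toy model of that distinction (pure Python, not Spark's actual `SQLConf` code; `get_conf_buggy` and `CONVERTERS` are invented names): registered entries run the supplied default through a value converter, so a null default only blows up when the key has an entry.

```python
# Toy model (not Spark's real code) of why getConf(key, null) NPE'd only for
# keys with a registered config entry: the default is passed through the
# entry's value converter, and converting None fails.
CONVERTERS = {"spark.sql.shuffle.partitions": int}  # illustrative registry

def get_conf_buggy(settings, key, default):
    if key in settings:
        return settings[key]          # a set value short-circuits: no failure
    conv = CONVERTERS.get(key)
    if conv is not None:
        return str(conv(default))     # raises TypeError when default is None
    return default                    # unregistered keys return default as-is
```

This is why the unset registered key is the interesting case: `get_conf_buggy({}, "spark.sql.shuffle.partitions", None)` raises, while the nonexistent-key asserts in the test above pass either way.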





[GitHub] spark pull request #12646: [SPARK-14878][SQL] Trim characters string functio...

2017-08-05 Thread gatorsmile
Github user gatorsmile commented on a diff in the pull request:

https://github.com/apache/spark/pull/12646#discussion_r131534204
  
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala ---
@@ -1121,6 +1125,30 @@ class AstBuilder(conf: SQLConf) extends SqlBaseBaseVisitor[AnyRef] with Logging
   }
 
   /**
+   * Create a function name LTRIM for TRIM(Leading), RTRIM for TRIM(Trailing), TRIM for TRIM(BOTH),
+   * otherwise, return the original function identifier.
+   */
+  private def replaceTrimFunction(funcID: FunctionIdentifier, ctx: FunctionCallContext)
--- End diff --

Rename it to `replaceFunctions`





[GitHub] spark pull request #12646: [SPARK-14878][SQL] Trim characters string functio...

2017-08-05 Thread gatorsmile
Github user gatorsmile commented on a diff in the pull request:

https://github.com/apache/spark/pull/12646#discussion_r131534200
  
--- Diff: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala
 ---
@@ -1108,7 +1108,11 @@ class AstBuilder(conf: SQLConf) extends 
SqlBaseBaseVisitor[AnyRef] with Logging
   case expressions =>
 expressions
 }
-val function = 
UnresolvedFunction(visitFunctionName(ctx.qualifiedName), arguments, isDistinct)
+val function = UnresolvedFunction(
--- End diff --

Something like ...
```Scala
val funcId = replaceFunctions(visitFunctionName(ctx.qualifiedName), ctx)
val function = UnresolvedFunction(funcId, arguments, isDistinct)
```





[GitHub] spark pull request #12646: [SPARK-14878][SQL] Trim characters string functio...

2017-08-05 Thread gatorsmile
Github user gatorsmile commented on a diff in the pull request:

https://github.com/apache/spark/pull/12646#discussion_r131534165
  
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala 
---
@@ -2658,4 +2659,23 @@ class SQLQuerySuite extends QueryTest with 
SharedSQLContext {
   checkAnswer(sql("SELECT __auto_generated_subquery_name.i from 
(SELECT i FROM v)"), Row(1))
 }
   }
+
+  test("TRIM function") {
--- End diff --

Move this to `PlanParserSuite`?





[GitHub] spark pull request #12646: [SPARK-14878][SQL] Trim characters string functio...

2017-08-05 Thread gatorsmile
Github user gatorsmile commented on a diff in the pull request:

https://github.com/apache/spark/pull/12646#discussion_r131534145
  
--- Diff: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala
 ---
@@ -1121,6 +1125,30 @@ class AstBuilder(conf: SQLConf) extends 
SqlBaseBaseVisitor[AnyRef] with Logging
   }
 
   /**
+   * Create a function name LTRIM for TRIM(Leading), RTRIM for 
TRIM(Trailing), TRIM for TRIM(BOTH),
+   * otherwise, return the original function identifier.
+   */
+  private def replaceTrimFunction(funcID: FunctionIdentifier, ctx: 
FunctionCallContext)
--- End diff --

move this function into `visitFunctionCall `





[GitHub] spark pull request #18852: [SPARK-21588][SQL] SQLContext.getConf(key, null) ...

2017-08-05 Thread viirya
Github user viirya commented on a diff in the pull request:

https://github.com/apache/spark/pull/18852#discussion_r131534140
  
--- Diff: 
sql/core/src/test/scala/org/apache/spark/sql/internal/SQLConfSuite.scala ---
@@ -270,4 +270,15 @@ class SQLConfSuite extends QueryTest with 
SharedSQLContext {
 val e2 = 
intercept[AnalysisException](spark.conf.unset(SCHEMA_STRING_LENGTH_THRESHOLD.key))
 assert(e2.message.contains("Cannot modify the value of a static 
config"))
   }
+
+  test("SPARK-21588 SQLContext.getConf(key, null) should return null") {
+withSQLConf(SQLConf.SHUFFLE_PARTITIONS.key -> "1") {
+  assert("1" == spark.conf.get(SQLConf.SHUFFLE_PARTITIONS.key, null))
+  assert("1" == spark.conf.get(SQLConf.SHUFFLE_PARTITIONS.key, 
""))
+}
+
+assert(spark.conf.getOption("spark.sql.nonexistent").isEmpty)
+assert(null == spark.conf.get("spark.sql.nonexistent", null))
--- End diff --

Because the key doesn't exist, this doesn't actually test the issue. This 
line passes without this change too.





[GitHub] spark pull request #12646: [SPARK-14878][SQL] Trim characters string functio...

2017-08-05 Thread gatorsmile
Github user gatorsmile commented on a diff in the pull request:

https://github.com/apache/spark/pull/12646#discussion_r131534107
  
--- Diff: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala
 ---
@@ -1121,6 +1125,30 @@ class AstBuilder(conf: SQLConf) extends 
SqlBaseBaseVisitor[AnyRef] with Logging
   }
 
   /**
+   * Create a function name LTRIM for TRIM(Leading), RTRIM for 
TRIM(Trailing), TRIM for TRIM(BOTH),
+   * otherwise, return the original function identifier.
+   */
+  private def replaceTrimFunction(funcID: FunctionIdentifier, ctx: 
FunctionCallContext)
+: FunctionIdentifier = {
+val opt = ctx.trimOption
+if (opt != null) {
+  if (ctx.qualifiedName.getText.toLowerCase != "trim") {
+throw new ParseException(s"The specified function 
${ctx.qualifiedName.getText} " +
+  s"doesn't support with option ${opt.getText}.", ctx)
+  }
+  opt.getType match {
+case SqlBaseParser.BOTH => funcID
+case SqlBaseParser.LEADING => funcID.copy(funcName = "ltrim")
+case SqlBaseParser.TRAILING => funcID.copy(funcName = "rtrim")
+case _ => throw new ParseException(s"Function trim doesn't support 
with" +
--- End diff --

Nit: remove `s` and add a space at the end ->
`"Function trim doesn't support with "`
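
The option-to-function mapping in the diff under review can be illustrated standalone (a sketch with made-up types, not the actual ANTLR parser contexts): the SQL `TRIM(LEADING/TRAILING/BOTH ...)` option selects between the `ltrim`, `rtrim`, and `trim` functions.

```scala
// Illustrative stand-ins for the parser's trim option tokens.
sealed trait TrimOption
case object Both extends TrimOption
case object Leading extends TrimOption
case object Trailing extends TrimOption

// Map the TRIM option onto the corresponding function name; with no
// option present, the original function name is kept unchanged.
def replaceTrim(funcName: String, opt: Option[TrimOption]): String =
  opt match {
    case None           => funcName  // no trim option: leave as-is
    case Some(Both)     => funcName  // TRIM(BOTH)     keeps "trim"
    case Some(Leading)  => "ltrim"   // TRIM(LEADING)  -> LTRIM
    case Some(Trailing) => "rtrim"   // TRIM(TRAILING) -> RTRIM
  }
```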





[GitHub] spark pull request #12646: [SPARK-14878][SQL] Trim characters string functio...

2017-08-05 Thread gatorsmile
Github user gatorsmile commented on a diff in the pull request:

https://github.com/apache/spark/pull/12646#discussion_r131534098
  
--- Diff: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala
 ---
@@ -1121,6 +1125,30 @@ class AstBuilder(conf: SQLConf) extends 
SqlBaseBaseVisitor[AnyRef] with Logging
   }
 
   /**
+   * Create a function name LTRIM for TRIM(Leading), RTRIM for 
TRIM(Trailing), TRIM for TRIM(BOTH),
+   * otherwise, return the original function identifier.
+   */
+  private def replaceTrimFunction(funcID: FunctionIdentifier, ctx: 
FunctionCallContext)
+: FunctionIdentifier = {
--- End diff --

```Scala
private def replaceTrimFunction(
funcID: FunctionIdentifier,
ctx: FunctionCallContext): FunctionIdentifier = {
```





[GitHub] spark issue #18831: [SPARK-21622][ML][SparkR] Support offset in SparkR GLM

2017-08-05 Thread actuaryzhang
Github user actuaryzhang commented on the issue:

https://github.com/apache/spark/pull/18831
  
Thanks both for the comments. Yes, I think it's best to keep this PR focused on 
offset; we can address the other improvements later. 





[GitHub] spark issue #18857: [SPARK-20963][SQL][FOLLOW-UP] Use UnresolvedSubqueryColu...

2017-08-05 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/18857
  
**[Test build #80294 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80294/testReport)**
 for PR 18857 at commit 
[`93d5b24`](https://github.com/apache/spark/commit/93d5b2402ba2c010b6dc425ee6eb7024b002381d).





[GitHub] spark pull request #18857: [SPARK-20963][SQL][FOLLOW-UP] Use UnresolvedSubqu...

2017-08-05 Thread maropu
GitHub user maropu opened a pull request:

https://github.com/apache/spark/pull/18857

[SPARK-20963][SQL][FOLLOW-UP] Use UnresolvedSubqueryColumnAliases for 
visitTableName

## What changes were proposed in this pull request?
This pr (follow-up of #18772) used `UnresolvedSubqueryColumnAliases` for 
`visitTableName` in `AstBuilder`, which is a new unresolved `LogicalPlan` 
implemented in #18185.

## How was this patch tested?
Existing tests



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/maropu/spark SPARK-20963-FOLLOWUP

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/18857.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #18857


commit 93d5b2402ba2c010b6dc425ee6eb7024b002381d
Author: Takeshi Yamamuro 
Date:   2017-07-31T14:30:49Z

Use UnresolvedSubqueryColumnAliases for visitTableName







[GitHub] spark pull request #12646: [SPARK-14878][SQL] Trim characters string functio...

2017-08-05 Thread gatorsmile
Github user gatorsmile commented on a diff in the pull request:

https://github.com/apache/spark/pull/12646#discussion_r131533855
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
@@ -2304,7 +2304,15 @@ object functions {
* @group string_funcs
* @since 1.5.0
*/
-  def ltrim(e: Column): Column = withExpr {StringTrimLeft(e.expr) }
+  def ltrim(e: Column): Column = withExpr {StringTrimLeft(e.expr)}
--- End diff --

You need to update the styles of all the newly added functions in this 
file. For example,
```Scala
def ltrim(e: Column): Column = withExpr {
  StringTrimLeft(e.expr)
}
```





[GitHub] spark issue #18576: [SPARK-21351][SQL] Update nullability based on children'...

2017-08-05 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/18576
  
**[Test build #80293 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80293/testReport)**
 for PR 18576 at commit 
[`aba4eba`](https://github.com/apache/spark/commit/aba4ebac5bf945e87a39004ebb5569b991debbf7).





[GitHub] spark pull request #18772: [SPARK-20963][SQL] Support column aliases for joi...

2017-08-05 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/spark/pull/18772





[GitHub] spark issue #18772: [SPARK-20963][SQL] Support column aliases for join relat...

2017-08-05 Thread maropu
Github user maropu commented on the issue:

https://github.com/apache/spark/pull/18772
  
Thanks! I'll make a pr later to fix 
[this](https://github.com/apache/spark/pull/18772#issuecomment-318967315) as 
follow-up.





[GitHub] spark issue #18772: [SPARK-20963][SQL] Support column aliases for join relat...

2017-08-05 Thread gatorsmile
Github user gatorsmile commented on the issue:

https://github.com/apache/spark/pull/18772
  
Thanks! Merging to master.





[GitHub] spark issue #18852: [SPARK-21588][SQL] SQLContext.getConf(key, null) should ...

2017-08-05 Thread gatorsmile
Github user gatorsmile commented on the issue:

https://github.com/apache/spark/pull/18852
  
LGTM pending Jenkins. 





[GitHub] spark pull request #18790: [SPARK-21587][SS] Added pushdown through watermar...

2017-08-05 Thread gatorsmile
Github user gatorsmile commented on a diff in the pull request:

https://github.com/apache/spark/pull/18790#discussion_r131533778
  
--- Diff: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
 ---
@@ -872,6 +886,25 @@ object PushDownPredicate extends Rule[LogicalPlan] 
with PredicateHelper {
   pushDownPredicate(filter, u.child) { predicate =>
 u.withNewChildren(Seq(Filter(predicate, u.child)))
   }
+
+case filter @ Filter(condition, watermark: EventTimeWatermark) =>
--- End diff --

Why not change `EventTimeWatermark` to a `UnaryNode`? Then we would not need 
to write a separate case only for `EventTimeWatermark` and could reuse the 
existing `pushDownPredicate`, right?

Other rules already handle `UnaryNode`; do you think it makes sense to avoid 
duplicating the code for `EventTimeWatermark` only? 





[GitHub] spark issue #18576: [SPARK-21351][SQL] Update nullability based on children'...

2017-08-05 Thread maropu
Github user maropu commented on the issue:

https://github.com/apache/spark/pull/18576
  
Yeah, if we can do so, I feel it might be best. I'll check whether we can 
remove the nullability update in `FilterExec`.





[GitHub] spark issue #18852: [SPARK-21588][SQL] SQLContext.getConf(key, null) should ...

2017-08-05 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/18852
  
**[Test build #80292 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80292/testReport)**
 for PR 18852 at commit 
[`4295cd3`](https://github.com/apache/spark/commit/4295cd3342178285c2465c080cafa8dd8b68ba16).





[GitHub] spark issue #18772: [SPARK-20963][SQL] Support column aliases for join relat...

2017-08-05 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/18772
  
Merged build finished. Test PASSed.





[GitHub] spark issue #18772: [SPARK-20963][SQL] Support column aliases for join relat...

2017-08-05 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/18772
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80290/
Test PASSed.





[GitHub] spark issue #18772: [SPARK-20963][SQL] Support column aliases for join relat...

2017-08-05 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/18772
  
**[Test build #80290 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80290/testReport)**
 for PR 18772 at commit 
[`6c93f85`](https://github.com/apache/spark/commit/6c93f859b7ef2cc7f7cb49ab6fbd0397b43fda84).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.





[GitHub] spark issue #18576: [SPARK-21351][SQL] Update nullability based on children'...

2017-08-05 Thread gatorsmile
Github user gatorsmile commented on the issue:

https://github.com/apache/spark/pull/18576
  
If we can update the nullability in Optimizer rules, do we still need to do 
it in `FilterExec`?





[GitHub] spark issue #18856: [SPARKR][BUILD] AppVeyor change to latest R version

2017-08-05 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/18856
  
**[Test build #80291 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80291/testReport)**
 for PR 18856 at commit 
[`2449e7e`](https://github.com/apache/spark/commit/2449e7e5488e8f59860123b6681c90b9415b9a3c).





[GitHub] spark pull request #18856: [SPARKR][BUILD] AppVeyor change to latest R versi...

2017-08-05 Thread felixcheung
GitHub user felixcheung opened a pull request:

https://github.com/apache/spark/pull/18856

[SPARKR][BUILD] AppVeyor change to latest R version

## What changes were proposed in this pull request?

R version update

## How was this patch tested?

AppVeyor


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/felixcheung/spark rappveyorver

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/18856.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #18856


commit 2449e7e5488e8f59860123b6681c90b9415b9a3c
Author: Felix Cheung 
Date:   2017-08-06T02:24:42Z

change to latest R version







[GitHub] spark pull request #18854: [SPARK-21629][SQL][WIP] Fix Or nullability

2017-08-05 Thread goldmedal
Github user goldmedal closed the pull request at:

https://github.com/apache/spark/pull/18854





[GitHub] spark issue #18854: [SPARK-21629][SQL][WIP] Fix Or nullability

2017-08-05 Thread goldmedal
Github user goldmedal commented on the issue:

https://github.com/apache/spark/pull/18854
  
@gatorsmile  @viirya Thanks a lot. You are right. I'll close it for now.





[GitHub] spark issue #18576: [SPARK-21351][SQL] Update nullability based on children'...

2017-08-05 Thread maropu
Github user maropu commented on the issue:

https://github.com/apache/spark/pull/18576
  
@gatorsmile If you get time, could you also check this? Thanks!





[GitHub] spark issue #18772: [SPARK-20963][SQL] Support column aliases for join relat...

2017-08-05 Thread gatorsmile
Github user gatorsmile commented on the issue:

https://github.com/apache/spark/pull/18772
  
LGTM pending Jenkins. 





[GitHub] spark pull request #18841: [SPARK-21635][SQL] ACOS(2) and ASIN(2) should be ...

2017-08-05 Thread viirya
Github user viirya commented on a diff in the pull request:

https://github.com/apache/spark/pull/18841#discussion_r131532373
  
--- Diff: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/mathExpressions.scala
 ---
@@ -170,29 +193,29 @@ case class Pi() extends LeafMathExpression(math.Pi, 
"PI")
 
 // scalastyle:off line.size.limit
 @ExpressionDescription(
-  usage = "_FUNC_(expr) - Returns the inverse cosine (a.k.a. arccosine) of 
`expr` if -1<=`expr`<=1 or NaN otherwise.",
+  usage = "_FUNC_(expr) - Returns the inverse cosine (a.k.a. arccosine) of 
`expr` if -1<=`expr`<=1 or NULL otherwise.",
   extended = """
 Examples:
   > SELECT _FUNC_(1);
0.0
   > SELECT _FUNC_(2);
-   NaN
+   NULL
--- End diff --

Suddenly changing this might break user code; ACOS and ASIN have existed for a 
while in Spark SQL.
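
For reference, the current behavior follows `java.lang.Math`, which returns NaN rather than null for inverse trigonometric arguments outside [-1, 1]:

```scala
// java.lang.Math semantics that ACOS/ASIN currently expose:
// out-of-domain inputs yield NaN, not NULL.
val outOfDomainAcos = math.acos(2.0) // NaN
val outOfDomainAsin = math.asin(2.0) // NaN
val inDomainAcos    = math.acos(1.0) // 0.0, in-domain values unaffected
```

Switching these NaN results to NULL would change what existing queries observe, which is the compatibility concern here.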





[GitHub] spark pull request #18841: [SPARK-21635][SQL] ACOS(2) and ASIN(2) should be ...

2017-08-05 Thread viirya
Github user viirya commented on a diff in the pull request:

https://github.com/apache/spark/pull/18841#discussion_r131532351
  
--- Diff: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/mathExpressions.scala
 ---
@@ -170,29 +193,29 @@ case class Pi() extends LeafMathExpression(math.Pi, 
"PI")
 
 // scalastyle:off line.size.limit
 @ExpressionDescription(
-  usage = "_FUNC_(expr) - Returns the inverse cosine (a.k.a. arccosine) of 
`expr` if -1<=`expr`<=1 or NaN otherwise.",
+  usage = "_FUNC_(expr) - Returns the inverse cosine (a.k.a. arccosine) of 
`expr` if -1<=`expr`<=1 or NULL otherwise.",
   extended = """
 Examples:
   > SELECT _FUNC_(1);
0.0
   > SELECT _FUNC_(2);
-   NaN
+   NULL
--- End diff --

@highfei2011 Yeah, I tend to not modify current behavior.





[GitHub] spark issue #18854: [SPARK-21629][SQL][WIP] Fix Or nullability

2017-08-05 Thread viirya
Github user viirya commented on the issue:

https://github.com/apache/spark/pull/18854
  
Ok. I think this should not be a problem. We should close this.





[GitHub] spark pull request #18854: [SPARK-21629][SQL][WIP] Fix Or nullability

2017-08-05 Thread viirya
Github user viirya commented on a diff in the pull request:

https://github.com/apache/spark/pull/18854#discussion_r131532208
  
--- Diff: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/predicates.scala
 ---
@@ -396,6 +396,8 @@ case class Or(left: Expression, right: Expression) 
extends BinaryOperator with P
 
   override def sqlOperator: String = "OR"
 
+  override def nullable: Boolean = left.nullable && right.nullable
--- End diff --

Yeah, I forgot it. `BooleanSimplification` will do this.





[GitHub] spark issue #18772: [SPARK-20963][SQL] Support column aliases for join relat...

2017-08-05 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/18772
  
**[Test build #80290 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80290/testReport)**
 for PR 18772 at commit 
[`6c93f85`](https://github.com/apache/spark/commit/6c93f859b7ef2cc7f7cb49ab6fbd0397b43fda84).





[GitHub] spark issue #18772: [SPARK-20963][SQL] Support column aliases for join relat...

2017-08-05 Thread gatorsmile
Github user gatorsmile commented on the issue:

https://github.com/apache/spark/pull/18772
  
retest this please





[GitHub] spark pull request #18664: [SPARK-21375][PYSPARK][SQL][WIP] Add Date and Tim...

2017-08-05 Thread gatorsmile
Github user gatorsmile commented on a diff in the pull request:

https://github.com/apache/spark/pull/18664#discussion_r131532115
  
--- Diff: python/pyspark/sql/tests.py ---
@@ -3036,6 +3052,9 @@ def test_toPandas_arrow_toggle(self):
 pdf = df.toPandas()
 self.spark.conf.set("spark.sql.execution.arrow.enable", "true")
 pdf_arrow = df.toPandas()
+# need to remove timezone for comparison
+pdf_arrow["7_timestamp_t"] = \
+pdf_arrow["7_timestamp_t"].apply(lambda ts: ts.tz_localize(None))
--- End diff --

without Arrow?





[GitHub] spark pull request #18668: [SPARK-21637][SPARK-21451][SQL]get `spark.hadoop....

2017-08-05 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/spark/pull/18668





[GitHub] spark issue #18668: [SPARK-21637][SPARK-21451][SQL]get `spark.hadoop.*` prop...

2017-08-05 Thread gatorsmile
Github user gatorsmile commented on the issue:

https://github.com/apache/spark/pull/18668
  
Thanks everyone! Merging it to master. 

If there are any other comments, we can address them in follow-up PRs.





[GitHub] spark pull request #18854: [SPARK-21629][SQL][WIP] Fix Or nullability

2017-08-05 Thread gatorsmile
Github user gatorsmile commented on a diff in the pull request:

https://github.com/apache/spark/pull/18854#discussion_r131532024
  
--- Diff: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/predicates.scala
 ---
@@ -396,6 +396,8 @@ case class Or(left: Expression, right: Expression) 
extends BinaryOperator with P
 
   override def sqlOperator: String = "OR"
 
+  override def nullable: Boolean = left.nullable && right.nullable
--- End diff --

uh... I did not read the original JIRA. 

For foldable expressions, we will eventually evaluate them in the Optimizer. Maybe we do not need to introduce the extra complexity here.





[GitHub] spark pull request #18854: [SPARK-21629][SQL][WIP] Fix Or nullability

2017-08-05 Thread viirya
Github user viirya commented on a diff in the pull request:

https://github.com/apache/spark/pull/18854#discussion_r131531879
  
--- Diff: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/predicates.scala
 ---
@@ -396,6 +396,8 @@ case class Or(left: Expression, right: Expression) 
extends BinaryOperator with P
 
   override def sqlOperator: String = "OR"
 
+  override def nullable: Boolean = left.nullable && right.nullable
--- End diff --

Actually, the current `def nullable: Boolean = left.nullable || right.nullable` for `Or` is not entirely accurate either.

At least when either of left/right is true, `nullable` should be false.

In most cases we can't know whether either side is true, but if they are foldable, we can.

@gatorsmile @hvanhovell What do you think?
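The foldable-aware check sketched above could look like this toy model (hypothetical case classes, not Catalyst's `Expression` API; `TrueLit` stands in for a side that constant-folds to `true`):

```scala
// Toy model of OR nullability (not Spark's Expression hierarchy).
sealed trait Expr { def nullable: Boolean }
case class Leaf(nullable: Boolean) extends Expr
case object TrueLit extends Expr { val nullable = false }

case class Or(left: Expr, right: Expr) extends Expr {
  // Hypothetical stand-in for "this side constant-folds to true".
  private def isTrueLit(e: Expr): Boolean = e == TrueLit
  // If either side is a foldable `true`, OR is TRUE regardless of the
  // other side, so the result can never be null.
  def nullable: Boolean =
    if (isTrueLit(left) || isTrueLit(right)) false
    else left.nullable || right.nullable
}
```

In Catalyst itself, `BooleanSimplification` already rewrites `true OR x` away, which is why the extra precision in `nullable` was judged unnecessary.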





[GitHub] spark pull request #18854: [SPARK-21629][SQL][WIP] Fix Or nullability

2017-08-05 Thread viirya
Github user viirya commented on a diff in the pull request:

https://github.com/apache/spark/pull/18854#discussion_r131531690
  
--- Diff: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/predicates.scala
 ---
@@ -396,6 +396,8 @@ case class Or(left: Expression, right: Expression) 
extends BinaryOperator with P
 
   override def sqlOperator: String = "OR"
 
+  override def nullable: Boolean = left.nullable && right.nullable
--- End diff --

Oh, right. Yeah, I agree.





[GitHub] spark pull request #18815: [SPARK-21609][WEB-UI]In the Master ui add "log di...

2017-08-05 Thread guoxiaolongzte
Github user guoxiaolongzte closed the pull request at:

https://github.com/apache/spark/pull/18815





[GitHub] spark pull request #18829: [SPARK-21620][WEB-UI][CORE]Add metrics url in spa...

2017-08-05 Thread guoxiaolongzte
Github user guoxiaolongzte closed the pull request at:

https://github.com/apache/spark/pull/18829





[GitHub] spark pull request #18854: [SPARK-21629][SQL][WIP] Fix Or nullability

2017-08-05 Thread hvanhovell
Github user hvanhovell commented on a diff in the pull request:

https://github.com/apache/spark/pull/18854#discussion_r131530967
  
--- Diff: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/predicates.scala
 ---
@@ -396,6 +396,8 @@ case class Or(left: Expression, right: Expression) 
extends BinaryOperator with P
 
   override def sqlOperator: String = "OR"
 
+  override def nullable: Boolean = left.nullable && right.nullable
--- End diff --

Yeah, that is a fair point. I misread the code when I was going over it.





[GitHub] spark pull request #18851: [SPARK-21644][SQL] LocalLimit.maxRows is defined ...

2017-08-05 Thread dongjoon-hyun
Github user dongjoon-hyun commented on a diff in the pull request:

https://github.com/apache/spark/pull/18851#discussion_r131530034
  
--- Diff: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicLogicalOperators.scala
 ---
@@ -643,6 +657,27 @@ case class Pivot(
   }
 }
 
+/**
+ * A constructor for creating a logical limit, which is split into two 
separate logical nodes:
+ * a [[LocalLimit]], which is a partition local limit, followed by a 
[[GlobalLimit]].
+ *
+ * This muddies the waters for clean logical/physical separation, and is done 
for better limit pushdown.
+ * In distributed query processing, a non-terminal global limit is 
actually an expensive operation
+ * because it requires coordination (in Spark this is done using a 
shuffle).
+ *
+ * In most cases when we want to push down limit, it is often better to 
only push some partition
+ * local limit. Consider the following:
+ *
+ *   GlobalLimit(Union(A, B)
--- End diff --

Missing ')' at the end: `GlobalLimit(Union(A, B)` -> `GlobalLimit(Union(A, 
B))`.
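The rewrite described in the quoted doc comment (keep the expensive global limit on top, push only partition-local limits into the union's branches) can be sketched with toy plan nodes (hypothetical classes, not Spark's logical operators):

```scala
// Toy logical plan (not Spark's actual operators).
sealed trait Plan
case class Relation(name: String) extends Plan
case class Union(children: Seq[Plan]) extends Plan
case class LocalLimit(n: Int, child: Plan) extends Plan
case class GlobalLimit(n: Int, child: Plan) extends Plan

// Push a partition-local limit into each branch of a union; the
// shuffle-backed global limit stays on top, so correctness is kept
// while each branch stops producing rows early.
def pushLocalLimit(plan: Plan): Plan = plan match {
  case GlobalLimit(n, Union(children)) =>
    GlobalLimit(n, Union(children.map(LocalLimit(n, _))))
  case other => other
}
```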





[GitHub] spark issue #18855: [Spark 3151][Block Manager] DiskStore.getBytes fails for...

2017-08-05 Thread eyalfa
Github user eyalfa commented on the issue:

https://github.com/apache/spark/pull/18855
  
@rxin, @JoshRosen, @cloud-fan,
you seem to be the last people to have touched this class; could you please review?





[GitHub] spark issue #18855: [Spark 3151][Block Manager] DiskStore.getBytes fails for...

2017-08-05 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/18855
  
Can one of the admins verify this patch?





[GitHub] spark pull request #18763: [SPARK-21306][ML] For branch 2.1, OneVsRest shoul...

2017-08-05 Thread facaiy
Github user facaiy commented on a diff in the pull request:

https://github.com/apache/spark/pull/18763#discussion_r131529768
  
--- Diff: python/pyspark/ml/classification.py ---
@@ -1423,7 +1425,18 @@ def _fit(self, dataset):
 
numClasses = int(dataset.agg({labelCol: "max"}).head()["max("+labelCol+")"]) + 1
 
-multiclassLabeled = dataset.select(labelCol, featuresCol)
+weightCol = None
+if (self.isDefined(self.weightCol) and self.getWeightCol()):
+if isinstance(classifier, HasWeightCol):
+weightCol = self.getWeightCol()
+else:
+warnings.warn("weightCol is ignored, "
+  "as it is not supported by {} now.".format(classifier))
--- End diff --

Modified as suggested, thanks very much!





[GitHub] spark pull request #18855: [Spark 3151][Block Manager] DiskStore.getBytes fa...

2017-08-05 Thread eyalfa
GitHub user eyalfa opened a pull request:

https://github.com/apache/spark/pull/18855

[Spark 3151][Block Manager] DiskStore.getBytes fails for files larger than 
2GB

## What changes were proposed in this pull request?
Introduced `DiskBlockData`, a new implementation of `BlockData` representing a whole file.
This is somewhat related to [SPARK-6236](https://issues.apache.org/jira/browse/SPARK-6236) as well.

This class follows the implementation of `EncryptedBlockData`, just without the encryption. Hence:
* it uses FileOutputStream (todo: the encrypted version actually uses `Channels.newInputStream`; not sure if that's the right choice here)
* `toNetty` is implemented in terms of `io.netty.channel.DefaultFileRegion#DefaultFileRegion`
* `toByteBuffer` fails for files larger than 2GB (the same behavior as the original code, just postponed a bit); it also respects the same configuration keys defined by the original code to choose between memory mapping and a simple file read

## How was this patch tested?
Added tests to DiskStoreSuite and MemoryManagerSuite.
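Context for the 2GB ceiling mentioned in the description above: `java.nio.ByteBuffer` capacities are `Int`-indexed, so no single buffer (heap or memory-mapped) can cover more than `Int.MaxValue` bytes. A sketch of the guard (illustrative only, not the PR's code):

```scala
// Why toByteBuffer must fail past 2GB: ByteBuffer capacity is an Int,
// so a block longer than Int.MaxValue bytes cannot be exposed as a
// single buffer and has to be served as a stream or Netty file region.
def toByteBufferSize(blockSize: Long): Either[String, Int] =
  if (blockSize > Int.MaxValue)
    Left(s"can't create a ByteBuffer of size $blockSize bytes " +
         s"(max: ${Int.MaxValue})")
  else
    Right(blockSize.toInt)
```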

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/eyalfa/spark SPARK-3151

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/18855.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #18855


commit fc3f1d78e14a30dd2f71fc65ec59a2def5c1a0d4
Author: Eyal Farago 
Date:   2017-07-05T13:20:16Z

SPARK-6235__take1: introduce a failing test.

commit 84687380026a6a3bcded27be517094d3f690c3bb
Author: Eyal Farago 
Date:   2017-07-30T20:06:05Z

SPARK-6235__add_failing_tests: add failing tests for block manager suite.

commit 15804497a477b8f97c08adfad5f0519504dc82f2
Author: Eyal Farago 
Date:   2017-08-01T17:34:26Z

SPARK-6235__add_failing_tests: introduce a new BlockData implementation to 
represent a disk backed block data.

commit c5028f50698c4fe48a06f5dd683dbee42f7e6b2b
Author: Eyal Farago 
Date:   2017-08-05T19:57:41Z

SPARK-6235__add_failing_tests: styling

commit 908c7860688534d0bb77bcbebbd2e006a161fb74
Author: Eyal Farago 
Date:   2017-08-05T19:58:52Z

SPARK-6235__add_failing_tests: adapt DiskStoreSuite to the modifications in 
the tested class.

commit 67f4259ca16c3ca7c904c9ccc5de9acbc25d2271
Author: Eyal Farago 
Date:   2017-08-05T20:57:58Z

SPARK-6235__add_failing_tests: try to reduce actual memory footprint of the 
>2gb tests.







[GitHub] spark pull request #18764: [SPARK-21306][ML] For branch 2.0, OneVsRest shoul...

2017-08-05 Thread facaiy
Github user facaiy commented on a diff in the pull request:

https://github.com/apache/spark/pull/18764#discussion_r131529693
  
--- Diff: python/pyspark/ml/classification.py ---
@@ -1344,7 +1346,19 @@ def _fit(self, dataset):
 
numClasses = int(dataset.agg({labelCol: "max"}).head()["max("+labelCol+")"]) + 1
 
-multiclassLabeled = dataset.select(labelCol, featuresCol)
+weightCol = None
+if (self.isDefined(self.weightCol) and self.getWeightCol()):
+if isinstance(classifier, HasWeightCol):
+weightCol = self.getWeightCol()
+else:
+warnings.warn("weightCol is ignored, "
+  "as it is not supported by {} now.".format(
--- End diff --

Thank you very much for help, @yanboliang !





[GitHub] spark issue #18790: [SPARK-21587][SS] Added pushdown through watermarks.

2017-08-05 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/18790
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80289/
Test PASSed.





[GitHub] spark issue #18790: [SPARK-21587][SS] Added pushdown through watermarks.

2017-08-05 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/18790
  
Merged build finished. Test PASSed.





[GitHub] spark issue #18790: [SPARK-21587][SS] Added pushdown through watermarks.

2017-08-05 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/18790
  
**[Test build #80289 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80289/testReport)**
 for PR 18790 at commit 
[`8c73117`](https://github.com/apache/spark/commit/8c73117857eb141c9700d73992634f037e5d6ee3).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds the following public classes _(experimental)_:
  * `class BroadcastPickleRegistry(threading.local):`
  * `case class EventTimeStats(var max: Long, var min: Long, var avg: 
Double, var count: Long) `





[GitHub] spark pull request #18828: [SPARK-21619][SQL] Fail the execution of canonica...

2017-08-05 Thread dongjoon-hyun
Github user dongjoon-hyun commented on a diff in the pull request:

https://github.com/apache/spark/pull/18828#discussion_r131529489
  
--- Diff: 
sql/core/src/test/scala/org/apache/spark/sql/execution/SparkPlanSuite.scala ---
@@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.execution
+
+import org.apache.spark.sql.QueryTest
+import org.apache.spark.sql.test.SharedSQLContext
+
+class SparkPlanSuite extends QueryTest with SharedSQLContext {
+
+  test("SPARK-21619 execution of a canonicalized plan should fail") {
+val plan = spark.range(10).queryExecution.executedPlan.canonicalized
+
+intercept[IllegalStateException] { plan.execute() }
+intercept[IllegalStateException] { plan.executeCollect() }
+intercept[IllegalStateException] { plan.executeCollectPublic() }
+intercept[IllegalStateException] { plan.executeToIterator() }
+intercept[IllegalStateException] { plan.executeBroadcast() }
+intercept[IllegalStateException] { plan.executeTake(1) }
--- End diff --

nit. There is an inconsistent corner case in `plan.executeTake`.
```scala
plan.executeTake(1)  -> raise exception
plan.executeTake(0)  -> no exception
plan.executeTake(-1)  -> raise exception
```
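The likely cause of the inconsistency is an `n == 0` fast path that returns before the canonicalized-plan guard runs; in outline (a guess at the control flow, not Spark's actual `executeTake`):

```scala
// Hypothetical sketch of the observed behavior: the n == 0 fast path
// returns before the "canonicalized plans can't execute" guard fires,
// so only executeTake(0) escapes the exception.
def executeTake(n: Int, canonicalized: Boolean): Array[Int] = {
  if (n == 0) return Array.empty[Int]  // fast path: guard never runs
  if (canonicalized)
    throw new IllegalStateException(
      "A canonicalized plan is not supposed to be executed.")
  Array.range(0, n)                    // stand-in for the real work
}
```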






[GitHub] spark issue #18668: [SPARK-21637][SPARK-21451][SQL]get `spark.hadoop.*` prop...

2017-08-05 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/18668
  
Merged build finished. Test PASSed.





[GitHub] spark issue #18668: [SPARK-21637][SPARK-21451][SQL]get `spark.hadoop.*` prop...

2017-08-05 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/18668
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80288/
Test PASSed.





[GitHub] spark issue #18668: [SPARK-21637][SPARK-21451][SQL]get `spark.hadoop.*` prop...

2017-08-05 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/18668
  
**[Test build #80288 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80288/testReport)**
 for PR 18668 at commit 
[`46a955d`](https://github.com/apache/spark/commit/46a955d7d4a172dde53a504b70947bedce8c22d5).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.





[GitHub] spark issue #18460: [SPARK-21247][SQL] Type comparision should respect case-...

2017-08-05 Thread dongjoon-hyun
Github user dongjoon-hyun commented on the issue:

https://github.com/apache/spark/pull/18460
  
Hi, @cloud-fan and @gatorsmile.
Could you review this PR?





[GitHub] spark issue #18790: [SPARK-21587][SS] Added pushdown through watermarks.

2017-08-05 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/18790
  
**[Test build #80289 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80289/testReport)**
 for PR 18790 at commit 
[`8c73117`](https://github.com/apache/spark/commit/8c73117857eb141c9700d73992634f037e5d6ee3).





[GitHub] spark issue #18790: [SPARK-21587][SS] Added pushdown through watermarks.

2017-08-05 Thread gatorsmile
Github user gatorsmile commented on the issue:

https://github.com/apache/spark/pull/18790
  
retest this please





[GitHub] spark issue #18668: [SPARK-21637][SPARK-21451][SQL]get `spark.hadoop.*` prop...

2017-08-05 Thread gatorsmile
Github user gatorsmile commented on the issue:

https://github.com/apache/spark/pull/18668
  
LGTM pending Jenkins.





[GitHub] spark pull request #18844: [SPARK-21640] Add errorifexists as a valid string...

2017-08-05 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/spark/pull/18844





[GitHub] spark issue #18844: [SPARK-21640] Add errorifexists as a valid string for Er...

2017-08-05 Thread gatorsmile
Github user gatorsmile commented on the issue:

https://github.com/apache/spark/pull/18844
  
Thanks! Merging to master





[GitHub] spark issue #18852: [SPARK-21588][SQL] SQLContext.getConf(key, null) should ...

2017-08-05 Thread gatorsmile
Github user gatorsmile commented on the issue:

https://github.com/apache/spark/pull/18852
  
LGTM except a comment in test cases.





[GitHub] spark pull request #18852: [SPARK-21588][SQL] SQLContext.getConf(key, null) ...

2017-08-05 Thread gatorsmile
Github user gatorsmile commented on a diff in the pull request:

https://github.com/apache/spark/pull/18852#discussion_r131527017
  
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/internal/SQLConfSuite.scala ---
@@ -270,4 +270,10 @@ class SQLConfSuite extends QueryTest with SharedSQLContext {
     val e2 = intercept[AnalysisException](spark.conf.unset(SCHEMA_STRING_LENGTH_THRESHOLD.key))
     assert(e2.message.contains("Cannot modify the value of a static config"))
   }
+
+  test("SPARK-21588 SQLContext.getConf(key, null) should return null") {
+    assert(null == spark.conf.get("spark.sql.thriftServer.incrementalCollect", null))
+    assert("" == spark.conf.get(
+      "spark.sql.thriftServer.incrementalCollect", ""))
--- End diff --

The test cases need to be improved. 
```Scala
withSQLConf(SQLConf.SHUFFLE_PARTITIONS.key -> "1") {
  assert("1" == spark.conf.get(SQLConf.SHUFFLE_PARTITIONS.key, null))
  assert("1" == spark.conf.get(SQLConf.SHUFFLE_PARTITIONS.key, ""))
}

assert(spark.conf.getOption("spark.sql.nonexistent").isEmpty)
assert(null == spark.conf.get("spark.sql.nonexistent", null))
assert("" == spark.conf.get("spark.sql.nonexistent", ""))
```
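The semantics under discussion (an explicit default, even null, is returned for unset keys instead of throwing) can be modeled outside Spark. Below is an illustrative Python sketch, not Spark's actual implementation; `RuntimeConf` and `get_option` are hypothetical names:

```python
class RuntimeConf:
    """Illustrative model of spark.conf get/getOption semantics.

    RuntimeConf and get_option are hypothetical names, not Spark's API.
    """

    _MISSING = object()  # sentinel so that None is a legal explicit default

    def __init__(self, settings=None):
        self._settings = dict(settings or {})

    def get(self, key, default=_MISSING):
        # SPARK-21588: an explicit default, including None (null),
        # is returned as-is for unset keys instead of raising.
        if key in self._settings:
            return self._settings[key]
        if default is self._MISSING:
            raise KeyError(key)
        return default

    def get_option(self, key):
        # mirrors conf.getOption: None for unset keys
        return self._settings.get(key)
```

The sentinel distinguishes "no default supplied" (raise) from "default is explicitly null" (return it), which is exactly the distinction the fix is about.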





[GitHub] spark pull request #18854: [SPARK-21629][SQL][WIP] Fix Or nullability

2017-08-05 Thread gatorsmile
Github user gatorsmile commented on a diff in the pull request:

https://github.com/apache/spark/pull/18854#discussion_r131526168
  
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/predicates.scala ---
@@ -396,6 +396,8 @@ case class Or(left: Expression, right: Expression) extends BinaryOperator with P
 
   override def sqlOperator: String = "OR"
 
+  override def nullable: Boolean = left.nullable && right.nullable
--- End diff --

Based on the semantics of ANSI SQL, the current solution is right. 

```SQL
NULL OR True => True
NULL OR False => NULL
NULL OR NULL => NULL
```

If you also agree on it, could you close it? Thanks!
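For reference, the three-valued OR table quoted above can be modeled directly. This is an illustrative Python sketch, with None standing in for SQL NULL:

```python
def sql_or(a, b):
    """Kleene three-valued OR; None models SQL NULL."""
    if a is True or b is True:
        return True
    if a is False and b is False:
        return False
    return None

# NULL OR True  => True
# NULL OR False => NULL
# NULL OR NULL  => NULL
```

Note that `NULL OR False` is NULL, so the result can be null whenever either operand is nullable, not only when both are.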







[GitHub] spark issue #18828: [SPARK-21619][SQL] Fail the execution of canonicalized p...

2017-08-05 Thread gatorsmile
Github user gatorsmile commented on the issue:

https://github.com/apache/spark/pull/18828
  
There is another zero-argument [ResetCommand](https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/command/SetCommand.scala#L155).





[GitHub] spark pull request #18749: [SPARK-21485][FOLLOWUP][SQL][DOCS] Describes exam...

2017-08-05 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/spark/pull/18749





[GitHub] spark issue #18749: [SPARK-21485][FOLLOWUP][SQL][DOCS] Describes examples an...

2017-08-05 Thread gatorsmile
Github user gatorsmile commented on the issue:

https://github.com/apache/spark/pull/18749
  
LGTM.

Thanks! Merging to master.





[GitHub] spark issue #18668: [SPARK-21637][SPARK-21451][SQL]get `spark.hadoop.*` prop...

2017-08-05 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/18668
  
**[Test build #80288 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80288/testReport)** for PR 18668 at commit [`46a955d`](https://github.com/apache/spark/commit/46a955d7d4a172dde53a504b70947bedce8c22d5).





[GitHub] spark issue #18668: [SPARK-21637][SPARK-21451][SQL]get `spark.hadoop.*` prop...

2017-08-05 Thread gatorsmile
Github user gatorsmile commented on the issue:

https://github.com/apache/spark/pull/18668
  
retest this please





[GitHub] spark issue #18831: [SPARK-21622][ML][SparkR] Support offset in SparkR GLM

2017-08-05 Thread yanboliang
Github user yanboliang commented on the issue:

https://github.com/apache/spark/pull/18831
  
@felixcheung Sorry for the misunderstanding. I agree we can support ```df$myoffset``` as well; the requirement makes sense for R users. Let's create a separate JIRA to track it and make this change for other similar arguments like ```weightCol``` as well. Thanks.





[GitHub] spark pull request #18841: [SPARK-21635][SQL] ACOS(2) and ASIN(2) should be ...

2017-08-05 Thread wangyum
Github user wangyum commented on a diff in the pull request:

https://github.com/apache/spark/pull/18841#discussion_r131525404
  
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/mathExpressions.scala ---
@@ -170,29 +193,29 @@ case class Pi() extends LeafMathExpression(math.Pi, "PI")
 
 // scalastyle:off line.size.limit
 @ExpressionDescription(
-  usage = "_FUNC_(expr) - Returns the inverse cosine (a.k.a. arccosine) of `expr` if -1<=`expr`<=1 or NaN otherwise.",
+  usage = "_FUNC_(expr) - Returns the inverse cosine (a.k.a. arccosine) of `expr` if -1<=`expr`<=1 or NULL otherwise.",
   extended = """
     Examples:
       > SELECT _FUNC_(1);
        0.0
       > SELECT _FUNC_(2);
-       NaN
+       NULL
--- End diff --

Spark already handles `NaN` in some places:

https://github.com/apache/spark/blob/v2.2.0/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/mathExpressions.scala#L84

I think Spark should keep pace with other databases as much as possible; otherwise SQL migration is troublesome.
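The two behaviors being compared for out-of-domain input (the diff above changes ACOS(2) from NaN to NULL) can be sketched like this. Illustrative Python only, with None standing in for SQL NULL:

```python
import math

def acos_nan(x):
    # current behavior: out-of-range input yields NaN
    return math.acos(x) if -1.0 <= x <= 1.0 else float("nan")

def acos_null(x):
    # behavior proposed in the diff: out-of-range input yields NULL (None)
    return math.acos(x) if -1.0 <= x <= 1.0 else None
```

The practical difference matters downstream: NaN propagates through arithmetic and compares unequal to everything, while NULL follows SQL three-valued logic and is skipped by most aggregates.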





[GitHub] spark issue #18831: [SPARK-21622][ML][SparkR] Support offset in SparkR GLM

2017-08-05 Thread felixcheung
Github user felixcheung commented on the issue:

https://github.com/apache/spark/pull/18831
  
To be clear, I'm not suggesting to rename the parameter. I'm suggesting we support the value being passed in as a column, like df$myoffset, in addition to it being a string. This would be more R-like.







[GitHub] spark pull request #18797: [SPARK-21523][ML] update breeze to 0.13.2 for an ...

2017-08-05 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/18797#discussion_r131524755
  
--- Diff: mllib/src/test/scala/org/apache/spark/mllib/optimization/LBFGSSuite.scala ---
@@ -191,8 +191,8 @@ class LBFGSSuite extends SparkFunSuite with MLlibTestSparkContext with Matchers
     // With smaller convergenceTol, it takes more steps.
     assert(lossLBFGS3.length > lossLBFGS2.length)
 
-    // Based on observation, lossLBFGS3 runs 7 iterations, not theoretically guaranteed.
-    assert(lossLBFGS3.length == 7)
+    // Based on observation, lossLBFGS3 runs 6 iterations, not theoretically guaranteed.
+    assert(lossLBFGS3.length == 6)
--- End diff --

OK by me. You could also make it a range, or something really basic like "> 0".
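A range-based assertion as suggested above might look like this. Illustrative Python sketch; the bounds are arbitrary placeholders, not values from the Spark test:

```python
def assert_converged(losses, min_iters=1, max_iters=20):
    # range check instead of a brittle exact count such as len(losses) == 6
    assert min_iters <= len(losses) <= max_iters, len(losses)
    # a converging optimizer's loss history should be non-increasing
    assert all(b <= a + 1e-9 for a, b in zip(losses, losses[1:]))
```

This keeps the test meaningful (the optimizer ran and converged) without pinning it to one library version's exact iteration count.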




