[GitHub] spark pull request #16352: [SPARK-18947][SQL] SQLContext.tableNames should not call Catalog.listTables

2016-12-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/spark/pull/16352



[GitHub] spark pull request #16352: [SPARK-18947][SQL] SQLContext.tableNames should not call Catalog.listTables

2016-12-20 Thread gatorsmile
Github user gatorsmile commented on a diff in the pull request:

https://github.com/apache/spark/pull/16352#discussion_r93390578
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/api/r/SQLUtils.scala ---
@@ -276,11 +276,12 @@ private[sql] object SQLUtils extends Logging {
   }
 
   def getTableNames(sparkSession: SparkSession, databaseName: String): Array[String] = {
-    databaseName match {
-      case n: String if n != null && n.trim.nonEmpty =>
-        sparkSession.catalog.listTables(n).collect().map(_.name)
+    val db = databaseName match {
+      case _ if databaseName != null && databaseName.trim.nonEmpty =>
+        databaseName.trim
--- End diff --

: ) Yeah



[GitHub] spark pull request #16352: [SPARK-18947][SQL] SQLContext.tableNames should not call Catalog.listTables

2016-12-20 Thread cloud-fan
Github user cloud-fan commented on a diff in the pull request:

https://github.com/apache/spark/pull/16352#discussion_r93388755
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/api/r/SQLUtils.scala ---
@@ -276,11 +276,12 @@ private[sql] object SQLUtils extends Logging {
   }
 
   def getTableNames(sparkSession: SparkSession, databaseName: String): Array[String] = {
-    databaseName match {
-      case n: String if n != null && n.trim.nonEmpty =>
-        sparkSession.catalog.listTables(n).collect().map(_.name)
+    val db = databaseName match {
+      case _ if databaseName != null && databaseName.trim.nonEmpty =>
+        databaseName.trim
--- End diff --

ok let me keep the previous behavior, although it's weird (it checks `...trim.nonEmpty` but doesn't use the trimmed database name)
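
For reference, a minimal sketch of what "keeping the previous behavior" means here, in the style of `SQLUtils`: trim only in the emptiness guard, and look up the untrimmed name, so `"default "` keeps failing as it does today. The `currentDatabase` fallback branch is an assumption for completeness, not a quote of the final patch.

```scala
import org.apache.spark.sql.SparkSession

def getTableNames(sparkSession: SparkSession, databaseName: String): Array[String] = {
  val db = databaseName match {
    case _ if databaseName != null && databaseName.trim.nonEmpty =>
      databaseName // untrimmed on purpose: "default " still fails downstream
    case _ =>
      sparkSession.catalog.currentDatabase // assumed fallback for null/empty input
  }
  // SessionCatalog.listTables returns Seq[TableIdentifier]; keep only the names.
  sparkSession.sessionState.catalog.listTables(db).map(_.table).toArray
}
```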



[GitHub] spark pull request #16352: [SPARK-18947][SQL] SQLContext.tableNames should not call Catalog.listTables

2016-12-20 Thread gatorsmile
Github user gatorsmile commented on a diff in the pull request:

https://github.com/apache/spark/pull/16352#discussion_r93385438
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/api/r/SQLUtils.scala ---
@@ -276,11 +276,12 @@ private[sql] object SQLUtils extends Logging {
   }
 
   def getTableNames(sparkSession: SparkSession, databaseName: String): Array[String] = {
-    databaseName match {
-      case n: String if n != null && n.trim.nonEmpty =>
-        sparkSession.catalog.listTables(n).collect().map(_.name)
+    val db = databaseName match {
+      case _ if databaseName != null && databaseName.trim.nonEmpty =>
+        databaseName.trim
--- End diff --

uh... not sure whether we should support trimming. So far, when we do something like
```Scala
session.tableNames("default ")
```

It reports the error:
```
Database 'default ' not found;
```



[GitHub] spark pull request #16352: [SPARK-18947][SQL] SQLContext.tableNames should not call Catalog.listTables

2016-12-20 Thread gatorsmile
Github user gatorsmile commented on a diff in the pull request:

https://github.com/apache/spark/pull/16352#discussion_r93300223
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala ---
@@ -747,7 +747,7 @@ class SQLContext private[sql](val sparkSession: SparkSession)
    * @since 1.3.0
    */
   def tableNames(): Array[String] = {
-    sparkSession.catalog.listTables().collect().map(_.name)
+    sessionState.catalog.listTables(sessionState.catalog.getCurrentDatabase).map(_.table).toArray
--- End diff --

The original one uses [`sparkSession.catalog.listTables`](https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/internal/CatalogImpl.scala#L83-L99); the new one uses [`sessionState.catalog.listTables`](https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala#L600-L632). The implementations of these two calls are different: the latter goes through [`getTablesByPattern`](https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java#L1371-L1381).

To be honest, I did not notice the difference at the beginning. : )
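
To make the difference concrete, a sketch of the two call paths (assuming a `sparkSession` in scope inside Spark's `sql` package, where `sessionState` is accessible):

```scala
// Path 1: CatalogImpl.listTables returns a Dataset[Table]; each Table row is
// assembled from the table's metadata, which the caller then throws away.
val viaCatalog: Array[String] =
  sparkSession.catalog.listTables("default").collect().map(_.name)

// Path 2: SessionCatalog.listTables returns Seq[TableIdentifier] directly; for
// a Hive metastore it bottoms out in getTablesByPattern, no per-table fetch.
val viaSessionCatalog: Array[String] =
  sparkSession.sessionState.catalog.listTables("default").map(_.table).toArray
```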



[GitHub] spark pull request #16352: [SPARK-18947][SQL] SQLContext.tableNames should not call Catalog.listTables

2016-12-20 Thread cloud-fan
GitHub user cloud-fan opened a pull request:

https://github.com/apache/spark/pull/16352

[SPARK-18947][SQL] SQLContext.tableNames should not call Catalog.listTables

## What changes were proposed in this pull request?

It's a huge waste to call `Catalog.listTables` in `SQLContext.tableNames`, which only needs the table names, while `Catalog.listTables` gets the table metadata for each table name.
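
In code terms, this is the change quoted in the `SQLContext.scala` diff above; roughly (the `Before`/`After` method names are just for illustration, and `sparkSession`/`sessionState` are assumed in scope as in `SQLContext`):

```scala
// Before: Catalog.listTables builds a full Table object (name, database,
// description, tableType, isTemporary) per table, only for the name to be kept.
def tableNamesBefore(): Array[String] =
  sparkSession.catalog.listTables().collect().map(_.name)

// After: ask the SessionCatalog for TableIdentifiers only.
def tableNamesAfter(): Array[String] =
  sessionState.catalog.listTables(sessionState.catalog.getCurrentDatabase)
    .map(_.table).toArray
```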

## How was this patch tested?

N/A

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cloud-fan/spark minor

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/16352.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #16352


commit f12dc7924dd3e4847578992d20d36a26d3d02792
Author: Wenchen Fan 
Date:   2016-12-20T14:27:37Z

SQLContext.tableNames should not call Catalog.listTables



