[GitHub] spark pull request: [SPARK-9269][SQL] Add Set type matching to Arr...

2015-07-24 Thread alexliu68
Github user alexliu68 commented on the pull request:

https://github.com/apache/spark/pull/7628#issuecomment-124703728
  
Cassandra has a Set data type. When set data is read into Spark SQL, it errors 
out because there is no matching case in ArrayConverter.
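
A minimal sketch of the shape of such a fix, assuming a hypothetical 
toCatalystArray helper (the names below are illustrative, not the actual 
Spark converter code): treat a Scala Set like a Seq when converting to the 
Catalyst array representation.

    // Hypothetical sketch: accept a Set wherever a Seq or Array is expected.
    def toCatalystArray(value: Any): Seq[Any] = value match {
      case seq: Seq[_]                  => seq
      case set: scala.collection.Set[_] => set.toSeq // added: Set support
      case arr: Array[_]                => arr.toSeq
      case other => sys.error(s"Cannot convert $other to a Catalyst array")
    }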





[GitHub] spark pull request: [SPARK-9269][SQL] Add Set type matching to Arr...

2015-07-23 Thread alexliu68
GitHub user alexliu68 opened a pull request:

https://github.com/apache/spark/pull/7628

[SPARK-9269][SQL] Add Set type matching to ArrayConverter



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alexliu68/spark SPARK-SQL-9269

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/7628.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #7628


commit 0d1a4e299889f45c082f0ac20e10c3bf1537db2a
Author: Alex Liu 
Date:   2015-07-23T22:15:33Z

[SPARK-9269][SQL] Add Set type matching to ArrayConverter







[GitHub] spark pull request: [SPARK-6730][SQL] Allow using keyword as ident...

2015-04-15 Thread alexliu68
Github user alexliu68 commented on the pull request:

https://github.com/apache/spark/pull/5520#issuecomment-93444671
  
I vote to allow keywords as identifiers in OPTIONS. The existing code fails 
without giving a meaningful error message if a keyword appears in OPTIONS.
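
For illustration, a statement of the shape that trips the parser (the data 
source and option names below are assumptions for the example, not taken 
from this thread):

    // "table" is a SQL keyword used as an OPTIONS key; with keywords allowed
    // as identifiers, this statement parses cleanly.
    sqlContext.sql(
      """CREATE TEMPORARY TABLE words
        |USING org.apache.spark.sql.cassandra
        |OPTIONS (table "words", keyspace "test")
      """.stripMargin)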





[GitHub] spark pull request: [SPARK-5622][SQL] add connector configuration ...

2015-03-12 Thread alexliu68
Github user alexliu68 commented on the pull request:

https://github.com/apache/spark/pull/4406#issuecomment-78637718
  
Let's close it





[GitHub] spark pull request: [SPARK-5622][SQL] add connector configuration ...

2015-03-12 Thread alexliu68
Github user alexliu68 closed the pull request at:

https://github.com/apache/spark/pull/4406





[GitHub] spark pull request: [SPARK-5622][SQL] add connector configuration ...

2015-02-12 Thread alexliu68
Github user alexliu68 commented on the pull request:

https://github.com/apache/spark/pull/4406#issuecomment-74178413
  
The following log shows how HiveServer2 starts the metastore:

INFO  2015-02-12 15:37:56 org.apache.hive.service.AbstractService: HiveServer2: Async execution pool size 50
INFO  2015-02-12 15:37:56 org.apache.hive.service.AbstractService: Service:OperationManager is inited.
INFO  2015-02-12 15:37:56 org.apache.hive.service.AbstractService: Service:SessionManager is inited.
INFO  2015-02-12 15:37:56 org.apache.hive.service.AbstractService: Service:CLIService is inited.
INFO  2015-02-12 15:37:56 org.apache.hive.service.AbstractService: Service:ThriftBinaryCLIService is inited.
INFO  2015-02-12 15:37:56 org.apache.hive.service.AbstractService: Service:HiveServer2 is inited.
INFO  2015-02-12 15:37:56 org.apache.hive.service.AbstractService: Service:OperationManager is started.
INFO  2015-02-12 15:37:56 org.apache.hive.service.AbstractService: Service:SessionManager is started.
INFO  2015-02-12 15:37:56 org.apache.hive.service.AbstractService: Service:CLIService is started.
INFO  2015-02-12 15:37:56 org.apache.hadoop.hive.metastore.HiveMetaStore: 0: Opening raw store with implemenation class:com.datastax.bdp.hadoop.hive.metastore.CassandraHiveMetaStore
DEBUG 2015-02-12 15:37:56 org.apache.hadoop.conf.Configuration: java.io.IOException: config(config)
    at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:263)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getConf(HiveMetaStore.java:381)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:402)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:441)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:326)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:286)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:54)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:59)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.newHMSHandler(HiveMetaStore.java:4060)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:121)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:104)
    at org.apache.hive.service.cli.CLIService.start(CLIService.java:82)
    at org.apache.hive.service.CompositeService.start(CompositeService.java:70)
    at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:73)
    at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2$.main(HiveThriftServer2.scala:69)
    at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2.main(HiveThriftServer2.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:358)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

DEBUG 2015-02-12 15:37:56 com.datastax.bdp.hadoop.hive.metastore.CassandraHiveMetaStore: Creating CassandraHiveMetaStore





[GitHub] spark pull request: [SPARK-5622][SQL] add connector configuration ...

2015-02-12 Thread alexliu68
Github user alexliu68 commented on the pull request:

https://github.com/apache/spark/pull/4406#issuecomment-74175322
  
When the Hive-thriftserver with DSE integration starts, it starts our custom 
Hive metastore, which uses a Cassandra client to access Cassandra. The 
Cassandra client takes the username/password configuration settings to access 
the Cassandra nodes. Another use case is connecting to the thrift server 
through Beeline and passing the username/password; in that case the Cassandra 
client has to use hiveconf to get the username/password per user session. 
That's the reason we can't simply read it through system properties.

RelationProvider is not involved during Hive metastore startup, so I am 
afraid it can't help in our case.
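
A minimal sketch of what per-session credential lookup via hiveconf could 
look like (the cassandra.* key names are assumptions for illustration, not 
DSE's actual settings):

    import org.apache.hadoop.hive.conf.HiveConf

    // Sketch: read credentials from the per-session Hive conf instead of
    // JVM-wide system properties, so each Beeline session can differ.
    def sessionCredentials(conf: HiveConf): (String, String) = {
      val user = conf.get("cassandra.username", "") // assumed key name
      val pass = conf.get("cassandra.password", "") // assumed key name
      (user, pass)
    }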





[GitHub] spark pull request: [SPARK-5622][SQL] add connector configuration ...

2015-02-05 Thread alexliu68
GitHub user alexliu68 opened a pull request:

https://github.com/apache/spark/pull/4406

[SPARK-5622][SQL] add connector configuration to thrift-server



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alexliu68/spark SPARK-SQL-5622

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/4406.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4406


commit 482c08fd15bd24130bc34d33498557b86a242492
Author: Alex Liu 
Date:   2015-02-05T22:23:11Z

[SPARK-5622][SQL] add connector configuration to thrift-server







[GitHub] spark pull request: [SPARK-4943][SQL] Allow table name having dot ...

2015-01-13 Thread alexliu68
Github user alexliu68 commented on the pull request:

https://github.com/apache/spark/pull/3941#issuecomment-69823910
  
I only changed selection to use tableIdentifier for joining tables. I kept 
createTable unchanged to minimize API changes. createTable could also use 
tableIdentifier to create catalog/cluster/database-level tables, but I leave 
that for future development if we need it.





[GitHub] spark pull request: [SPARK-4943][SQL] Allow table name having dot ...

2015-01-09 Thread alexliu68
Github user alexliu68 commented on a diff in the pull request:

https://github.com/apache/spark/pull/3941#discussion_r22755736
  
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Catalog.scala ---
@@ -115,43 +101,41 @@ class SimpleCatalog(val caseSensitive: Boolean) extends Catalog {
 trait OverrideCatalog extends Catalog {
 
   // TODO: This doesn't work when the database changes...
-  val overrides = new mutable.HashMap[(Option[String],String), LogicalPlan]()
+  val overrides = new mutable.HashMap[String, LogicalPlan]()
--- End diff --

restore it to (Option[String],String)





[GitHub] spark pull request: [SPARK-4943][SQL] Allow table name having dot ...

2015-01-09 Thread alexliu68
Github user alexliu68 commented on a diff in the pull request:

https://github.com/apache/spark/pull/3941#discussion_r22755728
  
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/SqlParser.scala ---
@@ -178,10 +178,23 @@ class SqlParser extends AbstractSparkSQLParser {
     joinedRelation | relationFactor
 
   protected lazy val relationFactor: Parser[LogicalPlan] =
-    ( ident ~ (opt(AS) ~> opt(ident)) ^^ {
-        case tableName ~ alias => UnresolvedRelation(None, tableName, alias)
+    (
+      ident ~ ("." ~> ident) ~ ("." ~> ident) ~ ("." ~> ident) ~ (opt(AS) ~> opt(ident)) ^^ {
+        case reserveName1 ~ reserveName2 ~ dbName ~ tableName ~ alias =>
+          UnresolvedRelation(IndexedSeq(tableName, dbName, reserveName2, reserveName1), alias)
       }
-    | ("(" ~> start <~ ")") ~ (AS.? ~> ident) ^^ { case s ~ a => Subquery(a, s) }
+      | ident ~ ("." ~> ident) ~ ("." ~> ident) ~ (opt(AS) ~> opt(ident)) ^^ {
+          case reserveName1 ~ dbName ~ tableName ~ alias =>
+            UnresolvedRelation(IndexedSeq(tableName, dbName, reserveName1), alias)
+        }
+      | ident ~ ("." ~> ident) ~ (opt(AS) ~> opt(ident)) ^^ {
+          case dbName ~ tableName ~ alias =>
+            UnresolvedRelation(IndexedSeq(tableName, dbName), alias)
+        }
+      | ident ~ (opt(AS) ~> opt(ident)) ^^ {
+          case tableName ~ alias => UnresolvedRelation(IndexedSeq(tableName), alias)
--- End diff --

I changed it to rep1sep(ident, ".").
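
For reference, a hedged sketch of what the rep1sep version of the rule could 
look like (the combinator shapes are taken from the diff above; this is not 
necessarily the final merged code):

    protected lazy val relationFactor: Parser[LogicalPlan] =
      ( rep1sep(ident, ".") ~ (opt(AS) ~> opt(ident)) ^^ {
          case tableIdent ~ alias => UnresolvedRelation(tableIdent, alias)
        }
      | ("(" ~> start <~ ")") ~ (AS.? ~> ident) ^^ { case s ~ a => Subquery(a, s) }
      )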





[GitHub] spark pull request: [SPARK-4943][SQL] Allow table name having dot ...

2015-01-09 Thread alexliu68
Github user alexliu68 commented on a diff in the pull request:

https://github.com/apache/spark/pull/3941#discussion_r22746062
  
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Catalog.scala ---
@@ -28,77 +28,63 @@ trait Catalog {
 
   def caseSensitive: Boolean
 
-  def tableExists(db: Option[String], tableName: String): Boolean
+  def tableExists(tableIdentifier: Seq[String]): Boolean
 
   def lookupRelation(
-    databaseName: Option[String],
-    tableName: String,
-    alias: Option[String] = None): LogicalPlan
+      tableIdentifier: Seq[String],
+      alias: Option[String] = None): LogicalPlan
 
-  def registerTable(databaseName: Option[String], tableName: String, plan: LogicalPlan): Unit
+  def registerTable(tableIdentifier: Seq[String], plan: LogicalPlan): Unit
 
-  def unregisterTable(databaseName: Option[String], tableName: String): Unit
+  def unregisterTable(tableIdentifier: Seq[String]): Unit
 
   def unregisterAllTables(): Unit
 
-  protected def processDatabaseAndTableName(
-      databaseName: Option[String],
-      tableName: String): (Option[String], String) = {
+  protected def processTableIdentifier(tableIdentifier: Seq[String]):
+  Seq[String] = {
     if (!caseSensitive) {
-      (databaseName.map(_.toLowerCase), tableName.toLowerCase)
+      tableIdentifier.map(_.toLowerCase)
     } else {
-      (databaseName, tableName)
+      tableIdentifier
     }
   }
 
-  protected def processDatabaseAndTableName(
-      databaseName: String,
-      tableName: String): (String, String) = {
-    if (!caseSensitive) {
-      (databaseName.toLowerCase, tableName.toLowerCase)
-    } else {
-      (databaseName, tableName)
-    }
-  }
 }
 
 class SimpleCatalog(val caseSensitive: Boolean) extends Catalog {
   val tables = new mutable.HashMap[String, LogicalPlan]()
 
   override def registerTable(
-      databaseName: Option[String],
-      tableName: String,
+      tableIdentifier: Seq[String],
       plan: LogicalPlan): Unit = {
-    val (dbName, tblName) = processDatabaseAndTableName(databaseName, tableName)
-    tables += ((tblName, plan))
+    val tableIdent = processTableIdentifier(tableIdentifier)
+    tables += ((tableIdent.mkString("."), plan))
   }
 
-  override def unregisterTable(
-      databaseName: Option[String],
-      tableName: String) = {
-    val (dbName, tblName) = processDatabaseAndTableName(databaseName, tableName)
-    tables -= tblName
+  override def unregisterTable(tableIdentifier: Seq[String]) = {
+    val tableIdent = processTableIdentifier(tableIdentifier)
+    tables -= tableIdent.mkString(".")
   }
 
   override def unregisterAllTables() = {
     tables.clear()
   }
 
-  override def tableExists(db: Option[String], tableName: String): Boolean = {
-    val (dbName, tblName) = processDatabaseAndTableName(db, tableName)
-    tables.get(tblName) match {
+  override def tableExists(tableIdentifier: Seq[String]): Boolean = {
+    val tableIdent = processTableIdentifier(tableIdentifier)
+    tables.get(tableIdent.mkString(".")) match {
       case Some(_) => true
       case None => false
     }
   }
 
   override def lookupRelation(
-      databaseName: Option[String],
-      tableName: String,
+      tableIdentifier: Seq[String],
       alias: Option[String] = None): LogicalPlan = {
-    val (dbName, tblName) = processDatabaseAndTableName(databaseName, tableName)
-    val table = tables.getOrElse(tblName, sys.error(s"Table Not Found: $tableName"))
-    val tableWithQualifiers = Subquery(tblName, table)
+    val tableIdent = processTableIdentifier(tableIdentifier)
+    val tableFullName = tableIdent.mkString(".")
+    val table = tables.getOrElse(tableFullName, sys.error(s"Table Not Found: $tableFullName"))
+    val tableWithQualifiers = Subquery(tableIdent.head, table)
--- End diff --

agreed





[GitHub] spark pull request: [SPARK-4943][SQL] Allow table name having dot ...

2015-01-09 Thread alexliu68
Github user alexliu68 commented on a diff in the pull request:

https://github.com/apache/spark/pull/3941#discussion_r22745569
  
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Catalog.scala ---
@@ -28,77 +28,63 @@ trait Catalog {
 
   def caseSensitive: Boolean
 
-  def tableExists(db: Option[String], tableName: String): Boolean
+  def tableExists(tableIdentifier: Seq[String]): Boolean
 
   def lookupRelation(
-    databaseName: Option[String],
-    tableName: String,
-    alias: Option[String] = None): LogicalPlan
+      tableIdentifier: Seq[String],
+      alias: Option[String] = None): LogicalPlan
 
-  def registerTable(databaseName: Option[String], tableName: String, plan: LogicalPlan): Unit
+  def registerTable(tableIdentifier: Seq[String], plan: LogicalPlan): Unit
 
-  def unregisterTable(databaseName: Option[String], tableName: String): Unit
+  def unregisterTable(tableIdentifier: Seq[String]): Unit
 
   def unregisterAllTables(): Unit
 
-  protected def processDatabaseAndTableName(
-      databaseName: Option[String],
-      tableName: String): (Option[String], String) = {
+  protected def processTableIdentifier(tableIdentifier: Seq[String]):
+  Seq[String] = {
     if (!caseSensitive) {
-      (databaseName.map(_.toLowerCase), tableName.toLowerCase)
+      tableIdentifier.map(_.toLowerCase)
     } else {
-      (databaseName, tableName)
+      tableIdentifier
    }
   }
 
-  protected def processDatabaseAndTableName(
-      databaseName: String,
-      tableName: String): (String, String) = {
-    if (!caseSensitive) {
-      (databaseName.toLowerCase, tableName.toLowerCase)
-    } else {
-      (databaseName, tableName)
-    }
-  }
 }
 
 class SimpleCatalog(val caseSensitive: Boolean) extends Catalog {
   val tables = new mutable.HashMap[String, LogicalPlan]()
 
   override def registerTable(
-      databaseName: Option[String],
-      tableName: String,
+      tableIdentifier: Seq[String],
       plan: LogicalPlan): Unit = {
-    val (dbName, tblName) = processDatabaseAndTableName(databaseName, tableName)
-    tables += ((tblName, plan))
+    val tableIdent = processTableIdentifier(tableIdentifier)
+    tables += ((tableIdent.mkString("."), plan))
   }
 
-  override def unregisterTable(
-      databaseName: Option[String],
-      tableName: String) = {
-    val (dbName, tblName) = processDatabaseAndTableName(databaseName, tableName)
-    tables -= tblName
+  override def unregisterTable(tableIdentifier: Seq[String]) = {
+    val tableIdent = processTableIdentifier(tableIdentifier)
+    tables -= tableIdent.mkString(".")
   }
 
   override def unregisterAllTables() = {
     tables.clear()
   }
 
-  override def tableExists(db: Option[String], tableName: String): Boolean = {
-    val (dbName, tblName) = processDatabaseAndTableName(db, tableName)
-    tables.get(tblName) match {
+  override def tableExists(tableIdentifier: Seq[String]): Boolean = {
+    val tableIdent = processTableIdentifier(tableIdentifier)
+    tables.get(tableIdent.mkString(".")) match {
       case Some(_) => true
       case None => false
     }
   }
 
   override def lookupRelation(
-      databaseName: Option[String],
-      tableName: String,
+      tableIdentifier: Seq[String],
       alias: Option[String] = None): LogicalPlan = {
-    val (dbName, tblName) = processDatabaseAndTableName(databaseName, tableName)
-    val table = tables.getOrElse(tblName, sys.error(s"Table Not Found: $tableName"))
-    val tableWithQualifiers = Subquery(tblName, table)
+    val tableIdent = processTableIdentifier(tableIdentifier)
+    val tableFullName = tableIdent.mkString(".")
+    val table = tables.getOrElse(tableFullName, sys.error(s"Table Not Found: $tableFullName"))
+    val tableWithQualifiers = Subquery(tableIdent.head, table)
--- End diff --

Because the order is reversed, we know the database name will be second, so 
it's easy to access it using lift(1).
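
A tiny illustration of that convention (the values are hypothetical):

    val tableIdent = IndexedSeq("myTable", "myDb", "myCluster") // reversed: table name first
    val tableName  = tableIdent.head     // "myTable"
    val dbName     = tableIdent.lift(1)  // Some("myDb"); None when no database part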





[GitHub] spark pull request: [SPARK-4943][SQL] Allow table name having dot ...

2015-01-09 Thread alexliu68
Github user alexliu68 commented on the pull request:

https://github.com/apache/spark/pull/3941#issuecomment-69403586
  
I used IndexedSeq instead of Seq because IndexedSeq is faster at accessing 
elements via lift. I can revert them back to Seq.





[GitHub] spark pull request: [SPARK-4943][SQL] Allow table name having dot ...

2015-01-09 Thread alexliu68
Github user alexliu68 commented on a diff in the pull request:

https://github.com/apache/spark/pull/3941#discussion_r22744829
  
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala ---
@@ -251,6 +257,26 @@ private[hive] class HiveMetastoreCatalog(hive: HiveContext) extends Catalog with
     }
   }
 
+  protected def processDatabaseAndTableName(
+      databaseName: Option[String],
+      tableName: String): (Option[String], String) = {
+    if (!caseSensitive) {
+      (databaseName.map(_.toLowerCase), tableName.toLowerCase)
+    } else {
+      (databaseName, tableName)
+    }
+  }
+
+  protected def processDatabaseAndTableName(
--- End diff --

It's only used by Hive table creation.





[GitHub] spark pull request: [SPARK-4943][SQL] Allow table name having dot ...

2015-01-09 Thread alexliu68
Github user alexliu68 commented on a diff in the pull request:

https://github.com/apache/spark/pull/3941#discussion_r22744754
  
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveQl.scala ---
@@ -386,6 +386,13 @@ private[hive] object HiveQl {
     (db, tableName)
   }
 
+  protected def extractTableIdent(tableNameParts: Node): IndexedSeq[String] = {
+    tableNameParts.getChildren.map { case Token(part, Nil) => cleanIdentifier(part) } match {
+      case Seq(tableOnly) => IndexedSeq(tableOnly)
+      case Seq(databaseName, table) => IndexedSeq(table, databaseName)
--- End diff --

agreed





[GitHub] spark pull request: [SPARK-4943][SQL] Allow table name having dot ...

2015-01-09 Thread alexliu68
Github user alexliu68 commented on a diff in the pull request:

https://github.com/apache/spark/pull/3941#discussion_r22744744
  
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Catalog.scala ---
@@ -28,77 +28,63 @@ trait Catalog {
 
   def caseSensitive: Boolean
 
-  def tableExists(db: Option[String], tableName: String): Boolean
+  def tableExists(tableIdentifier: Seq[String]): Boolean
 
   def lookupRelation(
-    databaseName: Option[String],
-    tableName: String,
-    alias: Option[String] = None): LogicalPlan
+      tableIdentifier: Seq[String],
+      alias: Option[String] = None): LogicalPlan
 
-  def registerTable(databaseName: Option[String], tableName: String, plan: LogicalPlan): Unit
+  def registerTable(tableIdentifier: Seq[String], plan: LogicalPlan): Unit
 
-  def unregisterTable(databaseName: Option[String], tableName: String): Unit
+  def unregisterTable(tableIdentifier: Seq[String]): Unit
 
   def unregisterAllTables(): Unit
 
-  protected def processDatabaseAndTableName(
-      databaseName: Option[String],
-      tableName: String): (Option[String], String) = {
+  protected def processTableIdentifier(tableIdentifier: Seq[String]):
+  Seq[String] = {
     if (!caseSensitive) {
-      (databaseName.map(_.toLowerCase), tableName.toLowerCase)
+      tableIdentifier.map(_.toLowerCase)
     } else {
-      (databaseName, tableName)
+      tableIdentifier
     }
   }
 
-  protected def processDatabaseAndTableName(
-      databaseName: String,
-      tableName: String): (String, String) = {
-    if (!caseSensitive) {
-      (databaseName.toLowerCase, tableName.toLowerCase)
-    } else {
-      (databaseName, tableName)
-    }
-  }
 }
 
 class SimpleCatalog(val caseSensitive: Boolean) extends Catalog {
   val tables = new mutable.HashMap[String, LogicalPlan]()
 
   override def registerTable(
-      databaseName: Option[String],
-      tableName: String,
+      tableIdentifier: Seq[String],
       plan: LogicalPlan): Unit = {
-    val (dbName, tblName) = processDatabaseAndTableName(databaseName, tableName)
-    tables += ((tblName, plan))
+    val tableIdent = processTableIdentifier(tableIdentifier)
+    tables += ((tableIdent.mkString("."), plan))
   }
 
-  override def unregisterTable(
-      databaseName: Option[String],
-      tableName: String) = {
-    val (dbName, tblName) = processDatabaseAndTableName(databaseName, tableName)
-    tables -= tblName
+  override def unregisterTable(tableIdentifier: Seq[String]) = {
+    val tableIdent = processTableIdentifier(tableIdentifier)
+    tables -= tableIdent.mkString(".")
   }
 
   override def unregisterAllTables() = {
     tables.clear()
   }
 
-  override def tableExists(db: Option[String], tableName: String): Boolean = {
-    val (dbName, tblName) = processDatabaseAndTableName(db, tableName)
-    tables.get(tblName) match {
+  override def tableExists(tableIdentifier: Seq[String]): Boolean = {
+    val tableIdent = processTableIdentifier(tableIdentifier)
+    tables.get(tableIdent.mkString(".")) match {
       case Some(_) => true
       case None => false
     }
   }
 
   override def lookupRelation(
-      databaseName: Option[String],
-      tableName: String,
+      tableIdentifier: Seq[String],
       alias: Option[String] = None): LogicalPlan = {
-    val (dbName, tblName) = processDatabaseAndTableName(databaseName, tableName)
-    val table = tables.getOrElse(tblName, sys.error(s"Table Not Found: $tableName"))
-    val tableWithQualifiers = Subquery(tblName, table)
+    val tableIdent = processTableIdentifier(tableIdentifier)
+    val tableFullName = tableIdent.mkString(".")
+    val table = tables.getOrElse(tableFullName, sys.error(s"Table Not Found: $tableFullName"))
+    val tableWithQualifiers = Subquery(tableIdent.head, table)
--- End diff --

I store the seq in reversed order, so the table name is at the head.





[GitHub] spark pull request: [SPARK-4943][SQL] Allow table name having dot ...

2015-01-09 Thread alexliu68
Github user alexliu68 commented on a diff in the pull request:

https://github.com/apache/spark/pull/3941#discussion_r22744697
  
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Catalog.scala ---
@@ -28,77 +28,63 @@ trait Catalog {
 
   def caseSensitive: Boolean
 
-  def tableExists(db: Option[String], tableName: String): Boolean
+  def tableExists(tableIdentifier: Seq[String]): Boolean
 
   def lookupRelation(
-    databaseName: Option[String],
-    tableName: String,
-    alias: Option[String] = None): LogicalPlan
+      tableIdentifier: Seq[String],
+      alias: Option[String] = None): LogicalPlan
--- End diff --

agreed





[GitHub] spark pull request: [SPARK-4943][SQL] Allow table name having dot ...

2015-01-09 Thread alexliu68
Github user alexliu68 commented on a diff in the pull request:

https://github.com/apache/spark/pull/3941#discussion_r22744696
  
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Catalog.scala ---
@@ -28,77 +28,63 @@ trait Catalog {
 
   def caseSensitive: Boolean
 
-  def tableExists(db: Option[String], tableName: String): Boolean
+  def tableExists(tableIdentifier: Seq[String]): Boolean
 
   def lookupRelation(
-    databaseName: Option[String],
-    tableName: String,
-    alias: Option[String] = None): LogicalPlan
+      tableIdentifier: Seq[String],
+      alias: Option[String] = None): LogicalPlan
 
-  def registerTable(databaseName: Option[String], tableName: String, plan: LogicalPlan): Unit
+  def registerTable(tableIdentifier: Seq[String], plan: LogicalPlan): Unit
 
-  def unregisterTable(databaseName: Option[String], tableName: String): Unit
+  def unregisterTable(tableIdentifier: Seq[String]): Unit
 
   def unregisterAllTables(): Unit
 
-  protected def processDatabaseAndTableName(
-      databaseName: Option[String],
-      tableName: String): (Option[String], String) = {
+  protected def processTableIdentifier(tableIdentifier: Seq[String]):
+  Seq[String] = {
--- End diff --

agreed





[GitHub] spark pull request: [SPARK-4943][SQL] Allow table name having dot ...

2015-01-07 Thread alexliu68
GitHub user alexliu68 opened a pull request:

https://github.com/apache/spark/pull/3941

[SPARK-4943][SQL] Allow table name having dot to support db/catalog ...

...r

This pull request only fixes the parsing error and changes the API to use 
tableIdentifier. The changes related to joining datasources across different 
catalogs are not included in this pull request.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alexliu68/spark SPARK-SQL-4943-3

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/3941.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3941


commit 365299739c92132cd34110a00e860e218e7ca9c6
Author: Alex Liu 
Date:   2015-01-08T05:09:17Z

[SPARK-4943][SQL] Allow table name having dot to support db/catalog ...







[GitHub] spark pull request: [SPARK-4943][SQL] Allow table name having dot ...

2014-12-31 Thread alexliu68
Github user alexliu68 commented on the pull request:

https://github.com/apache/spark/pull/3848#issuecomment-68470193
  
Closing the pull request for further discussion in the JIRA ticket.





[GitHub] spark pull request: [SPARK-4943][SQL] Allow table name having dot ...

2014-12-31 Thread alexliu68
Github user alexliu68 closed the pull request at:

https://github.com/apache/spark/pull/3848





[GitHub] spark pull request: [SPARK-4943][SQL] Allow table name having dot ...

2014-12-30 Thread alexliu68
Github user alexliu68 commented on the pull request:

https://github.com/apache/spark/pull/3848#issuecomment-68409413
  
I rebased the commit to simply fix the parsing issue and concatenate the 
cluster name, database name, and table name into a single string with dots 
in between. Now it can parse a [clusterName].[databaseName].[tableName] style 
full table name and pass it as tableName to the catalog.

The following example query works for the Cassandra integration:

e.g. Select table.column from cluster.database.table

I leave the refactoring to better support full table names to future work.

Ideally Spark SQL should be able to join data across catalogs, clusters, 
databases and tables. There are four levels of data joining: catalog level, 
cluster level, database level, and table level.
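
A minimal sketch of the concatenation described above (the identifier values 
are hypothetical):

    // Parsed identifier parts joined back into the single dotted name that
    // is passed to the catalog as tableName.
    val parts = Seq("cluster", "database", "table")
    val fullTableName = parts.mkString(".") // "cluster.database.table"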





[GitHub] spark pull request: [SPARK-4943][SQL] Allow table name having dot ...

2014-12-30 Thread alexliu68
Github user alexliu68 commented on the pull request:

https://github.com/apache/spark/pull/3848#issuecomment-68392112
  
Strings separated by space? It would work, let me test it.





[GitHub] spark pull request: [SPARK-4943][SQL] Allow table name having dot ...

2014-12-30 Thread alexliu68
Github user alexliu68 commented on the pull request:

https://github.com/apache/spark/pull/3848#issuecomment-68390959
  
It can apply to any type of datasource, e.g. HBase, Oracle, MongoDB.





[GitHub] spark pull request: [SPARK-4943][SQL] Allow table name having dot ...

2014-12-30 Thread alexliu68
Github user alexliu68 commented on the pull request:

https://github.com/apache/spark/pull/3848#issuecomment-68390702
  
It's not for a single system; it's a more general approach. E.g. you may have 
many MySQL clusters, each cluster has databases, and each database has tables.

Right now the parser can't handle a table name containing a dot. Letting each 
system parse the table name into cluster/database/table is a little risky, 
because parsing errors may occur if the Spark parser changes.





[GitHub] spark pull request: [SPARK-4943][SQL] Allow table name having dot ...

2014-12-30 Thread alexliu68
Github user alexliu68 commented on the pull request:

https://github.com/apache/spark/pull/3848#issuecomment-68389753
  
Support full table names in the format of <cluster>.<database>.<table> so 
that we can join data from different tables, where the tables can come from 
different databases and clusters.





[GitHub] spark pull request: [SPARK-4861][SQL] Allow table name having dot ...

2014-12-30 Thread alexliu68
GitHub user alexliu68 opened a pull request:

https://github.com/apache/spark/pull/3848

[SPARK-4861][SQL] Allow table name having dot to support full table name...

... with cluster and database names

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alexliu68/spark SPARK-SQL-4943-2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/3848.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3848


commit d3a07e774a328994653a1cb9091950decc1289b8
Author: Alex Liu 
Date:   2014-12-30T19:36:26Z

[SPARK-4861][SQL] Allow table name having dot to support full table name 
with cluster and database names







[GitHub] spark pull request: [SPARK-4925][SQL] Publish Spark SQL hive-thrif...

2014-12-29 Thread alexliu68
Github user alexliu68 commented on the pull request:

https://github.com/apache/spark/pull/3766#issuecomment-68317258
  
I removed it.





[GitHub] spark pull request: [SPARK-4925][SQL] Publish Spark SQL hive-thrif...

2014-12-22 Thread alexliu68
Github user alexliu68 commented on the pull request:

https://github.com/apache/spark/pull/3766#issuecomment-67912523
  
We integrate the Spark hive-thriftserver with Cassandra by downloading the 
hive-thriftserver via build.xml and copying it to the classpath. Currently 
(temporarily) we publish it to our private repository, but we hope we won't 
need to maintain the artifact in our private repo and build it for each new 
release. Publishing it to the public Maven repository would help us a lot.
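
For illustration, depending on a published artifact would look something like 
this in sbt (the coordinates are assumed; the artifact was not yet published 
at the time):

    // Hypothetical coordinates for a published hive-thriftserver artifact.
    libraryDependencies += "org.apache.spark" %% "spark-hive-thriftserver" % "1.2.0"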





[GitHub] spark pull request: [SPARK-3816][SQL] Publish Spark SQL hive-thrif...

2014-12-22 Thread alexliu68
GitHub user alexliu68 opened a pull request:

https://github.com/apache/spark/pull/3766

[SPARK-3816][SQL] Publish Spark SQL hive-thriftserver maven artifact



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alexliu68/spark SPARK-SQL-4925

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/3766.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3766


commit 467f570817a8f5a19323633781b5f5a588d507cb
Author: Alex Liu 
Date:   2014-12-23T01:05:40Z

[SPARK-3816][SQL] Publish Spark SQL hive-thriftserver maven artifact







[GitHub] spark pull request: [SPARK-4354][SQL] Swallow NoSuchObjectExceptio...

2014-11-14 Thread alexliu68
Github user alexliu68 commented on the pull request:

https://github.com/apache/spark/pull/3211#issuecomment-63149213
  
It looks like a Hive bug which logs some error messages: 
https://issues.apache.org/jira/browse/SPARK-4345





[GitHub] spark pull request: [SPARK-4354][SQL] Swallow NoSuchObjectExceptio...

2014-11-14 Thread alexliu68
Github user alexliu68 commented on the pull request:

https://github.com/apache/spark/pull/3211#issuecomment-63147819
  
Closing it for now.





[GitHub] spark pull request: [SPARK-4354][SQL] Swallow NoSuchObjectExceptio...

2014-11-14 Thread alexliu68
Github user alexliu68 closed the pull request at:

https://github.com/apache/spark/pull/3211





[GitHub] spark pull request: [SPARK-4354][SQL] Swallow NoSuchObjectExceptio...

2014-11-14 Thread alexliu68
Github user alexliu68 commented on a diff in the pull request:

https://github.com/apache/spark/pull/3211#discussion_r20375582
  
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/commands.scala ---
@@ -55,7 +56,12 @@ case class DropTable(tableName: String, ifExists: Boolean) extends LeafNode with
 
   override protected lazy val sideEffectResult: Seq[Row] = {
     val ifExistsClause = if (ifExists) "IF EXISTS " else ""
-    hiveContext.runSqlHive(s"DROP TABLE $ifExistsClause$tableName")
+    try {
+      hiveContext.runSqlHive(s"DROP TABLE $ifExistsClause$tableName")
+    } catch {
+      case ne: NoSuchObjectException => // ignore
+      case e: Exception => throw e
--- End diff --

Per http://www.scala-lang.org/old/node/255.html, I think we can't omit the 
second clause.





[GitHub] spark pull request: [SPARK-4354][SQL] Swallow NoSuchObjectExceptio...

2014-11-11 Thread alexliu68
GitHub user alexliu68 opened a pull request:

https://github.com/apache/spark/pull/3211

[SPARK-4354][SQL] Swallow NoSuchObjectException exception when drop a no...

...ne-exist hive table

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alexliu68/spark SPARK-SQL-4345

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/3211.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3211


commit 439ab0e3d6d8c930551775a328b8d678771f6c3b
Author: Alex Liu 
Date:   2014-11-11T22:05:23Z

[SPARK-4354][SQL] Swallow NoSuchObjectException exception when drop a 
none-exist hive table







[GitHub] spark pull request: [SPARK-3816][SQL] Add table properties from st...

2014-10-26 Thread alexliu68
Github user alexliu68 commented on the pull request:

https://github.com/apache/spark/pull/2677#issuecomment-60535574
  
It's rebased.





[GitHub] spark pull request: [SPARK-3816][SQL] Add table properties from st...

2014-10-10 Thread alexliu68
Github user alexliu68 commented on the pull request:

https://github.com/apache/spark/pull/2677#issuecomment-58681796
  
Sure. I am pretty busy with my daily work right now, but I will find time to 
add a unit test. I fixed this issue while integrating CassandraStorageHandler.





[GitHub] spark pull request: [SPARK-3816][SQL] Add table properties from st...

2014-10-09 Thread alexliu68
Github user alexliu68 commented on the pull request:

https://github.com/apache/spark/pull/2677#issuecomment-58528866
  
I added a comment to the code.





[GitHub] spark pull request: [SPARK-3816][SQL] Add configureOutputJobProper...

2014-10-06 Thread alexliu68
GitHub user alexliu68 reopened a pull request:

https://github.com/apache/spark/pull/2677

[SPARK-3816][SQL] Add configureOutputJobPropertiesForStorageHandler to j...

...ob conf in SparkHadoopWriter class

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alexliu68/spark SPARK-SQL-3816

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/2677.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2677


commit e62af9fecd009a396a9ea2a362170977653472bb
Author: Alex Liu 
Date:   2014-10-06T16:11:37Z

[SPARK-3816][SQL] Add configureOutputJobPropertiesForStorageHandler to job 
conf in SparkHiveWriterContainer class







[GitHub] spark pull request: [SPARK-3816][SQL] Add configureOutputJobProper...

2014-10-06 Thread alexliu68
Github user alexliu68 closed the pull request at:

https://github.com/apache/spark/pull/2677





[GitHub] spark pull request: [SPARK-3816][SQL] Add configureOutputJobProper...

2014-10-06 Thread alexliu68
GitHub user alexliu68 opened a pull request:

https://github.com/apache/spark/pull/2677

[SPARK-3816][SQL] Add configureOutputJobPropertiesForStorageHandler to j...

...ob conf in SparkHadoopWriter class

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alexliu68/spark SPARK-SQL-3816

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/2677.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2677


commit 14e3c63e49fab82a5ef386bb714984eca29f3bdc
Author: Alex Liu 
Date:   2014-10-06T16:03:30Z

[SPARK-3816][SQL] Add configureOutputJobPropertiesForStorageHandler to job 
conf in SparkHadoopWriter class







[GitHub] spark pull request: SPARK-SQL-2846 add configureInputJobProperties...

2014-08-13 Thread alexliu68
GitHub user alexliu68 opened a pull request:

https://github.com/apache/spark/pull/1927

SPARK-SQL-2846 add configureInputJobPropertiesForStorageHandler to initi...

...al job conf

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alexliu68/spark SPARK-SQL-2846

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/1927.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1927


commit e4bdc4cd07c5bfa675ff954242cc559d94c23906
Author: Alex Liu 
Date:   2014-08-13T18:23:12Z

SPARK-SQL-2846 add configureInputJobPropertiesForStorageHandler to initial 
job conf
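
Taken together with SPARK-3816 above, a minimal sketch of the two 
storage-handler hooks in Hive's PlanUtils (the wiring below is illustrative, 
not the exact Spark patch):

    import org.apache.hadoop.hive.ql.plan.{PlanUtils, TableDesc}

    // Sketch: before building the read/write job conf, let the table's
    // storage handler (e.g. a Cassandra storage handler) contribute its
    // job properties.
    def prepareTableDesc(tableDesc: TableDesc, forWrite: Boolean): Unit = {
      if (forWrite) {
        PlanUtils.configureOutputJobPropertiesForStorageHandler(tableDesc)
      } else {
        PlanUtils.configureInputJobPropertiesForStorageHandler(tableDesc)
      }
    }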



