[jira] [Commented] (SPARK-19186) Hash symbol in middle of Sybase database table name causes Spark Exception

2017-01-12 Thread Adrian Schulewitz (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-19186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15821383#comment-15821383 ]

Adrian Schulewitz commented on SPARK-19186:
---

Hi,

I tried (i) enclosing both the table name in the dbtable option and the name in 
the sql call in back ticks, and (ii) back-ticking just the name of the table in 
the sql call.

Both still failed, but with differing error messages.

Is there a different way to quote the table name?

Thanks

Adrian
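[Editor's note: back ticks are Spark SQL quoting only; the value of the dbtable option is embedded verbatim in the query Spark sends over JDBC, so Sybase itself sees the back ticks and rejects them ("Incorrect syntax near '`'" below). One possible workaround, sketched here and not verified against this setup, is to let the database side do the quoting by passing a subquery as dbtable. This assumes the Sybase session accepts double-quoted identifiers (quoted_identifier enabled); the alias name src is arbitrary.]

```scala
// Sketch: build a dbtable value that quotes the '#' name on the database
// side instead of with Spark back ticks. Assumes the Sybase session has
// quoted_identifier enabled, so "CTP#ADR_TYPE_DBF" is a legal delimited
// identifier there; the alias "src" is an arbitrary choice.
val tableName = "CTP#ADR_TYPE_DBF"
val dbtable = s"""(SELECT * FROM "$tableName") AS src"""
// This string would then replace the raw table name in the reader, e.g.:
//   .option("dbtable", dbtable)
println(dbtable)
```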


.option("dbtable", "`CTP#ADR_TYPE_DBF`")
val resultsDF = sess.sql("SELECT * FROM `CTP#ADR_TYPE_DBF`")

Exception in thread "main" java.sql.SQLException: Incorrect syntax near '`'.

at net.sourceforge.jtds.jdbc.SQLDiagnostic.addDiagnostic(SQLDiagnostic.java:372)
at net.sourceforge.jtds.jdbc.TdsCore.tdsErrorToken(TdsCore.java:2988)
at net.sourceforge.jtds.jdbc.TdsCore.nextToken(TdsCore.java:2421)
at net.sourceforge.jtds.jdbc.TdsCore.getMoreResults(TdsCore.java:671)
at net.sourceforge.jtds.jdbc.JtdsStatement.executeSQLQuery(JtdsStatement.java:505)
at net.sourceforge.jtds.jdbc.JtdsPreparedStatement.executeQuery(JtdsPreparedStatement.java:1029)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:62)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:113)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:45)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:330)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:152)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:125)
at com.anz.murex.hcp.poc.hcp.api.MurexDatamartSqlReader$.main(MurexDatamartSqlReader.scala:86)
at com.anz.murex.hcp.poc.hcp.api.MurexDatamartSqlReader.main(MurexDatamartSqlReader.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
17/01/13 09:15:58 INFO SparkContext: Invoking stop() from shutdown hook


.option("dbtable", "CTP#ADR_TYPE_DBF")
val resultsDF = sess.sql("SELECT * FROM `CTP#ADR_TYPE_DBF`")

17/01/13 09:17:53 INFO SparkSqlParser: Parsing command: trades
17/01/13 09:17:54 INFO SparkSqlParser: Parsing command: SELECT * FROM `CTP#ADR_TYPE_DBF`
Exception in thread "main" org.apache.spark.sql.AnalysisException: Table or view not found: CTP#ADR_TYPE_DBF; line 1 pos 14
at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.org$apache$spark$sql$catalyst$analysis$Analyzer$ResolveRelations$$lookupTableFromCatalog(Analyzer.scala:459)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:478)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:463)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:61)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:61)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:60)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:58)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:58)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$5.apply(TreeNode.scala:331)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:188)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildren(TreeNode.scala:329)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:58)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:463)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:453)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:85)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:82)
at scala.collection.LinearSeqOptimize

[jira] [Created] (SPARK-19186) Hash symbol in middle of Sybase database table name causes Spark Exception

2017-01-11 Thread Adrian Schulewitz (JIRA)
Adrian Schulewitz created SPARK-19186:
-

 Summary: Hash symbol in middle of Sybase database table name causes Spark Exception
 Key: SPARK-19186
 URL: https://issues.apache.org/jira/browse/SPARK-19186
 Project: Spark
  Issue Type: Bug
Affects Versions: 2.1.0
Reporter: Adrian Schulewitz


If I use a table name without a '#' symbol in the middle, no exception occurs, 
but with one an exception is thrown. According to the Sybase 15 documentation, 
a '#' is a legal character.

val testSql = "SELECT * FROM CTP#ADR_TYPE_DBF"

val conf = new SparkConf().setAppName("MUREX DMart Simple Reader via SQL").setMaster("local[2]")

val sess = SparkSession
  .builder()
  .appName("MUREX DMart Simple SQL Reader")
  .config(conf)
  .getOrCreate()

import sess.implicits._

val df = sess.read
.format("jdbc")
.option("url", "jdbc:jtds:sybase://auq7064s.unix.anz:4020/mxdmart56")
.option("driver", "net.sourceforge.jtds.jdbc.Driver")
.option("dbtable", "CTP#ADR_TYPE_DBF")
.option("UDT_DEALCRD_REP", "mxdmart56")
.option("user", "INSTAL")
.option("password", "INSTALL")
.load()

df.createOrReplaceTempView("trades")

val resultsDF = sess.sql(testSql)
resultsDF.show()
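[Editor's note: the ParseException below comes from Spark's own SQL parser, before anything reaches Sybase, because an unquoted '#' is not a legal identifier character in Spark SQL's grammar. Since the DataFrame is already registered as the temp view trades, one way to sidestep the parser entirely (a sketch against the code above, not verified on this setup) is to query the view name, which contains no '#'.]

```scala
// Sketch: only the temp view name appears in the SQL statement, so the '#'
// in the underlying Sybase table name never reaches Spark's SQL parser
// (it stays confined to the dbtable option, which Spark does not parse as SQL).
val viewName = "trades"            // name registered via createOrReplaceTempView
val testSql = s"SELECT * FROM $viewName"
// then: val resultsDF = sess.sql(testSql); resultsDF.show()
println(testSql)
```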

17/01/12 14:30:01 INFO SharedState: Warehouse path is 'file:/C:/DEVELOPMENT/Projects/MUREX/trunk/murex-eom-reporting/spark-warehouse/'.
17/01/12 14:30:04 INFO SparkSqlParser: Parsing command: trades
17/01/12 14:30:04 INFO SparkSqlParser: Parsing command: SELECT * FROM CTP#ADR_TYPE_DBF
Exception in thread "main" org.apache.spark.sql.catalyst.parser.ParseException: 
extraneous input '#' expecting {<EOF>, ',', 'SELECT', 'FROM', 'ADD', 'AS', 
'ALL', 'DISTINCT', 'WHERE', 'GROUP', 'BY', 'GROUPING', 'SETS', 'CUBE', 
'ROLLUP', 'ORDER', 'HAVING', 'LIMIT', 'AT', 'OR', 'AND', 'IN', NOT, 'NO', 
'EXISTS', 'BETWEEN', 'LIKE', RLIKE, 'IS', 'NULL', 'TRUE', 'FALSE', 'NULLS', 
'ASC', 'DESC', 'FOR', 'INTERVAL', 'CASE', 'WHEN', 'THEN', 'ELSE', 'END', 
'JOIN', 'CROSS', 'OUTER', 'INNER', 'LEFT', 'RIGHT', 'FULL', 'NATURAL', 
'LATERAL', 'WINDOW', 'OVER', 'PARTITION', 'RANGE', 'ROWS', 'UNBOUNDED', 
'PRECEDING', 'FOLLOWING', 'CURRENT', 'FIRST', 'LAST', 'ROW', 'WITH', 'VALUES', 
'CREATE', 'TABLE', 'VIEW', 'REPLACE', 'INSERT', 'DELETE', 'INTO', 'DESCRIBE', 
'EXPLAIN', 'FORMAT', 'LOGICAL', 'CODEGEN', 'CAST', 'SHOW', 'TABLES', 'COLUMNS', 
'COLUMN', 'USE', 'PARTITIONS', 'FUNCTIONS', 'DROP', 'UNION', 'EXCEPT', 'MINUS', 
'INTERSECT', 'TO', 'TABLESAMPLE', 'STRATIFY', 'ALTER', 'RENAME', 'ARRAY', 
'MAP', 'STRUCT', 'COMMENT', 'SET', 'RESET', 'DATA', 'START', 'TRANSACTION', 
'COMMIT', 'ROLLBACK', 'MACRO', 'IF', 'DIV', 'PERCENT', 'BUCKET', 'OUT', 'OF', 
'SORT', 'CLUSTER', 'DISTRIBUTE', 'OVERWRITE', 'TRANSFORM', 'REDUCE', 'USING', 
'SERDE', 'SERDEPROPERTIES', 'RECORDREADER', 'RECORDWRITER', 'DELIMITED', 
'FIELDS', 'TERMINATED', 'COLLECTION', 'ITEMS', 'KEYS', 'ESCAPED', 'LINES', 
'SEPARATED', 'FUNCTION', 'EXTENDED', 'REFRESH', 'CLEAR', 'CACHE', 'UNCACHE', 
'LAZY', 'FORMATTED', 'GLOBAL', TEMPORARY, 'OPTIONS', 'UNSET', 'TBLPROPERTIES', 
'DBPROPERTIES', 'BUCKETS', 'SKEWED', 'STORED', 'DIRECTORIES', 'LOCATION', 
'EXCHANGE', 'ARCHIVE', 'UNARCHIVE', 'FILEFORMAT', 'TOUCH', 'COMPACT', 
'CONCATENATE', 'CHANGE', 'CASCADE', 'RESTRICT', 'CLUSTERED', 'SORTED', 'PURGE', 
'INPUTFORMAT', 'OUTPUTFORMAT', DATABASE, DATABASES, 'DFS', 'TRUNCATE', 
'ANALYZE', 'COMPUTE', 'LIST', 'STATISTICS', 'PARTITIONED', 'EXTERNAL', 
'DEFINED', 'REVOKE', 'GRANT', 'LOCK', 'UNLOCK', 'MSCK', 'REPAIR', 'RECOVER', 
'EXPORT', 'IMPORT', 'LOAD', 'ROLE', 'ROLES', 'COMPACTIONS', 'PRINCIPALS', 
'TRANSACTIONS', 'INDEX', 'INDEXES', 'LOCKS', 'OPTION', 'ANTI', 'LOCAL', 
'INPATH', 'CURRENT_DATE', 'CURRENT_TIMESTAMP', IDENTIFIER, 
BACKQUOTED_IDENTIFIER}(line 1, pos 17)

== SQL ==
SELECT * FROM CTP#ADR_TYPE_DBF
-^^^

at org.apache.spark.sql.catalyst.parser.ParseException.withCommand(ParseDriver.scala:197)
at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parse(ParseDriver.scala:99)
at org.apache.spark.sql.execution.SparkSqlParser.parse(SparkSqlParser.scala:45)
at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parsePlan(ParseDriver.scala:53)
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:592)
at com.anz.murex.hcp.poc.hcp.api.MurexDatamartSqlReader$.main(MurexDatamartSqlReader.scala:94)
at com.anz.murex.hcp.poc.hcp.api.MurexDatamartSqlReader.main(MurexDatamartSqlReader.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(De