Daniel Darabos created SPARK-19209:
--------------------------------------

             Summary: "No suitable driver" on first try
                 Key: SPARK-19209
                 URL: https://issues.apache.org/jira/browse/SPARK-19209
             Project: Spark
          Issue Type: Bug
          Components: SQL
    Affects Versions: 2.1.0
            Reporter: Daniel Darabos


This is a regression from Spark 2.0.2. Observe!

{code}
$ ~/spark-2.0.2/bin/spark-shell --jars stage/lib/org.xerial.sqlite-jdbc-3.8.11.2.jar --driver-class-path stage/lib/org.xerial.sqlite-jdbc-3.8.11.2.jar
[...]
scala> spark.read.format("jdbc").option("url", "jdbc:sqlite:").option("dbtable", "x").load
java.sql.SQLException: [SQLITE_ERROR] SQL error or missing database (no such table: x)
{code}

This is the "good" exception. Now with Spark 2.1.0:

{code}
$ ~/spark-2.1.0/bin/spark-shell --jars stage/lib/org.xerial.sqlite-jdbc-3.8.11.2.jar --driver-class-path stage/lib/org.xerial.sqlite-jdbc-3.8.11.2.jar
[...]
scala> spark.read.format("jdbc").option("url", "jdbc:sqlite:").option("dbtable", "x").load
java.sql.SQLException: No suitable driver
  at java.sql.DriverManager.getDriver(DriverManager.java:315)
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$7.apply(JDBCOptions.scala:84)
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$7.apply(JDBCOptions.scala:84)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:83)
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:34)
  at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:32)
  at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:330)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:152)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:125)
  ... 48 elided

scala> spark.read.format("jdbc").option("url", "jdbc:sqlite:").option("dbtable", "x").load
java.sql.SQLException: [SQLITE_ERROR] SQL error or missing database (no such table: x)
{code}

Simply re-executing the same command a second time "fixes" the {{No suitable driver}} error.

My guess is that this is fallout from https://github.com/apache/spark/pull/15292, which changed the JDBC driver management code. That code is hard for me to follow, though, so I could be totally wrong.

This is nothing more than a nuisance for {{spark-shell}} usage, but it is more painful to work around in applications.
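For reference, one possible application-side workaround (an untested sketch; it assumes the driver class name for the xerial SQLite JDBC driver is {{org.sqlite.JDBC}}) is to pass the {{driver}} option explicitly, so the read does not depend on {{DriverManager}} discovering the driver on the first attempt:

{code}
// Workaround sketch: specify the driver class explicitly instead of
// relying on java.sql.DriverManager lookup, which fails on the first try.
spark.read
  .format("jdbc")
  .option("url", "jdbc:sqlite:")
  .option("dbtable", "x")
  .option("driver", "org.sqlite.JDBC")  // assumption: SQLite driver class name
  .load
{code}

I have not verified whether this avoids the first-try failure in 2.1.0, but it may save others some experimentation.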



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)