[ https://issues.apache.org/jira/browse/SPARK-19209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16347700#comment-16347700 ]
Tony Xu edited comment on SPARK-19209 at 1/31/18 10:26 PM:
-----------------------------------------------------------

This seems to be a forgotten issue, but I'm still experiencing it in Spark 2.2.1. Could this issue be related to the driver itself? For example, I tried using the MySQL JDBC driver, and that seems to work fine on the first try. However, when I try using Snowflake's JDBC driver, I run into this exact issue. I'm not sure what the difference between these two drivers is, but it might be worth digging into.

> "No suitable driver" on first try
> ---------------------------------
>
>                 Key: SPARK-19209
>                 URL: https://issues.apache.org/jira/browse/SPARK-19209
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.1.0
>            Reporter: Daniel Darabos
>            Priority: Critical
>
> This is a regression from Spark 2.0.2. Observe!
> {code}
> $ ~/spark-2.0.2/bin/spark-shell --jars org.xerial.sqlite-jdbc-3.8.11.2.jar --driver-class-path org.xerial.sqlite-jdbc-3.8.11.2.jar
> [...]
> scala> spark.read.format("jdbc").option("url", "jdbc:sqlite:").option("dbtable", "x").load
> java.sql.SQLException: [SQLITE_ERROR] SQL error or missing database (no such table: x)
> {code}
> This is the "good" exception. Now with Spark 2.1.0:
> {code}
> $ ~/spark-2.1.0/bin/spark-shell --jars org.xerial.sqlite-jdbc-3.8.11.2.jar --driver-class-path org.xerial.sqlite-jdbc-3.8.11.2.jar
> [...]
> scala> spark.read.format("jdbc").option("url", "jdbc:sqlite:").option("dbtable", "x").load
> java.sql.SQLException: No suitable driver
>   at java.sql.DriverManager.getDriver(DriverManager.java:315)
>   at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$7.apply(JDBCOptions.scala:84)
>   at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$7.apply(JDBCOptions.scala:84)
>   at scala.Option.getOrElse(Option.scala:121)
>   at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:83)
>   at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:34)
>   at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:32)
>   at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:330)
>   at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:152)
>   at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:125)
>   ... 48 elided
> scala> spark.read.format("jdbc").option("url", "jdbc:sqlite:").option("dbtable", "x").load
> java.sql.SQLException: [SQLITE_ERROR] SQL error or missing database (no such table: x)
> {code}
> Simply re-executing the same command a second time "fixes" the {{No suitable driver}} error.
> My guess is this is fallout from https://github.com/apache/spark/pull/15292, which changed the JDBC driver management code. But that code is hard for me to understand, so I could be totally wrong.
> This is nothing more than a nuisance for {{spark-shell}} usage, but it is more painful to work around for applications.
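For anyone hitting this before a fix lands, here is a workaround sketch, not a confirmed fix, assuming the reporter's setup above (spark-shell with the xerial sqlite-jdbc jar on both {{--jars}} and {{--driver-class-path}}; {{org.sqlite.JDBC}} is the driver class that jar ships). The idea is to keep {{java.sql.DriverManager}} from having to discover the driver lazily on the first call:

{code}
// Workaround 1: name the driver class explicitly via the "driver" option.
// Spark then loads the class itself instead of relying on
// DriverManager.getDriver, which is what fails on the first attempt.
val df1 = spark.read.format("jdbc")
  .option("url", "jdbc:sqlite:")
  .option("dbtable", "x")
  .option("driver", "org.sqlite.JDBC") // driver class in the xerial jar
  .load()

// Workaround 2: force the driver class to register itself with
// DriverManager before the first read. JDBC drivers register in a
// static initializer, so Class.forName is enough.
Class.forName("org.sqlite.JDBC")
val df2 = spark.read.format("jdbc")
  .option("url", "jdbc:sqlite:")
  .option("dbtable", "x")
  .load()
{code}

With the empty {{jdbc:sqlite:}} URL from the report, both reads should still fail, but with the "good" {{SQLITE_ERROR}} (no such table: x) rather than {{No suitable driver}}. For the Snowflake case above, passing that driver's class name to the same {{driver}} option should work the same way.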