[ https://issues.apache.org/jira/browse/SPARK-10457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mariano Simone updated SPARK-10457:
-----------------------------------
    Description: 
I'm getting this error every time I try to create a DataFrame using JDBC:

java.sql.SQLException: No suitable driver found for jdbc:mysql://localhost:3306/test

What I have so far:

A standard sbt project.

I added the mysql-connector dependency to build.sbt like this:
"mysql"            %  "mysql-connector-java"  % "5.1.36"

The code that creates the DataFrame:

    import java.util.Properties

    val url   = "jdbc:mysql://localhost:3306/test"
    val table = "test_table"

    // Connection properties; "driver" names the JDBC driver class to use.
    val properties = new Properties
    properties.put("user", "123")
    properties.put("password", "123")
    properties.put("driver", "com.mysql.jdbc.Driver")

    // This call throws the SQLException shown below.
    val tiers = sqlContext.read.jdbc(url, table, properties)

I also loaded the jar like this:
streamingContext.sparkContext.addJar("mysql-connector-java-5.1.36.jar")
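
Note that sparkContext.addJar only ships the jar to the executors, while the java.sql.DriverManager lookup in the trace below happens on the driver. A hedged sketch of also putting the connector on the driver's classpath at submit time (the application jar path here is hypothetical):

    spark-submit \
      --class com.playtika.etl.Application \
      --driver-class-path mysql-connector-java-5.1.36.jar \
      --jars mysql-connector-java-5.1.36.jar \
      target/scala-2.10/application.jar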

This is the stack trace of the exception being thrown:

15/09/04 18:37:40 ERROR JobScheduler: Error running job streaming job 1441402660000 ms.0
java.sql.SQLException: No suitable driver found for jdbc:mysql://localhost:3306/test
        at java.sql.DriverManager.getConnection(DriverManager.java:689)
        at java.sql.DriverManager.getConnection(DriverManager.java:208)
        at org.apache.spark.sql.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:118)
        at org.apache.spark.sql.jdbc.JDBCRelation.<init>(JDBCRelation.scala:128)
        at org.apache.spark.sql.DataFrameReader.jdbc(DataFrameReader.scala:200)
        at org.apache.spark.sql.DataFrameReader.jdbc(DataFrameReader.scala:130)
        at com.playtika.etl.Application$.processRDD(Application.scala:69)
        at com.playtika.etl.Application$$anonfun$processStream$1.apply(Application.scala:52)
        at com.playtika.etl.Application$$anonfun$processStream$1.apply(Application.scala:51)
        at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:42)
        at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:40)
        at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:40)
        at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:399)
        at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:40)
        at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:40)
        at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:40)
        at scala.util.Try$.apply(Try.scala:161)
        at org.apache.spark.streaming.scheduler.Job.run(Job.scala:34)
        at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:193)
        at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:193)
        at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:193)
        at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
        at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:192)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
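
A possible workaround, untested here and assuming the connector jar is already on the driver's classpath: force-load the driver class so it registers itself with java.sql.DriverManager before read.jdbc runs:

    // Loading the class runs its static initializer, which registers
    // the driver with java.sql.DriverManager on the driver JVM.
    Class.forName("com.mysql.jdbc.Driver")
    val tiers = sqlContext.read.jdbc(url, table, properties)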

Let me know if more data is needed.


> Unable to connect to MySQL with the DataFrame API
> -------------------------------------------------
>
>                 Key: SPARK-10457
>                 URL: https://issues.apache.org/jira/browse/SPARK-10457
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.4.1
>         Environment: Linux singularity 3.13.0-63-generic #103-Ubuntu SMP Fri Aug 14 21:42:59 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
> Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_60)
>  "org.apache.spark" %% "spark-core"            % "1.4.1" % "provided",
>   "org.apache.spark" %  "spark-sql_2.10"        % "1.4.1" % "provided",
>   "org.apache.spark" %  "spark-streaming_2.10"  % "1.4.1" % "provided",
>   "org.apache.spark" %% "spark-streaming-kafka" % "1.4.1",
>   "mysql"            %  "mysql-connector-java"  % "5.1.36"
>            Reporter: Mariano Simone
>


