Try providing the correct driver class name through the "driver" property in
the JDBC call.
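
With Spark's JDBC data source that means adding a "driver" key to the options
map, so the class is loaded explicitly rather than discovered through
java.sql.DriverManager (the discovery path is what throws "No suitable
driver" on older versions). A minimal sketch against the snippet quoted
below, assuming oracle.jdbc.OracleDriver is the right class for ojdbc6.jar:

  val s = HiveContext.read.format("jdbc").options(
    Map("url" -> _ORACLEserver,
      // "driver" is a standard Spark JDBC option; the class name is the
      // assumed one shipped in ojdbc6.jar
      "driver" -> "oracle.jdbc.OracleDriver",
      "dbtable" -> "(SELECT ... FROM scratchpad.dummy)", // full query as in your snippet
      "user" -> _username,
      "password" -> _password)).load

Everything else stays as you have it; only the "driver" entry is new.
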
On Thu., 22 Dec. 2016 at 8:40 am, Mich Talebzadeh <mich.talebza...@gmail.com>
wrote:

> This works with Spark 2 with the Oracle jar file added to
> $SPARK_HOME/conf/spark-defaults.conf:
>
> spark.driver.extraClassPath      /home/hduser/jars/ojdbc6.jar
> spark.executor.extraClassPath    /home/hduser/jars/ojdbc6.jar
>
> and you get:
>
> scala> val s = HiveContext.read.format("jdbc").options(
>      | Map("url" -> _ORACLEserver,
>      | "dbtable" -> "(SELECT to_char(ID) AS ID, to_char(CLUSTERED) AS CLUSTERED, to_char(SCATTERED) AS SCATTERED, to_char(RANDOMISED) AS RANDOMISED, RANDOM_STRING, SMALL_VC, PADDING FROM scratchpad.dummy)",
>      | "partitionColumn" -> "ID",
>      | "lowerBound" -> "1",
>      | "upperBound" -> "100000000",
>      | "numPartitions" -> "10",
>      | "user" -> _username,
>      | "password" -> _password)).load
> s: org.apache.spark.sql.DataFrame = [ID: string, CLUSTERED: string ... 5 more fields]
>
> That works. However, with CDH 5.5.2 (Spark 1.5) it fails with this error:
>
> java.sql.SQLException: No suitable driver
>   at java.sql.DriverManager.getDriver(DriverManager.java:315)
>   at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$2.apply(JdbcUtils.scala:54)
>   at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$2.apply(JdbcUtils.scala:54)
>   at scala.Option.getOrElse(Option.scala:121)
>   at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.createConnectionFactory(JdbcUtils.scala:53)
>   at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:123)
>   at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:117)
>   at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:53)
>   at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:315)
>   at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:149)
>   at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:122)
>
> Any ideas?
>
> Thanks
>
> Dr Mich Talebzadeh
>
> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
> http://talebzadehmich.wordpress.com
>
> Disclaimer: Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
