I remember facing similar issues when the table had a few particular data
types, numerical fields if I remember correctly. If possible, validate the
data types in your select statement, and preferably do not use *; either
list the columns explicitly or apply some type conversion, for example
something like the snippet below.
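
A sketch only: JDBC type -101 in your stack trace is
oracle.jdbc.OracleTypes.TIMESTAMPTZ (TIMESTAMP WITH TIME ZONE), which
Spark 2.0's JDBC source does not map on its own, and the column names
JSON_DATA and TS_COL here are hypothetical. If such a column is the
culprit, casting it in the subquery (and listing the other columns
explicitly) should let Spark infer the schema:

// hypothetical columns; CAST strips the time zone so Spark can map the type
final String dbTable =
        "(select ID, JSON_DATA, CAST(TS_COL AS TIMESTAMP) AS TS_COL " +
        "from MySampleTable)";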

On Thu, Jul 20, 2017 at 4:10 PM, Cassa L <lcas...@gmail.com> wrote:

> Hi,
> I am trying to read from an Oracle (12.1) table using Spark 2.0. The
> table has JSON data. I am getting the exception below in my code. Any clue?
>
> >>>>>
> java.sql.SQLException: Unsupported type -101
>
>   at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$getCatalystType(JdbcUtils.scala:233)
>   at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$8.apply(JdbcUtils.scala:290)
>   at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$8.apply(JdbcUtils.scala:290)
>   at scala.Option.getOrElse(Option.scala:121)
>   at ...
>
> ==========
> My code is very simple.
>
> import java.util.Properties;
>
> import org.apache.spark.sql.Dataset;
> import org.apache.spark.sql.Row;
> import org.apache.spark.sql.SparkSession;
>
> SparkSession spark = SparkSession
>         .builder()
>         .appName("Oracle Example")
>         .master("local[4]")
>         .getOrCreate();
>
> final Properties connectionProperties = new Properties();
> connectionProperties.put("user", "some_user");
> connectionProperties.put("password", "some_pwd");
>
> // placeholder connection string
> final String URL = "jdbc:oracle:thin:@//db_host:1521/db_service";
>
> final String dbTable =
>         "(select * from MySampleTable)";
>
> Dataset<Row> jdbcDF = spark.read().jdbc(URL, dbTable, connectionProperties);
>
>

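Also, if casting in SQL is not convenient, Spark lets you register a custom
JdbcDialect to handle the type mapping yourself. A minimal, untested sketch,
assuming the offending column is TIMESTAMPTZ and that mapping it to
Catalyst's TimestampType (which drops the time zone) is acceptable:

import org.apache.spark.sql.jdbc.JdbcDialect;
import org.apache.spark.sql.jdbc.JdbcDialects;
import org.apache.spark.sql.types.DataType;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.MetadataBuilder;
import scala.Option;

JdbcDialect oracleTzDialect = new JdbcDialect() {
    @Override
    public boolean canHandle(String url) {
        // apply this dialect to Oracle connections only
        return url.startsWith("jdbc:oracle");
    }

    @Override
    public Option<DataType> getCatalystType(
            int sqlType, String typeName, int size, MetadataBuilder md) {
        // -101 = oracle.jdbc.OracleTypes.TIMESTAMPTZ (TIMESTAMP WITH TIME ZONE)
        if (sqlType == -101) {
            return Option.<DataType>apply(DataTypes.TimestampType);
        }
        // fall back to Spark's built-in mapping for everything else
        return Option.empty();
    }
};

// register before calling spark.read().jdbc(...)
JdbcDialects.registerDialect(oracleTzDialect);

If you need the zone preserved, map the type to DataTypes.StringType
instead and parse it downstream.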

-- 
Best Regards,
Ayan Guha
