I am using 2.2.0. I resolved the problem by removing SELECT * and listing
the column names explicitly in the SELECT statement. That works, but I'm
wondering why SELECT * does not.
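For what it's worth, JDBC type code -101 is Oracle's TIMESTAMPTZ (TIMESTAMP WITH TIME ZONE), which Spark's JdbcUtils has no Catalyst mapping for, so SELECT * fails as soon as such a column is in the result set; an explicit column list that skips or casts the offending column avoids it. A minimal sketch of the cast workaround, assuming hypothetical column names (ID, JSON_DATA, CREATED_AT as the TSTZ column):

```java
// Sketch: build the dbTable subquery so the TIMESTAMP WITH TIME ZONE
// column is cast to plain TIMESTAMP, which Spark's JDBC mapping handles.
// Column names here are hypothetical, not from the original table.
public class OracleQueryExample {
    static String dbTable() {
        return "(select ID, JSON_DATA, "
             + "cast(CREATED_AT as timestamp) as CREATED_AT "
             + "from MySampleTable)";
    }

    public static void main(String[] args) {
        System.out.println(dbTable());
    }
}
```

The resulting string can be passed as the second argument of spark.read().jdbc(...) exactly like the subquery in the code further down the thread.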

Regards,
Leena

On Fri, Jul 21, 2017 at 8:21 AM, Xiao Li <gatorsm...@gmail.com> wrote:

> Could you try 2.2? We fixed multiple Oracle related issues in the latest
> release.
>
> Thanks
>
> Xiao
>
>
> On Wed, 19 Jul 2017 at 11:10 PM Cassa L <lcas...@gmail.com> wrote:
>
>> Hi,
>> I am trying to use Spark 2.0 to read from an Oracle (12.1) table. The
>> table has JSON data. I am getting the exception below in my code. Any
>> clue?
>>
>> >>>>>
>> java.sql.SQLException: Unsupported type -101
>>
>> at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$getCatalystType(JdbcUtils.scala:233)
>> at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$8.apply(JdbcUtils.scala:290)
>> at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$8.apply(JdbcUtils.scala:290)
>> at scala.Option.getOrElse(Option.scala:121)
>> at
>>
>> ==========
>> My code is very simple.
>>
>> SparkSession spark = SparkSession
>>         .builder()
>>         .appName("Oracle Example")
>>         .master("local[4]")
>>         .getOrCreate();
>>
>> final Properties connectionProperties = new Properties();
>> connectionProperties.put("user", "some_user");
>> connectionProperties.put("password", "some_pwd");
>>
>> final String dbTable =
>>         "(select * from MySampleTable)";
>>
>> Dataset<Row> jdbcDF = spark.read().jdbc(URL, dbTable,
>>         connectionProperties);
>>
>>
