Well, the stack trace is quite explicit:

You may have a look at your Oracle table to check whether it's using
exotic types (types that are NOT in the ANSI SQL standard; vendors like
Oracle sometimes ship custom types). If I'm not mistaken, the JDBC type
code -101 is Oracle's TIMESTAMP WITH TIME ZONE (OracleTypes.TIMESTAMPTZ),
which is exactly such a vendor-specific type. You can list the declared
column types with a query against the ALL_TAB_COLUMNS dictionary view.
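To pinpoint the offending column you can also ask the JDBC driver
directly which type codes it reports. A minimal sketch for the Scala
REPL, reusing the connection details from your snippet (the WHERE 1 = 0
query is only there to fetch metadata, not rows):

    import java.sql.DriverManager

    // Inspect the JDBC type codes the Oracle driver reports, outside of Spark.
    Class.forName("oracle.jdbc.OracleDriver")
    val conn = DriverManager.getConnection(
      "jdbc:oracle:thin:user/user_...@xyz.xyz.com:1521:sid")
    try {
      // WHERE 1 = 0 returns no rows; we only want the ResultSet metadata.
      val rs = conn.createStatement().executeQuery(
        "SELECT * FROM user.TABLE_NAME WHERE 1 = 0")
      val md = rs.getMetaData
      for (i <- 1 to md.getColumnCount) {
        // A vendor-specific code such as -101 marks the column Spark SQL rejects.
        println(md.getColumnName(i) + " -> " + md.getColumnType(i))
      }
    } finally {
      conn.close()
    }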

And I think it's more a Spark SQL issue than a Zeppelin issue: the
exception comes from org.apache.spark.sql.jdbc.JDBCRDD$.getCatalystType,
which only maps the standard java.sql.Types codes, so a vendor-specific
code like -101 falls through to "Unsupported type".
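Until Spark SQL maps that type, a possible workaround is to cast on the
Oracle side: the "dbtable" option accepts a parenthesized subquery in
place of a table name, so you can convert the column before Spark sees
it. A sketch, assuming the offending column is a TIMESTAMP WITH TIME
ZONE (TS_COL and OTHER_COL are hypothetical column names):

    val jdbcDF = sqlContext.load("jdbc", Map(
      "url" -> "jdbc:oracle:thin:user/user_...@xyz.xyz.com:1521:sid",
      // Oracle accepts an inline view wherever a table name is expected
      "dbtable" -> "(SELECT CAST(TS_COL AS TIMESTAMP) AS TS_COL, OTHER_COL FROM user.TABLE_NAME)",
      "driver" -> "oracle.jdbc.OracleDriver"))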

On Fri, Jul 17, 2015 at 8:37 PM, Sambit Tripathy (RBEI/EDS1) <
sambit.tripa...@in.bosch.com> wrote:

>  Hi,
>
> I am trying to load an Oracle dataset using the spark interpreter
>
>
>
>  val jdbcDF = sqlContext.load("jdbc", Map(
>    "url" -> "jdbc:oracle:thin:user/user_...@xyz.xyz.com:1521:sid",
>    "dbtable" -> "user.TABLE_NAME",
>    "driver" -> "oracle.jdbc.OracleDriver"))
>
>
> ...and this throws the following error:
>
> java.sql.SQLException: Unsupported type -101
>         at org.apache.spark.sql.jdbc.JDBCRDD$.getCatalystType(JDBCRDD.scala:78)
>         at org.apache.spark.sql.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:112)
>         at org.apache.spark.sql.jdbc.JDBCRelation.<init>(JDBCRelation.scala:133)
>         at org.apache.spark.sql.jdbc.DefaultSource.createRelation(JDBCRelation.scala:121)
>         at org.apache.spark.sql.sources.ResolvedDataSource$.apply(ddl.scala:219)
>         at org.apache.spark.sql.SQLContext.load(SQLContext.scala:697)
>         at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:40)
>         at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:45)
>         at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:47)
>         at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:49)
>         at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:51)
>         at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:53)
>         at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:55)
>         at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:57)
>         at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:59)
>         at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:61)
>         at $iwC$$iwC$$iwC$$iwC.<init>(<console>:63)
>         at $iwC$$iwC$$iwC.<init>(<console>:65)
>         at $iwC$$iwC.<init>(<console>:67)
>         at $iwC.<init>(<console>:69)
>         at <init>(<console>:71)
>         at .<init>(<console>:75)
>         at .<clinit>(<console>)
>         at .<init>(<console>:7)
>         at .<clinit>(<console>)
>         at $print(<console>)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
>         at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338)
>         at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
>         at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
>         at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
>         at org.apache.zeppelin.spark.SparkInterpreter.interpretInput(SparkInterpreter.java:563)
>         at org.apache.zeppelin.spark.SparkInterpreter.interpret(SparkInterpreter.java:539)
>         at org.apache.zeppelin.spark.SparkInterpreter.interpret(SparkInterpreter.java:532)
>         at org.apache.zeppelin.interpreter.ClassloaderInterpreter.interpret(ClassloaderInterpreter.java:57)
>         at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:93)
>         at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:277)
>         at org.apache.zeppelin.scheduler.Job.run(Job.java:170)
>         at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:118)
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
>
> I am not sure if this is happening because of the Spark interpreter or
> Spark itself. Any pointers on the right channel to follow would be
> appreciated.
>
> Apologies if this is not the right place for this question.
>
> Regards,
> Sambit.
>
>
