Re: sparksql - HiveConf not found during task deserialization

2015-04-29 Thread Manku Timma
The issue is solved. There was a problem in my Hive codebase. Once that was
fixed, Spark built with -Phive-provided works fine against my Hive jars.


Re: sparksql - HiveConf not found during task deserialization

2015-04-26 Thread Manku Timma
Made some progress on this. Adding the Hive jars to the system classpath is
needed, but it looks like they have to go towards the end, after the system
classes. Manually adding the Hive classpath in
Client.populateHadoopClasspath solved that issue. But a new issue has come
up: it looks like some Hive initialization that needs to happen on the
executors is getting missed out.
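
In other words, the change boils down to appending the Hive jar directories
after whatever populateHadoopClasspath already puts on the container
classpath. A rough sketch of the idea (the helper name and the CLASSPATH
handling below are invented for illustration; this is not the actual Spark
code):

import scala.collection.mutable.HashMap

// Illustration only: append the Hive jar directories *after* the existing
// entries so executors can resolve HiveConf without shadowing Spark's own
// classes.
def appendHiveClasspath(env: HashMap[String, String], hiveJarDirs: Seq[String]): Unit = {
  val sep = java.io.File.pathSeparator
  val hivePart = hiveJarDirs.map(dir => dir + "/*").mkString(sep)
  env("CLASSPATH") = env.get("CLASSPATH") match {
    case Some(existing) => existing + sep + hivePart
    case None           => hivePart
  }
}

The stack trace for the new issue follows.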

15/04/25 07:40:25 ERROR executor.Executor: Exception in task 0.1 in stage
1.0 (TID 23)
java.lang.RuntimeException:
org.apache.hadoop.hive.ql.metadata.HiveException: Hive.get() called without
a hive db setup
  at
org.apache.hadoop.hive.ql.plan.PlanUtils.configureJobPropertiesForStorageHandler(PlanUtils.java:841)
  at
org.apache.hadoop.hive.ql.plan.PlanUtils.configureInputJobPropertiesForStorageHandler(PlanUtils.java:776)
  at
org.apache.spark.sql.hive.HadoopTableReader$.initializeLocalJobConfFunc(TableReader.scala:253)
  at
org.apache.spark.sql.hive.HadoopTableReader$$anonfun$11.apply(TableReader.scala:229)
  at
org.apache.spark.sql.hive.HadoopTableReader$$anonfun$11.apply(TableReader.scala:229)
  at
org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:172)
  at
org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:172)
  at scala.Option.map(Option.scala:145)
  at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:172)
  at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:216)
  at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:212)
  at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
  at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
  at
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
  at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
  at
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
  at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
  at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:87)
  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
  at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
  at
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
  at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
  at
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
  at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
  at
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
  at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
  at org.apache.spark.scheduler.Task.run(Task.scala:64)
  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:206)
  at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive.get()
called without a hive db setup
  at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:211)
  at
org.apache.hadoop.hive.ql.plan.PlanUtils.configureJobPropertiesForStorageHandler(PlanUtils.java:797)
  ... 37 more
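
What the error seems to come down to is that Hive.get() is reached on the
executor before any HiveConf has been registered for that thread. A minimal
sketch of what such a "hive db setup" amounts to (illustration only, assuming
the Hive jars are on the executor classpath; this is not what Spark itself
does):

import org.apache.hadoop.hive.conf.HiveConf
import org.apache.hadoop.hive.ql.metadata.Hive

// Illustration only: register a HiveConf for the current thread so that later
// Hive.get() calls (e.g. from PlanUtils.configureJobPropertiesForStorageHandler)
// do not fail with "Hive.get() called without a hive db setup".
def ensureHiveInitialized(): Hive = {
  val conf = new HiveConf(classOf[HiveConf]) // picks up hive-site.xml from the classpath
  Hive.get(conf)                             // caches a thread-local Hive instance
}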



Re: sparksql - HiveConf not found during task deserialization

2015-04-24 Thread Manku Timma
Setting SPARK_CLASSPATH is triggering other errors. Not working.



Re: sparksql - HiveConf not found during task deserialization

2015-04-24 Thread Manku Timma
Actually found the culprit. JavaSerializerInstance.deserialize is called with
a classloader (a MutableURLClassLoader) which has access to all the Hive
classes, but internally it triggers a loadClass call that uses the default
classloader instead. Below is the stack trace (line numbers in
JavaSerialization.scala will be a bit off due to my debugging statements).
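
To spell out the distinction: plain Java deserialization resolves classes with
a default loader unless the ObjectInputStream overrides resolveClass to use
the loader it was handed. A sketch of that mechanism (the class name is
illustrative, not Spark's actual implementation):

import java.io.{InputStream, ObjectInputStream, ObjectStreamClass}

// Illustration only: an ObjectInputStream that resolves classes against a
// supplied loader (e.g. the MutableURLClassLoader mentioned above) instead of
// whatever loader plain deserialization falls back to.
class LoaderAwareObjectInputStream(in: InputStream, loader: ClassLoader)
    extends ObjectInputStream(in) {
  override def resolveClass(desc: ObjectStreamClass): Class[_] =
    Class.forName(desc.getName, false, loader)
}

The stack trace further down shows the lookup going through
sun.misc.Launcher$AppClassLoader, i.e. the default application loader.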

I will try out the SPARK_CLASSPATH setting. But I was wondering whether this
has something to do with the way the spark-project.hive jars are created vs.
the way the open-source apache-hive jars are created. Is this documented
somewhere? The only info I see is Patrick Wendell's comment in
https://github.com/apache/spark/pull/2241 (grep for "published a modified
version").

15/04/25 01:41:04 ERROR util.SparkUncaughtExceptionHandler: Uncaught
exception in thread Thread[Executor task launch worker-3,5,main]
java.lang.NoClassDefFoundError: org/apache/hadoop/hive/conf/HiveConf

15/04/25 01:41:04 ERROR util.SparkUncaughtExceptionHandler:
java.net.URLClassLoader$1.run(URLClassLoader.java:366)
15/04/25 01:41:04 ERROR util.SparkUncaughtExceptionHandler:
java.net.URLClassLoader$1.run(URLClassLoader.java:355)
15/04/25 01:41:04 ERROR util.SparkUncaughtExceptionHandler:
java.security.AccessController.doPrivileged(Native Method)
15/04/25 01:41:04 ERROR util.SparkUncaughtExceptionHandler:
java.net.URLClassLoader.findClass(URLClassLoader.java:354)
15/04/25 01:41:04 ERROR util.SparkUncaughtExceptionHandler:
java.lang.ClassLoader.loadClass(ClassLoader.java:425)
15/04/25 01:41:04 ERROR util.SparkUncaughtExceptionHandler:
sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
15/04/25 01:41:04 ERROR util.SparkUncaughtExceptionHandler:
java.lang.ClassLoader.loadClass(ClassLoader.java:358)
15/04/25 01:41:04 ERROR util.SparkUncaughtExceptionHandler:
java.lang.Class.getDeclaredFields0(Native Method)
15/04/25 01:41:04 ERROR util.SparkUncaughtExceptionHandler:
java.lang.Class.privateGetDeclaredFields(Class.java:2436)
15/04/25 01:41:04 ERROR util.SparkUncaughtExceptionHandler:
java.lang.Class.getDeclaredField(Class.java:1946)
15/04/25 01:41:04 ERROR util.SparkUncaughtExceptionHandler:
java.io.ObjectStreamClass.getDeclaredSUID(ObjectStreamClass.java:1659)
15/04/25 01:41:04 ERROR util.SparkUncaughtExceptionHandler:
java.io.ObjectStreamClass.access$700(ObjectStreamClass.java:72)
15/04/25 01:41:04 ERROR util.SparkUncaughtExceptionHandler:
java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:480)
15/04/25 01:41:04 ERROR util.SparkUncaughtExceptionHandler:
java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:468)
15/04/25 01:41:04 ERROR util.SparkUncaughtExceptionHandler:
java.security.AccessController.doPrivileged(Native Method)
15/04/25 01:41:04 ERROR util.SparkUncaughtExceptionHandler:
java.io.ObjectStreamClass.<init>(ObjectStreamClass.java:468)
15/04/25 01:41:04 ERROR util.SparkUncaughtExceptionHandler:
java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:365)
15/04/25 01:41:04 ERROR util.SparkUncaughtExceptionHandler:
java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:602)
15/04/25 01:41:04 ERROR util.SparkUncaughtExceptionHandler:
java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1622)
15/04/25 01:41:04 ERROR util.SparkUncaughtExceptionHandler:
java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1517)
15/04/25 01:41:04 ERROR util.SparkUncaughtExceptionHandler:
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
15/04/25 01:41:04 ERROR util.SparkUncaughtExceptionHandler:
java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
15/04/25 01:41:04 ERROR util.SparkUncaughtExceptionHandler:
java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
15/04/25 01:41:04 ERROR util.SparkUncaughtExceptionHandler:
java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
15/04/25 01:41:04 ERROR util.SparkUncaughtExceptionHandler:
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
15/04/25 01:41:04 ERROR util.SparkUncaughtExceptionHandler:
java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
15/04/25 01:41:04 ERROR util.SparkUncaughtExceptionHandler:
java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
15/04/25 01:41:04 ERROR util.SparkUncaughtExceptionHandler:
java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
15/04/25 01:41:04 ERROR util.SparkUncaughtExceptionHandler:
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
15/04/25 01:41:04 ERROR util.SparkUncaughtExceptionHandler:
java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
15/04/25 01:41:04 ERROR util.SparkUncaughtExceptionHandler:
java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
15/04/25 01:41:04 ERROR util.SparkUncaughtExceptionHandler:
java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
15/04/25 01:41:04 ERROR util.SparkUncaughtExceptionHandler:
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)

Re: sparksql - HiveConf not found during task deserialization

2015-04-22 Thread Akhil Das
I see. Now try a slightly tricky approach: add the Hive jar to SPARK_CLASSPATH
(in the conf/spark-env.sh file on all machines) and make sure that jar is
available at the same path on every machine in the cluster.

Thanks
Best Regards


Re: sparksql - HiveConf not found during task deserialization

2015-04-21 Thread Manku Timma
Akhil, thanks for the suggestions.
I tried out sc.addJar, --jars and --conf spark.executor.extraClassPath, and
none of them helped. I also added entries to compute-classpath.sh; that did
not change anything. I checked the classpath of the running executor and made
sure that the Hive jars are in that directory. For me the most confusing thing
is that the executor can actually create HiveConf objects, yet it cannot find
the class when the task deserializer is at work.
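
For concreteness, attempts of that kind look roughly like this (the paths and
jar names below are placeholders):

import org.apache.spark.{SparkConf, SparkContext}

// Point the executors at the Hive jars via configuration and/or addJar.
val conf = new SparkConf()
  .setAppName("hive-classpath-test")
  .set("spark.executor.extraClassPath", "/path/to/hive/lib/*")
val sc = new SparkContext(conf)
sc.addJar("/path/to/hive/lib/hive-exec-0.13.1.jar") // also passed via --jars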


Re: sparksql - HiveConf not found during task deserialization

2015-04-20 Thread Akhil Das
Can you try sc.addJar("/path/to/your/hive/jar")? I think it will resolve it.

Thanks
Best Regards


Re: sparksql - HiveConf not found during task deserialization

2015-04-19 Thread Manku Timma
Akhil,
But the first case of creating HiveConf on the executor works fine (map
case). Only the second case fails. I was suspecting some foul play with
classloaders.


Re: sparksql - HiveConf not found during task deserialization

2015-04-19 Thread Akhil Das
Looks like a missing jar. Try printing the classpath and make sure the Hive
jar is present.
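
A quick way to do that from inside a task (diagnostic sketch only, reusing the
sc from the code quoted below):

// Run a trivial job to see which classloader the tasks get and whether
// HiveConf is resolvable from it, plus the JVM classpath on the executor.
sc.parallelize(1 to 2).map { _ =>
  val cl = Thread.currentThread().getContextClassLoader
  val loadable =
    try { Class.forName("org.apache.hadoop.hive.conf.HiveConf", false, cl); true }
    catch { case _: ClassNotFoundException => false }
  s"loader=$cl hiveConfLoadable=$loadable cp=" + System.getProperty("java.class.path")
}.collect().foreach(println)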

Thanks
Best Regards

On Mon, Apr 20, 2015 at 11:52 AM, Manku Timma 
wrote:

> I am using spark-1.3 with the hadoop-provided, hive-provided and
> hive-0.13.1 profiles. I am running a simple Spark job on a YARN cluster by
> adding all the hadoop2 and hive13 jars to the Spark classpaths.
>
> If I remove the hive-provided profile when building Spark, I don't face any
> issue. But with hive-provided I get a "java.lang.NoClassDefFoundError:
> org/apache/hadoop/hive/conf/HiveConf" in the YARN executor.
>
> Code is below:
>
> import org.apache.spark._
> import org.apache.spark.sql._
> import org.apache.hadoop.hive.conf.HiveConf
>
> object Simple {
>   def main(args: Array[String]) = {
>     val sc = new SparkContext(new SparkConf())
>     val sqlC = new org.apache.spark.sql.hive.HiveContext(sc)
>
>     val x = sc.parallelize(1 to 2).map(x =>
>       { val h = new HiveConf; h.getBoolean("hive.test", false) })
>     x.collect.foreach(x => println(s"-  $x
> "))
>
>     val result = sqlC.sql("""
>       select * from products_avro order by month, name, price
>       """)
>     result.collect.foreach(println)
>   }
> }
>
> The first job (the one involving map) runs fine: HiveConf is instantiated
> and the conf variable is looked up, etc. But the second job (the select *
> query) throws the class-not-found exception.
>
> The task deserializer is the one throwing the exception. It is unable to
> find the class in its classpath. Not sure what is different from the first
> job, which also involved HiveConf.
>
> 157573 [task-result-getter-3] 2015/04/20 11:01:48:287 WARN TaskSetManager:
> Lost task 0.2 in stage 2.0 (TID 4, localhost):
> java.lang.NoClassDefFoundError: org/apache/hadoop/hive/conf/HiveConf
> at java.lang.Class.getDeclaredFields0(Native Method)
> at java.lang.Class.privateGetDeclaredFields(Class.java:2436)
> at java.lang.Class.getDeclaredField(Class.java:1946)
> at
> java.io.ObjectStreamClass.getDeclaredSUID(ObjectStreamClass.java:1659)
> at java.io.ObjectStreamClass.access$700(ObjectStreamClass.java:72)
> at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:480)
> at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:468)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.io.ObjectStreamClass.<init>(ObjectStreamClass.java:468)
> at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:365)
> at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:602)
> at
> java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1622)
> at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1517)
> at
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
> at
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
> at
> java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
> at
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
> at
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
> at
> java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
> at
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
> at
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
> at
> java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
> at
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
> at
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
> at
> java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
> at
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
> at
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
> at
> java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
> at
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
> at
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
> at
> java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
> at
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
> at
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
> at
> java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)