HiveContext on Spark 1.6 Linkage Error: ClassCastException
Hello guys, hope all of you are OK. I am trying to use HiveContext on Spark 1.6. I am developing in Eclipse and I placed hive-site.xml on the classpath, so that I use the Hive instance running on my cluster instead of creating a local metastore and a local warehouse. So far so good: in this scenario SELECT * and INSERT INTO queries work fine, but the problem arises when trying to drop tables and/or create new ones. Given that it is not a permission problem, my issue is:

ClassCastException: attempting to cast jar file://.../com/sun/jersey/jersey-core/1.9/jersey-core-1.9.jar!javax/ws/rs/ext/RuntimeDelegate.class to jar file://.../com/sun/jersey/jersey-core/1.9/jersey-core-1.9.jar!javax/ws/rs/ext/RuntimeDelegate.class

As you can see, it is attempting to cast the class to the very same class from the same jar, and it throws the exception. I think this happens because the same jar has been loaded twice by different classloaders: one copy is loaded by org.apache.spark.sql.hive.client.IsolatedClientLoader and the other by sun.misc.Launcher$AppClassLoader. Any suggestion to fix this issue? The same happens when building the jar and running it with spark-submit (YARN RM).

Cheers, best

CONFIDENTIALITY WARNING. This message and the information contained in or attached to it are private and confidential and intended exclusively for the addressee. everis informs to whom it may receive it in error that it contains privileged information and its use, copy, reproduction or distribution is prohibited. If you are not an intended recipient of this E-mail, please notify the sender, delete it and do not read, act upon, print, disclose, copy, retain or redistribute any portion of this E-mail.
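[A direction sometimes suggested for this kind of duplicate-classloader clash, offered here as an untested assumption, is to make sure only one copy of jersey-core reaches the application classpath, e.g. by excluding it from whichever dependency drags it in. The groupId/artifactId below (hadoop-client) is only a guess at the offending dependency; check `mvn dependency:tree` for the real one.]

```xml
<!-- Hypothetical sketch: exclude the duplicate jersey-core so only one
     copy is visible. The dependency pulling it in will vary per build;
     hadoop-client is shown here as an assumption. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>2.6.0</version>
  <exclusions>
    <exclusion>
      <groupId>com.sun.jersey</groupId>
      <artifactId>jersey-core</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```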
Kryo and Spark 1.6.0 - Does it require a default empty constructor?
Hi, I have a question about Kryo and Spark 1.6.0. I read that in order to use Kryo, the class you want to serialize must have a default constructor. I created a simple class deliberately without such a constructor, and if I try to serialize it manually with Kryo, it does not work. But if I use that class in Spark and then call collect() to force serialization, it works. Any idea? Cheers
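[A possible explanation, hedged: by default, plain Kryo instantiates objects reflectively through the zero-argument constructor, so a class without one fails during manual deserialization. Spark's KryoSerializer, as far as I understand it, configures Kryo (via the Twitter chill library) with an Objenesis fallback that can create instances without calling any constructor, which would explain why the same class works inside Spark. The sketch below only demonstrates the reflective lookup that plain Kryo's default strategy relies on; the class name `Point` is made up for illustration.]

```java
// Sketch: a class with no default (no-arg) constructor, and the reflective
// lookup that Kryo's default instantiation strategy depends on.
public class NoArgCtorCheck {
    // Hypothetical example class; note there is no Point() constructor.
    static final class Point {
        final int x;
        Point(int x) { this.x = x; }
    }

    public static void main(String[] args) {
        try {
            // Plain Kryo's default strategy needs this lookup to succeed.
            Point.class.getDeclaredConstructor();
            System.out.println("no-arg constructor found");
        } catch (NoSuchMethodException e) {
            // Objenesis-based instantiation (what Spark falls back to)
            // sidesteps this by not calling a constructor at all.
            System.out.println("no no-arg constructor: plain Kryo would fail here");
        }
    }
}
```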
RE: Kryo On Spark 1.6.0 [Solution in this email]
Yes sure, you can find it here: http://stackoverflow.com/questions/34736587/kryo-serializer-causing-exception-on-underlying-scala-class-wrappedarray. Hope it works; I did not try it, as I am using Java.

To be precise, here is the solution I found for my problem. To sum up, I had trouble registering the following class in Java: "scala.collection.mutable.WrappedArray$ofRef". The tip is:

Class a = Class.forName("scala.collection.mutable.WrappedArray$ofRef");

and then put a in the array of classes you pass to registerKryoClasses().

From: Yang Cao [mailto:cybea...@gmail.com]
Sent: 10 January 2017 15:12
To: Enrico DUrso
Cc: user@spark.apache.org
Subject: Re: Kryo On Spark 1.6.0

If you don't mind, could you please share the Scala solution with me? I tried to use Kryo but it seemed not to work at all. I hope to get a practical example. THX

On 10 Jan 2017, at 19:10, Enrico DUrso <enrico.du...@everis.com> wrote:

Hi, I am trying to use Kryo on Spark 1.6.0. I am able to register my own classes and it works, but when I set "spark.kryo.registrationRequired" to true, I get an error about a Scala class: "Class is not registered: scala.collection.mutable.WrappedArray$ofRef". Has any of you already solved this issue in Java? I found the code to solve it in Scala, but I am unable to register this class in Java. Cheers, enrico
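[The tip above can be sketched in Java. Because the class name contains a `$`, it cannot be written as a `.class` literal from Java, so it has to be looked up reflectively by its binary name. The real lookup, `Class.forName("scala.collection.mutable.WrappedArray$ofRef")`, needs scala-library on the classpath; a JDK class with the same `$`-in-the-name shape stands in below so the sketch runs standalone, and the SparkConf call is shown commented since it needs Spark on the classpath.]

```java
public class RegisterByName {
    public static void main(String[] args) throws ClassNotFoundException {
        // Real case: Class.forName("scala.collection.mutable.WrappedArray$ofRef")
        // (needs scala-library on the classpath). A JDK nested class with the
        // same '$' naming shape is used here so the sketch is self-contained.
        Class<?> cls = Class.forName("java.util.AbstractMap$SimpleEntry");
        Class<?>[] toRegister = { cls };
        System.out.println(toRegister[0].getName());  // java.util.AbstractMap$SimpleEntry

        // With Spark on the classpath (sketch):
        // SparkConf conf = new SparkConf();
        // conf.registerKryoClasses(toRegister);
    }
}
```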
RE: Kryo On Spark 1.6.0
Hi, I agree with you, Richard. The point is that some classes used internally by Spark are apparently not registered (for instance, the one I mentioned in the previous email is not something I use directly). For those classes, serialization performance will be poor, given how Spark works. How can I register all those classes? cheers,

From: Richard Startin [mailto:richardstar...@outlook.com]
Sent: 10 January 2017 11:18
To: Enrico DUrso; user@spark.apache.org
Subject: Re: Kryo On Spark 1.6.0

Hi Enrico,

Only set spark.kryo.registrationRequired if you want to forbid any classes you have not explicitly registered - see http://spark.apache.org/docs/latest/configuration.html. To enable Kryo, you just need spark.serializer=org.apache.spark.serializer.KryoSerializer. There is some info here: http://spark.apache.org/docs/latest/tuning.html

Cheers,
Richard
https://richardstartin.com/

From: Enrico DUrso <enrico.du...@everis.com>
Sent: 10 January 2017 11:10
To: user@spark.apache.org
Subject: Kryo On Spark 1.6.0

Hi, I am trying to use Kryo on Spark 1.6.0. I am able to register my own classes and it works, but when I set "spark.kryo.registrationRequired" to true, I get an error about a Scala class: "Class is not registered: scala.collection.mutable.WrappedArray$ofRef". Has any of you already solved this issue in Java? I found the code to solve it in Scala, but I am unable to register this class in Java. Cheers, enrico
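[Richard's advice, written out as a spark-defaults.conf sketch. The property names are the ones documented on the Spark configuration page linked above; the second line is optional and strict.]

```properties
# Enable Kryo serialization (this line alone is sufficient).
spark.serializer                 org.apache.spark.serializer.KryoSerializer
# Optional strict mode: fail on any class not explicitly registered.
spark.kryo.registrationRequired  true
```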
Kryo On Spark 1.6.0
Hi, I am trying to use Kryo on Spark 1.6.0. I am able to register my own classes and it works, but when I set "spark.kryo.registrationRequired" to true, I get an error about a Scala class: "Class is not registered: scala.collection.mutable.WrappedArray$ofRef". Has any of you already solved this issue in Java? I found the code to solve it in Scala, but I am unable to register this class in Java. Cheers, enrico
[On the use of Spark as 'storage system']
Hello, I had a discussion today with a colleague who said the following: "We can use Spark as a fast serving layer in our architecture; that is, we can compute an RDD or even a Dataset using Spark SQL, then cache it and give the front-end layer access to our application so it can show the content of the RDD/Dataset." This way of using Spark is new to me; has anyone of you experience with this use case? Cheers, Enrico