[ https://issues.apache.org/jira/browse/SPARK-7819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14572235#comment-14572235 ]

Nathan McCarthy commented on SPARK-7819:
----------------------------------------

This is a build off the Spark 1.4 branch, built 2 days ago. I can try building 
from the RC4 tag.

Spark-submit command:

{noformat}
/apps/spark/spark-1.4.0-SNAPSHOT-bin-mapr4.0.2_yarn_j6_2.10/bin/spark-submit \
  --class com.myapp.TestMain \
  --master yarn-client \
  --conf spark.sql.hive.metastore.sharedPrefixes=com.mysql.jdbc,org.postgresql,com.microsoft.sqlserver,oracle.jdbc,com.mapr.fs.shim.LibraryLoader,com.mapr.security.JNISecurity,com.mapr.fs.jni \
  ~/app-jar_2.10-0.1.0-SNAPSHOT.jar
{noformat}
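
The same prefixes can also be set programmatically on the SparkConf rather than 
via --conf; a minimal sketch (assuming PySpark 1.4, with the same illustrative 
prefix list as above):

{code}
from pyspark import SparkConf, SparkContext
from pyspark.sql import HiveContext

# Set spark.sql.hive.metastore.sharedPrefixes before the context is created,
# mirroring the --conf flag used on the command line above.
conf = (SparkConf()
        .setAppName("sharedPrefixesExample")
        .set("spark.sql.hive.metastore.sharedPrefixes",
             "com.mysql.jdbc,org.postgresql,com.microsoft.sqlserver,oracle.jdbc,"
             "com.mapr.fs.shim.LibraryLoader,com.mapr.security.JNISecurity,com.mapr.fs.jni"))

sc = SparkContext(conf=conf)
sqlContext = HiveContext(sc)
{code}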

Stack trace:
{code}
15/06/04 06:03:13 INFO metastore: Connected to metastore.
15/06/04 06:03:13 INFO SessionState: No Tez session required at this point. hive.execution.engine=mr.
java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at com.mapr.fs.ShimLoader.loadNativeLibrary(ShimLoader.java:323)
        at com.mapr.fs.ShimLoader.load(ShimLoader.java:198)
        at org.apache.hadoop.conf.CoreDefaultProperties.<clinit>(CoreDefaultProperties.java:59)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:274)
        at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:1857)
        at org.apache.hadoop.conf.Configuration.getProperties(Configuration.java:2072)
        at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2282)
        at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2234)
        at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2151)
        at org.apache.hadoop.conf.Configuration.set(Configuration.java:1002)
        at org.apache.hadoop.conf.Configuration.set(Configuration.java:974)
        at org.apache.hadoop.mapred.JobConf.setJar(JobConf.java:518)
        at org.apache.hadoop.mapred.JobConf.setJarByClass(JobConf.java:536)
        at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:430)
        at org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:1366)
        at org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf.java:1332)
        at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:99)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.spark.sql.hive.client.IsolatedClientLoader.liftedTree1$1(IsolatedClientLoader.scala:170)
        at org.apache.spark.sql.hive.client.IsolatedClientLoader.<init>(IsolatedClientLoader.scala:166)
        at org.apache.spark.sql.hive.HiveContext.metadataHive$lzycompute(HiveContext.scala:212)
        at org.apache.spark.sql.hive.HiveContext.metadataHive(HiveContext.scala:175)
        at org.apache.spark.sql.hive.HiveContext.setConf(HiveContext.scala:358)
        at org.apache.spark.sql.SQLContext$$anonfun$3.apply(SQLContext.scala:186)
        at org.apache.spark.sql.SQLContext$$anonfun$3.apply(SQLContext.scala:185)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
        at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:185)
        at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:71)
        at au.com.quantium.personalisation.sampling.TestMain$delayedInit$body.apply(TestMain.scala:26)
        at scala.Function0$class.apply$mcV$sp(Function0.scala:40)
        at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
        at scala.App$$anonfun$main$1.apply(App.scala:71)
        at scala.App$$anonfun$main$1.apply(App.scala:71)
        at scala.collection.immutable.List.foreach(List.scala:318)
        at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:32)
        at scala.App$class.main(App.scala:71)
        at au.com.quantium.personalisation.sampling.TestMain$.main(TestMain.scala:11)
        at au.com.quantium.personalisation.sampling.TestMain.main(TestMain.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:664)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:169)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:192)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:111)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.UnsatisfiedLinkError: Native Library /tmp/mapr-nathanm-libMapRClient.1.4.0-SNAPSHOT.so already loaded in another classloader
        at java.lang.ClassLoader.loadLibrary1(ClassLoader.java:1931)
        at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1890)
        at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1851)
        at java.lang.Runtime.load0(Runtime.java:795)
        at java.lang.System.load(System.java:1062)
        at com.mapr.fs.shim.LibraryLoader.load(LibraryLoader.java:29)
        ... 56 more
Exception in thread "main" java.lang.ExceptionInInitializerError
        at com.mapr.fs.ShimLoader.load(ShimLoader.java:215)
        at org.apache.hadoop.conf.CoreDefaultProperties.<clinit>(CoreDefaultProperties.java:59)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:274)
        at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:1857)
        at org.apache.hadoop.conf.Configuration.getProperties(Configuration.java:2072)
        at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2282)
        at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2234)
        at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2151)
        at org.apache.hadoop.conf.Configuration.set(Configuration.java:1002)
        at org.apache.hadoop.conf.Configuration.set(Configuration.java:974)
        at org.apache.hadoop.mapred.JobConf.setJar(JobConf.java:518)
        at org.apache.hadoop.mapred.JobConf.setJarByClass(JobConf.java:536)
        at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:430)
        at org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:1366)
        at org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf.java:1332)
        at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:99)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.spark.sql.hive.client.IsolatedClientLoader.liftedTree1$1(IsolatedClientLoader.scala:170)
        at org.apache.spark.sql.hive.client.IsolatedClientLoader.<init>(IsolatedClientLoader.scala:166)
        at org.apache.spark.sql.hive.HiveContext.metadataHive$lzycompute(HiveContext.scala:212)
        at org.apache.spark.sql.hive.HiveContext.metadataHive(HiveContext.scala:175)
        at org.apache.spark.sql.hive.HiveContext.setConf(HiveContext.scala:358)
        at org.apache.spark.sql.SQLContext$$anonfun$3.apply(SQLContext.scala:186)
        at org.apache.spark.sql.SQLContext$$anonfun$3.apply(SQLContext.scala:185)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
        at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:185)
        at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:71)
        at au.com.quantium.personalisation.sampling.TestMain$delayedInit$body.apply(TestMain.scala:26)
        at scala.Function0$class.apply$mcV$sp(Function0.scala:40)
        at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
        at scala.App$$anonfun$main$1.apply(App.scala:71)
        at scala.App$$anonfun$main$1.apply(App.scala:71)
        at scala.collection.immutable.List.foreach(List.scala:318)
        at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:32)
        at scala.App$class.main(App.scala:71)
        at au.com.quantium.personalisation.sampling.TestMain$.main(TestMain.scala:11)
        at au.com.quantium.personalisation.sampling.TestMain.main(TestMain.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:664)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:169)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:192)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:111)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at com.mapr.fs.ShimLoader.loadNativeLibrary(ShimLoader.java:323)
        at com.mapr.fs.ShimLoader.load(ShimLoader.java:198)
        ... 50 more
Caused by: java.lang.UnsatisfiedLinkError: Native Library /tmp/mapr-nathanm-libMapRClient.1.4.0-SNAPSHOT.so already loaded in another classloader
        at java.lang.ClassLoader.loadLibrary1(ClassLoader.java:1931)
        at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1890)
        at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1851)
        at java.lang.Runtime.load0(Runtime.java:795)
        at java.lang.System.load(System.java:1062)
        at com.mapr.fs.shim.LibraryLoader.load(LibraryLoader.java:29)
        ... 56 more
{code}

> Isolated Hive Client Loader appears to cause Native Library 
> libMapRClient.4.0.2-mapr.so already loaded in another classloader error
> -----------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-7819
>                 URL: https://issues.apache.org/jira/browse/SPARK-7819
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.4.0
>            Reporter: Fi
>            Priority: Critical
>         Attachments: invalidClassException.log, stacktrace.txt, test.py
>
>
> In reference to the pull request: https://github.com/apache/spark/pull/5876
> I have been running the Spark 1.3 branch for some time with no major hiccups, 
> and recently switched to the Spark 1.4 branch.
> I build my spark distribution with the following build command:
> {noformat}
> make-distribution.sh --tgz --skip-java-test --with-tachyon -Phive 
> -Phive-0.13.1 -Pmapr4 -Pspark-ganglia-lgpl -Pkinesis-asl -Phive-thriftserver
> {noformat}
> When running a python script containing a series of smoke tests I use to 
> validate the build, I encountered an error under the following conditions:
> * start a spark context
> * start a hive context
> * run any hive query
> * stop the spark context
> * start a second spark context
> * run any hive query
> ** ERROR
> From what I can tell, the Isolated Class Loader is hitting a MapR class that 
> is loading its native library (presumably as part of a static initializer).
> Unfortunately, the JVM prohibits this the second time around.
> I would have thought that shutting down the SparkContext would clear out any 
> vestiges from the JVM, so I'm surprised that this is even a problem.
> Note: all the other smoke tests we run pass fine.
> I will attach the stacktrace and a python script reproducing the issue (at 
> least for my environment and build).
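
For reference, a minimal sketch of the reproduction sequence described above 
(hypothetical code, not the attached test.py; assumes PySpark 1.4 against a 
MapR build):

{code}
# Hypothetical repro sketch, assuming PySpark 1.4 on a MapR build:
# two SparkContexts in one JVM, each creating a HiveContext.
from pyspark import SparkContext
from pyspark.sql import HiveContext

# First context: the Hive query succeeds.
sc = SparkContext(appName="smokeTest1")
sqlContext = HiveContext(sc)
sqlContext.sql("SHOW TABLES").collect()
sc.stop()

# Second context in the same JVM: the isolated Hive client loader re-initializes
# the MapR shim, which tries to load libMapRClient again and fails with
# UnsatisfiedLinkError ("already loaded in another classloader").
sc2 = SparkContext(appName="smokeTest2")
sqlContext2 = HiveContext(sc2)
sqlContext2.sql("SHOW TABLES").collect()   # error occurs here
{code}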


