Re: how to know the Spark worker Mechanism
OK, I don't put it on the library path, because this is not a lib I want to use permanently. Here is my code inside the RDD operation:

  val fileaddr = SparkFiles.get("segment.so")
  System.load(fileaddr)
  val config = SparkFiles.get("qsegconf.ini")
  val segment = new Segment   // this is the class backed by the native lib
  segment.init(config)        // this fails if the driver has not loaded the lib as well

I just use System.load to load the lib. But now I also call some functions in the lib that modify objects inside the lib, and that produces the fatal error. After I load the lib on the driver first, it works again in standalone mode. I want to know how a job travels from the driver to the workers, and how the worker's memory is wired up to the native lib. I also tried another sample lib: if the functions do not modify objects inside the lib, loading it only on the workers is enough. One more thing I would like to know: is there anything like the cache archive (-cacheArchive) in Hadoop Streaming?
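For reference, a minimal sketch of how I could restructure this so that every JVM (driver and executors alike) loads the lib exactly once, assuming the files were shipped with sc.addFile or --files, that Segment is my JNI wrapper, and that segment(...) stands in for the real native call:

  import org.apache.spark.SparkFiles

  object NativeSegment {
    // lazy val gives thread-safe, once-per-JVM initialisation
    lazy val instance: Segment = {
      System.load(SparkFiles.get("segment.so"))    // resolves to this JVM's local copy
      val seg = new Segment                        // JNI wrapper class
      seg.init(SparkFiles.get("qsegconf.ini"))     // initialise with the shipped config
      seg
    }
  }

  rdd.mapPartitions { iter =>
    val seg = NativeSegment.instance               // first use on an executor triggers the load
    iter.map(line => seg.segment(line))            // placeholder for the real native method
  }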
how to know the Spark worker Mechanism
I'm a newbie with Spark. I know that the work a task should do is expressed through RDD operations. But I want the workers to load a native lib, and then to call into it in a way that changes the lib's in-memory state. How can I do that? I can do it on the driver, but not on a worker: there I always get a fatal error. The JVM reports a fatal error in

  C  [libstdc++.so.6+0x64d24]  std::_Rb_tree_rotate_left(std::_Rb_tree_node_base*, std::_Rb_tree_node_base*&)+0x4
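For context, a minimal sketch of what I am trying, assuming the .so and the config file sit on the driver machine (the paths are placeholders, Segment is my JNI wrapper class, and doWork(...) stands in for the real native call):

  import org.apache.spark.SparkFiles

  // ship both files to every executor's work directory
  sc.addFile("/path/to/segment.so")
  sc.addFile("/path/to/qsegconf.ini")

  rdd.map { line =>
    // SparkFiles.get resolves to the local copy on whichever executor runs this task;
    // loading the same path again in the same JVM is a no-op
    System.load(SparkFiles.get("segment.so"))
    val segment = new Segment
    segment.init(SparkFiles.get("qsegconf.ini"))
    segment.doWork(line)                           // placeholder for the real native call
  }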
Re: how to use JNI in spark?
You just need to pass the directory containing the lib via --driver-library-path in your spark-submit command, and on each worker node put the lib in the right work directory.
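For example, something along these lines (the paths and class name are placeholders; spark.executor.extraLibraryPath plays the same role for the executors that --driver-library-path plays for the driver):

  spark-submit \
    --class com.example.MyApp \
    --driver-library-path /opt/native/lib \
    --conf spark.executor.extraLibraryPath=/opt/native/lib \
    myapp.jar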