OK. I don't put it on the library path, because this is not a lib I want to
use permanently.
Here is my code inside the RDD operation:
            val fileAddr = SparkFiles.get("segment.so")
            System.load(fileAddr)
            val config = SparkFiles.get("qsegconf.ini")
            val segment = new Segment    // this is the native class
            segment.init(config)         // this fails if the driver hasn't loaded the lib
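
Here is a minimal sketch of what I mean by loading the lib only in the
workers, done at most once per executor JVM via a singleton object
(NativeLoader is just an illustrative name, and it assumes segment.so and
qsegconf.ini were shipped with sc.addFile):

    import org.apache.spark.SparkFiles

    // The object body runs once per JVM, the first time any task on
    // this executor references it; later tasks reuse the loaded lib.
    object NativeLoader {
      System.load(SparkFiles.get("segment.so"))
      val configPath: String = SparkFiles.get("qsegconf.ini")
    }

    // Inside an RDD operation:
    //   val segment = new Segment              // the native class
    //   segment.init(NativeLoader.configPath)  // first reference triggers the load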

I just use System.load to load the lib. But now I also call some functions
in the lib that modify objects the lib holds internally, and that triggers
the fatal error. After I first load the lib in the driver, it works again in
standalone mode. I want to understand how a job travels from the driver to
the workers, and how a worker's memory is wired up to the native lib. I also
tried another sample lib: as long as its functions don't modify objects
inside the lib, the lib can be loaded in the workers only. The other thing I
want to know is whether Spark has something like the cache archive in Hadoop
Streaming.
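
For context, this is roughly how the files reach the workers in the first
place (the local paths below are placeholders). sc.addFile already behaves a
lot like Hadoop Streaming's -cacheFile/-cacheArchive: each file is copied
once into every executor's working directory, where tasks resolve it with
SparkFiles.get.

    import org.apache.spark.{SparkConf, SparkContext}

    object ShipNativeLib {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("segment-demo"))
        // Each file is distributed to every executor's working directory,
        // where tasks can locate it via SparkFiles.get("segment.so").
        sc.addFile("/local/path/to/segment.so")   // placeholder path
        sc.addFile("/local/path/to/qsegconf.ini") // placeholder path
      }
    }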


