I am quite new to pyspark. In my application, I want to achieve the
following:

    -- Create an RDD from a Python list and split it into some number of partitions.
    -- Call rdd.foreachPartition(func) on it.
    -- Here, the function "func" performs an iterative operation: it
reads the contents of a saved file into a local variable (e.g. a numpy array),
performs some updates using the data of the RDD partition, and then saves the
contents of the variable back to some common file system (a rough sketch
follows below the list).
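
Roughly, what I have in mind looks like the sketch below. The shared path,
the array shape, and the update logic are only placeholders, and I have not
been able to get this working:

    from pyspark import SparkContext
    import numpy as np
    import os

    # Placeholder: a path on a file system that I assume is mounted on
    # every worker node (e.g. NFS); not a real path in my setup.
    SHARED_PATH = "/mnt/shared/state.npy"

    def func(partition):
        # Read the previously saved content into a local numpy array.
        if os.path.exists(SHARED_PATH):
            state = np.load(SHARED_PATH)
        else:
            state = np.zeros(10)
        # Update the array using the data of this RDD partition
        # (placeholder update logic).
        for record in partition:
            state[record % 10] += 1
        # Save the updated content back to the common file system.
        np.save(SHARED_PATH, state)

    sc = SparkContext(appName="shared-state-example")
    # RDD created from a python list, split into 4 partitions.
    rdd = sc.parallelize(list(range(100)), numSlices=4)
    rdd.foreachPartition(func)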

What I cannot figure out is how to read and write such a variable inside a
worker process to a common shared file system that is accessible to all of
the worker processes.



