You may want to look into using the pipe command:
http://blog.madhukaraphatak.com/pipe-in-spark/
http://spark.apache.org/docs/0.6.0/api/core/spark/rdd/PipedRDD.html
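To make the suggestion concrete, here is a minimal sketch of RDD.pipe in Scala. The app name, local master, and the `grep -v 2` command are my own illustrative choices, not anything from the thread: pipe streams each element of the RDD to the external command's stdin (one per line) and turns each line of the command's stdout into an element of the resulting RDD.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object PipeExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("pipe-example").setMaster("local[2]"))

    // Each element is written to the command's stdin, one per line;
    // each stdout line of the command becomes an element of the new RDD.
    val nums  = sc.parallelize(Seq("1", "2", "3", "4"))
    val piped = nums.pipe("grep -v 2")   // drop lines containing "2"

    piped.collect().foreach(println)     // prints 1, 3, 4
    sc.stop()
  }
}
```

Note that the command runs once per partition, so it must be installed on every worker node.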
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Create-RDD-from-output-of-unix-command
So, from my point of view, your assumption is wrong.
You can also save your data in any repository in some structured form. This
will give you more insight into Spark's behavior.
-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h
is not a good idea? Thanks!