Your question is interesting. What I suggest is: redirect your command's output into a text file, read that file in your code, and apply your RDD transformations to it. Just look at the word count example that ships with Spark; I like that example with the Java client. That said, Spark is an analytical engine whose whole point is analyzing very large data sets, so from my point of view the assumption behind feeding it the output of a single unix command is wrong.
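
Roughly what I mean, as a minimal sketch in Java (assuming the Spark 2.x Java API, a local master, and a hypothetical /tmp/command-output.txt that you redirected the command's output into beforehand):

import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class CommandOutputCount {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
            .setAppName("CommandOutputCount")
            .setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Assumes the unix command's output was redirected to this file first,
        // e.g. `mycommand > /tmp/command-output.txt` (path is just a placeholder).
        JavaRDD<String> lines = sc.textFile("/tmp/command-output.txt");

        // Classic word count over the captured output.
        JavaPairRDD<String, Integer> counts = lines
            .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator())
            .mapToPair(word -> new Tuple2<>(word, 1))
            .reduceByKey((a, b) -> a + b);

        // Print a small sample of the results.
        counts.take(10).forEach(t -> System.out.println(t._1() + " : " + t._2()));

        sc.stop();
    }
}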

You can also save your data in a repository in some structured form. That will give you more exposure to how Spark behaves.
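
For the structured-form idea, here is one possible sketch using the DataFrame API (again assuming Spark 2.x, and that the captured output happens to be CSV with a header row; the paths and format are only placeholders for illustration):

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class SaveStructured {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("SaveStructured")
            .master("local[*]")
            .getOrCreate();

        // Read the captured command output as CSV with a header row
        // (the layout and path are assumptions for this example).
        Dataset<Row> df = spark.read()
            .option("header", "true")
            .csv("/tmp/command-output.csv");

        // Persist it in a structured, columnar form (Parquet here) so later
        // Spark jobs can query it directly instead of re-parsing raw text.
        df.write().mode("overwrite").parquet("/tmp/command-output-parquet");

        spark.stop();
    }
}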


