This is a standalone Spark cluster. My understanding is that Spark is an
execution engine, not a storage layer.
Spark processes data in memory, but when someone refers to a "Spark table"
created through Spark SQL (from a DataFrame/RDD), what exactly are they
referring to?

Could it be a Hive table? If so, is it the same Hive metastore that Spark
uses? Or is it a table in memory? If so, how can an external application
access it?
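
To make the question concrete, here is a minimal sketch (Scala, Spark 2.0.x)
of the two cases I can imagine; the DataFrame and table names are just
made-up examples:

// Assumes the standard SparkSession named `spark` from spark-shell.
val df = spark.range(10).toDF("id")

// Case 1: a temporary view -- session-scoped metadata only; nothing is
// written to disk, and the name disappears when the session ends.
df.createOrReplaceTempView("people_view")
spark.sql("SELECT COUNT(*) FROM people_view").show()

// Case 2: a persistent table -- registered in a metastore (embedded Derby
// by default, or Hive if Spark is configured with Hive support), with data
// files written under spark.sql.warehouse.dir.
df.write.saveAsTable("people_table")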

Spark version (with Hadoop): spark-2.0.2-bin-hadoop2.7

Thanks, I appreciate your help!
Ajay.
