Hi,

Is it possible to launch a spark-shell, perform any number of operations on a DataFrame, register it as a temporary table, and then see that table through the Thrift server?

PS: Or, even better, submit a full job and keep the DataFrame in the Thrift server's memory before the job completes.
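To make the attempt concrete, this is roughly what I am doing in the spark-shell (Spark 1.x APIs; the input path and table name are just examples):

```scala
import org.apache.spark.sql.hive.HiveContext

// In spark-shell, sc is the SparkContext provided by the shell
val hiveContext = new HiveContext(sc)

// Hypothetical input; any chain of DataFrame operations goes here
val df = hiveContext.read.parquet("/data/example.parquet")
  .filter("value > 0")

// Register the result as a temp table in this shell's HiveContext
df.registerTempTable("my_temp_table")

// Then, from beeline connected to the separately started Thrift server:
//   SELECT * FROM my_temp_table;
// the table is not found, presumably because the Thrift server
// runs its own HiveContext rather than the shell's
```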
I have been trying this without success; beeline does not see the DataFrames registered in the spark-shell's HiveContext. If any of you can confirm this is possible, I will keep investigating. So far the Thrift server only seems able to read from persistent tables.

Thanks for any insights,
Saif