> [Venkat] Are you saying - pull in the SharkServer2 code in my standalone
> spark application (as part of the standalone application process), pass in
> the spark context of the standalone app to SharkServer2's SparkContext at
> startup, and voila, we get SQL/JDBC interfaces for the RDDs of the
> standalone app?
> 1) If I have a standalone spark application that has already built a RDD,
> how can SharkServer2 or for that matter Shark access 'that' RDD and do
> queries on it. All the examples I have seen for Shark, the RDD (tables) are
> created within Shark's spark context and processed.
This is not possible out of the box.
On Thu, May 29, 2014 at 3:26 PM, Venkat Subramanian wrote:
>
> 1) If I have a standalone spark application that has already built a RDD,
> how can SharkServer2 or for that matter Shark access 'that' RDD and do
> queries on it. All the examples I have seen for Shark, the RDD (tables) are
> created within Shark's spark context and processed.
Thanks Michael.
OK, will try SharkServer2.
But I have some basic questions on a related area:
1) If I have a standalone spark application that has already built a RDD,
how can SharkServer2 or for that matter Shark access 'that' RDD and do
queries on it. All the examples I have seen for Shark, the RDD (tables) are
created within Shark's spark context and processed.