This is experimental, but you can start the JDBC server from within your
own programs by passing it the HiveContext:
<https://github.com/apache/spark/blob/master/sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/HiveThriftServer2.scala#L45>
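
A rough sketch of what that looks like (this assumes Spark 1.2; the data
source and table name below are just placeholders, load from your DB
however you normally would):

  import org.apache.spark.{SparkConf, SparkContext}
  import org.apache.spark.sql.hive.HiveContext
  import org.apache.spark.sql.hive.thriftserver.HiveThriftServer2

  val sc = new SparkContext(new SparkConf().setAppName("shared-tables"))
  val hiveContext = new HiveContext(sc)

  // Load the data (jsonFile is only an example source), register it as
  // a temp table, and cache it in this context.
  val data = hiveContext.jsonFile("hdfs:///path/to/data.json")
  data.registerTempTable("Table_1")
  hiveContext.cacheTable("Table_1")

  // Start the Thrift/JDBC server against this same HiveContext.  JDBC
  // clients that connect to it share the context, so they will see
  // Table_1 and its cached data.
  HiveThriftServer2.startWithContext(hiveContext)

After that you should be able to connect with beeline
(beeline -u jdbc:hive2://localhost:10000) and run queries against
Table_1 directly.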


On Fri, Dec 19, 2014 at 6:04 AM, shahab <shahab.mok...@gmail.com> wrote:
>
> Hi,
>
> According to the Spark documentation, data sharing between two different
> Spark contexts is not possible.
>
> So I just wonder if it is possible to first run a job that loads some data
> from a DB into SchemaRDDs, caches it, and registers it as a temp table
> (let's say Table_1). Then I would like to open a JDBC connection
> (assuming that I have set up the JDBC server on the same cluster, so it is
> connected to the same Master) and perform a SQL query on Table_1.
>
> Is the above scenario feasible in Spark? Or do these two tasks simply
> belong to two different Spark contexts and are therefore not runnable?
>
>
> best,
> /Shahab
>
