Hi,

According to the Spark documentation, sharing data between two different
Spark contexts is not possible.

So I wonder if it is possible to first run a job that loads some data
from a DB into SchemaRDDs, caches them, and registers them as a temp
table (let's say Table_1), and then open a JDBC connection (assuming
that I have set up the JDBC server on the same cluster, so it is
connected to the same Master) and perform a SQL query on Table_1.
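To make this concrete, here is roughly what I have in mind for the first
job (just a sketch; I replaced the real DB load with a parallelized
collection, and Table_1 is my placeholder name):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    case class Record(id: Int, name: String)

    val sc = new SparkContext(new SparkConf().setAppName("LoadAndCache"))
    val sqlContext = new SQLContext(sc)
    import sqlContext.createSchemaRDD  // implicit RDD -> SchemaRDD conversion

    // stand-in for the actual load from the DB (e.g. via a JdbcRDD)
    val data = sc.parallelize(Seq(Record(1, "a"), Record(2, "b")))

    data.registerTempTable("Table_1")         // register as a temp table
    sqlContext.cacheTable("Table_1")          // mark the table for caching
    sqlContext.sql("SELECT COUNT(*) FROM Table_1").collect() // force materialization

Then, from a separate client, I would connect to the JDBC/Thrift server
(e.g. with beeline -u jdbc:hive2://&lt;host&gt;:10000) and run
SELECT * FROM Table_1.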

Is the above scenario feasible in Spark? Or do these two tasks belong to
two different Spark contexts, so that the temp table is not visible to
the JDBC server?


best,
/Shahab