In theory, maybe a Jupyter kernel or something similar could achieve
this, e.g. running a Jupyter kernel inside the Spark driver so that
another Python process could connect to that kernel.
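For what it's worth, here is a rough, untested sketch of that idea,
assuming ipykernel and jupyter_client are installed (the app name and
the `spark` variable are just illustrative):

    # process A: the Spark driver, with an embedded Jupyter kernel
    from pyspark.sql import SparkSession
    from ipykernel.embed import embed_kernel

    spark = SparkSession.builder.appName("shared-driver").getOrCreate()

    # Blocks here serving a kernel whose namespace contains `spark`; a
    # connection file (kernel-<pid>.json) lands in the Jupyter runtime dir.
    embed_kernel(local_ns={"spark": spark})

    # process B: ship code to process A's interpreter over the kernel protocol
    from jupyter_client import BlockingKernelClient, find_connection_file

    client = BlockingKernelClient(connection_file=find_connection_file())
    client.load_connection_file()
    client.start_channels()
    # This executes inside process A, against the one existing SparkSession.
    client.execute("print(spark.range(10).count())")

Note that everything still executes in the driver process; process B
only sends source strings over.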

But in the end, this is like Spark Connect :)
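Once that lands, any Python process could simply connect to a shared
Spark Connect server instead of owning its own driver. The client side
is expected to look roughly like this (details may change while it is
work in progress):

    from pyspark.sql import SparkSession

    # Each client gets a thin session that talks gRPC to the shared
    # server (default port 15002); driver-side state lives in one place.
    spark = SparkSession.builder.remote("sc://localhost:15002").getOrCreate()
    spark.range(10).count()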


On Mon, Dec 12, 2022 at 2:55 PM Kevin Su <pings...@gmail.com> wrote:

> Also, is there any way to work around this issue without using Spark
> Connect?
>
> On Mon, Dec 12, 2022 at 2:52 PM Kevin Su <pings...@gmail.com> wrote:
>
>> nvm, I found the ticket.
>> Also, is there any way to work around this issue without using Spark
>> Connect?
>>
>> On Mon, Dec 12, 2022 at 2:42 PM Kevin Su <pings...@gmail.com> wrote:
>>
>>> Thanks for the quick response! Is there a PR or JIRA ticket for it?
>>>
>>> On Mon, Dec 12, 2022 at 2:39 PM Reynold Xin <r...@databricks.com> wrote:
>>>
>>>> Spark Connect :)
>>>>
>>>> (It's a work in progress)
>>>>
>>>>
>>>> On Mon, Dec 12, 2022 at 2:29 PM Kevin Su <pings...@gmail.com> wrote:
>>>>
>>>>> Hey there, how can I get the same Spark context in two different
>>>>> Python processes?
>>>>> Say I create a context in process A, and then I want a Python
>>>>> subprocess B to get the Spark context created by process A. How can
>>>>> I achieve that?
>>>>>
>>>>> I've tried
>>>>> pyspark.sql.SparkSession.builder.appName("spark").getOrCreate(),
>>>>> but it creates a new Spark context.
>>>>>
>>>>
