Sounds good, thanks for clarifying!

Regards,
Mridul

On Thu, Mar 23, 2023 at 9:09 AM Herman van Hovell <her...@databricks.com>
wrote:

> The goal of adding this is to make it easy for a user to connect a Scala
> REPL to a Spark Connect server, just as the Spark shell makes it easy to
> work with a regular Spark environment.
>
> It is not meant as a Spark shell replacement. They represent two different
> modes of working with Spark, and they have very different API surfaces
> (Connect being a subset of what regular Spark has to offer). I do think we
> should consider using Ammonite for the Spark shell at some point, since it
> has a better UX and does not require us to fork a REPL. That discussion is
> for another day, though.
>
> I guess you can use it as an example of building an integration. In itself
> I wouldn't call it that, because I think this is a key part of getting
> started with Connect and/or debugging.
>
> On Thu, Mar 23, 2023 at 4:00 AM Mridul Muralidharan <mri...@gmail.com>
> wrote:
>
>>
>> What is unclear to me is why we are introducing this integration, and how
>> users will leverage it.
>>
>> * Are we replacing spark-shell with it?
>> Given the existing gaps, this is not the case.
>>
>> * Is it an example to showcase how to build an integration?
>> That could be interesting, and we can add it to external/
>>
>> Anything else I am missing?
>>
>> Regards,
>> Mridul
>>
>>
>>
>> On Wed, Mar 22, 2023 at 6:58 PM Herman van Hovell <her...@databricks.com>
>> wrote:
>>
>>> Ammonite is maintained externally by Li Haoyi et al. We are including it
>>> as a 'provided' dependency. The integration bits and pieces (1 file) are
>>> included in Apache Spark.
>>>
>>> On Wed, Mar 22, 2023 at 7:53 PM Mridul Muralidharan <mri...@gmail.com>
>>> wrote:
>>>
>>>>
>>>> Will this be maintained externally or included in Apache Spark?
>>>>
>>>> Regards,
>>>> Mridul
>>>>
>>>>
>>>>
>>>> On Wed, Mar 22, 2023 at 6:50 PM Herman van Hovell
>>>> <her...@databricks.com.invalid> wrote:
>>>>
>>>>> Hi All,
>>>>>
>>>>> For the Spark Connect Scala Client we are working on making the REPL
>>>>> experience a bit nicer <https://github.com/apache/spark/pull/40515>.
>>>>> In a nutshell, we want to give users a turnkey Scala REPL that works
>>>>> even if you don't have a Spark distribution on your machine (through
>>>>> coursier <https://get-coursier.io/>). We are using Ammonite
>>>>> <https://ammonite.io/> instead of the standard Scala REPL for this;
>>>>> the main reason for going with Ammonite is that it is easier to
>>>>> customize, and IMO it has a superior user experience.
>>>>>
>>>>> Does anyone object to doing this?
>>>>>
>>>>> Kind regards,
>>>>> Herman
>>>>>
>>>>>
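
For readers who want to picture the proposal above: a rough sketch of what such a
turnkey REPL session against a Spark Connect server might look like. The sc://
connection string, port, and builder methods (remote/build) are assumptions based
on the Spark Connect Scala client, not something stated in this thread:

    import org.apache.spark.sql.SparkSession

    // Connect to a running Spark Connect server over gRPC.
    // "sc://localhost:15002" is an assumed default endpoint, not taken from the thread.
    val spark = SparkSession.builder()
      .remote("sc://localhost:15002")
      .build() // some client versions expose create()/getOrCreate() instead

    // Run a small query through the Connect server to verify the session works.
    val df = spark.range(0, 10).selectExpr("id", "id * 2 AS doubled")
    df.show()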
