What is unclear to me is why we are introducing this integration and how
users will leverage it.

* Are we replacing spark-shell with it?
Given the existing gaps, this is not the case.

* Is it an example to showcase how to build an integration?
That could be interesting, and we can add it to external/

Anything else I am missing?

Regards,
Mridul



On Wed, Mar 22, 2023 at 6:58 PM Herman van Hovell <her...@databricks.com>
wrote:

> Ammonite is maintained externally by Li Haoyi et al. We are including it
> as a 'provided' dependency. The integration bits and pieces (1 file) are
> included in Apache Spark.
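>
> To give a rough idea of what 'provided' means here, an illustrative,
> sbt-style sketch (Spark itself declares this in its Maven build, and the
> Ammonite version shown is just an example):
>
>   // Ammonite is needed to compile the REPL integration, but it is not
>   // bundled with Spark; users obtain it themselves (e.g. via coursier).
>   libraryDependencies +=
>     "com.lihaoyi" % "ammonite" % "2.5.8" % "provided" cross CrossVersion.full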
>
> On Wed, Mar 22, 2023 at 7:53 PM Mridul Muralidharan <mri...@gmail.com>
> wrote:
>
>>
>> Will this be maintained externally or included in Apache Spark?
>>
>> Regards,
>> Mridul
>>
>>
>>
>> On Wed, Mar 22, 2023 at 6:50 PM Herman van Hovell
>> <her...@databricks.com.invalid> wrote:
>>
>>> Hi All,
>>>
>>> For the Spark Connect Scala Client we are working on making the REPL
>>> experience a bit nicer <https://github.com/apache/spark/pull/40515>. In a
>>> nutshell, we want to give users a turnkey Scala REPL that works even if
>>> they don't have a Spark distribution on their machine (launched through
>>> coursier <https://get-coursier.io/>). We are using Ammonite
>>> <https://ammonite.io/> instead of the standard Scala REPL, mainly because
>>> it is easier to customize and, in my opinion, offers a superior user
>>> experience.
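>>>
>>> To make the "easier to customize" point a bit more concrete, here is a
>>> very rough sketch of what embedding Ammonite can look like. This is
>>> illustrative only, not the code in the PR, and the predef (the connection
>>> string and the session builder call) is a placeholder:
>>>
>>>   import ammonite.Main
>>>
>>>   object ConnectRepl {
>>>     def main(args: Array[String]): Unit = {
>>>       Main(
>>>         welcomeBanner = Some("Spark Connect REPL"),
>>>         // Pre-run code so a `spark` session is in scope at the prompt;
>>>         // the connection call below is a placeholder, not the exact API.
>>>         predefCode =
>>>           """import org.apache.spark.sql.SparkSession
>>>             |val spark = SparkSession.builder().remote("sc://localhost").build()
>>>             |import spark.implicits._
>>>             |""".stripMargin
>>>       ).run()
>>>     }
>>>   }
>>>
>>> On the user side, coursier can then fetch and launch this entry point
>>> (e.g. with "cs launch", against whatever coordinates end up being
>>> published), so no local Spark distribution is needed.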
>>>
>>> Does anyone object to doing this?
>>>
>>> Kind regards,
>>> Herman
>>>
>>>
