I haven't read the code yet, but if it is what I think it is, this is
SUPER, UBER, HUGELY useful.

On a related note, I asked about this on the Scala dev list but never got a
satisfactory answer:
https://groups.google.com/forum/#!msg/scala-internals/_cZ1pK7q6cU/xyBQA0DdcYwJ


On Wed, Aug 13, 2014 at 2:21 PM, Robert C Senkbeil <rcsen...@us.ibm.com>
wrote:

>
>
> I've created a new pull request, which can be found at
> https://github.com/apache/spark/pull/1929. Because Spark uses Scala
> 2.10.3, and Scala 2.10.x has a known issue where the :cp command is
> unsupported (https://issues.scala-lang.org/browse/SI-6502), the Spark
> shell cannot add jars to the classpath after it has been started (a
> classloader sketch follows below the quoted message).
>
> The advantage of dynamically adding jars, versus restarting the
> shell/interpreter globally, is that you keep your shell's current state
> (you don't lose your RDDs or anything); a usage sketch follows the
> quoted message as well. The previously-supported Scala 2.9.x
> implementation wiped the interpreter and replayed all of the commands.
> That isn't ideal for Spark, since some operations can still be quite
> heavy. Furthermore, if some operations involved loading external data,
> that data may have changed by the time the commands are replayed.
>
> Signed,
> Chip Senkbeil
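
To make the :cp limitation above concrete: what a working :cp does is
essentially hand the interpreter's classloader an extra URL. Here is a
minimal, hypothetical sketch of that general JVM technique in plain Scala;
the jar path and class name are placeholders, and this is not the actual
implementation in the pull request:

    import java.net.{URL, URLClassLoader}

    // Wrap the current classloader with one that also sees the new jar.
    // "/tmp/extra.jar" and "com.example.Widget" are placeholder names.
    val jarUrl = new URL("file:///tmp/extra.jar")
    val loader = new URLClassLoader(Array(jarUrl), getClass.getClassLoader)

    // Classes from the jar are now loadable through the child loader,
    // but code the REPL compiled earlier still sees the old classpath.
    val widgetClass = loader.loadClass("com.example.Widget")
    val widget = widgetClass.getDeclaredConstructor().newInstance()

Roughly, the part SI-6502 tracks is making the interpreter's own compiler
pick up the new entries, not just creating a child loader like this.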
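
And on preserving shell state: SparkContext.addJar already ships a jar to
the executors without resetting the interpreter, so driver-side state such
as cached RDDs survives. A small usage sketch inside a running spark-shell,
with a placeholder jar path:

    // sc is the SparkContext that spark-shell provides.
    val cached = sc.parallelize(1 to 1000).cache()
    cached.count()  // materialize the cache

    // Ship a jar to the executors for tasks launched from now on; the
    // interpreter is not reset, so the cached RDD above is kept.
    sc.addJar("/tmp/extra.jar")  // placeholder path

    cached.count()  // still available, no replay of earlier commands

What addJar does not do in the 2.10-era shell is make the jar's classes
visible to the driver-side REPL itself, which is, as I understand it, the
gap the pull request targets.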
