I've created a new pull request, which can be found at
https://github.com/apache/spark/pull/1929. Because Spark uses Scala
2.10.3, and Scala 2.10.x has a known issue where the :cp command does
not work (https://issues.scala-lang.org/browse/SI-6502), the Spark shell
cannot add jars to the classpath after it has been started.
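
For anyone who hasn't hit it, the failure mode looks roughly like this
(the jar path and package here are made up, and the exact REPL messages
may differ):

    scala> :cp /tmp/mylib.jar
    Added '/tmp/mylib.jar' to classpath.
    scala> import com.example.mylib.Util
    <console>:10: error: object example is not a member of package com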

The advantage of dynamically adding the jars, versus restarting the
shell/interpreter entirely, is that you keep your shell's current state
(you don't lose your RDDs or anything else). The previously-supported
Scala 2.9.x implementation wiped the interpreter and replayed all of the
commands. That isn't ideal for Spark, since some operations can still be
quite heavy. Furthermore, if any of the replayed commands loaded
external data, that data may have changed in the meantime, so the replay
could produce different results.
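
As a quick sketch of why that matters, consider a session like the
following (the data path, jar, and parser class are hypothetical):

    scala> val lines = sc.textFile("hdfs:///logs/2014-08-12").cache()
    scala> lines.count()  // expensive load; the RDD is now cached
    scala> :cp /tmp/parsers.jar
    scala> import com.example.parsers.LogParser
    scala> lines.map(LogParser.parse(_)).take(5)  // cached RDD still usable

With the replay approach, the textFile/count work would be redone from
scratch, against whatever the log directory happens to contain at that
moment.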

Signed,
Chip Senkbeil
