The header should already be shipped from the driver to the workers by Spark,
which is why it works in spark-shell.
In the Scala IDE, the code lives inside an app class, so you need to check
whether that app class is serializable.
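A minimal sketch of the usual workaround, assuming the variable is a field of the enclosing object (names here are hypothetical, not from the original thread): referencing a field inside an RDD closure drags the whole enclosing instance into the closure, while copying it to a local val first captures only the value.

```scala
import org.apache.spark.SparkContext

// Hypothetical app object illustrating the closure-capture issue.
object MyApp {
  val header = "id,name,value" // field: using it in a closure pulls in MyApp

  def countBody(sc: SparkContext, path: String): Long = {
    val localHeader = header   // local copy: the closure captures only this String
    sc.textFile(path).filter(_ != localHeader).count()
  }
}
```

Alternatively, making the app class extend `Serializable` can work, but the local-val copy keeps the serialized closure small.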
On Tue, Sep 22, 2015 at 9:13 AM Alexis Gillain <
alexis.gill...@googlemail.com> wrote:
> Howdy,
> I'm a relative novice at Spark/Scala and I'm puzzled by some behavior that
> I'm seeing in 2 of my local Spark/Scala environments (Scala for Jupyter and
> Scala IDE) but not the 3rd (Spark Shell). The following code throws the
> following stack trace error in the former 2 environments but
As Igor said, the header must be available on each partition, so the solution
is to broadcast it.
As for the difference between the REPL and the Scala IDE, it may come from
the SparkContext setup, since the REPL defines one by default.
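A minimal sketch of the broadcast approach suggested above, assuming the header is the first line of a text file (the function and path names here are illustrative, not from the thread):

```scala
import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD

// Broadcast the header so each executor gets one read-only copy,
// instead of relying on the closure serializing it correctly.
def dropHeader(sc: SparkContext, path: String): RDD[String] = {
  val lines = sc.textFile(path)
  val header = lines.first()          // header line, computed on the driver
  val bHeader = sc.broadcast(header)  // shipped once per executor
  lines.filter(line => line != bHeader.value)
}
```

Inside the closure, only `bHeader.value` is referenced, so the filter serializes the small `Broadcast` handle rather than any enclosing class.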
2015-09-22 8:41 GMT+08:00 Igor Berman :
> Try to broadcast
Which release are you using?
From the line number in ClosureCleaner, it seems you're using 1.4.x
Cheers
On Mon, Sep 21, 2015 at 4:07 PM, Balaji Vijayan
wrote:
> Howdy,
>
> I'm a relative novice at Spark/Scala and I'm puzzled by some behavior that
> I'm seeing in
Try to broadcast the header
On Sep 22, 2015 08:07, "Balaji Vijayan" wrote:
> Howdy,
>
> I'm a relative novice at Spark/Scala and I'm puzzled by some behavior that
> I'm seeing in 2 of my local Spark/Scala environments (Scala for Jupyter and
> Scala IDE) but not the 3rd