On Sun, Jun 8, 2014 at 10:00 AM, Nick Pentreath <nick.pentre...@gmail.com>
 wrote:

> When you use match, the match must be exhaustive. That is, a MatchError
> is thrown at runtime if no case matches.


Ahh, right. That makes sense. Scala is applying its "strong typing" rules
here instead of "no ceremony"... but isn't the idea that type errors should
get picked up at compile time? I suppose the compiler can't always tell
that the coverage is incomplete, but it seems strange to throw that at
runtime when it is literally the 'default case'.
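For what it's worth, a minimal sketch of the behaviour in question (plain
Scala, illustrative names): a non-exhaustive match on an unsealed type
compiles fine but throws scala.MatchError at runtime, while a sealed
hierarchy lets the compiler warn about missing cases at compile time.

```scala
// Non-exhaustive match on Int: compiles without complaint (Int isn't
// sealed, so the compiler can't check coverage), but throws
// scala.MatchError at runtime when no case applies.
def describe(n: Int): String = n match {
  case 0 => "zero"
  case 1 => "one"
  // describe(2) throws scala.MatchError
}

// Adding a wildcard case makes the match total:
def describeSafe(n: Int): String = n match {
  case 0 => "zero"
  case 1 => "one"
  case _ => "many"
}

// With a sealed hierarchy the compiler CAN check exhaustiveness and
// emits a "match may not be exhaustive" warning if a case is missing.
sealed trait Light
case object Red extends Light
case object Green extends Light

def advice(l: Light): String = l match {
  case Red   => "stop"
  case Green => "go"
}
```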

I think I need a good "Scala Programming Guide"... any suggestions? I've
read and watched the usual resources and videos, but it feels like a
shotgun approach and I've clearly missed a lot.

On Mon, Jun 9, 2014 at 3:26 AM, Mark Hamstra <m...@clearstorydata.com>
wrote:
>
> And you probably want to push down that filter into the cluster --
> collecting all of the elements of an RDD only to not use or filter out some
> of them isn't an efficient usage of expensive (at least in terms of
> time/performance) network resources.  There may also be a good opportunity
> to use the partial function form of collect to push even more processing
> into the cluster.
>

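The suggestion above might be sketched like this (hypothetical file path
and predicate, assuming an existing SparkContext `sc`): filter on the
executors before collecting, or use the partial-function overload of
collect to filter and transform in one pass on the cluster.

```scala
// Assumes `sc` is an existing SparkContext; names are illustrative.
val events = sc.textFile("hdfs://namenode/logs/events.log")

// Inefficient: ships every element to the driver over the network,
// then discards most of them locally.
// val interesting = events.collect().filter(_.contains("ERROR"))

// Better: the filter runs on the executors, so only matching
// elements cross the network to the driver.
val interesting = events.filter(_.contains("ERROR")).collect()

// The partial-function form of collect filters AND transforms in the
// cluster in one step (note: this overload returns an RDD, not an Array):
val errorLengths = events.collect {
  case line if line.contains("ERROR") => line.length
}
```

(Not runnable without a Spark cluster, so treat it as a sketch rather
than tested code.)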
I almost certainly do :-) And I am really looking forward to spending time
optimizing the code, but I keep getting caught up on deployment issues,
uberjars, missing /mnt/spark directories, only being able to submit from
the master, and being thoroughly confused about sample code from three
versions ago.

I'm even thinking of learning maven, if it means I never have to use sbt
again. Does it mean that?

-- 
Jeremy Lee  BCompSci(Hons)
  The Unorthodox Engineers
