Hey guyz,
I've got this issue (see bottom) with Spark, deployed in Standalone mode in
a local Docker environment.
I know that I need to raise the ulimit (only 1024 now) but in the meantime
I was just wondering how this could happen.
My gut feeling is that it's because I'm mounting a lot in memory and Spark …
Sourav
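For the "too many open files" symptom above, a minimal sketch of checking and raising the descriptor limit (the 65536 value and image name are illustrative assumptions, not from the original thread):

```shell
# Check the current soft limit on open file descriptors
# (1024 is a common default, as in the message above):
ulimit -Sn

# Raise the soft limit for this shell before launching the Spark worker
# (requires a high enough hard limit; uncomment to apply):
# ulimit -n 65536

# With Docker, the limit can instead be set per container at run time:
# docker run --ulimit nofile=65536:65536 <spark-image> ...
```

Raising the limit only hides the symptom, of course; the question of why so many descriptors are open at once still stands.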
On Fri, Feb 21, 2014 at 3:02 PM, andy petrella andy.petre...@gmail.com wrote:
19, 2013 12:46 AM, andy petrella andy.petre...@gmail.com wrote:
Maybe I'm wrong, but this use case could be a good fit for Shapeless'
records (https://github.com/milessabin/shapeless).
Shapeless' records are, so to speak, like Lisp's records but typed! In that
sense, they're closer to Haskell's record notation, though imho less
powerful, since the access will be …
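To make the comparison concrete, a small sketch of shapeless 2.x records (assumes the "com.chuusai" %% "shapeless" dependency is on the classpath; the book fields are just illustrative):

```scala
import shapeless._
import syntax.singleton._   // enables the ->> field-tagging syntax
import record._             // enables record operations on HLists

object RecordDemo extends App {
  // An extensible record: an HList whose values are tagged
  // with singleton key types ("author", "title", ...).
  val book =
    ("author" ->> "Benjamin Pierce") ::
    ("title"  ->> "Types and Programming Languages") ::
    ("price"  ->> 44.11) ::
    HNil

  // Field access is checked at compile time and returns the precise
  // value type, e.g. String here, Double for "price".
  val title: String = book("title")
  // book("isbn") would not compile: the record has no such key.
  println(title)
}
```

Unlike a Map[String, Any], a missing or mistyped key is a compile error rather than a runtime failure, which is the "Lisp records but typed" point above.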
Hello Rob,
As you may know, I have long experience with geospatial data, and I'm now
investigating Spark... So I'll be very interested in further answers, but
also in helping move this great idea forward!
For instance, I'd say that implementing classical geospatial algorithms
like …
Hello there,
I hope I understand the whole thing correctly... but I think I did something
quite similar that polls the Yahoo API for data and pushes it into a DStream:
https://github.com/andypetrella/spark-bd/blob/master/src/main/scala/yahoo.scala#L41
Any comments or even concerns (if they are …
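The poll-an-API-into-a-DStream pattern above can be sketched with Spark Streaming's custom Receiver API (this is not the linked 2013 code, which predates this API; the class name, URL, and interval are illustrative assumptions):

```scala
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.receiver.Receiver
import scala.io.Source

// A custom receiver that polls an HTTP endpoint on a fixed interval
// and pushes each response body into the stream as one record.
class PollingReceiver(url: String, intervalMs: Long)
    extends Receiver[String](StorageLevel.MEMORY_ONLY) {

  def onStart(): Unit = {
    // Poll on a background thread so onStart returns quickly,
    // as the Receiver contract requires.
    new Thread("api-poller") {
      override def run(): Unit = {
        while (!isStopped()) {
          val body = Source.fromURL(url).mkString
          store(body)            // hands one record to Spark
          Thread.sleep(intervalMs)
        }
      }
    }.start()
  }

  def onStop(): Unit = ()  // the polling loop exits once isStopped() is true
}

// Usage, given a StreamingContext ssc:
//   val quotes = ssc.receiverStream(new PollingReceiver("https://example.org/quote", 5000))
```

Each call to store() becomes an element of the resulting DStream, so downstream transformations see one record per poll.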