Might I ask why VFS? I'm new to VFS and not sure whether or not it predates
the Hadoop file system interfaces (HCFS).
After all, Spark natively supports any HCFS by leveraging the Hadoop
FileSystem API, class loaders, and so on.
So simply putting those resources on your classpath should be sufficient.
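For instance, registering a custom HCFS implementation is just Hadoop configuration plus a jar on the classpath. A minimal sketch, where the `myfs` scheme and `com.example.MyFileSystem` class are hypothetical:

```xml
<!-- core-site.xml: map a (hypothetical) URI scheme to a FileSystem class.
     The jar providing com.example.MyFileSystem must be on Spark's classpath. -->
<property>
  <name>fs.myfs.impl</name>
  <value>com.example.MyFileSystem</value>
</property>
```

Spark can then read `myfs://...` paths through the ordinary `org.apache.hadoop.fs.FileSystem` API, with no VFS layer involved.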
> and in recent conversations I didn't hear dissent to the idea of
> removing this.
>
> Is this still useful enough to fix up? All else equal I'd like to
> start to walk back some of the complexity of the build, but I
> don't know how all-else-equal it is. Certainly, it sounds like
> nobody intends these to be used to actually deploy Spark.
>
> I don't doubt it's useful to someone, but can they maintain the
> packaging logic elsewhere?
>
> --
> To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
> For additional commands, e-mail: dev-h...@spark.apache.org
--
jay vyas
... from and who is maintaining its release?
--
jay vyas
PS
I've had some conversations with Will Benton as well about this, and it's
clear that some modifications to Akka are needed, or else a protobuf error
occurs, which amounts to serialization incompatibilities; hence, if one
wants to build ...
> If we use something like Vagrant, we may even be able to make it so that a
> single Vagrantfile creates equivalent development environments across OS X,
> Linux, and Windows, without having to do much (or any) OS-specific work.
>
> I imagine for committers and regular contributors, this exercise may seem
> pointless, since y'all are probably already very comfortable with your
> workflow.
>
> I wonder, though, if any of you think this would be worthwhile as an
> improvement to the "new Spark developer" experience.
>
> Nick
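A single cross-platform Vagrantfile along those lines could look like the following sketch; the box name, JDK package, and memory setting are illustrative assumptions, not a tested Spark dev setup:

```ruby
# Vagrantfile sketch: one definition, same environment on OS X, Linux, Windows.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"          # illustrative base box
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 4096                         # headroom for an sbt build
  end
  # Provision the basic Spark build toolchain.
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y openjdk-7-jdk git
  SHELL
end
```

A new contributor would then just run `vagrant up` and `vagrant ssh`, regardless of host OS.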
--
jay vyas
Hi folks.
In the end, I found that the problem was that I was using IP addresses
instead of hostnames.
I guess, maybe, reverse DNS is a requirement for Spark slave -> master
communications... ?
On Fri, Dec 19, 2014 at 7:21 PM, jay vyas wrote:
> Hi spark. I'm trying to understand th ...
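If reverse DNS really is the culprit, one workaround sketch is to pin a resolvable name in `conf/spark-env.sh` (the variable name is from Spark 1.x; the hostname is a placeholder) and then verify that forward and reverse lookups agree on every node:

```shell
# conf/spark-env.sh -- point workers at the master by a resolvable name,
# not a raw IP (placeholder hostname below).
export SPARK_MASTER_IP=spark-master.example.com

# Sanity-check on each node that forward and reverse DNS agree, e.g.:
#   $ host spark-master.example.com   # name -> address
#   $ host 10.0.0.5                   # address -> name (reverse DNS)
```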
--
jay vyas
... with a scala singleton, which I guess is readily serializable.
So it's clear that Spark needs to serialize objects which carry the driver
methods for an app in order to run... but I'm wondering, maybe there is
a way to change or update the Spark API to catch unserializable Spark apps
at compile time.
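The runtime failure is easy to reproduce outside Spark with plain JVM serialization. In the following self-contained sketch (class and method names are mine, not Spark's), a static singleton, analogous to a Scala `object`, serializes cleanly, while a task that captures a non-serializable enclosing class fails the same way Spark's "Task not serializable" error does:

```java
import java.io.*;

public class SerializationCheck {
    // A task type that is explicitly Serializable, like Spark's closures.
    interface SerTask extends Runnable, Serializable {}

    // Singleton holder, analogous to a Scala `object`: a plain
    // Serializable instance with no troublesome captured state.
    static class Helper implements Serializable {
        static final Helper INSTANCE = new Helper();
        int addOne(int x) { return x + 1; }
    }

    // A non-serializable "driver" class; the anonymous task below
    // captures `this`, dragging the whole Driver into serialization.
    static class Driver {
        int offset = 10;
        SerTask makeTask() {
            return new SerTask() {
                @Override public void run() { System.out.println(offset); }
            };
        }
    }

    // True iff the object graph can be written with Java serialization.
    static boolean isSerializable(Object o) {
        try (ObjectOutputStream out =
                 new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(o);
            return true;
        } catch (IOException e) {   // NotSerializableException lands here
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isSerializable(Helper.INSTANCE));         // true
        System.out.println(isSerializable(new Driver().makeTask())); // false
    }
}
```

Catching this at compile time is hard because serializability is a runtime property of the captured object graph; Spark mitigates it at submission time by cleaning closures (nulling unneeded outer references) before shipping them.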
I tried the Scala Eclipse IDE, but on Scala 2.10 I ran into some weird issues:
http://stackoverflow.com/questions/24253084/scalaide-and-cryptic-classnotfound-errors
... so I switched to IntelliJ and was much more satisfied.
I've written a post on how I use Fedora, sbt, and IntelliJ for Spark apps.
Mubarak Seyed wrote:
>
> What is your ulimit value?
>
>> On Tue, Aug 26, 2014 at 5:49 PM, jay vyas wrote:
>> Hi spark.
>>
>> I've been trying to build Spark, but I've been getting lots of OOME
>> exceptions.
... to hard code the "get_mem_opts" function, which is in the
sbt-launch-lib.bash file, to have various very high parameter sizes
(e.g. "-Xms5g") with a high MaxPermSize, etc., and to no avail.
Any thoughts on this would be appreciated.
I know of others having the same problem as well.
Thanks!
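One sketch that avoids editing sbt-launch-lib.bash at all is to pass the memory settings through the environment; the exact values below are illustrative, and MaxPermSize only matters on JDK 7 and earlier:

```shell
# Raise the sbt JVM's memory for the Spark build via the environment
# instead of hard-coding get_mem_opts (values are illustrative).
export SBT_OPTS="-Xmx4g -XX:MaxPermSize=512m -XX:ReservedCodeCacheSize=256m"
echo "$SBT_OPTS"
# then, e.g.: ulimit -n 4096 && ./sbt/sbt assembly
```

Raising the shell's open-file limit (`ulimit -n`) in the same session also helps if the OOMEs are really "too many open files" in disguise.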
--
jay vyas