I have encountered a similar error on Spark 1.4.0.
The same code runs fine on Spark 1.3.1.
My code is (it can be run in spark-shell):
===
// hc is an instance of HiveContext
import scala.collection.mutable
val df = hc.sql("select * from test limit 10")
val sb = new mutable.StringBuilder
Yeah so Steve, hopefully it's self-evident, but that is a perfect
example of the kind of annoying stuff we don't want to force users to
deal with by forcing an upgrade to 2.X. Compare the pain for Spark
users of trying to reason about what to do (and btw it seems like the
answer is simply that the
In Spark 1.4.0, I find that the Address is an IP (it was a hostname in 1.3.0). Why?
Which change did this?
This is a good start, if you haven't seen it already:
https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark
Thanks
Best Regards
On Sat, Jun 13, 2015 at 8:46 AM, srinivasraghavansr71 <
sreenivas.raghav...@gmail.com> wrote:
> Hi everyone,
> I am interested to cont
> On 12 Jun 2015, at 17:12, Patrick Wendell wrote:
>
> For instance at Databricks we use
> the FileSystem library for talking to S3... every time we've tried to
> upgrade to Hadoop 2.X there have been significant regressions in
> performance and we've had to downgrade. That's purely anecdotal,
Perfect! I'll start working on it.
2015-06-13 2:23 GMT+02:00 Amit Ramesh :
>
> Hi Juan,
>
> I have created a ticket for this:
> https://issues.apache.org/jira/browse/SPARK-8337
>
> Thanks!
> Amit
>
>
> On Fri, Jun 12, 2015 at 3:17 PM, Juan Rodríguez Hortalá <
> juan.rodriguez.hort...@gmail.com> wr
The deeplearning4j project provides neural net algorithms for Spark ML. You
may consider it sample code for extending Spark with new ML algorithms.
http://deeplearning4j.org/sparkml
https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j-scaleout/spark/dl4j-spark-ml
-Eron
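
For anyone looking at dl4j-spark-ml as a starting point: here is a minimal sketch of what plugging a new algorithm into the Spark ML Pipeline API can look like, assuming the Spark 1.4+ `ml` package. The class name `Doubler` and its behavior are illustrative only, not taken from dl4j:

```scala
// A toy Transformer: doubles every element of an array-of-doubles column.
// Real algorithms (like dl4j's) follow the same shape: subclass a
// Transformer/Estimator and implement the transform logic.
import org.apache.spark.ml.UnaryTransformer
import org.apache.spark.ml.util.Identifiable
import org.apache.spark.sql.types.{ArrayType, DataType, DoubleType}

class Doubler(override val uid: String)
    extends UnaryTransformer[Seq[Double], Seq[Double], Doubler] {

  def this() = this(Identifiable.randomUID("doubler"))

  // The per-row transformation applied to the input column.
  override protected def createTransformFunc: Seq[Double] => Seq[Double] =
    _.map(_ * 2.0)

  // Schema of the output column this transformer produces.
  override protected def outputDataType: DataType = ArrayType(DoubleType)
}
```

A stage like this can then be composed with built-in ones, e.g. `new Doubler().setInputCol("features").setOutputCol("doubled")` inside a `Pipeline`.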