We've found that the Raspberry Pi is not enough for Hadoop/Spark, mainly
because of its memory consumption. What we've built instead is a cluster
of 22 Cubieboards, each with 1 GB of RAM.
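(If anyone wants to try something similar, below is a minimal sketch of
the kind of low-memory settings a 1 GB node pushes you toward. The app
name and values are illustrative assumptions, not our exact configuration.)

// Illustrative low-memory settings for a 1 GB ARM node (standalone mode).
// Values here are assumptions for the sketch, not the cluster's actual config.
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("low-mem-arm")
  // Leave headroom for the OS and the worker daemon on a 1 GB board.
  .set("spark.executor.memory", "512m")
  // Shrink the cache fraction so working memory isn't starved (0.9/1.x-era knob).
  .set("spark.storage.memoryFraction", "0.3")
  // Kryo keeps serialized data smaller than Java serialization.
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")

val sc = new SparkContext(conf)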
Best regards,
-chanwit
--
Chanwit Kaewkasi
linkedin.com/in/chanwit
On Thu, Sep 11, 2014 at 8:04 PM, Sandeep Singh
first report.
Best regards,
-chanwit
--
Chanwit Kaewkasi
linkedin.com/in/chanwit
On Wed, May 28, 2014 at 1:13 AM, Chanwit Kaewkasi chan...@gmail.com wrote:
Maybe that explains mine too.
Thank you very much, Aaron !!
Best regards,
-chanwit
--
Chanwit Kaewkasi
linkedin.com/in/chanwit
Congratulations !!
-chanwit
--
Chanwit Kaewkasi
linkedin.com/in/chanwit
On Fri, May 30, 2014 at 5:12 PM, Patrick Wendell pwend...@gmail.com wrote:
I'm thrilled to announce the availability of Spark 1.0.0! Spark 1.0.0
is a milestone release as the first in the 1.0 line of releases,
providing API stability for Spark's core interfaces.
Maybe that explains mine too.
Thank you very much, Aaron !!
Best regards,
-chanwit
--
Chanwit Kaewkasi
linkedin.com/in/chanwit
On Wed, May 28, 2014 at 12:47 AM, Aaron Davidson ilike...@gmail.com wrote:
Spark should effectively turn Akka's failure detector off, because we
historically [...] is wrong).
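(For context, these are the Akka failure-detector knobs involved, as
documented around Spark 0.9/1.0. The values shown are the documented
defaults, reproduced here purely for illustration.)

// Akka failure-detector settings as documented for Spark 0.9/1.0.
// Spark ships with very large pause thresholds so the detector
// effectively never fires; these are the documented defaults.
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.akka.heartbeat.interval", "1000")          // seconds
  .set("spark.akka.heartbeat.pauses", "600")             // acceptable pause, seconds
  .set("spark.akka.failure-detector.threshold", "300.0") // phi accrual threshold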
Best regards,
-chanwit
--
Chanwit Kaewkasi
linkedin.com/in/chanwit
Hi all,
Can Spark (0.9.x) utilize the caching feature in HDFS 2.3 via
sc.textFile() and other HDFS-related APIs?
http://hadoop.apache.org/docs/r2.3.0/hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html
Best regards,
-chanwit
--
Chanwit Kaewkasi
linkedin.com/in/chanwit
Great to know that! Thank you, Matei.
Best regards,
-chanwit
--
Chanwit Kaewkasi
linkedin.com/in/chanwit
On Tue, May 13, 2014 at 2:14 AM, Matei Zaharia matei.zaha...@gmail.com wrote:
That API is something the HDFS administrator uses outside of any application
to tell HDFS to cache certain files.
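(So the workflow looks roughly like the sketch below: cache directives are
added on the admin side with the hdfs cacheadmin CLI, and the Spark
application code stays unchanged. The pool, host, and path names are
hypothetical.)

// Admin side (outside the application), using the HDFS CLI:
//   hdfs cacheadmin -addPool spark-pool
//   hdfs cacheadmin -addDirective -path /data/warehouse -pool spark-pool
// (pool and path names above are hypothetical)
//
// Application side: nothing changes. A plain textFile read of a cached
// path is served from the DataNodes' cache transparently.
import org.apache.spark.SparkContext

val sc = new SparkContext("spark://master:7077", "hdfs-cache-demo")
val lines = sc.textFile("hdfs://namenode:8020/data/warehouse/events.log")
println(lines.count())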