From: Williams, Ken <ken.willi...@windlogics.com>
Date: Thursday, March 19, 2015 at 10:59 AM
To: Spark list <user@spark.apache.org>
Subject: JAVA_HOME problem with upgrade to 1.3.0
[…]

I'm trying to upgrade a Spark project, written in Scala, from Spark 1.2.1 to
1.3.0, so I changed my `build.sbt` like so:

-libraryDependencies += "org.apache.spark" %% "spark-core" % "1.2.1" % "provided"
+libraryDependencies += "org.apache.spark" %% "spark-core" % "1.3.0" % "provided"

then make an assembly jar and submit it. The job fails with a JAVA_HOME error,
so finally I go and check the YARN […]
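(A minimal `build.sbt` along these lines, as an illustrative sketch; the
project name and Scala version below are assumptions, not taken from the
thread:)

    name := "my-spark-job"       // illustrative project name

    scalaVersion := "2.10.4"     // Spark 1.3.0 ships for Scala 2.10 by default

    libraryDependencies += "org.apache.spark" %% "spark-core" % "1.3.0" % "provided"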
From: Ted Yu <yuzhih...@gmail.com>
Date: Thursday, March 19, 2015 at 11:05 AM
JAVA_HOME, an environment variable, should be defined on the node where
appattempt_1420225286501_4699_02 ran.
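(If the nodes do have a JDK but simply don't export JAVA_HOME, one possible
workaround, sketched here as an editorial aside rather than something from the
thread, is to set it explicitly through Spark's YARN environment settings; the
JDK path is an assumption:)

    import org.apache.spark.{SparkConf, SparkContext}

    // Sketch: pin JAVA_HOME for the YARN application master and the executors.
    // "/usr/java/default" is a placeholder; use the JDK path on your nodes.
    val conf = new SparkConf()
      .set("spark.yarn.appMasterEnv.JAVA_HOME", "/usr/java/default")
      .set("spark.executorEnv.JAVA_HOME", "/usr/java/default")
    val sc = new SparkContext(conf)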
Has this behavior changed in 1.3.0 since 1.2.1 though? Using 1.2.1 and […]

[…]
I've cloned the GitHub repo and I'm building Spark on a pretty beefy machine
(24 CPUs, 78GB of RAM) and it takes a pretty long time.

For instance, today I did a 'git pull' for the first time in a week or two, and
then doing 'sbt/sbt assembly' took 43 minutes of wallclock time (88 minutes of
CPU time). […]
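(One common way to cut these times, noted here as an editorial sketch rather
than advice from the thread, is to keep a single sbt session open so its
incremental-compilation state is reused between builds:)

    $ sbt/sbt         # start one interactive session and leave it running
    > assembly        # the first build pays the full cost
    > assembly        # later builds recompile only what changed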
I'm trying to get my feet wet with Spark. I've done some simple stuff in the
shell in standalone mode, and now I'm trying to connect to HDFS resources, but
I'm running into a problem.
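(As a sketch of the kind of access being attempted; the hostname, port, and
path here are made up:)

    // In spark-shell, `sc` is the SparkContext the shell provides.
    val lines = sc.textFile("hdfs://namenode.example.com:8020/user/ken/sample.txt")
    println(lines.count())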
I synced to the git master branch (c399baa - SPARK-1456 Remove view bounds on
Ordered in favor of a context bound). […] I haven't figured out how to let the
hostname default to the host mentioned in our /etc/hadoop/conf/hdfs-site.xml
like the Hadoop command-line tools do, but that's not so important.
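(One way to get that default, offered as an assumption about a standard setup
rather than the thread's resolution: make the Hadoop config directory visible
to Spark, e.g. by exporting HADOOP_CONF_DIR=/etc/hadoop/conf, so fs.defaultFS
from core-site.xml is picked up; an unqualified path then resolves against it:)

    // Assumes /etc/hadoop/conf is on the classpath (e.g. via HADOOP_CONF_DIR),
    // so fs.defaultFS from core-site.xml supplies the namenode host and port.
    val lines = sc.textFile("/user/ken/sample.txt")   // no hostname hard-coded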
-Ken
-----Original Message-----
From: Williams, Ken [mailto:ken.willi...@windlogics.com]
Sent: Monday, April 21, 2014 2:04 PM
To: Spark list
Subject: Problem connecting to HDFS in Spark shell
I'm trying to get my feet wet with Spark […]
-----Original Message-----
From: Marcelo Vanzin [mailto:van...@cloudera.com]
Hi Ken,
On Mon, Apr 21, 2014 at 1:39 PM, Williams, Ken
<ken.willi...@windlogics.com> wrote:
I haven't figured out how to let the hostname default to the host
mentioned in our /etc/hadoop/conf/hdfs-site.xml like the Hadoop
command-line tools do, but that's not so important. […]