When I try to open a sequence file:
val t2 = sc.sequenceFile("/user/hdfs/e1Mseq", classOf[String], classOf[String])
t2.groupByKey().take(5)
I get:
org.apache.spark.SparkException: Job aborted: Task 25.0:0 had a not serializable result: java.io.NotSerializableException: org.apache.hadoop.io.Text
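A common workaround (not confirmed in this thread, just a sketch assuming the usual cause: Hadoop's Text is a reused, non-serializable Writable) is to read the raw Text pairs and copy them into plain Strings before the shuffle that groupByKey triggers:

    import org.apache.hadoop.io.Text

    // Read the raw Text writables, then copy each pair into plain
    // (serializable) Strings before grouping.
    val t2 = sc.sequenceFile("/user/hdfs/e1Mseq", classOf[Text], classOf[Text])
      .map { case (k, v) => (k.toString, v.toString) }
    t2.groupByKey().take(5)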
I think this is caused by not setting yarn.application.classpath in your
yarn-site.xml.
-Sandy
On Sat, Mar 8, 2014 at 2:24 AM, Venkata siva kamesh Bhallamudi <kam.iit...@gmail.com> wrote:
> Hi All,
> I am new to Spark and am running the pi example on a YARN cluster. I am
> getting the following exception:
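For reference, the property Sandy mentions would look something like this in yarn-site.xml (the value shown is the stock default from yarn-default.xml in Hadoop 2.x; adjust the paths to your installation):

    <property>
      <name>yarn.application.classpath</name>
      <value>$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,$HADOOP_YARN_HOME/share/hadoop/yarn/*,$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*</value>
    </property>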
I found updated materials (ampcamp4, from Feb 2014) and tried the instructions at http://ampcamp.berkeley.edu/4/exercises/launching-a-bdas-cluster-on-ec2.html.
We see some similar errors even on the latest materials:
cp: cannot create regular file `/root/mesos-ec2/': Is a directory
RSYNC'ing /root/mesos
The spark-training scripts are not presently working 100%; the errors
displayed when starting the slaves are shown below.
Possibly a newer location for the files exists (I pulled from
https://github.com/amplab/training-scripts and it is nearly 6 months old).
cp: cannot create regular file `/root/
On 07/03/2014 19:08, Ognen Duzlevski wrote:
I have had the most awful time figuring out these "looped" things. It
seems next to impossible to run a .filter() operation in a for loop,
though it seems to work if you yield .filter().
The equivalent of a filter in a for statement is an 'if'. Sc
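To make that equivalence concrete (a toy Scala example of my own, not code from the thread):

    val nums = List(1, 2, 3, 4, 5)

    // An 'if' guard inside a for comprehension...
    val evens1 = for (n <- nums if n % 2 == 0) yield n

    // ...is sugar for a filter (strictly, withFilter) call:
    val evens2 = nums.filter(_ % 2 == 0)

    // Both evaluate to List(2, 4).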
Hi All,
I am new to Spark and am running the pi example on a YARN cluster. I am
getting the following exception:
Exception in thread "main" java.lang.NullPointerException
    at scala.collection.mutable.ArrayOps$ofRef$.length$extension(ArrayOps.scala:114)
    at scala.collection.mutable.ArrayOps$ofRef.leng
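(The NPE in ArrayOps is consistent with Sandy's diagnosis above: with yarn.application.classpath unset, the YARN client can read back a null string array.) For anyone comparing setups, a Spark 0.9-era launch of the pi example on YARN looked roughly like the following (jar paths and memory sizes are placeholders; check the running-on-YARN docs for your exact version):

    SPARK_JAR=<path/to/spark-assembly-*.jar> ./bin/spark-class org.apache.spark.deploy.yarn.Client \
      --jar <path/to/spark-examples-assembly-*.jar> \
      --class org.apache.spark.examples.SparkPi \
      --args yarn-standalone \
      --num-workers 2 \
      --worker-memory 1g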