Sorry, what’s the full context for this? Do you have a stack trace? My guess is
that Spark isn’t on your classpath, or maybe you only have an old version of it
on there.
Matei
On Nov 27, 2013, at 6:04 PM, Walrus theCat wrote:
To clarify, I just undid that "var... field.." thing described above, and
it throws the same error.
On Wed, Nov 27, 2013 at 5:53 PM, Walrus theCat wrote:
Hi all,
This exception gets thrown when I assign a value to the variable holding my
SparkContext. I initialize it as a var holding a null value (so it can be
a field), and then give it a value in my main method. This worked with the
previous version of Spark, but is not working on Spark 0.8.0.
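For reference, a minimal sketch of the pattern described above. A stand-in class is used in place of SparkContext so the snippet is self-contained; the names (`Context`, `MyJob`) are assumptions, and the real code would construct an `org.apache.spark.SparkContext` with a master URL and application name.

```scala
// Stand-in for org.apache.spark.SparkContext, just to show the shape
// of the pattern; the real field would have type SparkContext.
class Context(val name: String)

object MyJob {
  // Declared as a null-initialized var so it can live in a field
  // at object scope rather than being created inline...
  var ctx: Context = null

  def main(args: Array[String]): Unit = {
    // ...and only assigned here in main. With a real SparkContext,
    // this assignment is reportedly where the exception appears on 0.8.0.
    ctx = new Context("my-job")
    println(ctx.name)
  }
}
```

As an aside, a `lazy val` is often a more idiomatic way to get field-level scope without a null initializer, since it defers construction until first use.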
I get a Java index-out-of-bounds exception when I run part 2 of the
documentation here:
http://ampcamp.berkeley.edu/3/exercises/mli-document-categorization.html
Specifically, the first line throws an error. I have tried splitting the
methods and running them separately, but it still gives me the same error.
The job gets stuck, meaning it halts for some time and doesn't do any
processing; CPU usage drops to 0%. After some time the processing resumes and
CPU usage goes back up. This cycle continues as the job progresses until it
completes.
But today, while I am running some other Spark jobs, it isn't happening. The
job is
Vijay - you said the job gets stuck but you also said it eventually
completes. What do you mean by stuck? Do you mean that there are
periods of low CPU utilization?
If you can run jstack during one of the periods and post the output
that would be most helpful.
On Wed, Nov 27, 2013 at 1:04 AM, Vijay Gaikwad wrote:
The server has 100+ GB of memory. Virtual memory for my job is 60 GB and
reserved is 20-30 GB, so there is plenty of memory to spare even when the job
is stuck. I am not sure if it is GC, because there is still a lot of memory
the job could have used. The job's memory consumption remains the same after
it resumes.
How about memory usage, any GC problems? When you say it gets stuck, do you
mean 0% or 1200% CPU while making no progress?
Raymond
From: Vijay Gaikwad [mailto:vijay...@gmail.com]
Sent: Wednesday, November 27, 2013 2:54 PM
To: user@spark.incubator.apache.org
Subject: Re: local[k] job gets stuck - spark 0.8