Hi Tim,
Thanks for the reply!
Yes, I have checked the system config, and my admin is happy with me running
with a 2 GB heap.
Thanks for all the help!
Warm regards
Arko
On Sat, Sep 7, 2013 at 1:25 PM, Tim Robertson wrote:
That's right.
You can verify it when you run your job by looking at the "job file" link
at the top. That shows you all the params used to start the job.
Just be careful not to put your cluster into an unstable state when you do
that. E.g. look at how many mappers / reducers can run concurrently on a
node, since each task JVM can claim that much heap.
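Tim's caution can be made concrete with a quick back-of-the-envelope check. This is only a sketch; the slot counts below are hypothetical examples, not values from this thread:

```java
// Rough worst-case estimate: every task slot on a node could claim the
// full child-JVM heap at the same time.
public class HeapBudget {
    static long worstCaseHeapMb(int mapSlots, int reduceSlots, long childHeapMb) {
        return (long) (mapSlots + reduceSlots) * childHeapMb;
    }

    public static void main(String[] args) {
        // Hypothetical node: 8 map slots, 4 reduce slots, 2000 MB per child JVM.
        long needed = worstCaseHeapMb(8, 4, 2000);
        System.out.println("Worst-case task heap on one node: " + needed + " MB");
        // Make sure the node's physical RAM (minus what the DataNode /
        // TaskTracker daemons and the OS need) can actually cover this.
    }
}
```

If the worst case exceeds the node's RAM, tasks will start swapping or being killed, which is the instability Tim is warning about.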
Hi Harsh,
Thanks for your reply!
I have implemented the Tool interface to incorporate your suggestion.
So when I run my job, can I just pass -Dmapred.child.java.opts=-Xmx2000m?
Thanks & regards
Arko
On Sat, Sep 7, 2013 at 4:54 AM, Harsh J wrote:
You can pass that config set as part of your job (jobConf.set(…) or
job.getConfiguration().set(…)). Alternatively, if you implement Tool,
and use the Configuration it receives, you can also pass it via a
-Dname=value argument when running the job (the option has to precede
any custom options).
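The ordering rule Harsh mentions (generic -D options must precede any custom options) can be illustrated with a small stand-alone sketch. This loosely mimics what Hadoop's GenericOptionsParser does for a Tool; it is not the real parser:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy parser: collect leading -Dname=value pairs into a config map and
// treat everything from the first non -D argument onward as the job's
// own arguments.
public class GenericOptsSketch {
    final Map<String, String> conf = new LinkedHashMap<>();
    final List<String> remaining = new ArrayList<>();

    GenericOptsSketch(String[] args) {
        int i = 0;
        while (i < args.length && args[i].startsWith("-D")) {
            String[] kv = args[i].substring(2).split("=", 2);
            conf.put(kv[0], kv.length > 1 ? kv[1] : "");
            i++;
        }
        // Anything from here on is a custom option; a -D placed after it
        // would be ignored, which is why the -D flags must come first.
        for (; i < args.length; i++) remaining.add(args[i]);
    }

    public static void main(String[] args) {
        GenericOptsSketch opts = new GenericOptsSketch(new String[] {
            "-Dmapred.child.java.opts=-Xmx2000m", "inputDir", "outputDir"
        });
        System.out.println(opts.conf);      // {mapred.child.java.opts=-Xmx2000m}
        System.out.println(opts.remaining); // [inputDir, outputDir]
    }
}
```

In a real job you would not write this yourself: running the job through ToolRunner gives you this behavior, and the values land in the Configuration your Tool receives.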
Hello All,
I am running my job on a Hadoop cluster and it fails due to insufficient
Java heap space.
I searched on Google and found that I need to add the following to the
conf files:
mapred.child.java.opts = -Xmx2000m
However, I don't want to request the administrator to change the cluster-wide
conf files just for my job. Is there a way to set this per job instead?
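For reference, the cluster-wide form of that setting would be a property block like the one below, assuming the classic mapred-site.xml layout (this is exactly the file the question is trying to avoid editing):

```xml
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx2000m</value>
</property>
```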