Vinod,
Thanks for your reply.
1. If I understand you correctly, you are asking me to change the memory
allocation for each map and reduce task. Isn't this related to the physical
memory, which is not an issue (within limits) in my application? The problem
I am facing is with the virtual memory.
2.
> *From:* S.L [mailto:simpleliving...@gmail.com]
> *Sent:* Wednesday, January 01, 2014 9:51 PM
> *To:* user@hadoop.apache.org
> *Subject:* Unable to change the virtual memory to be more than the
> default 2.1 GB
>
> Hello Folks,
>
> I am running hadoop 2.2 in a pseudo-distributed mode on a laptop with 8GB
> RAM.
You need to change the application configuration itself to tell YARN that each
task needs more than the default. I see that this is a MapReduce app, so you
have to change the per-application configuration: mapreduce.map.memory.mb and
mapreduce.reduce.memory.mb, in either mapred-site.xml or via the command line
when you submit the job.
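For what it's worth, the 2.1 GB figure in the error is derived rather than
configured directly: with the stock Hadoop 2.2 defaults it is
mapreduce.map.memory.mb (1024 MB) multiplied by
yarn.nodemanager.vmem-pmem-ratio (2.1), i.e. 1024 MB x 2.1 = 2150.4 MB, or
about 2.1 GB, so raising the per-task physical allocation raises the virtual
limit proportionally. A minimal sketch of the mapred-site.xml change (the
4096 MB value is only an illustration, pick whatever your job actually needs):

    <!-- mapred-site.xml: per-task physical allocation; the virtual
         limit becomes this value x yarn.nodemanager.vmem-pmem-ratio -->
    <property>
      <name>mapreduce.map.memory.mb</name>
      <value>4096</value>
    </property>
    <property>
      <name>mapreduce.reduce.memory.mb</name>
      <value>4096</value>
    </property>

The same properties can also be passed per job on the command line, assuming
your driver goes through ToolRunner/GenericOptionsParser so that -D options
are picked up (myjob.jar and MyDriver below are placeholders):

    hadoop jar myjob.jar MyDriver \
        -D mapreduce.map.memory.mb=4096 \
        -D mapreduce.reduce.memory.mb=4096 \
        <input> <output>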
To: user@hadoop.apache.org
Subject: Unable to change the virtual memory to be more than the default 2.1 GB
Hello Folks,
I am running hadoop 2.2 in a pseudo-distributed mode on a laptop with 8GB
RAM.
Whenever I submit a job I get an error saying that the virtual memory usage
was exceeded, like the one below.
I have changed the ratio yarn.nodemanager.vmem-pmem-ratio in yarn-site.xml
to 10; however, the limit still shows up as the default 2.1 GB.
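For reference, a sketch of the ratio change described above, assuming it goes
in yarn-site.xml on the NodeManager host:

    <!-- yarn-site.xml: allow each container this many times its
         physical allocation in virtual memory (default is 2.1) -->
    <property>
      <name>yarn.nodemanager.vmem-pmem-ratio</name>
      <value>10</value>
    </property>

Note that yarn.nodemanager.* settings are read by the NodeManager at startup,
so the NodeManager has to be restarted before the new ratio takes effect,
which is a common reason a changed ratio still reports the old 2.1 GB limit.
An alternative often used on development machines is to disable the check
altogether by setting yarn.nodemanager.vmem-check-enabled to false.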