Yes, I think the JVM uses way more memory than just its heap. Some of it might
be memory that is merely reserved but not actually used (I'm not sure how to
tell the difference from the outside). There are also things like thread
stacks, the JIT code cache, direct NIO byte buffers etc. that take up process
space outside of the Java heap. But none of that should, imho, add up to
gigabytes...
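
One way to actually see this split from inside the JVM is the
java.lang.management API. The sketch below is only an illustration (the
MemoryBreakdown class name is mine, not anything from Hadoop): it prints heap
vs. non-heap usage plus the per-pool breakdown, which covers perm gen and the
JIT code cache but still not thread stacks or direct NIO buffers, so top's
RES/VIRT will always be somewhat higher than what it reports.

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class MemoryBreakdown {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        MemoryUsage nonHeap = mem.getNonHeapMemoryUsage();

        // Heap is bounded by -Xmx; non-heap is everything the JVM itself
        // tracks outside the heap (perm gen, code cache, ...).
        System.out.printf("heap     used=%dM committed=%dM max=%dM%n",
                heap.getUsed() >> 20, heap.getCommitted() >> 20,
                heap.getMax() >> 20);
        System.out.printf("non-heap used=%dM committed=%dM%n",
                nonHeap.getUsed() >> 20, nonHeap.getCommitted() >> 20);

        // Per-pool breakdown of the numbers above.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage u = pool.getUsage();
            System.out.printf("  %-20s used=%dM committed=%dM%n",
                    pool.getName(), u.getUsed() >> 20, u.getCommitted() >> 20);
        }

        // Thread stacks live outside both of the figures above.
        System.out.println("live threads: "
                + ManagementFactory.getThreadMXBean().getThreadCount()
                + " (each one carries its own stack outside the heap)");
    }
}

Running something like this inside a TaskTracker or a child task JVM should
show the committed heap staying under -Xmx even though the process as a whole
looks much larger in top.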

-- Stefan 


> From: zsongbo <zson...@gmail.com>
> Reply-To: <core-user@hadoop.apache.org>
> Date: Tue, 12 May 2009 20:06:37 +0800
> To: <core-user@hadoop.apache.org>
> Subject: Re: How to do load control of MapReduce
> 
> Yes, I also found that the TaskTracker should not need so much memory.
>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
> 32480 schubert  35  10 1411m 172m 9212 S    0  2.2   8:54.78 java
> 
> The previous 1GB was the default value; I changed the TT heap to 384MB an
> hour ago.
> 
> I also found that the DataNode does not need much memory either.
>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
> 32399 schubert  25   0 1638m 372m 9208 S    2  4.7  32:46.28 java
> 
> 
> In fact, I set -Xmx512m in the child opts for the MapReduce tasks, but I
> found the child tasks use more memory than that limit:
>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
> 10577 schubert  30  10  942m 572m 9092 S   46  7.2  51:02.21 java
> 10507 schubert  29  10  878m 570m 9092 S   48  7.1  50:49.52 java
> 
> Schubert
> 
> On Tue, May 12, 2009 at 6:53 PM, Steve Loughran <ste...@apache.org> wrote:
> 
>> zsongbo wrote:
>> 
>>> Hi Stefan,
>>> Yes, 'nice' cannot solve this problem.
>>> 
>>> Now, in my cluster there is 8GB of RAM. My Java heap configuration is:
>>> 
>>> HDFS DataNode : 1GB
>>> HBase-RegionServer: 1.5GB
>>> MR-TaskTracker: 1GB
>>> MR-child: 512MB (max 6 child tasks: 4 map tasks + 2 reduce tasks)
>>> 
>>> But the memory usage is still tight.
>>> 
>> 
>> Does the TT need to be so big if you are running all your work in external VMs?
>> 

