Raghu,

I don't actually have exact numbers from jmap, although I do remember that
jmap -histo reported something less than 256MB for this process (before I
restarted it).

I just looked at another DFS process that is currently running with a VM
size of 1.5GB (~600MB resident). Here jmap reports a total object heap
usage of 120MB. The memory block list reported by jmap <pid> doesn't
actually seem to contain the heap at all, since the largest block in that
list is only 10MB (/usr/java/jdk1.6.0_10/jre/lib/amd64/server/libjvm.so).
However, pmap reports a total usage of 1.56GB.
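
For comparison, here's a quick standalone sketch (using the standard
java.lang.management API, nothing DataNode-specific) that prints the JVM's
own heap/non-heap accounting. The "committed" numbers should line up
roughly with what jmap reports; whatever pmap shows beyond that is native
memory (thread stacks, mapped .so files, direct buffers, etc.):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapVsProcessSize {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        MemoryUsage nonHeap = mem.getNonHeapMemoryUsage();

        // "committed" is what the JVM has actually claimed from the OS for
        // these pools; pmap's total additionally covers thread stacks,
        // mapped libraries and native/direct allocations.
        System.out.println("heap:     used=" + mb(heap.getUsed())
                + "M committed=" + mb(heap.getCommitted())
                + "M max=" + mb(heap.getMax()) + "M");
        System.out.println("non-heap: used=" + mb(nonHeap.getUsed())
                + "M committed=" + mb(nonHeap.getCommitted()) + "M");
    }

    private static long mb(long bytes) {
        return bytes >> 20;
    }
}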

-- Stefan



> From: Raghu Angadi <rang...@yahoo-inc.com>
> Reply-To: <core-user@hadoop.apache.org>
> Date: Sun, 10 May 2009 20:06:13 -0700
> To: <core-user@hadoop.apache.org>
> Subject: Re: Huge DataNode Virtual Memory Usage
> 
> What do 'jmap' and 'jmap -histo:live' show?
> 
> Raghu.
> 
> Stefan Will wrote:
>> Chris,
>> 
>> Thanks for the tip ... However, I'm already running 1.6.0_10:
>> 
>> java version "1.6.0_10"
>> Java(TM) SE Runtime Environment (build 1.6.0_10-b33)
>> Java HotSpot(TM) 64-Bit Server VM (build 11.0-b15, mixed mode)
>> 
>> Do you know of a specific bug # in the JDK bug database that addresses
>> this?
>> 
>> Cheers,
>> Stefan
>> 
>> 
>>> From: Chris Collins <ch...@scoutlabs.com>
>>> Reply-To: <core-user@hadoop.apache.org>
>>> Date: Fri, 8 May 2009 20:34:21 -0700
>>> To: "core-user@hadoop.apache.org" <core-user@hadoop.apache.org>
>>> Subject: Re: Huge DataNode Virtual Memory Usage
>>> 
>>> Stefan, there was a nasty memory leak in 1.6.x before 1.6.0_10. It
>>> manifested itself during major GC. We saw it on Linux and Solaris,
>>> and it improved dramatically with an upgrade.
>>> 
>>> C
>>> On May 8, 2009, at 6:12 PM, Stefan Will wrote:
>>> 
>>>> Hi,
>>>> 
>>>> I just ran into something rather scary: one of my datanode processes
>>>> that I'm running with -Xmx256M and a maximum number of Xceiver
>>>> threads of 4095 had a virtual memory size of over 7GB (!). I know
>>>> that the VM size on Linux isn't necessarily equal to the actual
>>>> memory used, but I wouldn't expect it to be an order of magnitude
>>>> higher either. I ran pmap on the process, and it showed around 1000
>>>> thread stack blocks of roughly 1MB each (which is the default stack
>>>> size on the 64-bit JDK). The largest block was 3GB in size, and I
>>>> can't figure out what it is for.
>>>> 
>>>> Does anyone have any insights into this? Is there anything that can
>>>> be done to prevent it other than restarting the DFS regularly?
>>>> 
>>>> -- Stefan
>> 
>> 
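
PS: regarding the ~1000 x 1MB stack blocks from my original message above:
each Java thread reserves its full stack in virtual address space when it
is created, so the thread count alone accounts for roughly 1GB of the VM
size. Just to illustrate the effect, here's a throwaway sketch (nothing to
do with the actual DataNode code; thread count and stack size are made-up
numbers matching what pmap showed):

public class StackReservationDemo {
    public static void main(String[] args) throws InterruptedException {
        final int threadCount = 1000;       // roughly the number of blocks pmap showed
        final long stackBytes = 1L << 20;   // 1MB, the 64-bit default (-Xss1m)

        Runnable sleeper = new Runnable() {
            public void run() {
                try {
                    Thread.sleep(60 * 1000L); // keep the stack mapped while we look at pmap
                } catch (InterruptedException ignored) {
                }
            }
        };

        for (int i = 0; i < threadCount; i++) {
            // The fourth constructor argument is the requested stack size;
            // the VM reserves at least that much address space per thread.
            Thread t = new Thread(null, sleeper, "stack-demo-" + i, stackBytes);
            t.setDaemon(true);
            t.start();
        }

        System.out.println("Stacks alone reserve ~"
                + (threadCount * stackBytes) / (1 << 20) + "MB of address space");
        // Compare `pmap <pid>` (shows the 1MB anonymous blocks) with
        // `jmap -histo <pid>` (shows only Java heap objects) while this sleeps.
        Thread.sleep(30 * 1000L);
    }
}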

