Chris,

Thanks for the tip ... However, I'm already running 1.6.0_10:

java version "1.6.0_10"
Java(TM) SE Runtime Environment (build 1.6.0_10-b33)
Java HotSpot(TM) 64-Bit Server VM (build 11.0-b15, mixed mode)

Do you know of a specific bug # in the JDK bug database that addresses this?

Cheers,
Stefan


> From: Chris Collins <ch...@scoutlabs.com>
> Reply-To: <core-user@hadoop.apache.org>
> Date: Fri, 8 May 2009 20:34:21 -0700
> To: "core-user@hadoop.apache.org" <core-user@hadoop.apache.org>
> Subject: Re: Huge DataNode Virtual Memory Usage
> 
> Stefan, there was a nasty memory leak in 1.6.x before 1.6.0_10. It
> manifested itself during major GC. We saw this on Linux and Solaris,
> and it improved dramatically after an upgrade.
> 
> C
> On May 8, 2009, at 6:12 PM, Stefan Will wrote:
> 
>> Hi,
>> 
>> I just ran into something rather scary: one of my datanode processes,
>> which I'm running with -Xmx256M and a maximum of 4095 Xceiver threads,
>> had a virtual memory size of over 7GB (!). I know that the VM size on
>> Linux isn't necessarily equal to the actual memory used, but I wouldn't
>> expect it to be an order of magnitude higher either. I ran pmap on the
>> process, and it showed around 1000 thread stack blocks of roughly 1MB
>> each (which is the default stack size on the 64-bit JDK). The largest
>> block was 3GB in size, and I can't figure out what it is for.
>> 
>> Does anyone have any insights into this? Is there anything that can be
>> done to prevent it, other than restarting the DFS regularly?
>> 
>> -- Stefan
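
For anyone trying to reproduce the pmap observation: the per-thread stack
reservation is what makes the virtual memory number balloon. Below is a
minimal, self-contained sketch of the effect; the thread count and stack
size are illustrative only, not the datanode's actual configuration, and
the class name is made up. It compiles against a plain JDK 1.6:

    // Sketch: per-thread stack reservations show up as virtual memory.
    // Thread count and stack size are illustrative, not the datanode's
    // actual settings.
    public class StackReservationDemo {
        public static void main(String[] args) throws InterruptedException {
            final int threads = 1000;            // ~ the count seen in pmap
            final long stackBytes = 256 * 1024;  // vs. the ~1MB 64-bit default

            Runnable idle = new Runnable() {
                public void run() {
                    try {
                        Thread.sleep(Long.MAX_VALUE);
                    } catch (InterruptedException e) {
                        // exit quietly
                    }
                }
            };

            for (int i = 0; i < threads; i++) {
                // The fourth Thread constructor argument is a stack-size
                // hint; the VM may round it up or ignore it (see the
                // Thread javadoc).
                Thread t = new Thread(null, idle, "idle-" + i, stackBytes);
                t.setDaemon(true);
                t.start();
            }

            // While this sleeps, compare `pmap <pid>`: each thread's stack
            // block should be ~256KB instead of ~1MB.
            Thread.sleep(60000L);
        }
    }

The process-wide equivalent is the -Xss flag (e.g. -Xss256k), which would
cap each of the 4095 Xceiver threads at that reservation instead of the
1MB default.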

