On Tuesday 14 October 2008 1:55:11 pm Bharath Ganesh wrote:
> Dan,
>
> Looks like even Anoop says the heap size is constant. I guess he is talking
> about an increase in the OS process memory. Is that right?

Possibly, but it could also be PermGen space or something similar.  I checked
that as well (the JDK 6 jconsole can display it), and it remained steady too.
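For reference, the same heap and PermGen figures that jconsole displays can also be read programmatically through the standard java.lang.management API. A minimal sketch (the class name MemCheck is just for illustration):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryPoolMXBean;

public class MemCheck {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        // Heap usage: what jconsole's heap graph reports
        System.out.println("heap used     = " + mem.getHeapMemoryUsage().getUsed());
        // Non-heap covers PermGen (on pre-Java-8 VMs) plus the code cache
        System.out.println("non-heap used = " + mem.getNonHeapMemoryUsage().getUsed());
        // Per-pool breakdown; on a JDK 5/6 VM one of these pools is the
        // permanent generation
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getUsage() != null) {
                System.out.println(pool.getName() + " used = "
                        + pool.getUsage().getUsed());
            }
        }
    }
}
```

Logging these numbers before and after a load run is a quick way to confirm that neither the heap nor PermGen is the pool that is growing.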

If the growth is in native process memory, that's probably a bug in either
Jetty (its NIO handling) or in the JDK itself.  Nothing we can do about either
of those.
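A native leak of this kind (direct NIO buffers) would never appear in jconsole's heap graphs, only in the process RSS. On later JDKs (7 and up, so not the JDK 5/6 VMs discussed in this thread) the direct-buffer pools can be inspected directly; a hedged sketch:

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

public class DirectBufferCheck {
    public static void main(String[] args) {
        // Direct and memory-mapped NIO buffers are allocated outside the Java
        // heap, so a leak here grows the RSS/pmap totals while the heap graph
        // stays flat.  Note: BufferPoolMXBean requires Java 7 or later.
        for (BufferPoolMXBean pool :
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.println(pool.getName()
                    + ": count=" + pool.getCount()
                    + ", memoryUsed=" + pool.getMemoryUsed() + " bytes");
        }
    }
}
```

On the JDK 5/6 VMs in question, the equivalent check is diffing successive pmap outputs to see which anonymous mappings grow between load runs.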

Dan



>
> -Bharath
>
> On Tue, Oct 14, 2008 at 10:48 PM, Daniel Kulp <[EMAIL PROTECTED]> wrote:
> > I'm honestly not seeing this on my Linux box with the latest code.   I hit
> > it with about 500K requests to warm up the JIT and such, checked the heap
> > sizes using jconsole (Java 6 VM), then hit it with another 1.5 million
> > requests and rechecked the heap sizes; they ended up exactly the same.
> >
> > My suggestion is to try again with a more recent version of CXF at the
> > very least.    Also try the latest JDKs.
> >
> > Dan
> >
> > On Monday 13 October 2008 7:14:37 am Anoop Prasad wrote:
> > > Dear Bharath,
> > >
> > > 1. I have tested on three OS Env
> > >
> > >              >Win XP
> > >              >Solaris 10
> > >              >SuSE Linux (on ATCA)
> > >
> > > 2. Yes, on Windows the behavior is not alarming; it's stable. But
> > > Solaris and Linux show a memory increase, as I explained in the
> > > previous post.
> > >
> > > 3. I used JConsole and Rational Purify to monitor the heap, non-heap,
> > > and permanent generation memory.
> > >    The "prstat" utility was used on Solaris to observe the memory
> > > claimed by the process, along with "pmap".
> > >
> > > 4. "The memory of the server continuously increases from ~90M to ~350M
> > > and never comes down. How did you check this?"
> > > At startup, prstat gives a value of around 80M resident memory and 164M
> > > total memory. After the load testing (around 800,000 continuous
> > > requests), the value goes very high: 325M RSS and 388M total memory.
> > > At the same time, the heap shows a negligible increase after GC, as
> > > shown in the table.
> > >
> > > One word about the table:
> > > Table has the following columns
> > >
> > >      > Total  Mem as shown in prstat
> > >      > RSS  Mem as shown in prstat
> > >
> > >             eg:
> > >   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
> > >  3769 root      207M  124M sleep   59    0   0:02:09 4.4% java/40
> > >     and..
> > >
> > >      >Used
> > >      >Committed
> > >      >Maximum allowed
> > >
> > > memory, as reported by the JConsole of JDK 5.
> > > The RSS and total memory used by the process never come down from the
> > > high value, even after GC (but is this just how processes behave on
> > > Solaris?)
> >
> > > 5. This memory increase happens only with the HTTP transport; with JMS
> > > it is fine, as Hubert observed.
> > >
> > > 6. I can send the pmap dump and other details, if required.
> > >
> > > anoopPrasad
> > >
> > > Two roads diverged in a wood, and I -- I took the one less traveled by,
> > > and that has made all the difference!
> > >
> > > HUAWEI TECHNOLOGIES CO., LTD.
> > >
> > >
> > > Address: Huawei Industrial Base
> > > Bantian Longgang
> > > Shenzhen 518129, P.R.China
> > > www.huawei.com
> >
> >
> > --
> > Daniel Kulp
> > [EMAIL PROTECTED]
> > http://dankulp.com/blog



-- 
Daniel Kulp
[EMAIL PROTECTED]
http://dankulp.com/blog
