* mark harwood:
Thanks, I have a heap dump now from a run with reduced JVM memory
(in order to reach the failure point faster) and am working through it
offline with VisualVM.
This test induced a proper OOM, as opposed to one of those "timed out
waiting for GC"-type OOMs, so it may be misleading.
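For collecting a dump on a locked-down box like Mark's, two era-appropriate HotSpot options are sketched below. `Indexer`, the heap size, and the paths are placeholders, not names from this thread; `jmap` ships with the JDK, not the bare JRE Mark has, which is exactly the limitation he describes.

```shell
# Ask the JVM to write an .hprof automatically at the moment of the OOM
# (needs no extra tooling, works on a plain HotSpot JRE):
java -Xmx1g -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp Indexer

# Or snapshot a live JVM by pid with jmap (requires a JDK install):
jmap -dump:format=b,file=/tmp/indexer.hprof <pid>
```

Either `.hprof` file can then be copied off and opened in VisualVM or YourKit on a different machine.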
* mark harwood:
> Could you get a heap dump (eg with YourKit) of what's using up all the
> memory when you hit OOM?
On this particular machine I have a JRE, no admin rights and
therefore limited profiling capability :(
Maybe this could give you a heap dump which you can analyze on a
different machine.
Michael McCandless wrote:
Ie, it's still not clear if you are running out of memory vs hitting
some weird "it's too hard for GC to deal with" kind of massive heap
fragmentation situation or something. It reminds me of the special
"I cannot be played on record player X" record (your application).
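One way to tell the two situations apart, sketched below with era-appropriate HotSpot logging flags (`Indexer` and the heap size are placeholders):

```shell
# Log every collection with timings so you can see whether the heap is
# genuinely filling up or the collector is merely thrashing:
java -Xmx1g -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps Indexer
# A real leak shows post-GC occupancy climbing steadily toward -Xmx;
# thrash/fragmentation shows long, frequent GCs that do still reclaim space.
```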
mark harwood wrote:
I've been building a large index (hundreds of millions) with mainly
structured data which consists of several fields with mostly unique values.
I've been hitting out-of-memory issues when doing periodic commits/closes,
which I suspect is down to the sheer number of terms.
- Original Message
From: Michael McCandless luc...@mikemccandless.com
To: java-user@lucene.apache.org
Sent: Tuesday, 10 March, 2009 0:01:30
Subject: Re: A model for predicting indexing memory costs?
From: Ian Lea ian@gmail.com
To: java-user@lucene.apache.org
Sent: Tuesday, 10 March, 2009 10:54:05
Subject: Re: A model for predicting indexing memory costs?
That's not the usual OOM message, is it? java.lang.OutOfMemoryError: GC
overhead limit exceeded.
Looks like you might be able to work round it with -XX:-UseGCOverheadLimit
http://java-monitor.com/forum
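For context on Ian's suggestion, a minimal invocation is sketched below (`Indexer` and `-Xmx1g` are illustrative placeholders). The overhead-limit check trips when HotSpot spends the vast majority of its time in GC while recovering only a tiny fraction of the heap; disabling it trades that early failure for (possibly very) slow progress.

```shell
# Disable the "GC overhead limit exceeded" early-abort check:
java -Xmx1g -XX:-UseGCOverheadLimit Indexer
```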
-Original Message-
From: mark harwood [mailto:markharw...@yahoo.co.uk]
Sent: Tuesday, March 10, 2009 12:07 PM
To: java-user@lucene.apache.org
Subject: Re: A model for predicting indexing memory costs?
Thanks, Ian.
I forgot to mention I tried that setting
From: Uwe Schindler u...@thetaphi.de
To: java-user@lucene.apache.org
Sent: Tuesday, 10 March, 2009 11:32:48
Subject: RE: A model for predicting indexing memory costs?
It does not hang indefinitely; I think the problem is that the GC takes up
all processor resources and nothing else runs any more. You should also
enable the parallel GC.
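Uwe's suggestion might look like the sketch below on a JVM of that era (`Indexer`, the heap size, and the thread count are placeholders; the thread count should roughly match available cores):

```shell
# Throughput (parallel) collector for young and old generations:
java -Xmx1g -XX:+UseParallelGC -XX:+UseParallelOldGC -XX:ParallelGCThreads=4 Indexer
```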
Sent: Tuesday, 10 March, 2009 12:53:19
Subject: RE: A model for predicting indexing memory costs?
> It does not indefinitely hang,
I guess I just need to be more patient.
Thanks for the GC settings. I don't currently have the luxury of 15
other processors, but this will definitely be of use in other
environments.
... out of settings to tweak here.
Cheers,
Mark
mark harwood wrote:
> Could you get a heap dump (eg with YourKit) of what's using up all
> the memory when you hit OOM?
On this particular machine I have a JRE, no admin rights and
therefore limited profiling capability :(
That's why I was trying to come up with some formula for estimating
memory costs.
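A back-of-envelope model along the lines Mark describes might look like the sketch below. The per-term byte cost is a made-up illustrative constant, not a measured Lucene figure; the real cost depends on term length, number of fields, and the Lucene version in use.

```java
// Rough estimator for in-RAM indexing cost dominated by unique terms.
// BYTES_PER_TERM is an illustrative assumption, not a measured constant.
public class MemoryModel {
    static final long BYTES_PER_TERM = 60; // hypothetical average per unique term

    /** Naive total: docs * uniqueTermsPerDoc * bytesPerTerm, in bytes. */
    public static long estimateBytes(long docs, int uniqueTermsPerDoc, long bytesPerTerm) {
        return docs * uniqueTermsPerDoc * bytesPerTerm;
    }

    public static void main(String[] args) {
        long docs = 200000000L; // "hundreds of millions", as in Mark's index
        int fields = 5;         // several mostly-unique fields per doc
        long bytes = estimateBytes(docs, fields, BYTES_PER_TERM);
        System.out.println(bytes / (1024L * 1024L) + " MB if nothing were ever flushed");
    }
}
```

With these made-up numbers the naive total runs to tens of gigabytes, which is why what matters in practice is not the grand total but how much accumulates between flushes.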
On Mar 10, 2009, at 7:55 AM, mark harwood wrote:
> It does not indefinitely hang,
> I guess I just need to be more patient.
> Thanks for the GC settings. I don't currently have the luxury of 15
> other processors but this will definitely be of use in other
> environments.