Ross Gardler wrote:
> Ferdinand Soethe wrote:
> >Ross Gardler wrote:
> >
> >>Hmmm... we had better create an issue for this; it smells of a
> >>memory leak.

Done a while ago: FOR-572.

> >Don't get me wrong; I just took the first size that worked. That
> >doesn't mean it has to be this much.
> 
> Yeah, I understand that, but I can't think of any reason why we need
> to increase the maxmemory just because the site has got bigger; we
> are still only processing one file at a time.

Of course, it would be better to solve the problem
so that we can go back to a lower default. I just
added the fixme note.
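
For the record, the knob in question lives in forrest.properties.
A minimal sketch of the stopgap (the 512m figure is illustrative,
not the value actually committed):

  # forrest.properties (project root)
  # Value below is illustrative; use what your build actually needs.
  # FIXME: revert to a smaller value once FOR-572 is resolved.
  forrest.maxmemory=512m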

> You are the third person to hit this problem now, so we can be pretty 
> sure it is not down to an individual's installation.
> 
> Does anyone have any ideas why we suddenly need more memory?

See the issue, which links to the prior dev discussions
about this:

http://issues.apache.org/jira/browse/FOR-572
"run a memory profiler while forrest is operating"
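
For anyone picking this up, a minimal sketch of one way to do that
with the JDK's built-in hprof agent. It assumes the bin/forrest
wrapper passes extra JVM options through ANT_OPTS; check your copy
of the script before relying on it:

  # Assumes bin/forrest honours ANT_OPTS for extra JVM flags.
  # Dumps heap allocation sites after a full site build; the
  # report lands in java.hprof.txt in the working directory.
  ANT_OPTS="-Xrunhprof:heap=sites,depth=10" forrest site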

I noticed some trouble with PDF generation.
One workaround was to pre-generate PDFs for
documents that don't change much (e.g. the DTD docs)
and then serve them via map:read in the sitemap.
That workaround is in trunk already.
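
For anyone unfamiliar with that workaround, the sitemap rule is of
this general shape (the pattern and source path here are
illustrative; see the actual match in trunk):

  <map:match pattern="**.pdf">
    <!-- Path is illustrative. Serve the pre-generated PDF verbatim
         instead of rendering it through FOP on every request. -->
    <map:read src="content/xdocs/{1}.pdf" mime-type="application/pdf"/>
  </map:match>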

My theory is that we have always had a memory leak.
Now that we have three times the number of documents,
the leak becomes apparent.

Ron is also suggesting a linkmap issue in the other thread.

David
