Possible memory leak

2006-06-30 Thread Karthik

Jeremias,

Thanks for the input. I'll try the XML -> FO file -> PDF approach, but my
initial tests show that the FO file gets created without any issues, so I
suspect the FO -> PDF conversion is where the problem lies.
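The two-step split can be sketched as follows. This is a hedged sketch using only the JAXP API; class, method, and file names are placeholders, not the original batch code. Step 1 runs only the XSLT and writes the raw FO result to a file, so the XML -> FO and FO -> PDF stages can be timed and inspected separately.

```java
// Step 1 of the two-step approach (illustrative sketch): apply the stylesheet
// alone and save the FO output, so the FO -> PDF stage can be tested in isolation.
import java.io.File;
import javax.xml.transform.Source;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;

public class XmlToFo {

    /** Applies the stylesheet to the source XML and writes the raw FO result. */
    public static void writeFoFile(Source xml, Source xslt, File foFile)
            throws Exception {
        Transformer transformer =
                TransformerFactory.newInstance().newTransformer(xslt);
        transformer.transform(xml, new StreamResult(foFile));
    }
}
```

Step 2 could then reuse the existing transformXMLToPDF logic, feeding the FO file as the Source through an identity transformer (TransformerFactory.newTransformer() with no stylesheet) into FOP's SAXResult.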


Meanwhile, I was trying to dig a little into the performance issue, and I noticed
that the memory consumed by the java process spikes by roughly 400-500 MB and
does not seem to get released after the transformation is completed.
This is how I base my observation:

The java batch process runs on a 2-CPU server which has about 8GB RAM. The heap
size was set to 1 GB. Before running the process, the java virtual memory was
showing around 600MB, and right after the FOP process started, the memory went
all the way up to 1.3 GB and stayed there. The process concatenates every 500
XMLs into 1 PDF (the 500 XMLs themselves are available as a concatenated XML).
I tried initially with 2000 XMLs, which should have produced 4 PDFs. I enabled
garbage collection logging on WebSphere and kept track of it while the job ran.
What I saw was that the amount of memory GC was able to clear and make available
shrank slowly, from about 25% initially all the way down to 0%, and
eventually, in the middle of the last PDF creation, the process got terminated.
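As an aside for anyone reproducing this: the reclaim behaviour can also be watched from inside the JVM with the standard Runtime heap counters, instead of from the OS process size. A stdlib-only sketch (an illustrative addition, not part of the original batch job); logging usedHeap() before and after each generated PDF shows whether memory is actually coming back between runs.

```java
// Minimal heap probe: used heap = total allocated minus free, as this JVM sees it.
public class HeapProbe {

    /** Currently used heap in bytes. */
    public static long usedHeap() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        long before = usedHeap();
        byte[] ballast = new byte[8 * 1024 * 1024]; // stand-in for layout data
        long during = usedHeap();
        ballast = null;   // drop the only reference
        System.gc();      // a hint only; the JVM may ignore it
        System.out.printf("before=%d during=%d after=%d%n",
                before, during, usedHeap());
    }
}
```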

Each of these XMLs has tables and is about 2-3 pages in size. It also uses
barcode4j.

Below is the transformation code:

FopFactory fopFactory = getFopFactory();

private void transformXMLToPDF(FopFactory fopFactory, Source src, Source xslt,
        File pdf, File foFile)
        throws FOPException, TransformerException, IOException
{
    OutputStream out = new java.io.FileOutputStream(pdf);
    out = new java.io.BufferedOutputStream(out);

    try {
        Fop fop = getFop(fopFactory, out);
        TransformerFactory factory = TransformerFactory.newInstance();
        Transformer transformer = factory.newTransformer(xslt);
        transformer.setParameter("print-mode", "true");

        Result res = new SAXResult(fop.getDefaultHandler());

        transformer.transform(src, res);

        logger.info("Completed transformation!");
        fop = null;
    } finally {
        out.close();
    }
}

private FopFactory getFopFactory()
{
    // configure fopFactory
    return FopFactory.newInstance();
}

private synchronized Fop getFop(FopFactory fopFactory, OutputStream out)
        throws FOPException
{
    // configure foUserAgent
    FOUserAgent foUserAgent = fopFactory.newFOUserAgent();

    return fopFactory.newFop(MimeConstants.MIME_PDF, foUserAgent, out);
}

I'm on FOP 0.92beta.

Please provide any suggestions. I'll work on doing the process in 2 steps
and log my observations.

Thanks
karthik


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Possible memory leak

2006-06-30 Thread Karthik
Karthik writes:
Sorry, this post was supposed to be a follow-up of "Performance issue" I posted
earlier.

Thanks
karthik






Re: Possible memory leak

2006-06-30 Thread J.Pietschmann

Karthik wrote:

The java batch process runs on a 2 CPU server which has about 8GB RAM. The heap
size was set to 1 gig. Before running the process, the java virtual memory was
showing around 600MB, and right after the FOP process started, the memory went
all the way up to 1.3 gig and stayed there.


The memory allocated to the JVM by the OS is not a reliable indicator
of Java memory leaks; JVM implementations tend to keep memory
allocated (as does basically every program which is not meant
to be used in a one-shot mode).
A peak indicating that the whole allowed heap size has actually
been allocated is not unusual for FOP while processing large
input.

J.Pietschmann




Re: Possible memory leak

2006-07-03 Thread karthik
J.Pietschmann,
Thanks for your comments.
Like you said, I agree that once memory is allocated, the JVM does not release
it. But why would the process slow down gradually and eventually die? Isn't this
an indication of a memory leak? Please let me know if you have any suggestions
or comments to improve the performance of this process.

Thanks
karthik








Re: Possible memory leak

2006-07-03 Thread Andreas L Delmelle

On Jul 3, 2006, at 15:19, karthik wrote:

Hi,

Like you said, I agree that once memory is allocated the JVM does not
release it. But why would the process slow down gradually and eventually
die? Isn't this an indication of a memory leak?


Not necessarily. It just means that, given FOP's architecture,
instances of FObj subclasses will be created for each FO node in your
document. Those instances will, under normal circumstances, be
released at the end-of-layout for every page-sequence node. If your
document is not split up into multiple page-sequences, then this
means the same as end-of-layout for the whole document.
For every node in the page-sequence (more or less) a LayoutManager
instance is created, and these ultimately build a tree of Area
objects. If your document contains forward-references
(page-number-citations to the end of the document), currently all
those Areas will be released only after those references are fully
resolved.
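For illustration (a skeletal sketch, not taken from the thread; master names and content are placeholders): a document split into one fo:page-sequence per logical document gives FOP a chance to release layout objects at each sequence boundary, instead of holding everything until the end.

```xml
<fo:root xmlns:fo="http://www.w3.org/1999/XSL/Format">
  <fo:layout-master-set>
    <fo:simple-page-master master-name="page"
        page-height="29.7cm" page-width="21cm">
      <fo:region-body/>
    </fo:simple-page-master>
  </fo:layout-master-set>
  <!-- one page-sequence per source document, not one for the whole batch -->
  <fo:page-sequence master-reference="page">
    <fo:flow flow-name="xsl-region-body">
      <fo:block>document 1 content...</fo:block>
    </fo:flow>
  </fo:page-sequence>
  <fo:page-sequence master-reference="page">
    <fo:flow flow-name="xsl-region-body">
      <fo:block>document 2 content...</fo:block>
    </fo:flow>
  </fo:page-sequence>
</fo:root>
```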


The more the heap fills up, the less remaining space you have
to work with, until the maximum is reached. By that time, the main
thread will most likely be running extremely slowly.
If the GC is moderately intelligent, it will be able to check
relatively quickly whether there are *any* objects that are no longer
referenced, giving the impression of not running at all (dying out;
immediately returning control to the main thread). If no objects can
be released, then GC can do no more than wait for the main thread to
continue, and check back later --and since the main thread runs so
slowly, much, much later...



Please let me know if you have any suggestions or
comments to improve the performance of this process.


See above: memory leak? Not if a 'leak' means 'keeping *unnecessary*  
references alive'. A lot depends on the structure of your FO document...



Cheers,

Andreas





Re: Possible memory leak

2006-07-05 Thread karthik
Andreas,

Thanks for your comments. It looks like there are indeed FOP references hanging
around that GC could not clear up, which eventually kills the process. Below is
a higher-level layout of the XSL. Let me know if you see anything to improve.

The XSL below is applied to XMLs containing 500 documents each, in a loop
(the XML layout follows). 1 XML of 500 documents produces 1 PDF.








.
.
.



I create a page-sequence for each , and the page-citation is contained
within each of these page-sequences.

My expectation was that the FOP references would be cleared out after each PDF
is created (which isn't happening).
 

http://www.w3.org/1999/XSL/Format";>


























  







   













  








 


 

























 

..

Let me know if you see some red flags.

Thank you very much!
-Karthik







Re: Possible memory leak

2006-07-18 Thread Andreas L Delmelle

On Jul 5, 2006, at 16:21, karthik wrote:

Hi,

Sorry to get back so late.

Thanks for your comments. Looks like for sure there are FOP references
hanging around that GC could not clear up, which eventually kills the
process.


[Since you've been profiling the process anyway, I feel like taking  
advantage of this. Hope you don't mind... ;)]


Which types of references? What kind of objects? Any pattern  
emerging? Any specific type of object we should definitely take a  
closer look at?



Below is
a higher-level layout of the xsl. Let me know if you see anything  
to improve.


Not immediately. It seems OK, but then, since it is higher-level, we
still haven't seen what the page-sequences contain. How many pages on
average? Minimum? Maximum? Do they contain a lot of large and/or
nested tables?


Oh, I just noticed something that does not look completely right to me,
though.
Don't know what the rest thinks; could be that I need to brush up on my
XSLT :/


Here we go:



  
  







   ^
...

  




  ^
This somehow seems dangerous, because:
what exactly is the context-node (.) defined to be inside a named  
template? IOW: is it always and necessarily the context-node within  
the calling template/for-each?


If I were you, I'd replace that entire construct by:


...


Any reason in particular you're using for-each->call-template instead  
of apply-templates?
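Since the list archive stripped the actual markup from the snippets above, here is a hypothetical reconstruction of the two patterns being contrasted; all element and template names are invented placeholders.

```xml
<!-- Pattern under discussion (reconstruction): for-each + call-template,
     where the named template relies on the caller's context node -->
<xsl:for-each select="document">
  <xsl:call-template name="make-page-sequence"/>
</xsl:for-each>

<!-- Suggested alternative: apply-templates with a matching template,
     which makes the context node explicit -->
<xsl:apply-templates select="document"/>

<xsl:template match="document">
  <fo:page-sequence master-reference="page">
    <!-- ... content for one document ... -->
  </fo:page-sequence>
</xsl:template>
```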



Cheers,

Andreas





Re: Possible memory leak

2006-07-18 Thread Andreas L Delmelle

On Jul 18, 2006, at 22:23, Andreas L Delmelle wrote:



Oh, just noticed something that does not look completely right to  
me, though.
Don't know what the rest thinks; could be that I need to brush up  
my XSLT :/


OK, my bad. It is literally defined in the Rec: "Unlike
apply-templates, call-template does not change the current node or
current node list."


Still, I'm wondering:
Any reason in particular you're using for-each->call-template  
instead of apply-templates?



Thanks,

Andreas





Re: Possible memory leak

2006-07-27 Thread Karthik
Andreas,

Thanks again for your time. There's no specific reason for using call-template
instead of apply-templates.

Coming back to profiling, I tried taking a snapshot of the heap after a FOP
conversion using JProbe, and JProbe reports "CondLengthProperty" as the
loitering object. I'm not very familiar with analyzing this further. I have a
snapshot file and some screen prints from JProbe; not sure how to post them in
this forum.

But this is how the reference tree to the object looks:

Root -> AreaTreeHandler -> XMLWhiteSpaceHandler -> Block -> TableCell ->
CommonBorderPaddingBackground -> CondLengthProperty -> CondLengthProperty

Not sure if I am doing the right thing, but I'm willing to help or contribute
further on this.

Thanks again for the help,
karthik











Re: Possible memory leak

2006-07-27 Thread Andreas L Delmelle

On Jul 27, 2006, at 18:54, Karthik wrote:

Coming back to profiling, I tried taking a snapshot of the heap after
a FOP conversion using JProbe and JProbe reports "CondLengthProperty"
as the loitering object. Not very familiar with analyzing this
further. I have a snapshot file and some screen prints of JProbe, not
sure how to post it in this forum.


Well, best to avoid posting large attachments to the list, as you  
will force every subscriber to download them.


Maybe we could also move this whole discussion to Bugzilla, and add  
those files as attachments over there. Benefit is that, even when we  
don't immediately have the time to look further, there will be a  
constant reminder to look into this.



But this is how the reference tree to the object looks:

Root -> AreaTreeHandler -> XMLWhiteSpaceHandler -> Block -> TableCell ->
CommonBorderPaddingBackground -> CondLengthProperty -> CondLengthProperty


Hmm... So, are you sure *all* these CondLengthProperty instances have  
this same reference tree? Or did you check only one?


Attempt at interpretation:
In itself, *one* of these trees at a given point in the process is
certainly nothing to worry about. The WS-handler keeps a reference to
the current block. That block references its parent, a TableCell,
which references its own properties, and a length-conditional
property holds a reference to its corresponding property (e.g.
padding-before <-> padding-top, depending on whether 'before' maps to
'top' given the writing-mode and reference-orientation).
I wouldn't be surprised to see a lot of these trees occur in the
course of the process, but if I judge correctly, in a literal
snapshot, there should be only one. There is only one handler which
has one reference to the current block. That reference is never
explicitly cleared, but strictly speaking, it never needs to be since
it is re-used. Taking a snapshot right after FOP has finished would
reveal the last one, provided that the reference tree
Root->AreaTreeHandler->XMLWhiteSpaceHandler has not yet been
completely cleared/released.


At most, we could try nulling this out explicitly in  
PageSequence.endOfNode(), but I'm not sure if that is going to make  
much of a difference. It *may* result in those objects being GC'ed  
sooner, but that ultimately depends on the JVM. :/


Not sure if I am doing the right thing, but I'm willing to help or  
contribute

further on this.


Something that could also be of use to judge this completely is an
example FO file (note: NOT the XML+XSLT, but the result of the XSL
transform, because that ultimately is the input for FOP). Is it
possible for you to generate one with Xalan or Saxon? Use a source
XML with non-confidential dummy data. It doesn't need to be large,
just enough to give us an idea of what a typical document looks
like for you --so we only have to imagine it a thousand times larger.
(Or did you already send one to this list in the past?)



Thanks,

Andreas




Possible memory leak FOP 0.95

2008-09-23 Thread ACHA | Marco Wayop

Version info / Hardware: 

- FOP: 0.92b and 0.95 

- OS: Windows server 2003 5.2R / Linux Debian 4.0 

- Hardware: 4GB RAM with Xeon 5120 /  8GB RAM with Xeon 5130 

  

Hi, 

  

I have memory problems on several machines and they seem to be caused by FOP. 

50% of the total available memory is assigned to the JVM, using parameters like 
-Xms and -Xmx 

  

After generating a PDF file, about 30% of the allocated memory cannot be
released by the Garbage Collector.

I used JConsole to verify that during PDF creation - for example - about 300MB 
gets allocated and about 100MB just stays in the tenured memory generation. 

  

Several profilers like Jhat and MAT point to SAXParser holding 
org.apache.fop.fo.FOTreeBuilder references. 

(see **profiler fragment**) 

  

I use Spring to instantiate the webservice holding the fop and transformer
factories. For testing purposes I even instantiated the factories in the
service methods.

  

Ideas are greatly appreciated. 

  

Thanks, 

Marco Wayop 

The Netherlands, Amsterdam 

  

(**profiler fragment** Shortest paths to the accumulation point) 

class name / size on the heap 

org.apache.fop.fo.flow.table.TableBody / 88 MB 

parent org.apache.fop.fo.flow.table.TableRow / 115 MB 

parent org.apache.fop.fo.flow.table.TableCell / 115 MB 

parent org.apache.fop.fo.flow.Block / 115 MB 

fobj org.apache.fop.fo.RecursiveCharIterator / 115 MB 

firstWhiteSpaceInSeq org.apache.fop.fo.XMLWhiteSpaceHandler / 115 MB 

whiteSpaceHandler org.apache.fop.area.AreaTreeHandler / 115 MB 

foEventHandler org.apache.fop.fo.FOTreeBuilder / 115 MB 

m_saxHandler com.sun.org.apache.xml.internal.serializer.ToXMLSAXHandler / 115 
MB 

fLexicalHandler, fContentHandler org.apache.xerces.parsers.SAXParser / 115 MB 

value java.lang.ThreadLocal$ThreadLocalMap$Entry / 115 MB 

table java.lang.ThreadLocal$ThreadLocalMap / 115 MB 

threadLocals org.apache.tomcat.util.threads.ThreadWithAttributes / 115 MB 



Re: Possible memory leak FOP 0.95

2008-09-23 Thread Andreas Delmelle

On Sep 23, 2008, at 18:05, ACHA | Marco Wayop wrote:

Hi


Version info / Hardware:
- FOP: 0.92b and 0.95



Was the problem the same with 0.92 and 0.95? If not, is it possible
to repeat the test with FOP Trunk? Jeremias recently fixed a memory
leak in the PropertyCache (but that concerned littering of stale
WeakReferences). Maybe this is the one bugging you here... (OTOH,
that was not yet present in FOP 0.92?)




After generating a PDF file about 30% of the allocated memory can  
not be released by the Garbage Collector.
I used JConsole to verify that during PDF creation - for example -  
about 300MB gets allocated and about 100MB just stays in the  
tenured memory generation.




FOP does have some static mappings defined, which would explain why  
you can't release 100% of the allocated memory, but there's no way  
they could account for 100MB of heap, IIC.


Several profilers like Jhat and MAT point to SAXParser holding  
org.apache.fop.fo.FOTreeBuilder references.

(see **profiler fragment**)


Weird... Can you share a bit more about the tests? I mean, are the  
observations based on a single run? Is there an indication, for  
example, that the second or third time around, you end up with  
200-300MB of heap that cannot be released? (which would be indicative  
of a true memory-leak)



I use Spring to instantiate the webservice holding the fop- and  
transformer factory. For testing purpuses I even instantiated the  
factories in the service methods.


Not sure if I get the picture correctly (no experience with Spring
here), but just a cautionary note: be VERY mindful about using
'centralized' TransformerFactory instances. As long as you're running
a single thread, there should be no trouble to speak of, but once you
start running multiple concurrent sessions, then to make sure
you get no weirdness, like inexplicable NullPointerExceptions,
TransformerFactory instances should be pooled in some way. The same
does not hold for FopFactory, which aims to be thread-safe.
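One common way to "pool" factories as suggested here is a ThreadLocal, so each worker thread lazily builds its own TransformerFactory. A sketch assuming a Java 8+ runtime; the class and method names are illustrative, not from the thread.

```java
// Per-thread TransformerFactory instances: TransformerFactory is not
// guaranteed thread-safe, but one instance per thread sidesteps contention.
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerConfigurationException;
import javax.xml.transform.TransformerFactory;

public class PerThreadTransformers {

    // Each thread gets its own factory on first use.
    private static final ThreadLocal<TransformerFactory> FACTORY =
            ThreadLocal.withInitial(TransformerFactory::newInstance);

    /** Builds an identity transformer from the calling thread's own factory. */
    public static Transformer newIdentityTransformer()
            throws TransformerConfigurationException {
        return FACTORY.get().newTransformer();
    }
}
```

In a Tomcat-style pooled-thread environment, the trade-off is the one discussed later in this thread: ThreadLocals on pooled threads live as long as the thread does unless explicitly removed.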




(**profiler fragment** Shortest paths to the accumulation point)
class name / size on the heap
org.apache.fop.fo.flow.table.TableBody / 88 MB
parent org.apache.fop.fo.flow.table.TableRow / 115 MB
parent org.apache.fop.fo.flow.table.TableCell / 115 MB
parent org.apache.fop.fo.flow.Block / 115 MB
fobj org.apache.fop.fo.RecursiveCharIterator / 115 MB
firstWhiteSpaceInSeq org.apache.fop.fo.XMLWhiteSpaceHandler / 115 MB
whiteSpaceHandler org.apache.fop.area.AreaTreeHandler / 115 MB
foEventHandler org.apache.fop.fo.FOTreeBuilder / 115 MB




Just committed a /very/ small change to FOP Trunk which should at
least make this path impossible (see:
http://svn.apache.org/viewvc?rev=698322&view=rev)

I'd be interested to find out if this really 'fixes' the issue, or
only delays the inevitable... Everything from the
XMLWhiteSpaceHandler up (= the bulk of the memory) should be long
gone by the time of your snapshot. No idea why the remainder of the
path does not get cleared (maybe this happens only at periodic
intervals, by Tomcat).



HTH!

Andreas




RE: Possible memory leak FOP 0.95

2008-09-24 Thread ACHA | Marco Wayop
Hi Andreas,

Thanks for your reply.

>Was the problem the same with 0.92 and 0.95? 
With both versions

>Is there an indication, for example, that the second or third time around, you 
>end up with  
>200-300MB of heap that cannot be released? (which would be indicative of a 
>true memory-leak)
Yes, with every run - eventually - 100MB is added to the tenured generation and 
stays there.

I tested generating different PDF files. The concept remains the same. If 
during PDF creation 900MB is allocated then 300MB remains in the tenured gen.

>TransformerFactory instances should be pooled in some way.
Thanks for mentioning that, Andreas. Tomcat handles requests multithreaded, so I
will certainly look for a way to pool the transformer factory, by using
ThreadLocals for example, or just instantiate the transformer factory in the
webservice method.

>Just committed a /very/ small change to FOP Trunk which should at
>least make this path impossible (see:
>http://svn.apache.org/viewvc?rev=698322&view=rev)
Great! I will try to gather JAI, JCE, JUnit and XMLUnit and compile the
project. I guess there isn't a binary available.

>No idea why the remainder of the path does not get cleared (maybe this happens 
>only at periodic intervals, by Tomcat).
Tomcat shouldn't directly influence the natural behaviour of the garbage 
collector.

>I'd be interested to find out if this really 'fixes' the issue
After I have managed to compile the project and run some tests I will let you 
know for sure.

Thanks again.

Greetz,
Marco





Re: Possible memory leak FOP 0.95

2008-09-24 Thread Andreas Delmelle

On Sep 24, 2008, at 11:39, ACHA | Marco Wayop wrote:



Is there an indication, for example, that the second or third time  
around, you end up with
200-300MB of heap that cannot be released? (which would be  
indicative of a true memory-leak)
Yes, with every run - eventually - 100MB is added to the tenured  
generation and stays there.


That's strange, but may be explained somehow by the fact that the
SAXParser (which references the FOTreeBuilder etc.) remains
referenced in Tomcat's table of ThreadLocals (as can be seen from the
profiler output you posted earlier).
As noted, I don't really know what makes Tomcat clean up that table.
I'm certain that it is unrelated to standard garbage collection; as
long as that table holds a hard reference to the SAXParser, the GC
will obviously not magically release it... The best we can do on our
end is try to make sure FOP holds no dangling references. The change
I committed should hardly make a difference in normal circumstances
(if the context does not keep a hard reference to the SAXParser any
longer than necessary).
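The retention mechanism described here can be demonstrated with plain JDK classes (no Tomcat or Xerces involved); the "parser" below is just a stand-in Object, and the whole class is an illustrative toy, not FOP code.

```java
// Toy illustration: a value stored in a ThreadLocal of a pooled thread stays
// strongly reachable for as long as the thread lives, even after the
// "request" that stored it has finished.
import java.lang.ref.WeakReference;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadLocalRetention {

    static final ThreadLocal<Object> PARSER_SLOT = new ThreadLocal<>();

    /** True if the value is still reachable after the request ends. */
    static boolean retainedByPooledThread() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1); // long-lived worker
        try {
            Object[] holder = { new Object() };       // stand-in for a SAXParser
            WeakReference<Object> ref = new WeakReference<>(holder[0]);

            // The "request": store the value in the worker thread's ThreadLocal.
            pool.submit(() -> PARSER_SLOT.set(holder[0])).get();

            holder[0] = null;                         // caller drops its reference
            System.gc();
            return ref.get() != null;                 // still held by the thread's map
        } finally {
            pool.shutdownNow();
        }
    }
}
```

Calling PARSER_SLOT.remove() on the worker thread at the end of each request is the usual cleanup for this pattern.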



Cheers

Andreas




Re: Possible memory leak FOP 0.95

2008-09-24 Thread Andreas Delmelle

On Sep 24, 2008, at 11:39, ACHA | Marco Wayop wrote:

Just noticed something minor:



Just committed a /very/ small change to FOP Trunk which should at
least makes this path impossible (see: http://svn.apache.org/viewvc?
rev=698322&view=rev)
Great! I will try to gather JAI, JCE, JUnit and XMLUnit and compile  
the project. I guess there isn't a binary available.


Indeed not, but building FOP is a proverbial piece-of-cake, if you  
have Ant installed. Check out via Subversion, navigate to FOP's root  
directory, and run 'ant package'. The resulting fop.jar can then be  
found in the 'build' directory.


Note that, to build FOP, you don't need JUnit or XMLUnit. These are
only required if you want to run all the unit tests as well. For the
'package' build target, you don't need them.



Cheers

Andreas




RE: Possible memory leak FOP 0.95

2008-10-27 Thread ACHA | Marco Wayop
Hi Andreas,

Sorry for the delay.

> The change I committed should hardly make a difference in normal
> circumstances
I have good news: it did fix it!

However, just compiling the checkout of the fop-trunk was not enough.
I had to use a more recent version of the Xerces framework (Xerces-J-bin.2.9).

(Note: just updating the Xerces framework wasn't enough either; it really was
the combination of the latest trunk and Xerces framework.)

I'm considering applying the XMLWhiteSpaceHandler/PropertyCache fixes to my
local repository version of FOP 0.92 and starting a migration trajectory for
the dozens of XSL-FO files we have (making them FOP 0.95 compatible).

Greetings and thanks for an excellent product,
Marco Wayop





Memory leak? (was: 'Possible' memory leak on fop-users)

2006-07-27 Thread Andreas L Delmelle

On Jul 27, 2006, at 23:56, Andreas L Delmelle wrote:


I wouldn't be surprised to see a lot of these trees occur in the  
course of the process, but if I esteem correctly, in a literal  
snapshot, there should be only one. There is only one handler which  
has one reference to the current block. That reference is never  
explicitly cleared, but strictly speaking, it never needs to be  
since it is re-used. Taking a snapshot right after FOP has  
finished, would reveal the last one, provided that the reference  
tree Root->AreaTreeHandler->XMLWhiteSpaceHandler has not yet been  
completely cleared/released.


Was re-thinking this particular phrasing, and had a closer look...  
Moved it to fop-dev, because of the importance.


Firstly, this looks like a damned circular reference, indeed! That's  
my bad, sorry.


Since the reference to the last block is not released unless the  
reference to the Root's AreaTreeHandler is cleared, this keeps the  
entire ancestry alive, up to the PageSequence, which itself holds a  
reference to the Root? :| ... :(


Definitely worth a try to release the reference
XMLWhiteSpaceHandler->Block as soon as possible.


OTOH, looking deeper, I'm strangely surprised no-one saw this one
before --so surprised, even, that it makes me think I'm missing
something:


Root.addChildNode(PageSequence) results in a reference to the  
PageSequence being kept in the Root's list of child nodes. Right?


AFAICT, this reference is *never* released as long as the Root object
is alive, so it seems like currently, our 'split up in
page-sequences' performance hint is complete and utter bogus...?


Sorry to disappoint you all.

Good news is, both are rather easily fixed --at least on the surface.

Either:
a) override addChildNode() in Root, so that the PageSequences don't  
get added to the List at all; maybe only under certain circumstances  
(unresolved forward references?) should this be needed

b) call Root.removeChild(this) in PageSequence.endOfNode()
c) call Root.removeChild() from the next PageSequence's startOfNode()
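The effect of options (b)/(c) can be demonstrated with a toy parent/child structure. This is plain Java with invented class names, not the actual FOP Root/PageSequence classes: as long as the parent's child list holds a reference, the child cannot be collected; removing it makes it collectable.

```java
// Toy demonstration of parent-list retention and the removeChild() fix.
import java.lang.ref.WeakReference;
import java.util.ArrayList;
import java.util.List;

public class ChildRetention {

    static class Node {
        final List<Node> children = new ArrayList<>();
        void addChildNode(Node n) { children.add(n); }
        void removeChild(Node n) { children.remove(n); }
    }

    /** True: child stays reachable via the parent's list even after we drop it. */
    static boolean retainedWhileInChildList() {
        Node root = new Node();
        Node child = new Node();
        WeakReference<Node> ref = new WeakReference<>(child);
        root.addChildNode(child);
        child = null;                 // drop our own strong reference
        System.gc();
        return ref.get() != null;     // root.children still holds it
    }

    /** True: once removed from the parent, the child becomes collectable. */
    static boolean collectedAfterRemoval() throws InterruptedException {
        Node root = new Node();
        Node child = new Node();
        WeakReference<Node> ref = new WeakReference<>(child);
        root.addChildNode(child);
        child = null;
        root.removeChild(ref.get());  // analogous to Root.removeChild(this)
        for (int i = 0; i < 50 && ref.get() != null; i++) {
            System.gc();              // retry: GC timing is JVM-dependent
            Thread.sleep(10);
        }
        return ref.get() == null;
    }
}
```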

Unfortunately, I am a bit stuck in the marker-property rework ATM -- 
FOText in a marker turns out to be a little bit more difficult than  
the FObj-subclasses... Decided to take care of the dubious static  
FOText.lastFOTextProcessed in one go, so that will make a nice set of  
improvements 8)


I'll make it a priority to clear this up after that, if nobody beats  
me to it.



Cheers,

Andreas




Re: Memory leak? (was: 'Possible' memory leak on fop-users)

2006-08-09 Thread Karthik
Andreas,

I ran my test cases against fop-trunk as of 08/03 and am seeing a very good
boost in performance. I profiled against the same test case that produced
loitering objects in FOP 0.92beta, and did NOT find any loitering objects with
the trunk code. The memory usage also seems to be very stable, and I see more
frequent garbage collections than in 0.92beta. Overall, the
process seems to use less memory than before.

Below are some comparisons from my test environment:

Total pages processed: 12000 approx (split up as 1500 pages per PDF in a loop)

1. Memory usage: FOP 0.92beta started off with 500 MB and went all the way
up to 1.2 GB easily, and the JVM crashed after processing 5000 pages approx.
The latest version used up to a max of 750 MB (from 500 MB initial) and never
went beyond that.

2. Processing time: 0.92beta slowed down gradually from 4 minutes per
1500-page PDF to 15 minutes, when finally the JVM crashed. But the latest code
took a consistent 3-4 minutes to produce 1500 pages.

I'm not sure if the above comparison makes sense to anyone,
but I just wanted to report it for comparison's sake.

Overall the performance has been good so far, and I'll keep profiling the
process to look for any red flags.

Let me know, if you want to track any other details.

Thanks
Karthik

