[https://issues.apache.org/jira/browse/FOP-2860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17404761#comment-17404761]
tntim96 commented on FOP-2860:
------------------------------
We're encountering this problem too. Is there any update or work-around for
this issue?
> BreakingAlgorithm causes high memory consumption
> ------------------------------------------------
>
> Key: FOP-2860
> URL: https://issues.apache.org/jira/browse/FOP-2860
> Project: FOP
> Issue Type: Bug
> Affects Versions: 2.3
> Reporter: Raman Katsora
> Priority: Critical
> Attachments: image-2019-04-16-10-07-53-502.png, test-1500000.fo,
> test-250000.fo, test-300000.fo
>
>
> When a single element (e.g. {{<fo:block>}}) contains a sufficiently large
> amount of text, the FO-to-PDF transformation causes very high memory
> consumption.
> For instance, transforming a document with an {{<fo:block>}} containing 1.5
> million characters (~1.5 MB, [^test-1500000.fo]) requires about 3 GB of RAM.
> The heap dump shows 27.5 million instances of
> {{org.apache.fop.layoutmgr.BreakingAlgorithm.KnuthNode}} (~2.6 GB).
> We start observing this issue at about 300 thousand characters in a
> single element ([^test-300000.fo]); the high memory consumption is not
> observed when processing 250 thousand characters ([^test-250000.fo]).
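A rough back-of-the-envelope check of the figures quoted above (the node count, heap size, and input length come from the report; the per-node and per-character ratios below are derived, not measured) shows roughly 18 KnuthNode instances retained per input character, at about 94 bytes each:

```java
public class KnuthNodeMemory {
    public static void main(String[] args) {
        // Figures reported in the issue for test-1500000.fo:
        long chars = 1_500_000L;          // characters in the single fo:block
        long nodes = 27_500_000L;         // KnuthNode instances in the heap dump
        long heapBytes = 2_600_000_000L;  // ~2.6 GB attributed to those nodes

        long bytesPerNode = heapBytes / nodes;  // ~94 bytes per KnuthNode
        long nodesPerChar = nodes / chars;      // ~18 nodes per character

        System.out.println("bytes/node: " + bytesPerNode);
        System.out.println("nodes/char: " + nodesPerChar);
    }
}
```

That the problem appears between 250 thousand and 300 thousand characters suggests the number of active nodes kept by the breaking algorithm grows faster than linearly with input length, which would explain the sharp jump in heap usage.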
--
This message was sent by Atlassian Jira
(v8.3.4#803005)