On 18.01.2008 00:09:14 Andreas L Delmelle wrote:
> On Jan 17, 2008, at 20:57, Simon Pepping wrote:
> 
> > On Thu, Jan 17, 2008 at 12:27:11AM +0100, Andreas L Delmelle wrote:
> >> Right now, the element list is constructed as the result of  
> >> recursive calls
> >> to getNextChildLM.getNextKnuthElements().
> >> /The/ return list upon which the page breaker operates is the one  
> >> that is
> >> ultimately returned by the FlowLM.
> >>
> >> Instead of that, I've been thinking in the direction of making it  
> >> a data
> >> structure that exists 'physically' separate from the LMs.
> >> This structure, created and maintained by the PageSequenceLM,  
> >> would be
> >> passed down into an appendNextKnuthElementsTo() method.
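> >>
> >> To sketch what I mean (all names below are hypothetical; none of
> >> this exists in FOP today):
> >>
> >>   import java.util.LinkedList;
> >>   import java.util.List;
> >>
> >>   /** Element list owned by the PageSequenceLM, kept outside
> >>    *  the LMs themselves. */
> >>   public class ElementListBuffer {
> >>
> >>       private final List<KnuthElement> elements =
> >>               new LinkedList<KnuthElement>();
> >>
> >>       /** Child LMs contribute their elements through this. */
> >>       public void add(KnuthElement element) {
> >>           elements.add(element);
> >>       }
> >>   }
> >>
> >>   // and in the LM interface, replacing getNextKnuthElements():
> >>   void appendNextKnuthElementsTo(ElementListBuffer buffer,
> >>                                  LayoutContext context,
> >>                                  int alignment);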
> >>
> >> The lower-level LMs can signal an interrupt to the ancestor LMs,  
> >> based on
> >> information they get through the LayoutContext --forced breaks  
> >> being the
> >> most prominent.
> >> The FlowLM, instead of simply continuing the loop, could give  
> >> control back
> >> to the PageSequenceLM, which can run the page breaker over the  
> >> list up to
> >> that point.
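> >>
> >> Roughly, the FlowLM's loop would then look something like this
> >> (isInterrupted() is made up, just to show the idea):
> >>
> >>   // inside the hypothetical FlowLM.appendNextKnuthElementsTo():
> >>   LayoutContext childLC = new LayoutContext(0);
> >>   LayoutManager childLM;
> >>   while ((childLM = getChildLM()) != null) {
> >>       childLM.appendNextKnuthElementsTo(buffer, childLC, alignment);
> >>       if (childLC.isInterrupted()) {
> >>           // e.g. a forced break was signalled somewhere below;
> >>           // return control to the PageSequenceLM, which can run
> >>           // the page breaker over the elements gathered so far
> >>           return;
> >>       }
> >>   }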
> >
> > I would rather pass a reference to the page breaker in the
> > getNextKnuthElements call. Each LM can then append Knuth elements in a
> > callback to the pagebreaker. At each such append callback, the page
> > breaker can decide to run the Knuth algorithm and ship pages. When
> > this callback finishes, the LM can continue. Running the Knuth
> > algorithm intermittently makes no sense in a total-fit algorithm.
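> >
> > Something like this (only a sketch; the names are made up):
> >
> >   import java.util.List;
> >
> >   public interface PageBreakerCallback {
> >       /**
> >        * Called by an LM for each batch of Knuth elements it
> >        * produces. The page breaker may decide to run the Knuth
> >        * algorithm and ship pages before returning; when the call
> >        * returns, the LM simply continues.
> >        */
> >       void append(List<KnuthElement> elements);
> >   }
> >
> >   // the LM method signature would then become:
> >   void getNextKnuthElements(PageBreakerCallback callback,
> >                             LayoutContext context, int alignment);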
> 
> Right. Running the algorithm intermittently may make no sense /in/ a
> total-fit algorithm, whose current implementation takes for granted
> that a given sequence S will always be complete, so we can move
> straight from end-of-layout to building the area tree for the
> page-sequence. Suppose, however, that this will no longer be
> guaranteed.
> 
> Also, I would see the basic strategy evolve in such a way that we
> have some check on the list's eventual size to determine whether to
> use best-fit or total-fit. Implementing this logic inside the LMs or
> the breaking algorithm seems out of place. As Jeremias mentioned, we
> would need some mechanism for limiting memory consumption. Keeping
> total-fit as the default strategy for the layout engine is fine by
> me, as long as we can also switch to best-fit at the appropriate
> point. That point is unrelated to anything layout-specific, so I was
> thinking that a separate data structure would make the
> implementation of such a mechanism much cleaner. If this check
> becomes part of each childLM's getNextElements(), it might turn out
> to be a pain to maintain... If we implement it as part of the add()
> and remove() of that hypothetical structure, it remains nicely
> separated from both the LM logic and the breaking algorithm -- two
> places where it does not belong.
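> 
> For example (sketch only; the names and the threshold are made up):
> 
>   private enum Strategy { TOTAL_FIT, BEST_FIT }
> 
>   private Strategy strategy = Strategy.TOTAL_FIT;
>   private static final int MAX_ELEMENTS_FOR_TOTAL_FIT = 500000;
> 
>   public void add(KnuthElement element) {
>       elements.add(element);
>       // the check lives here, outside both the LM logic and the
>       // breaking algorithm:
>       if (strategy == Strategy.TOTAL_FIT
>               && elements.size() > MAX_ELEMENTS_FOR_TOTAL_FIT) {
>           // the list is growing too large to buffer whole; fall
>           // back to best-fit so pages can be shipped early
>           strategy = Strategy.BEST_FIT;
>       }
>   }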

Please note that total-fit is not possible if the available IPD
changes from page to page, as this requires regeneration of part of
the element list(s) after a page break decision. So part of the check
whether total-fit can be used must be an inspection of the
layout-master-set. This will add further complexity. The
layout-master-set creates further complications anyway: page-position
"last"/"only" processing is non-trivial, especially if column
balancing and span changes come into play (one area where FOP is
still incomplete today).
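
To illustrate (only a sketch; the accessor names are made up, and the
real layout-master-set model is more involved, since with conditional
page masters the actual master is only known at break time):

  /** Total-fit is only safe if every page master yields the
   *  same IPD. */
  private boolean ipdConstantAcrossPages(PageSequence pageSeq) {
      int referenceIPD = -1;
      for (SimplePageMaster spm
              : pageSeq.getLayoutMasterSet().getPageMasters()) {
          int ipd = spm.getRegionBodyIPD();
          if (referenceIPD < 0) {
              referenceIPD = ipd;
          } else if (ipd != referenceIPD) {
              return false; // IPD changes from page to page
          }
      }
      return true;
  }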

BTW, has anyone checked how the XSL 1.1 flow-map functionality impacts
all this? I haven't had the chance.

<snip/>



Jeremias Maerki
