Manuel Mall wrote:
On Wed, 4 Jan 2006 03:51 am, Andreas L Delmelle wrote:


Sorry to interject, but I have to say that I agree with Manuel, and I thought I'd better speak up since this debate doesn't appear to be making any progress.

Thanks for trying to improve this important area of the code, Andreas. I don't want to appear ungrateful for your efforts; it's just that I have similar concerns to Manuel.

To sum it up:
Our implementation of Donald Knuth's algorithm first creates the
element lists for the FOs, and then from those lists it calculates
the most favorable break-positions. Subsequently, it adds the areas
based on those breaks to the block-area, right?
Now, what I mean:
If the element-lists for the trailing spaces(*) are modeled
appropriately, and we add a forced break (an infinitely negative
penalty) at the end of the block, then the algorithm will always
create one final pseudo-line-break(**) where those spaces, if
present, are dissolved, just as they would be if it were the first
line. The generated pseudo-line(s) will have no content at all.
Maybe a minor tweak is needed in LineArea to return zero BPD when it
has no child-areas, and there we go... In Block.addChildArea, we can
then test for zero-BPD line-areas to keep them from effectively
being added to the block.

Something like that? Or am I still missing important implications?
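
If I follow the proposal correctly, the area-stage tweak would amount to
something like the sketch below. To be clear, these are simplified
stand-ins I've made up to illustrate the idea, not FOP's actual LineArea
and Block code:

import java.util.ArrayList;
import java.util.List;

// Rough sketch of the proposed tweak; simplified stand-ins, not FOP's
// real area classes.
class LineArea {
    private final List<Object> inlineAreas = new ArrayList<>();
    private int bpd = 14000; // nominal line height in millipoints

    void addInlineArea(Object area) {
        inlineAreas.add(area);
    }

    int getBPD() {
        // proposed tweak: a line with no child areas reports zero BPD
        return inlineAreas.isEmpty() ? 0 : bpd;
    }
}

class Block {
    private final List<LineArea> childAreas = new ArrayList<>();

    void addChildArea(LineArea line) {
        // proposed test: skip the empty pseudo-line generated by the
        // forced break at the end of the block
        if (line.getBPD() == 0) {
            return;
        }
        childAreas.add(line);
    }

    int getChildCount() {
        return childAreas.size();
    }
}

public class ZeroBpdSketch {
    public static void main(String[] args) {
        Block block = new Block();
        block.addChildArea(new LineArea()); // empty pseudo-line, dropped
        System.out.println(block.getChildCount()); // prints 0
    }
}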


I think the important point is that the Knuth algorithm itself cannot be made to strip trailing spaces; the effect can only be achieved by placing hacky code around the algorithm. From my perspective that code has caused a lot of bugs and unwanted side effects, which Jeremias and Manuel seem to be constantly fixing in this area. For that reason I think leading and trailing space removal should be kept in the refinement (FO Tree) stage.
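
For comparison, at the refinement stage the removal boils down to plain
string handling on the character data, along these lines (a hypothetical
helper for illustration, not the code we currently have in the FO Tree):

// Hypothetical helper, not FOP's actual refinement code: trailing white
// space can be stripped from the character data while the FO tree is
// built, before any Knuth elements exist.
public final class TrailingSpaceStripper {

    /** Returns the text with trailing XML white space removed. */
    static String stripTrailing(String text) {
        int end = text.length();
        while (end > 0 && isXmlWhiteSpace(text.charAt(end - 1))) {
            end--;
        }
        return text.substring(0, end);
    }

    private static boolean isXmlWhiteSpace(char c) {
        return c == ' ' || c == '\t' || c == '\r' || c == '\n';
    }

    public static void main(String[] args) {
        // prints "[last words]" - the trailing spaces and newline are gone
        System.out.println("[" + stripTrailing("last words  \n") + "]");
    }
}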

Also, as Manuel pointed out, the Knuth algorithm does not handle cross-LM space removal, something which can be achieved much more easily in the FO Tree.
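
To illustrate the cross-LM point: the trailing space of a block may well
sit inside a nested fo:inline, i.e. in a different FO node (and later a
different layout manager) than the block itself. On the FO Tree this is a
simple recursion into the last child. The node classes below are invented
for illustration and are not FOP's FO classes:

import java.util.Arrays;
import java.util.List;

// Invented node classes for illustration only, not FOP's FO classes.
abstract class Node {
    abstract void trimTrailingSpaces();
}

class TextNode extends Node {
    final StringBuilder chars;
    TextNode(String s) { chars = new StringBuilder(s); }

    void trimTrailingSpaces() {
        while (chars.length() > 0 && chars.charAt(chars.length() - 1) == ' ') {
            chars.deleteCharAt(chars.length() - 1);
        }
    }
}

class ContainerNode extends Node { // stands in for fo:block or fo:inline
    final List<Node> children;
    ContainerNode(Node... children) { this.children = Arrays.asList(children); }

    void trimTrailingSpaces() {
        if (!children.isEmpty()) {
            // recurse into the last child; this crosses what would later
            // become separate layout managers
            children.get(children.size() - 1).trimTrailingSpaces();
        }
    }
}

public class CrossNodeTrimSketch {
    public static void main(String[] args) {
        // fo:block ending in a nested fo:inline whose text ends in spaces
        ContainerNode block = new ContainerNode(
                new TextNode("some text "),
                new ContainerNode(new TextNode("emphasised tail   ")));
        block.trimTrailingSpaces();
        // the TextNode inside the nested container now ends without spaces
    }
}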

<snip/>

Chris

