On 20.09.2005 23:50:12 Andreas L Delmelle wrote:
> Hi,
> 
> Jeremias, Luca or Simon will probably be able to make the most sense 
> out of it, but if there's anyone else that can add a few comments, feel 
> free to do so.
> (FYI: This is completely separate from my idea to move the 
> border-collapsing to the FOTree.)
> 
> Now, I'm still not fully at home in the Knuth element generation 
> algorithm, so I don't know exactly whether what I'm about to describe 
> is at all feasible/doable. Maybe it's currently already done this way, 
> and I'm missing the point somewhere... In that case: sorry for the 
> noise. :-/
> 
> Here goes:
> I get the impression that the elements for borders and those for the 
> content of the cells are created in one single pass, which seems to be 
> the source of the so-called 'interaction problem' --IIC, this refers to 
> the situation where, for example, we have already generated the AFTER 
> border elements for the first two cells, while it's only when 
> generating the elements for the third cell that a break is triggered. 
> So, the obtained border- and content-elements become invalid, and need 
> to be re-evaluated (possibly taking the footer into account).
> Is this a correct assessment of the issue?

Unfortunately not. I get the impression that you haven't yet understood
how the Knuth approach works. We don't reevaluate any decisions in this
approach, but rather calculate ALL(!) possible decisions beforehand and
incorporate them into the element list we generate. The breaker will
merely choose a break possibility, and the addAreas stage will paint the
results given the break decision. The only reevaluation will happen if
we start to implement support for the "changing available IPD" problem,
i.e. when the available IPD differs from page to page within the same
page-sequence. In that case we will need to be able to recreate the
element list from an arbitrary earlier break possibility onward, which
means that all decisions are reevaluated from that point on due to the
changed environmental factors (the IPD). Even the line-breaking has to
be redone, although the inline element list will not have to be
recreated.
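
To make the division of labour concrete, here's a heavily simplified
sketch (the classes below are only illustrative, not our actual
KnuthElement/breaker API, and the breaker shown is a naive first-fit
rather than the real algorithm): the layout managers encode every legal
break as a penalty element in one list up front, and the breaker only
selects among those elements, it never regenerates them.

    import java.util.List;

    // Illustrative element model: a box contributes fixed BPD, a glue
    // contributes stretchable BPD, and a penalty marks a possible break
    // point together with its cost.
    final class Elem {
        enum Type { BOX, GLUE, PENALTY }
        static final int INFINITE = 1000; // penalty 1000 = "never break here"

        final Type type;
        final int width;    // BPD contribution in millipoints
        final int penalty;  // only meaningful for PENALTY elements

        Elem(Type type, int width, int penalty) {
            this.type = type;
            this.width = width;
            this.penalty = penalty;
        }
    }

    // The breaker never touches the element generation again: it merely
    // picks one of the break possibilities already present in the list.
    final class NaiveBreaker {
        /** Returns the index of the last legal break point whose preceding
         *  content still fits into availableBPD, or -1 if nothing fits. */
        static int chooseBreak(List<Elem> elems, int availableBPD) {
            int usedBPD = 0;
            int chosen = -1;
            for (int i = 0; i < elems.size(); i++) {
                Elem e = elems.get(i);
                if (e.type == Elem.Type.PENALTY) {
                    if (e.penalty < Elem.INFINITE
                            && usedBPD + e.width <= availableBPD) {
                        chosen = i; // a precomputed break possibility that fits
                    }
                } else {
                    usedBPD += e.width;
                    if (usedBPD > availableBPD) {
                        break; // nothing further down can fit on this page
                    }
                }
            }
            return chosen;
        }
    }

Once chooseBreak() has picked an index, addAreas simply paints everything
up to that element; the list itself stays untouched (until we have to
deal with the changing-IPD case described above).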

This calculation of all possible decisions when generating the element
list is exactly the same problem I'm currently facing with space
resolution. I have to precalculate all space resolution scenarios for
every single break possibility in order to be able to create the right
element list. Mind-breaking, I tell you...... :-)
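
To give an idea of what "all space resolution scenarios" means: for any
two adjacent blocks the resolved space differs depending on whether a
break falls between them, because spaces with conditionality="discard"
disappear at the end of one page and the start of the next. A stripped-
down sketch (purely illustrative names, and it ignores precedence and
force, which the real XSL rules also involve):

    // Simplified space specifier: just an optimum and the conditionality.
    final class SpaceSpec {
        final int optimum;            // millipoints
        final boolean discardAtBreak; // conditionality="discard"

        SpaceSpec(int optimum, boolean discardAtBreak) {
            this.optimum = optimum;
            this.discardAtBreak = discardAtBreak;
        }
    }

    final class SpaceScenarios {
        /** No break between the two blocks: simplified here to the maximum
         *  of the first block's space-after and the second's space-before. */
        static int resolveNoBreak(SpaceSpec after, SpaceSpec before) {
            return Math.max(after.optimum, before.optimum);
        }

        /** A break falls between them: discardable space is dropped at the
         *  end of the page... */
        static int resolveEndOfPage(SpaceSpec after) {
            return after.discardAtBreak ? 0 : after.optimum;
        }

        /** ...and at the start of the next page. */
        static int resolveStartOfPage(SpaceSpec before) {
            return before.discardAtBreak ? 0 : before.optimum;
        }
    }

Both outcomes have to be known before the element list is finished,
because they end up as different glue/penalty widths at each break
possibility.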

> Am I correct when I say that this problem doesn't pose itself when the 
> break would occur in the first cell of the row(group)?
>
> If so, I'm wondering whether it could help if the element generation 
> for row(groups) were split up into two (possibly three) passes and 
> made to look like the following (in pseudo-code):
> 
> while( rowIterator.hasNext() ) {
>    if( firstRowGroupInPageOrColumn ) {
>      generateBeforeBorderElements();
>    }
>    generateAfterBorderElements();
>    generateContentElements();
> }
> 
> So, by the time we get to generating boxes/glues/penalties for the 
> content of the cells, we would already have the minimum/maximum widths 
> for *all* possible AFTER border elements in the row.
> The generateAfterBorderElements() step would create two element lists:
> - one to use if there is no page- or column-break
> - an alternate list to use in case the content triggers a break (which 
> would then include all elements for the footer, if any)

I don't think something like that is possible. During my analysis I
found that the effective borders influence the Knuth element generation
a lot. You can't separate the borders from the content. Have a look at
the notes in the Wiki. They show this interaction. It's all documented
there. The element list generation is fully implemented for the separate
border model. For the collapsing border model, several examples are
documented and fully calculated. The only thing left is the algorithm to
handle all the little difficulties arising from the collapsing border
model. The most important pages for implementing the collapsing border
model are these:
http://wiki.apache.org/xmlgraphics-fop/TableLayout/KnuthElementsForTables/RowBorder
http://wiki.apache.org/xmlgraphics-fop/TableLayout/KnuthElementsForTables/RowBorder2
http://wiki.apache.org/xmlgraphics-fop/TableLayout/KnuthElementsForTables/HfIntegrationInSteppingAlgorithm
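
The core of the problem is that in the collapsing model the effective
border at a grid line depends on which borders meet there, and that in
turn depends on whether a break falls at that line (header/footer and
table borders may then take part in the resolution). As a rough
illustration, the basic comparison looks something like this (a
simplified sketch with made-up names; the real rules also handle
'hidden', precedence and the cell/row/column/table origin order):

    import java.util.Arrays;
    import java.util.List;

    final class BorderSpec {
        final int width;     // millipoints
        final String style;  // e.g. "solid", "double", "dashed"

        BorderSpec(int width, String style) {
            this.width = width;
            this.style = style;
        }
    }

    final class CollapsingBorders {
        // Simplified style ordering: earlier entries win on equal widths.
        private static final List<String> STYLE_PRIORITY =
                Arrays.asList("double", "solid", "dashed", "dotted");

        /** Returns the border that "wins" where the two adjoin: the wider
         *  one, or on equal widths the one with the stronger style. */
        static BorderSpec collapse(BorderSpec a, BorderSpec b) {
            if (a.width != b.width) {
                return (a.width > b.width) ? a : b;
            }
            int pa = STYLE_PRIORITY.indexOf(a.style);
            int pb = STYLE_PRIORITY.indexOf(b.style);
            return (pa >= 0 && (pb < 0 || pa <= pb)) ? a : b;
        }
    }

Since the winner of such a comparison determines the border width that
goes into the element list, and a break changes which borders actually
meet at a grid line, those widths have to be precalculated per break
scenario, just like the spaces above.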

> Maybe both lists could be made to include the elements for the AFTER 
> padding as well (? since we have to iterate over the cells/grid-units 
> anyway).
> 
> Eventually, only one of the two lists would be merged with the content 
> element list, depending on the situation after the content element list 
> is completely known; it would then become a matter of inserting the 
> right list (and discarding the incorrect one --at least, throwing away 
> its elements).
> 
> The only drawback I immediately see is that the 
> generateAfterBorderElements() step would have to make the comparison 
> with the footer- or table-borders for each and every row, unless we 
> were to do this only in case the remaining page- or column-BPD has 
> dropped below a certain threshold.
> 
> The only remaining problems would then be that:
> a) there may be row(groups) whose content is so large that the 
> remaining BPD is more than enough before the content's elements are 
> generated, but only drops below the threshold during the 
> generateContentElements() step.
> b) there's always the possibility of a forced break, regardless of the 
> remaining BPD
> 
> The creation of the alternate element list should therefore be 
> implemented as a separate step that can be triggered either during 
> generateAfterBorderElements() or generateContentElements().
> 
> In any case, besides gaining certainty about min- or max-border-widths, 
> splitting up the element generation into 2-3 passes would allow us to 
> gain a few hints about the content, to get an idea of the probability 
> of a page- or column-break.
> I mean: without actually triggering creation of a full element list for 
> the content, we could maybe do a quick traverse of the FOTree-fragment 
> contained in each cell to see if any of its descendants have a break-* 
> property specified.
> To make an even more educated guess, perhaps we could even perform some 
> off-hand calculations based on the average font-size, the number of 
> blocks, the number of characters of the descendant FOText nodes, the 
> content-height for contained images... But this all *without* 
> generating the elements. Only minimal communication with the actual 
> childLMs in that step, placing the focus on the FONode-elements (= the 
> list returned by TableCell.getChildNodes()) and their properties.
> 
> 
> Does this make any sense?

Hmmmmmm. Unless I'm totally mistaken, you're off-course, unfortunately.


Jeremias Maerki
