Victor Mote wrote:
Peter B. West wrote:


These are interesting and important issues.  I had no notion of the HZ
algorithm, but I was dimly aware from my reading as a teenager of the
"rivers" problem, and acutely conscious of its distracting effect from
my reading.  In my thinking about layout, I have been conscious of the
need to be able to evaluate such issues at a high level.  The only way
such an evaluation can be done is by layout look-ahead.  The page must
be laid out before "rivers" can be assessed.  (Finding them would be an
interesting problem in itself - and no doubt part of HZ.)
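
A naive first cut at finding them - once a page has been laid out and
its lines are available as positioned word boxes - might look something
like the sketch below.  Everything in it is invented for illustration;
none of the classes exist in FOP.

    /* Hypothetical, deliberately naive river check.  It assumes lines are
     * already available as positioned word boxes; nothing here corresponds
     * to FOP code. */
    import java.util.List;

    public class RiverCheck {

        /** Horizontal extent of one inter-word gap on a line, in millipoints. */
        public static class Gap {
            final int start;
            final int end;
            public Gap(int start, int end) { this.start = start; this.end = end; }
            boolean overlaps(Gap other) {
                return start < other.end && other.start < end;
            }
        }

        /** True if some gap runs vertically through at least minDepth
         *  consecutive lines - a crude stand-in for a river. */
        public static boolean hasRiver(List<List<Gap>> gapsPerLine, int minDepth) {
            for (int top = 0; top + minDepth <= gapsPerLine.size(); top++) {
                for (Gap seed : gapsPerLine.get(top)) {
                    Gap current = seed;
                    int depth = 1;
                    for (int line = top + 1; line < gapsPerLine.size(); line++) {
                        Gap next = findOverlap(gapsPerLine.get(line), current);
                        if (next == null) {
                            break;
                        }
                        current = next;
                        depth++;
                        if (depth >= minDepth) {
                            return true;
                        }
                    }
                }
            }
            return false;
        }

        private static Gap findOverlap(List<Gap> gaps, Gap target) {
            for (Gap g : gaps) {
                if (g.overlaps(target)) {
                    return g;
                }
            }
            return null;
        }
    }

Real rivers are an optical effect, so a serious detector would have to
weigh gap width and alignment rather than simple overlap, but even this
crude test shows why the page must exist before the question can be
asked.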

It actually would seem to go beyond look-ahead, and instead be more along
the lines of laying the content out multiple times & scoring each one.
True, but I had in mind that any such approach will be built on the fact that any layout is, in some sense, tentative. Rhett raised the question some time ago of a means of recording (and scoring) intermediate results, which will be an essential element of such a solution.

At this stage, I would tend to think not of trying every possible layout, but of following the "optimum" values to produce an initial layout, and then testing the result for "goodness". The minimum-maximum range provides the slack - within the constraints of the spec - for applying whatever set of layout-tuning algorithms FOP implements.
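
To make that concrete, here is a minimal sketch of the sort of loop I
have in mind.  All of the names (LayoutEngine, layoutWithSlack, the
adjustment factors) are invented for illustration and correspond to
nothing in the current code:

    /* Hypothetical sketch only - none of these names exist in FOP.  Lay out
     * with the optimum values first, score the result, and only spend the
     * min/max slack on further tentative passes if the score falls below a
     * configurable threshold. */
    public class OptimumFirstLayout {

        /** Produces one tentative layout for a given space-adjustment factor:
         *  0.0 = all optima, -1.0 = all minima, +1.0 = all maxima. */
        public interface LayoutEngine {
            Object layout(double adjust);
            /** Higher is better; would penalise rivers, loose lines, widows... */
            double goodness(Object layout);
        }

        public static Object layoutWithSlack(LayoutEngine engine, double threshold) {
            Object best = engine.layout(0.0);        // follow the optima first
            double bestScore = engine.goodness(best);
            if (bestScore >= threshold) {
                return best;                         // good enough - stop here
            }
            // Otherwise try a few points within the min/max range and keep
            // whichever tentative layout scores highest.
            double[] retries = { -0.5, 0.5, -1.0, 1.0 };
            for (int i = 0; i < retries.length; i++) {
                Object candidate = engine.layout(retries[i]);
                double score = engine.goodness(candidate);
                if (score > bestScore) {
                    best = candidate;
                    bestScore = score;
                }
            }
            return best;
        }
    }

The point is only that the optimum-values pass is cheap and is usually
the end of the matter; the slack is spent only when the score says it
is needed.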

I would see these being arranged as a set of heuristics - for want of a better word - that are applied in a structured fashion to detected layout conflicts of particular types. What constitutes a conflict would be determined by configurable parameters.

In the initial version, we only need to provide for the most basic of these, as long as the mechanism is general enough to allow for refinement.
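
As a rough illustration of the shape such a mechanism might take -
again, every name below is invented and nothing corresponds to existing
FOP classes - a driver could simply run detectors and heuristics in
passes until the page is clean or no heuristic helps:

    /* Hypothetical sketch of the heuristics arrangement; every name is
     * invented for illustration and nothing here corresponds to FOP code. */
    import java.util.ArrayList;
    import java.util.List;

    public class ConflictDriver {

        /** A laid-out page in whatever form the layout engine produces. */
        public interface Page { }

        /** One detected layout problem - a river, a widow, a loose line... */
        public interface Conflict {
            String describe();
        }

        /** Finds conflicts of one particular, configurable type. */
        public interface Detector {
            List<Conflict> detect(Page page);
        }

        /** Tries to repair one conflict; returns true if the page was changed. */
        public interface Heuristic {
            boolean apply(Page page, Conflict conflict);
        }

        /** Run detectors and heuristics in passes until the page is clean,
         *  no heuristic can improve it further, or the pass limit is hit. */
        public static void resolve(Page page, List<Detector> detectors,
                                   List<Heuristic> heuristics, int maxPasses) {
            for (int pass = 0; pass < maxPasses; pass++) {
                List<Conflict> conflicts = new ArrayList<Conflict>();
                for (Detector d : detectors) {
                    conflicts.addAll(d.detect(page));
                }
                if (conflicts.isEmpty()) {
                    return;                  // nothing left to fix
                }
                boolean changed = false;
                for (Conflict c : conflicts) {
                    for (Heuristic h : heuristics) {
                        if (h.apply(page, c)) {
                            changed = true;
                            break;           // next conflict; re-detect next pass
                        }
                    }
                }
                if (!changed) {
                    return;                  // no heuristic could help
                }
            }
        }
    }

Refinement would then be a matter of adding detectors and heuristics,
not of reworking the mechanism.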

One
of the articles that Rhett pointed out indicates that Karow was working on a
"chapter" level optimization -- probably equivalent to a page-sequence for
us. It would seem easy to have several thousand or more possible layout
options for an expanse that big.

One issue in implementing this kind of thing is to make it configurable, or
even to make its specification part of the standard. A lot of on-the-fly
web-based users won't want to spend the hardware resources to get output
this finely tuned, but those of us who are generating high-quality static
content won't mind. In other words, we need quick-and-dirty solutions that
are optimized for speed to be able to coexist with more complex solutions
that are optimized for quality. Part of what triggered my thoughts here was
a thread on the XSL-FO list in which it was stated that XEP takes about 3
times as long to run as FOP.  There are a lot of possible reasons for this
(including implementation of features that we don't have yet), but it is
possible that they have implemented some better H&J work.
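
Purely as a sketch of how blunt the configuration knob could be - the
enum and the pass counts below are invented for illustration, and
nothing like them exists in FOP's configuration today - something along
these lines might be enough to let the quick-and-dirty and the
high-quality cases coexist:

    /* Hypothetical sketch of a single quality/speed knob; the enum and the
     * pass counts are invented for illustration only. */
    public class LayoutEffort {

        /** How much work the layout engine is allowed to spend per page. */
        public enum Quality { DRAFT, NORMAL, HIGH }

        /** Maps the knob to a concrete limit the tentative-layout passes obey. */
        public static int maxTentativePasses(Quality quality) {
            switch (quality) {
                case DRAFT:  return 1;   // optimum values only, no rescoring
                case NORMAL: return 4;   // a few passes within the min/max slack
                case HIGH:   return 32;  // page-sequence ("chapter") level search
                default:     return 1;
            }
        }
    }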

I don't intend to implement any of this any time soon, but I need to let
some of the concepts sink in for a while, so I thought I had better get
started, in anticipation of (hopefully) getting back into FOP code again
within about a week.
--
Peter B. West  [EMAIL PROTECTED]  http://www.powerup.com.au/~pbwest/
"Lord, to whom shall we go?"

