On Fri, Jan 18, 2008 at 12:09:14AM +0100, Andreas L Delmelle wrote:
> I have always somehow assumed there to be a threshold in the most common 
> cases, in the sense that certain (maybe in fact even the bulk of) feasible 
> breaks can already be excluded quite early, even if the sequence consists 
> of millions of elements, simply because in intermittent runs they would 
> repeatedly be considered 'bad' breaks, or because they become 'worse' than 
> in the previous run.
> Something like this happens already, in a way, I think, albeit very deeply 
> embedded inside the algorithm, and a bit late too...
> Once we start determining actual breaks, the further we progress in the 
> page-sequence, the more layout possibilities need to be excluded. We 
> simply cannot consider all of them, including the ones that would lead to 
> unreasonable results.

Study the book, Andreas. Donald Knuth has treated all of this quite
completely. As Vincent said, this is the principle of dynamic
programming.

Earlier feasible pagebreaks are kept 'alive' because later feasible
pagebreaks keep a chain of references to them. Early feasible
pagebreaks that are not part of a 'best' chain of pagebreaks up to one
of the active nodes are no longer referenced and therefore disappear
from memory.
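
To make that bookkeeping concrete, here is a minimal sketch in Java of
how such a chain of references can be kept. The class and field names
are purely illustrative, not FOP's actual breaking classes:

  // Illustrative only: each feasible break records its best predecessor,
  // so the whole 'best' chain back to the start of the page-sequence
  // stays reachable. A break that has been removed from the active list
  // and is referenced by no later break becomes garbage and is reclaimed.
  class FeasibleBreak {
      final int position;           // index into the Knuth element sequence
      final double totalDemerits;   // demerits of the best chain ending here
      final FeasibleBreak previous; // best predecessor in that chain

      FeasibleBreak(int position, double totalDemerits, FeasibleBreak previous) {
          this.position = position;
          this.totalDemerits = totalDemerits;
          this.previous = previous;
      }
  }

Only the active nodes are held directly; every earlier break lives or
dies by those 'previous' links.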

Simon

-- 
Simon Pepping
home page: http://www.leverkruid.eu
