On 30 sept. 2012, at 11:39, David Kastrup <d...@gnu.org> wrote:

> David Kastrup <d...@gnu.org> writes:
>
>> "m...@mikesolomon.org" <m...@mikesolomon.org> writes:
>>
>>> On 29 sept. 2012, at 19:54, m...@mikesolomon.org wrote:
>>>
>>> On 29 sept. 2012, at 19:53, "Keith OHara" <k-ohara5...@oco.net> wrote:
>>>
>>> On Sat, 29 Sep 2012 10:30:32 -0700, m...@mikesolomon.org <m...@mikesolomon.org> wrote:
>>>
>>> The way you're using "tentative" is almost exactly how pure properties
>>> are used in LilyPond.
>>>
>>> Specifically, 'pure-height is the estimated vertical extent before
>>> line-breaking, while 'height is the extent after line-breaking.
>>>
>>> If there are distinct properties to describe the position at different
>>> stages, then each property can be evaluated just once (as Han-Wen
>>> suggested, and as Mike agreed 100%).
>>>
>>> More thinking. I'm not enthusiastic about stages: it is a top-down
>>> approach that locks us into certain points of evaluation. What if we
>>> decided to add or get rid of a stage? Would we need to create things
>>> like unpure-pure-containers for the various stages? What qualifies as
>>> a stage?
>>
>> Dependencies, I should guess. A "stage" is where we break circular
>> dependencies.
>
> Basically, a grob says "I want to have this and that information for
> making my positioning" and LilyPond says "You can't get it right now".
> Then the grob says "OK, I'll do a tentative positioning", and LilyPond
> will come back with more information later and ask again.
>
> Now the problem here is when we get oscillation, or even just a
> converging process. If there is convergence involved, we are better off
> calculating the _relation_ between the positionings. In a linear
> optimization problem, those relations define the surface planes of a
> simplex (which has possible solutions inside, impossible solutions
> outside, and where the goal of optimization is the furthest distance
> from 0) as constraints.
> Intersecting with other surfaces/constraints gives us the total
> solution space, and travelling along its outside edges (which are the
> intersections of two planes) moves us to the optimal solution.
>
> Doing this iteratively means jumping around on the inside of the
> simplex. Each jump may be quite a bit faster than determining the
> active boundaries of the simplex, but of course the simplex method
> focuses on _those_ pairings of parameters/positionings which are
> actually valid tradeoffs. And since it is an efficient method, it does
> not get confused when the heuristics go wrong.
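As a concrete illustration of the simplex picture sketched above (constraint planes bounding a feasible region, with the optimum lying at an intersection of boundaries), here is a toy two-variable linear program solved by brute-force vertex enumeration. The constraints and objective are invented for the example; this is a sketch of the general idea, not LilyPond code:

```python
from itertools import combinations

# Constraints of the form a*x + b*y <= c, written as (a, b, c).
# The coefficients are made up purely for illustration.
constraints = [
    (1.0, 0.0, 4.0),   # x <= 4
    (0.0, 1.0, 3.0),   # y <= 3
    (1.0, 1.0, 5.0),   # x + y <= 5
    (-1.0, 0.0, 0.0),  # x >= 0
    (0.0, -1.0, 0.0),  # y >= 0
]

def objective(x, y):
    # Maximize x + 2*y (an arbitrary stand-in objective).
    return x + 2.0 * y

def intersect(c1, c2):
    # Solve the 2x2 system where both constraint boundaries hold
    # with equality, i.e. the intersection of two "planes".
    a1, b1, d1 = c1
    a2, b2, d2 = c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None  # parallel boundaries, no vertex
    x = (d1 * b2 - d2 * b1) / det
    y = (a1 * d2 - a2 * d1) / det
    return x, y

def feasible(x, y, eps=1e-9):
    # A point is inside the simplex iff it satisfies every constraint.
    return all(a * x + b * y <= c + eps for a, b, c in constraints)

# Every candidate optimum is a feasible intersection of two boundaries.
vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) is not None and feasible(*p)]
best = max(vertices, key=lambda p: objective(*p))
print(best, objective(*best))  # (2.0, 3.0) 8.0
```

The real simplex method walks from vertex to vertex along edges instead of enumerating everything, which is the "travelling along its edges" in the quoted text; the brute-force version above just makes the geometry visible.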
I'm not completely sure that I follow what you're saying (I don't know anything about the internals of the simplex method), but if you mean that recalculating tentative property values may result in a series of values that doesn't converge towards anything, then yes, you are right.

Most tentative properties will likely have convergent behavior, meaning that as more information becomes available, they'll get closer and closer to the actual properties. However, there is no systematic way to enforce this. I've spent a lot of time thinking about making linear programming possible in LilyPond and have come to the conclusion that aspects of LilyPond's logic make its decisions non-linear. The classic example is:

--) We do outside-staff positioning, at which point element A gets shifted up.
--) Element B, not being able to be placed between element A and the staff, gets placed above element A.
--) We redo outside-staff positioning after cross-staff grobs' vertical skylines are calculated.
--) Element A is now shifted higher up.
--) Element B is now able to be stashed between element A and the staff.

That is a leap in heights (we go from having element B in the skyline to not). So discussing convergence is really difficult; I think the best we can do is use heuristics to decide when to settle on good-enough values.

Cheers,
MS

_______________________________________________
lilypond-devel mailing list
lilypond-devel@gnu.org
https://lists.gnu.org/mailman/listinfo/lilypond-devel
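The five-step example above can be sketched as a toy iteration whose placement rule is discontinuous at the point where element B fits under element A, so repeated recalculation oscillates instead of converging. All numbers and rules below are invented purely to exhibit that behavior; this is not LilyPond code:

```python
# Height that element B needs between element A and the staff (invented).
B_HEIGHT = 2.0

def place_b(a_height):
    # Place B given A's height above the staff, returning the total
    # skyline height. The discontinuity: B tucks under A only if A
    # leaves a big enough gap, otherwise B stacks above A.
    if a_height >= B_HEIGHT:
        return a_height            # B fits between A and the staff
    return a_height + B_HEIGHT     # B pushed above A

def reposition_a(skyline_height):
    # Invented refinement rule: a tall skyline shoves A further out
    # on the next pass (standing in for the cross-staff skyline redo).
    return 2.5 if skyline_height >= 3.0 else 1.0

heights = []
a = 1.0  # initial tentative position of A
for _ in range(6):
    total = place_b(a)
    heights.append(total)
    a = reposition_a(total)

print(heights)  # [3.0, 2.5, 3.0, 2.5, 3.0, 2.5] -- a period-2 oscillation
```

Each pass flips B in and out of the skyline, exactly the "leap in heights" described above, which is why a stopping heuristic (settle on a good-enough value after N passes) is needed rather than a convergence guarantee.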