On Wed, 1 Oct 2008, Elliott Hird wrote:
> On 1 Oct 2008, at 19:05, Bayes wrote:
>
>> 
>> I submit the following proposal, titled "No spring ii office and" (AI=1):
>> {{{
>> 
>> If proposal 5111 was adopted, amend rule 1871 by adding the following 
>> information:
>> with this text:
>>  the sum of the source and destination are the nominees, quorum is 1/2 the 
>> number of VCs is restricted to players.
>> [snip]
>
> This is generated by an order-4 Markov chain (which acts as order-2 because
> spaces are considered tokens). It just runs from a special START token to an
> END one. Anyone have any good ideas for shortening the proposals it outputs?
> It's kind of irritating, because the only ways I can think of basically halt
> it after a while (without reaching END), so the output just gets cut off
> mid-sentence.

Up the probabilities of reaching END, or otherwise weight as preferable
the steps that lead to higher probabilities of END?  (This may not be too
hard to generate from the transition matrix.)  It might better model the
writing principle of "succinct whenever possible, long when necessary,"
which more closely mirrors a human editing process.

Or perhaps a non-stationary transition matrix that increases movement
toward END as the sentence lengthens (a reflection of the human
"avoid run-on sentences" decision).
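The non-stationary variant might look like the following sketch, where END's
weight is scaled up by the number of tokens emitted so far. Again, the chain
and the `growth` parameter are invented for illustration only.

```python
import random

START, END = "<START>", "<END>"

# Toy transition counts (hypothetical, not trained on anything real).
chain = {
    START: {"if": 2, "the": 3},
    "if": {"the": 2},
    "the": {"proposal": 2, "rule": 1},
    "proposal": {"passes": 1, END: 1},
    "passes": {"the": 1, END: 1},
    "rule": {"applies": 1, END: 1},
    "applies": {"the": 1, END: 2},
}

def generate(growth=0.5, seed=None):
    """Length-dependent stop rule: END's weight is scaled by
    (1 + growth * tokens_so_far), so termination pressure rises
    as the sentence lengthens."""
    rng = random.Random(seed)
    out, state = [], START
    while state != END:
        bias = 1.0 + growth * len(out)
        tokens = list(chain[state])
        weights = [chain[state][t] * (bias if t == END else 1.0)
                   for t in tokens]
        state = rng.choices(tokens, weights=weights)[0]
        if state != END:
            out.append(state)
    return " ".join(out)
```

With `growth=0` this reduces to the plain stationary chain; larger values
push long walks toward END sooner, which is the "avoid run-on sentences"
behavior described above.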

Another possibility is to develop a second-pass editing capability over the
output of the initial chain.

And long proposals happen when humans write them anyway.

-Goethe
