In the business of character encoding, it's not helpful to try to construct algorithmic rules that lead from one set of conditions to the state of "encoded". It just doesn't work that way.

What does work is to think of factors, or criteria, that you can use in weighing a question. Certain factors weigh in favor of encoding, others don't (or carry large negative weights; logos currently have infinite negative weights :) ).
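Just to make the contrast concrete, here's a deliberately toy sketch of the weighing idea, in Python. The criteria names and weights are invented for illustration only; they are not taken from the Principles and Procedures document or any committee practice. The point is simply that evidence shifts a total weight rather than flipping a switch, and only a hard disqualifier (like being a logo) settles the question by itself.

    from math import inf

    # Invented, purely illustrative criteria. Positive weights argue for
    # encoding, negative weights argue against; none is a pass/fail test.
    criteria = {
        "attested_in_running_text": +3.0,
        "used_by_identifiable_community": +2.0,
        "unifiable_with_existing_character": -4.0,
        "is_a_logo": -inf,   # logos currently carry infinite negative weight
    }

    def weigh(evidence):
        """Sum the weights of whichever criteria the evidence satisfies."""
        return sum(w for name, w in criteria.items() if evidence.get(name, False))

    # A symbol with textual attestation and a user community still ends up
    # at -inf once it turns out to be a logo.
    print(weigh({"attested_in_running_text": True,
                 "used_by_identifiable_community": True,
                 "is_a_logo": True}))   # -inf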

Many of these criteria managed to get written down in the Principles and Procedures document and have been helping Unicode and WG2 decide encoding questions. Others are still mainly present in the collective consciousness of the encoding committee. Such is life.

What's not helpful is for outside observers to propound theories of encoding that are seemingly based on more algorithmic foundations, or that embody more rigid or formulaic requirements for this, that, and the other thing.

It's not that meeting certain requirements isn't helpful in advancing the case for encoding a character or symbol, but rather that it works only by increasing the weight in favor, not by flipping a switch up or down. It's really important not to mischaracterize the nature of the character encoding business in this way.

That's all I want to contribute to the current thread.

A./
