Should the Unicode Consortium decide to recommend an existing (or new)
character as a raised decimal point for numbers, we would add it to CLDR and
recommend that implementations accept either that character or the current
separator as equivalent when parsing.
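As a sketch of what treating the two separators as equivalent could look like
when parsing (the class name is mine, and U+00B7 MIDDLE DOT merely stands in
for whatever character might eventually be recommended), a lenient parser can
fold the raised point into the locale's decimal separator before handing the
string to a stock routine:

import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.text.ParseException;
import java.util.Locale;

public class LenientDecimalParse {
    // Placeholder: U+00B7 MIDDLE DOT stands in for a recommended raised decimal point.
    private static final char RAISED_POINT = '\u00B7';

    public static Number parse(String text, Locale locale) throws ParseException {
        DecimalFormatSymbols symbols = DecimalFormatSymbols.getInstance(locale);
        // Treat the raised point as equivalent to the locale's decimal separator.
        String normalized = text.replace(RAISED_POINT, symbols.getDecimalSeparator());
        return new DecimalFormat("#,##0.###", symbols).parse(normalized);
    }

    public static void main(String[] args) throws ParseException {
        System.out.println(parse("3·14159", Locale.UK)); // prints 3.14159
        System.out.println(parse("3.14159", Locale.UK)); // prints 3.14159
    }
}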


Mark <https://plus.google.com/114199149796022210033>
— The best is the enemy of the good —


On Sun, Mar 10, 2013 at 10:39 AM, Richard Wordingham <richard.wording...@ntlworld.com> wrote:

> On Sat, 9 Mar 2013 18:58:45 -0700
> "Doug Ewell" <d...@ewellic.org> wrote:
>
> > Richard Wordingham wrote:
>
> > > The general feeling seems to be that computers don't do proper
> > > decimal points, and so the raised decimal point is dropping out of
> > > use.
>
> > Any discussion of whether "computers" handle decimal points properly
> > can't happen without talking about number-to-string conversion
> > routines in programming languages and frameworks.
>
> The question is what users will demand. Expectations have been low
> enough that the loss of the raised decimal point has been accepted.
> Additionally, striving for an apparently hard-to-get raised decimal
> point risks being forced to use an achievable decimal comma.
>
> > Conversion routines are often able to choose between full stop and
> > comma as the decimal separator, based on locale, but I'm not aware of
> > any that will use U+00B7.
>
> > The same is true for using U+2212, or even U+2013, as the "negative"
> > sign instead of U+002D, which looks just terrible for this purpose in
> > many fonts.
>
> U+2212 is not necessary for English (see the CLDR exemplar characters), so
> CLDR policy (if not rules) does not allow it in CLDR conversion rules.
> I'm feeling lucky that I've got away with using it in documents for a
> few years now, but maybe I've only succeeded because we've been cutting
> and pasting from a Unicode-aware environment (Windows) to an 8-bit
> environment (ill-maintained Solaris, hated by management).
>
> Richard.
>
>
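For what it's worth, the stock java.text routines can at least be overridden
by hand to use these characters, even though no locale data selects them by
default. A minimal sketch (the locale and pattern here are arbitrary, and this
says nothing about what CLDR itself supplies):

import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

public class RaisedPointFormat {
    public static void main(String[] args) {
        // Start from an English locale, then override the symbols by hand;
        // no stock locale picks these characters for you.
        DecimalFormatSymbols symbols = DecimalFormatSymbols.getInstance(Locale.UK);
        symbols.setDecimalSeparator('\u00B7'); // MIDDLE DOT as a raised decimal point
        symbols.setMinusSign('\u2212');        // MINUS SIGN instead of HYPHEN-MINUS

        DecimalFormat format = new DecimalFormat("#,##0.###", symbols);
        System.out.println(format.format(-1234.5)); // prints −1,234·5
    }
}

A format built from the same symbols will also parse such strings back, so the
override at least round-trips within a single program.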
