ES6 introduces String.prototype.codePointAt() and
String.fromCodePoint() as well as an iterator (not defined). It struck
me that this is the only place in the platform where we'd expose the
code point as a concept to developers.

Nowadays strings are either sequences of 16-bit code units
(JavaScript, DOM, etc.) or sequences of Unicode scalar values (anytime
you hit the network and use UTF-8).
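(To make the distinction concrete, here is a small sketch using the
ES6 APIs above plus TextEncoder — the latter is from the Encoding
Standard, not ES6, and is just used here to show the scalar-value
view on the wire.)

```javascript
// U+1F4A9 is a single code point, but two 16-bit code units in JS.
const s = "\u{1F4A9}";
console.log(s.length);         // 2 — .length counts code units
console.log(s.charCodeAt(0));  // 55357 (0xD83D, the high surrogate code unit)
console.log(s.codePointAt(0)); // 128169 (0x1F4A9, the whole code point)

// On the network, UTF-8 encodes only Unicode scalar values:
console.log([...new TextEncoder().encode(s)]); // [240, 159, 146, 169]
```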

I'm not sure I'm a big fan of having all three concepts around. We
could have String.prototype.unicodeAt() and String.fromUnicode()
instead, and have them translate lone surrogates into U+FFFD. Lone
surrogates are a bug, and I don't see a reason to expose them in more
places than just the 16-bit code units.
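(A minimal sketch of what such a unicodeAt() might do — the name and
the free-function shape are hypothetical, not a spec proposal; the
point is only the lone-surrogate-to-U+FFFD translation.)

```javascript
// Hypothetical: like codePointAt(), but lone surrogates come back as U+FFFD.
function unicodeAt(str, index) {
  const cp = str.codePointAt(index);
  if (cp === undefined) return undefined;
  // Code points in [U+D800, U+DFFF] are surrogates. codePointAt() only
  // returns one when it is unpaired, so map it to the replacement character.
  return cp >= 0xD800 && cp <= 0xDFFF ? 0xFFFD : cp;
}

console.log(unicodeAt("a", 0));         // 97
console.log(unicodeAt("\u{1F4A9}", 0)); // 128169 — a valid scalar value
console.log(unicodeAt("\uD800", 0));    // 65533 — lone surrogate → U+FFFD
```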


-- 
http://annevankesteren.nl/
_______________________________________________
es-discuss mailing list
[email protected]
https://mail.mozilla.org/listinfo/es-discuss