Sorry if this has already been brought up (I searched and didn't find anything
close to it).
In my last months of work with JavaScript, what I miss most in ES5
syntax is:
1. A syntax shortcut for '.prototype'. Instead of writing
String.prototype.trim I'd love to be able to write, for example,
This request is the very definition of little things that go a long way. I
write a hell of a lot of code that boils down
to Function.prototype.bind(Function.prototype.call/apply,
Somebuiltin.prototype.method). The fact that there's no builtin way to
accomplish `string.split('\n').map(String.split)`
Error in the example; it should be: `string.split('\n').map(String.trim)`
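The pattern the post describes can be written out as follows; this is a sketch of the "borrow call via bind" idiom, with illustrative variable names, not a proposed API:

```javascript
// Binding Function.prototype.call to a prototype method yields a
// standalone function that takes its receiver as the first argument.
var trim = Function.prototype.call.bind(String.prototype.trim);

// Now trim(s) behaves like s.trim(), so it can be passed to map.
var lines = "  first \n  second  ".split("\n").map(trim);
// lines is ["first", "second"]
```

Note that passing `trim` to `map` works even though `map` also supplies index and array arguments, since `String.prototype.trim` ignores extra arguments.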
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss
The established way of doing this is [].forEach, ''.trim, {}.valueOf. I imagine
that by now there would be no performance penalty anymore, because most
engines are aware of this (ab)use. But it is indeed not very
intention-revealing. It might make sense to wait with this proposal until
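The literal-borrowing idiom mentioned above looks like this in practice; the function below is just an illustrative example, assuming a pre-ES6 environment where `arguments` is the usual array-like:

```javascript
// A throwaway array literal reaches Array.prototype.forEach, and .call
// applies it to an array-like value (here, the arguments object).
function doubled() {
  var out = [];
  [].forEach.call(arguments, function (n) { out.push(n * 2); });
  return out;
}

var result = doubled(1, 2, 3);
// result is [2, 4, 6]
```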
I would ask as an exploratory idea: is there any interest in, and what
problems exist with, exposing most {Builtin}.prototype.* methods as unbound
functional {Builtin}.* functions? Or failing that, a more succinct
expression for the following:
Function.prototype.[call/apply].bind({function}).
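The expression above generalizes into a small helper; the name `uncurryThis` is illustrative only, not part of any proposal in this thread:

```javascript
// Wrap any prototype method so its receiver becomes an explicit
// first argument.
function uncurryThis(method) {
  return Function.prototype.call.bind(method);
}

var slice = uncurryThis(Array.prototype.slice);
var head = slice([10, 20, 30], 0, 2);
// head is [10, 20]
```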
On 21 February 2012 13:59, Brandon Benvie bran...@brandonbenvie.com wrote:
I would ask as an exploratory idea: is there any interest in, and what
problems exist with, exposing most {Builtin}.prototype.* methods as unbound
functional {Builtin}.* functions? Or failing that, a more succinct
On 02/20/12 16:47, Brendan Eich wrote:
Andrew Oakley wrote:
Issues only arise in code that tries to treat a string as an array of
16-bit integers, and I don't think we should be particularly bothered by
performance of code which misuses strings in this fashion (but clearly
this should still
On 21 February 2012 00:03, Brendan Eich bren...@mozilla.com wrote:
These are byte-based encodings, no? What is the problem with inflating them by
zero extension to 16 bits now (or 21 bits in the future)? You can't make an
invalid Unicode character from a byte value.
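The zero-extension step described above can be sketched like this; a minimal illustration, assuming bytes arrive as an array of values 0-255:

```javascript
// Each byte value maps directly to a code unit with the same numeric
// value; every value 0-255 is a valid code point, so no invalid
// character can result from this inflation.
function bytesToString(bytes) {
  return String.fromCharCode.apply(null, bytes);
}

var s = bytesToString([0x48, 0x69, 0xFF]); // "Hi" plus U+00FF
// s.charCodeAt(2) is 255
```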
One of my examples, GB 18030,
There is a proposal for making available existing functions via modules in
ES6:
http://wiki.ecmascript.org/doku.php?id=harmony:modules_standard
If there are methods missing from this list that can reasonably be
used as stand-alone functions, then I'm sure nobody will object to
adding
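A userland approximation of the standard-modules idea above might look like the following; the namespace object and the chosen method names are just examples, not anything from the proposal page:

```javascript
// Expose selected String.prototype methods as standalone functions
// on a namespace object, using the call.bind pattern from earlier
// in the thread.
var StringFns = {};
["trim", "toUpperCase", "charAt"].forEach(function (name) {
  StringFns[name] = Function.prototype.call.bind(String.prototype[name]);
});

var shout = StringFns.toUpperCase("quiet");
// shout is "QUIET"
```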
Andrew Oakley wrote:
On 02/20/12 16:47, Brendan Eich wrote:
Andrew Oakley wrote:
Issues only arise in code that tries to treat a string as an array of
16-bit integers, and I don't think we should be particularly bothered by
performance of code which misuses strings in this fashion (but
Brendan Eich wrote:
in open-source browsers and JS engines that use uint16 vectors internally
Sorry, that reads badly. All I meant is that I can't tell what
closed-source engines do, not that they do not comply with ECMA-262
combined with other web standards to have the same observable
Normalization happens to source upstream of the JS engine. Here I'll call on a
designated Unicode hitter. ;-)
I agree that Unicode Normalization shouldn't happen automagically in the JS
engine. I rather doubt that normalization happens to source upstream of the JS
engine, unless by
Phillips, Addison wrote:
Normalization happens to source upstream of the JS engine. Here I'll call on a
designated Unicode hitter. ;-)
I agree that Unicode Normalization shouldn't happen automagically in the JS engine. I rather doubt that
normalization happens to source upstream of the JS
I meant ECMA-262 punts source normalization upstream in the spec pipeline
that runs parallel to the browser's loading-the-URL | processing-what-was-
loaded pipeline. ECMA-262 is concerned only with its little slice of
processing
heaven.
Yep. One of the problems is that the source script
Because it has always been possible, it’s difficult to say how many scripts
have transported byte-oriented data by “punning” the data into strings.
Actually, I think this is more likely to be truly binary data rather than text
in some non-Unicode character encoding, but anything is possible, I
Phillips, Addison wrote:
Because it has always been possible, it’s difficult to say how many
scripts have transported byte-oriented data by “punning” the data into
strings. Actually, I think this is more likely to be truly binary data
rather than text in some non-Unicode character encoding,
On Feb 21, 2012, at 7:37 AM, Brendan Eich wrote:
Brendan Eich wrote:
in open-source browsers and JS engines that use uint16 vectors internally
Sorry, that reads badly. All I meant is that I can't tell what closed-source
engines do, not that they do not comply with ECMA-262 combined with
On Tue, Feb 21, 2012 at 3:11 PM, Brendan Eich bren...@mozilla.com wrote:
Hi Mark, thanks for this post.
Mark Davis ☕ wrote:
UTF-8 represents a code point as 1-4 8-bit code units
1-6.
...
Lock up your encoders, I am so not a Unicode guru but this is what my
reptile coder brain remembers.
Hi Mark, thanks for this post.
Mark Davis ☕ wrote:
UTF-8 represents a code point as 1-4 8-bit code units
1-6.
No. 1 to *4*. Five and six byte UTF-8 sequences are illegal and invalid.
UTF-16 represents a code point as 2 or 4 16-bit code units
1 or 2.
Yes, 1 or 2 16-bit code
Thanks, all! That's a relief to know; six bytes always seemed too long,
but my reptile coder brain was also reptile-coder-lazy and I never dug
into it.
/be
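The corrected counts above can be checked with a supplementary character; this sketch uses `TextEncoder`, a modern Web API well beyond the ES5-era context of this thread:

```javascript
// U+1F600 is a supplementary character: it takes 2 UTF-16 code units
// (a surrogate pair) and 4 UTF-8 bytes.
var face = "\uD83D\uDE00"; // surrogate pair encoding U+1F600

var utf16Units = face.length;                          // 2
var utf8Bytes = new TextEncoder().encode(face).length; // 4
```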
Phillips, Addison wrote:
Hi Mark, thanks for this post.
Mark Davis ☕ wrote:
UTF-8 represents a code point as 1-4 8-bit code units
1-6.
I'll reply to Brendan's proposal in two parts: first about the goals for
supplementary character support, second about the BRS.
Full 21-bit Unicode support means all of:
* indexing by characters, not uint16 storage units;
* counting length as one greater than the last index; and
*
On Feb 21, 2012, at 6:05 PM, Norbert Lindenberg
ecmascr...@norbertlindenberg.com wrote:
I'll reply to Brendan's proposal in two parts: first about the goals for
supplementary character support, second about the BRS.
Full 21-bit Unicode support means all of:
* indexing by characters, not
Second part: the BRS.
I'm wondering how development and deployment of existing full-Unicode software
will play out in the presence of a Big Red Switch. Maybe I'm blind and there
are ways to simplify the process, but this is how I imagine it.
Let's start with a bit of code that currently