Hi Marcos,

I have a few more comments on your comments below, but here's the most 
important reason for exceptions: They are the prerequisite for compatibly 
introducing new values over time.

The issue is: If we assign a default behavior to values that aren't explicitly 
described in the spec, then applications may come to depend on this default 
behavior. There've been many cases in the past where TC 39 couldn't introduce 
new behavior because there was an established behavior for a construct and a 
concern that applications depended on it. The case where TC 39 generally agrees 
that behavior can be changed is when the current behavior (both specified and 
implemented) is to throw an exception.

For example, we recently decided to introduce a new Unicode character escape 
sequence of the form \u{xxxxxx} into the language for ES6. For identifiers and 
string literals, there was no compatibility concern, because in both cases the 
ES5 specification required throwing exceptions for such character sequences, 
and implementations followed the spec. For regular expression literals, even 
though the specification required throwing an exception, implementations had 
consistently assigned a different meaning to such character sequences (/\u{10}/ 
is interpreted as /u{10}/). TC 39 therefore decided that the new Unicode 
character escape sequences could be supported in regular expression literals 
only under a new flag.
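
To make that concrete, here's a rough sketch of the two readings (assuming the 
new escapes end up behind a "u" flag, which is what has been discussed - the 
exact surface syntax is still being finalized):

// Legacy reading, which implementations already ship: \u on its own is an
// identity escape for "u", so \u{10} means the character "u" exactly 10 times.
/\u{10}/.test("uuuuuuuuuu")      // true

// New reading, only under the new flag: \u{10} is a code point escape for
// U+0010, so the existing meaning stays untouched without the flag.
/\u{10}/u.test("\u0010")         // true where the flag is supported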

In addition, it doesn't seem that the Internationalization API is alone in 
using exceptions. WebIDL provides enumeration types, defined as sets of strings 
just like we use them, and in its ECMAScript binding specifies that unknown 
strings result in a TypeError exception (slightly different from the RangeError 
exceptions we use). An enumeration is used, for example, in the TextTrack API 
within HTML5.
http://www.w3.org/TR/WebIDL/#idl-enums
http://www.w3.org/TR/WebIDL/#es-enumeration
http://dev.w3.org/html5/spec/single-page.html#texttrackmode
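
As a purely hypothetical illustration of that pattern (using the enumerated 
kind argument of the TextTrack API above; this is not an example from either 
spec):

var video = document.querySelector("video");
video.addTextTrack("subtitles");   // member of the enumeration - fine
video.addTextTrack("bananas");     // unknown string - a TypeError per the ES binding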

Norbert


On Sep 3, 2012, at 5:40 , Marcos Caceres wrote:

(accidentally left out es-discuss when I responded to Norbert… response is 
below)

On Monday, 3 September 2012 at 13:20, Marcos Caceres wrote:

> Hi Norbert,  
> 
> On Saturday, 1 September 2012 at 00:31, Norbert Lindenberg wrote:
> 
>> On Aug 31, 2012, at 7:17 , Marcos Caceres wrote:
>> 
>> The way I understand it, backwards compatibility means that code that runs 
>> on an old version without exceptions continues to run on the new version 
>> with the same results. It does not mean that code that takes advantage of 
>> new capabilities in the new version will get the same results on the old 
>> version.
> 
> You are correct, let's call this graceful degradation or fault tolerance. I 
> guess it does not matter what we call it, so long as it's understood that the 
> desired behaviour is that the API continues to work on old runtimes when new 
> options are made available, without significant code changes (such as 
> wrapping code in try/catch blocks).

I understand why this may be desirable, but I don't think it's a good model for 
API evolution.

>>> Bogus hypothetical example - ES i18n.next introduces "formal", where the 
>>> day always has the first letter capitalised:
>>> 
>>> {day: "formal"}
>> 
>> 
>> Let's assume weekday - the day property currently has only "numeric" and 
>> "2-digit", therefore no letters.
>> 
>>> Because of this throw behaviour, the introduction of "formal" in a future 
>>> spec would potentially break all existing implementations today. If this 
>>> behaviour is left as is, (1) all calls to the i18n API will need to be 
>>> wrapped in try/catch (as a preemptive measure as the introductions of a new 
>>> option value would cause a throw). As a side effect, (2) users on old 
>>> browsers could be negatively impacted in the future without the knowledge 
>>> of the developer.
>> 
>> 
>> Applications only have to wrap their calls in try/catch if they use a new 
>> option value and want to run on old browsers that may not understand the new 
>> value.
> 
> The problem with the above is that the developer may not even know that the 
> API throws - if the developer runs their code only on browsers that support 
> "formal", then she would never know there was a problem. The above also 
> assumes the code is being maintained by the developer. The developer may 
> choose not to care about particular users or may not have access to (or even 
> be aware of) their application failing on particular browsers.  
> 
> As I've stated before, the current API design is allowing developers to 
> unknowingly exclude users. By simply providing a fallback, this problem goes 
> away.  

This problem goes away, but at the same time it becomes harder to evolve the 
API in the first place, and for developers who know what they're doing to take 
advantage of it.

>> It's similar to how they have to check whether a new API exists if they want 
>> to run on browsers that may not have that API yet.
> 
> I don't think the API design should rely on workarounds that have emerged to 
> overcome issues with the Web platform (i.e., feature testing and duck typing 
> are in many ways a failing of the platform, and not something that should be 
> built on, IMHO). APIs should not rely on having to be wrapped in support 
> checking code as it makes code much more inefficient, harder to read, and 
> less maintainable. Compare:
> 
> // always gonna get something the user understands…
> var ldate = (new Date()).toLocaleString("en", {day: "formal"});
> 
> To:
> 
> if (window.Intl !== undefined) {
>   try {
>     var ldate = (new Date()).toLocaleString("en", {day: "formal"});
>   } catch (e) {
>     // ok, gotta try something else…
>     ...
>   }
> }

But then, what's your alternative for determining the capabilities of an 
evolving platform? The other model I know is platform version numbers, which 
works to some extent for monolithic platforms that get upgraded as a whole 
every few years (Java, Windows, etc.). It doesn't work for the web, with its 
multiple implementations of multiple specifications, all evolving at different 
schedules and with different feature sets for each release.

> Also, feature detection is usually done at the most basic level: like 
> window.hasOwnProperty("Intl"); and not try/catch on every possible option in 
> the API. Take a look at Modernizr:
> 
> http://modernizr.com/downloads/modernizr.js
> 
> Most tests are pretty simple (in a good way), insofar as most of them just 
> test "if (x in y) return true/false". In most cases, actual functionality is 
> not tested - just the presence of a given host object, method, or attribute.

Simple tests are adequate to determine available functionality at a high level 
(is there an Intl.DateTimeFormat?); if you want more detail, I assume tests 
will in general get more complicated.
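
For example (a sketch only - "formal" is the hypothetical value from your 
example, and for day names the relevant property would be weekday):

// High-level test: is the API there at all?
var hasIntl = typeof Intl !== "undefined" &&
              typeof Intl.DateTimeFormat === "function";

// Detailed test: does this implementation understand a hypothetical "formal"
// value? Since an unknown option value throws a RangeError, the detailed
// test needs a try/catch.
var hasFormal = false;
if (hasIntl) {
  try {
    var f = new Intl.DateTimeFormat("en", {weekday: "formal"});
    hasFormal = f.resolvedOptions().weekday === "formal";
  } catch (e) {
    // this implementation doesn't know "formal"
  }
}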

>> Applications probably also would not have to wrap all calls - I think modern 
>> applications run a sequence of tests early on to detect which of the 
>> features they want to use are supported, and then adjust their behavior 
>> accordingly (polyfill, dumb down, ask the user to upgrade, ...).
> 
> This assumes a lot of sophistication on the part of developers. History shows 
> that most developers don't do unit testing or any other kind of testing.  
> 
> The API should make no assumptions about the code around it - the assumed 
> context should not matter (i.e., wrapped in try/catch or not should not 
> factor into the API design). To be clear: the API should be designed to be 
> testable, but it should be assumed that it will *not* be tested.  

API design always makes assumptions about how the API is going to be used. 
Assuming that developers will not use try/catch is just a different assumption 
than we've been making.

>> In this specific case, checking whether "formal" is supported also lets 
>> applications decide what to do if "formal" is not supported.
> 
> This assumes that runtimes are the same across platforms or that the 
> developer has access to all the platforms on which she is required to test.  

No. All that's required to test this is one implementation that supports 
"formal" and one that doesn't. These can be an old and a new version of the 
same browser, or different browsers on the same platform.

>> Most likely, it would choose "long" instead because it's closer to "formal" 
>> than "numeric", which would be the default used by the implementation.
> 
> The choice of value is not really important here. What is important is that 
> the API continue to return something useful to the user irrespective of the 
> developer's fumble.  
>>> For the case of (1), developers may not even be aware that the API throws 
>>> (or has thrown) for a particular set of users using an outdated browser - I 
>>> personally found that it throws by accident. This would lead to (2), which 
>>> unfairly punishes users without the developer's knowledge - causing the 
>>> application to potentially stop working altogether if the exception is 
>>> not caught.
>> 
>> Web developers generally need to be aware that there are different browser 
>> versions with different capabilities - that's not new. They decide whether 
>> IE 6 is the baseline or IE 9, and then check for features that were 
>> introduced after the baseline version.
> 
> That's simply impossible to do in most cases and puts a massive financial and 
> technological burden on developers: imagine you had to own a copy of Windows 
> XP, Vista, 7, and Windows 8… as well as then have a Mac and a machine to test 
> Linux, as well as a range of iOS devices, Android Devices, Windows Phone 7 
> and 8, a Blackberry, etc. etc. The number of browser engines and environments 
> that ES is being used on continues to grow, so this task simply becomes 
> impossible - which is also why I argue that the API needs to be more fault 
> tolerant.  

That's a problem in general, but not specific to this API.

> The above also assumes that the software is being maintained: what happens if 
> the developer adds "formal" then stops maintaining the software, but it's 
> still used by people?  

Over time more implementations will support "formal", and the problem goes away.

> This is also compounded by the fact that release cycles are changing - it is 
> impossible to test every possible version of Chrome and Firefox since they 
> started their rapid release cycle… To make matters worse, if a particular 
> browser vendor decides tomorrow to abandon support for Windows XP, that 
> potentially leaves millions of people out in the cold… I don't think IE10 
> will be supported by XP. Fast-forward 5-10 years, Windows 8 becomes the new 
> Windows XP - and IE10 will be the new "IE6" with millions stuck on that 
> version.  
> 
> And from a user's perspective, those who can't afford the latest and greatest 
> software (or who have no option to update their browsers, as on iOS) risk 
> being deliberately or accidentally excluded by developers.

Apple has upgraded the browser on my iOS devices several times in the past, and 
I expect another upgrade this month. But they've stopped upgrading my 2008 iPod 
Touch, so I know what you mean.

> Also, setting a baseline is a bad practice (and is fundamentally against 
> the core principles of "one Web"): developers should never target a 
> particular baseline - they unfortunately have to do this sometimes, but it's 
> certainly not something that should be assumed in the design of an API.  

Well, for this API the baseline is the upcoming implementations that will 
provide the API for the first time. If you want your web application to run on 
Tim Berners-Lee's NeXT machine of 1991, you should not expect the API to be 
there. Come to think of it, you shouldn't expect JavaScript to be there 
either...

>>> IMHO, falling back to defaults, or by ignoring bogus values, would make the 
>>> API degrade gracefully without negatively impacting users (that is 
>>> something I really like about the API today - it does that if you feed it a 
>>> bogus language tag or a bogus option it does not understand). I think it 
>>> should be the same for option values.
>> 
>>> I would also argue that having the above throw is inconsistent with the 
>>> rest of the API. The API already provides nice fallbacks to defaults in 
>>> most cases. For example, today (without i18n API support), calling any of 
>>> the to*LocaleString() simply "just works" - overcoming side-effect 2 above 
>>> and not punishing users with an exception. I think most developers would 
>>> expect that kind of fallback behaviour - you are guaranteed to get a 
>>> date/time/number that the user will understand (even if it's not as pretty 
>>> as a custom formatted one):
>>> 
>>> x.toLocaleTimeString("en", {foo: "bar"})
>>> //"12:00:00 AM"
>>> 
>>> x.toLocaleTimeString("klingon", {foo: "bar"})
>>> 
>>> //"12:00:00 AM"
>>> 
>>> …and so on… which is a great graceful fallback behaviour.
>> 
>> 
>> Yes and no. For structurally invalid language tags, it will throw. For 
>> structurally valid tags that it just doesn't recognize, it will use 
>> fallbacks. And there has been a lot of discussion about these fallbacks 
>> between some internationalizers who want a lot of flexibility to do what 
>> they think is best for the user, and TC 39 members who want behavior fully 
>> specified so that it's predictable and consistent across implementations.
> 
> I personally think the behaviour of the API with regards to language tags is 
> reasonable because it checks language-tag structure, but does not throw when 
> the browser does not know the language: i.e., it behaves in a future proof 
> manner that degrades gracefully. This is exactly how I think the options 
> should work too for unknown options: so long as the option is a String, it 
> should behave the same way as with language tags - i.e., fall back to 
> something gracefully when the value is unknown.  
>>> If the spec authors want to "warn" developers that they've used an 
>>> unsupported option value, maybe put into the spec a call to "console.warn()" 
>>> or similar, if available. That allows developers to know they've used 
>>> something unsupported, unknown, or misspelled without punishing users - 
>>> Mobile Safari and Chrome already do this for meta-viewport values, so there 
>>> is some helpful precedent here. Yes, relying on "console.warn" is 
>>> non-standard, but this could just be specified as non-normative text by 
>>> saying, where appropriate in the spec:
>>> 
>>> "Optionally, warn the developer that the option value is unsupported by, 
>>> for example, calling console.warn() or similar if available to the 
>>> implementation."
>>> 
>>> I think all modern browsers support the console object, and so does Node.js.
>> 
>> 
>> ECMAScript is used in more environments than just browsers and Node.js, and 
>> TC 39 is generally staying away from making any specific requirements on the 
>> host environments. Clause 16 of the Language spec specifies precisely which 
>> errors must be reported when, but says nothing about how to report them.
> 
> That's fine, but my argument is really about not forcing developer errors 
> onto users (users can't do anything about those problems). It's simple to 
> specify a fallback in the spec that protects users from developer error (and 
> makes developers' lives easier because they can feel confident that the API 
> will always return something the user is likely to understand - as it does 
> today).  
>>> On the other hand, if the developer wants to check whether what they input 
>>> as options is supported by the API, they can call .resolvedOptions() and 
>>> manually verify that all their options were understood by the browser. So:
>>> 
>>> var formatter = new v8Intl.DateTimeFormat(x, {day: "formal"}),
>>>     options = formatter.resolvedOptions();
>>> 
>>> if (options.day !== "formal") {
>>>   // no formal support, need to use something else
>>>   options.day = "long";
>>>   formatter = new v8Intl.DateTimeFormat(x, options);
>>> }
>>> … continue…
>> 
>> Actually, options.day !== "formal" does not mean that the implementation 
>> doesn't understand "formal" - it could also be that the requested locale 
>> happens not to have "formal" day names, while the implementation supports 
>> "formal" and other locales have such names.
> 
> Correct, but the above still holds as, IMO, "the right way"™ for a developer 
> to check if the API understood what the developer wanted (without the need to 
> throw an exception).  
>>> Note that there is precedent in other APIs for not throwing exceptions on 
>>> bogus options. For example, the Geolocation API does not throw in the 
>>> following cases and simply "just works" (tested in Chrome):
>>> 
>>> var l = function(e){console.log(e)};
>>> navigator.geolocation.getCurrentPosition(l, l, {maximumAge:"bananas"});
>>> navigator.geolocation.getCurrentPosition(l, l, {maximumAge:Node})
>>> navigator.geolocation.getCurrentPosition(l, l, {maximumAge:{}})
>> 
>> Is that good? Without looking at the WebIDL spec, what are the maximumAge 
>> values derived from "bananas", Node, or {}? Although, I'm afraid you can 
>> provide the same nonsense in some options of the Internationalization API...
> 
> It's good insofar as it's robust. In all bogus cases, the Geolocation API 
> simply falls back to 0: "If a PositionOptions parameter was present, and its 
> maximumAge attribute was defined to a non-negative value, assign this value 
> to an internal maximumAge variable. If maximumAge was defined to a negative 
> value or was not specified, set the internal maximumAge variable to 0".  
> 
> Thanks for discussing this. Hopefully I've presented a coherent argument as 
> to why I think the change is important and clearly outlined the benefits for 
> both developers and users.  
> 
> Looking forward to this API being finalised and landing in browsers soon! :)  
> 
> Kind regards,
> Marcos




_______________________________________________
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss
