Re: Replacing Gecko's URL parser

2013-07-01 Thread Mike Hommey
On Mon, Jul 01, 2013 at 05:43:01PM +0100, Anne van Kesteren wrote:
> I'd like to discuss the implications of replacing/morphing Gecko's URL
> parser with/into something that conforms to
> http://url.spec.whatwg.org/
> 
> The goal is to bring URL parsing to the level of quality of our CSS and
> HTML parsers and to converge with other browsers over time, since at
> the moment parsing differs considerably between browsers.
> 
> I'm interested in hearing what people think. I outlined two issues
> below, but I'm sure there are more. By the way, independently of the
> parser bit, we are proceeding with implementing the URL API as drafted
> in the URL Standard in Gecko, which should make testing URL parsing
> easier.
> 
> 
> Idempotent: Currently Gecko's parser and the URL Standard's parser are
> not idempotent. E.g. http://@/mozilla.org/ becomes
> http:///mozilla.org/ which when parsed becomes http://mozilla.org/
> which is somewhat bad for security. My plan is to change the URL
> Standard to fail parsing empty host names. I'll have to research
> whether there are other cases that are not idempotent.

Note that some "custom" schemes may be relying on empty host names. In
Gecko, we have about:foo as well as resource:///foo. In both cases, foo
is the path part.
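For illustration, here is how a generic RFC 3986-style splitter (Python's `urllib.parse`, standing in for a scheme-agnostic parser; this is not Gecko code) handles these two schemes. In both cases the host is empty and `foo` lands in the path, which is why a blanket "fail on empty host" rule would need a carve-out:

```python
from urllib.parse import urlsplit

# Both URLs parse with an empty host; "foo" is carried in the path.
about = urlsplit("about:foo")
resource = urlsplit("resource:///foo")

print(repr(about.netloc), repr(about.path))        # '' 'foo'
print(repr(resource.netloc), repr(resource.path))  # '' '/foo'
```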

Mike
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


WebAPI Meeting: Tuesday 2 July @ 10 AM Pacific [1]

2013-07-01 Thread Andrew Overholt

Meeting Details:

* Agenda: https://etherpad.mozilla.org/webapi-meetingnotes
* WebAPI Vidyo room
* A room we can find, San Francisco office
* Spadina conf. room, Toronto office
* Allo Allo conf. room, London office

* Vidyo Phone # +1-650-903-0800 x92 Conference #98413 (US/INTL)
* US Vidyo Phone # 1-800-707-2533 (PIN 369) Conference #98413 (US)

* Join irc.mozilla.org #webapi for back channel

All are welcome.

Andrew

[1]
http://www.timeanddate.com/worldclock/fixedtime.html?msg=WebAPI+meeting&iso=20130702T10&p1=224&am=30 




Re: Making proposal for API exposure official

2013-07-01 Thread Jet Villegas
Don't forget to add your prefixed APIs/properties to this meta bug:
https://bugzilla.mozilla.org/show_bug.cgi?id=775235

We've been very active in getting rid of prefixes as quickly as we can. I love 
that CSS Flexbox shipped unprefixed after testing with an about:config flag for 
several cycles. Let's lose the prefixes, regardless of how "official" our API 
exposure policy is.

--Jet


Re: Making proposal for API exposure official

2013-07-01 Thread Mounir Lamouri
On 26/06/13 18:27, Ehsan Akhgari wrote:
>>>> 2. ecosystem- and hardware-specific APIs that are not standard or of
>>>> interest to the broader web at that time (or ever) may be shipped in
>>>> a way to limit their harm to the broader web (ex. only on a device
>>>> or only in specific builds with clear disclaimers about applicability
>>>> of exposed APIs). An example of this is the FM Radio API for Firefox
>>>> OS.
>>>
>>> When I read this, I read "It is okay to have Mozilla ship a phone with
>>> proprietary APIs". That means that we are okay with Mozilla creating the
>>> situation Apple created on Mobile, a situation that Mozilla has been
>>> criticising a lot. Shipping proprietary APIs on a specific device is
>>> harming the broader Web if that device happens to be one of the
>>> most-used devices out there...
>>
>> The way you read it is obviously not something we want to do.  What if we
>> dropped the "ecosystem-"?  I can't see how we can allow ourselves to
>> ship hardware-specific APIs that don't work everywhere without an
>> exception like this.  Are there situations where we would ship such an
>> API on desktop if there's very little chance of the required hardware
>> existing there?
> 
> I think what Mounir is worrying about is the reverse situation where
> people code against our APIs without knowing that they're not available
> on other devices.  Mounir, do you have the same concern about certified
> Firefox OS APIs?  What about privileged APIs?

No; certified-only and privileged-only APIs are not exposed to the Web, so I
am fine with them not being actively standardised as long as we are not
planning to widen their reach (ultimately all those APIs should be
exposed to the Web).
However, I disagree with the idea that if an API is exposed to a small
enough portion of the Web, it is fine not to worry about standardisation
on the grounds that it is, after all, just a small portion of the Web.
That is what I understand from that exception, and I think it is wrong.

--
Mounir


Re: Making proposal for API exposure official

2013-07-01 Thread Mounir Lamouri
On 26/06/13 17:08, Andrew Overholt wrote:
> On 25/06/13 12:15 PM, Mounir Lamouri wrote:
>> Also, I do not understand why we are excluding CSS, WebGL and WebRTC. We
>> should definitely not make this policy apply retroactively, so existing
>> features should not be affected, but if someone wants to add a new CSS
>> property, it is not clear why it shouldn't go through this process.
> 
> My hope was to get something in place for APIs and then build up to
> other web-exposed "things" like CSS, etc.

In my opinion, CSS, HTML, DOM, WebGL, WebRTC and other Web APIs should
follow those rules. I do not see why CSS, for example, should be an
exception.

>> "ship" is too restrictive. If a feature is implemented and available in
>> some version (even behind a flag) with a clear intent to ship it at some
>> point, this should be enough for us to follow.
> 
> I changed it to "at least two other browser engines ship (regardless if
> it's behind a flag or not in their equivalent of beta or release) -- a
> compatible implementation of this API ".  How's that?  I don't want to
> see us basing our decision to ship on another engine's use of their
> nightly equivalent for experimentation (whether this happens right now
> or not).  Am I worried for no reason?

As Henri said, we should make sure that there is a genuine intent to
ship if a feature is implemented in a browser (even behind a flag).
Reaching out to the other vendors in that case should be easy.

>>> 2. ecosystem- and hardware-specific APIs that are not standard or of
>>> interest to the broader web at that time (or ever) may be shipped in
>>> a way to limit their harm to the broader web (ex. only on a device
>>> or only in specific builds with clear disclaimers about applicability
>>> of exposed APIs). An example of this is the FM Radio API for Firefox
>>> OS.
>>
>> When I read this, I read "It is okay to have Mozilla ship a phone with
>> proprietary APIs". That means that we are okay with Mozilla creating the
>> situation Apple created on Mobile, a situation that Mozilla has been
>> criticising a lot. Shipping proprietary APIs on a specific device is
>> harming the broader Web if that device happens to be one of the
>> most-used devices out there...
> 
> The way you read it is obviously not something we want to do.  What if we
> dropped the "ecosystem-"?  I can't see how we can allow ourselves to
> ship hardware-specific APIs that don't work everywhere without an
> exception like this.

If this exception is only about mobile or hardware-specific APIs, we
might as well remove it. If we do not standardise things like the FM
Radio API, it is not really because it requires an FM radio (a lot of
phones have this feature) but mostly because no one else wants it for
the moment.

> Are there situations where we would ship such an
> API on desktop if there's very little chance of the required hardware
> existing there?

Indeed, we would not ship an API on desktop if it doesn't work on
desktop, but I am not following the logic here. If an API works only on
mobile, it should be standardised as well. A good example is the
Screen Orientation API.

>>> Declaring Intent
>>> API review
>>> Implementation
>>> Shipping
>>
>> I think some clarifications are needed in those areas.
> 
> I changed the section headers to:
> 
> Declaring Intent to Implement
> API review
> Implementation
> Intent to Ship and Shipping
> 
> How's that?

I didn't mean the names but the content of those sections ;)

>> The issue with having "dev-platform" finding a consensus with intent
>> emails is that we might end up in an infinite debate. In that case, we
>> should use the module system and have the module owner(s) of the
>> associated area of code make the decision. If the module owner(s) can't
>> take this decision, we could go upward and ask Brendan to make it.
> 
> I admit I didn't think much about "dev-platform" coming to a consensus.
>  I guess I'd like major disputes to be handled on a case-by-case basis
> and not have to define what should be done in the infinite discussion
> situations.  Maybe we should just be more forthcoming as reviewers or
> module owners about something we wouldn't want to ship and thus save
> potential implementors' time.

I would bet that consensus is going to be reached most of the time. We
used to discuss a lot of things in dev-webapi and we never ended up in
infinite discussions.

--
Mounir


Re: Replacing Gecko's URL parser

2013-07-01 Thread Anne van Kesteren
On Mon, Jul 1, 2013 at 6:58 PM, Benjamin Smedberg  wrote:
> Currently protocol handlers are extensible and so parsing is spread
> throughout the tree. I expect that extensible protocol handling is a
> non-goal, and that there are just a few kinds of URI parsing that we need to
> support. Is it your plan to replace extensible parsing with a single
> mechanism?

Yes. Basically all non-blessed schemes would end up with scheme,
scheme data, query, and fragment components (blessed schemes would
also have username, password, host, port, and path segments). Any
further parsing would have to be done through scheme-specific
processing and cannot cause URL parsing to fail. More concretely,
data:blah would not fail the URL parser, but http://test:test/ would.

The idea here is to provide consistency with regards to URL parsing as
far as it's exposed to the web and to remain compatible with the
current web.
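A rough sketch of that split for non-blessed schemes (a hypothetical Python illustration, not the spec's algorithm; real parsing also handles percent-encoding and much more):

```python
def split_simple_url(url):
    """Split a non-blessed-scheme URL into the four components described
    above: scheme, scheme data, query, and fragment."""
    scheme, _, rest = url.partition(":")
    rest, _, fragment = rest.partition("#")  # fragment comes last
    data, _, query = rest.partition("?")
    return {"scheme": scheme, "data": data, "query": query, "fragment": fragment}

print(split_simple_url("data:text/plain,blah?q#top"))
# {'scheme': 'data', 'data': 'text/plain,blah', 'query': 'q', 'fragment': 'top'}
```

Note that nothing in this split can fail, matching the point above: scheme-specific structure (or lack of it) inside the scheme data is the scheme's own business.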


>> Idempotent: Currently Gecko's parser and the URL Standard's parser are
>> not idempotent. E.g. http://@/mozilla.org/ becomes
>> http:///mozilla.org/ which when parsed becomes http://mozilla.org/
>> which is somewhat bad for security. My plan is to change the URL
> Standard to fail parsing empty host names. I'll have to research
> whether there are other cases that are not idempotent.
>
> I don't actually know what this means. Are you saying that
> "http://@/mozilla.org/"; sometimes resolves to one URI and sometimes another?

I'm saying that if you parse and serialize it and then parse it again,
mozilla.org is suddenly the host rather than the path. Non-idempotency
of the URL parser has caused security issues, though I'm not really at
liberty to discuss them here.
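To make the failure mode concrete, here is a deliberately naive toy parser (hypothetical Python, not Gecko's actual code) that drops empty userinfo on serialization and skips extra slashes on parse, so one round trip changes which component mozilla.org lands in:

```python
def parse(url):
    scheme, _, rest = url.partition("://")
    rest = rest.lstrip("/")  # tolerate extra slashes, as lenient parsers do
    authority, _, path = rest.partition("/")
    if "@" in authority:
        authority = authority.rpartition("@")[2]  # strip userinfo, keep (possibly empty) host
    return {"scheme": scheme, "host": authority, "path": "/" + path}

def serialize(u):
    return f"{u['scheme']}://{u['host']}{u['path']}"

first = parse("http://@/mozilla.org/")
print(serialize(first))   # http:///mozilla.org/  (the empty host vanishes)
second = parse(serialize(first))
print(second["host"])     # mozilla.org -- a path segment became the host
```

Failing the parse on an empty host, as proposed for the URL Standard, removes the intermediate form entirely.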


>> File URLs: As far as I know in Gecko parsing file URLs is
>> platform-specific so the URL object you get back will have
>> platform-specific characteristics. In the URL Standard I tried to
>> align parsing mostly with Windows, allowing interpretation of the file
>> URL up to the platform. This means platform-specific badness is
>> exposed, but is risky.
>
> Files are inherently platform-specific. What are the specific risks you are
> trying to mitigate?

I want the object you get out of new URL("file://C:/test") to be
consistent across platforms. I don't want JavaScript APIs to become
platform-specific, especially one as core as URL. (That the algorithm
that uses the URL to retrieve the data uses a platform-specific code
path is fine, that part is not observable.)
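The divergence is easy to see with a generic RFC 3986-style splitter (Python's `urllib.parse`, used purely as an illustration): it reads the drive letter as a host, whereas the URL Standard's Windows drive-letter special case is meant to keep the host empty and the drive letter in the path on every platform:

```python
from urllib.parse import urlsplit

# A scheme-agnostic RFC 3986 split treats "C:" as the authority (host):
parts = urlsplit("file://C:/test")
print(repr(parts.netloc), repr(parts.path))  # 'C:' '/test'
# Under the URL Standard's drive-letter handling, the same input is
# intended to parse with an empty host and path "/C:/test" everywhere.
```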


I cannot really comment on adding thread safety other than that it
seems good to have for the workers implementation.


--
http://annevankesteren.nl/


Re: Replacing Gecko's URL parser

2013-07-01 Thread Patrick McManus
On Mon, Jul 1, 2013 at 12:43 PM, Anne van Kesteren  wrote:

> I'd like to discuss the implications of replacing/morphing Gecko's URL
> parser with/into something that conforms to
> http://url.spec.whatwg.org/
>

I know it's not your motivation, but the lack of thread safety in the
various nsIURI implementations is a common roadblock for me and
something I'd love to see solved in a rewrite. As Benjamin mentions,
though, there are a lot of pre-existing implementations.


Re: Replacing Gecko's URL parser

2013-07-01 Thread Gavin Sharp
On Mon, Jul 1, 2013 at 10:58 AM, Benjamin Smedberg wrote:
>> Idempotent: Currently Gecko's parser and the URL Standard's parser are
>> not idempotent. E.g. http://@/mozilla.org/ becomes
>> http:///mozilla.org/ which when parsed becomes http://mozilla.org/
>> which is somewhat bad for security. My plan is to change the URL
> Standard to fail parsing empty host names. I'll have to research
> whether there are other cases that are not idempotent.
>
> I don't actually know what this means. Are you saying that
> "http://@/mozilla.org/" sometimes resolves to one URI and sometimes another?

// ioSvc is an nsIIOService instance (e.g. Services.io)
function makeURI(str) {
  return ioSvc.newURI(str, null, null);
}

makeURI("http://@/mozilla.org/").spec -> http:///mozilla.org/
makeURI("http:///mozilla.org/").spec -> http://mozilla.org/

In other words,

makeURI(makeURI(str).spec).spec does not always equal str.
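That property can be phrased as a fixed-point check (a generic Python sketch; note the stdlib round trip shown here preserves the empty host, so unlike the Gecko behaviour above it already holds for this input):

```python
from urllib.parse import urlsplit, urlunsplit

def is_idempotent(roundtrip, url):
    """True if one parse+serialize pass reaches a fixed point,
    i.e. a second pass changes nothing."""
    once = roundtrip(url)
    return roundtrip(once) == once

stdlib_roundtrip = lambda u: urlunsplit(urlsplit(u))
print(is_idempotent(stdlib_roundtrip, "http://@/mozilla.org/"))  # True
```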

Gavin


Re: Replacing Gecko's URL parser

2013-07-01 Thread Benjamin Smedberg

On 7/1/2013 12:43 PM, Anne van Kesteren wrote:


> I'm interested in hearing what people think. I outlined two issues
> below, but I'm sure there are more. By the way, independently of the
> parser bit, we are proceeding with implementing the URL API as drafted
> in the URL Standard in Gecko, which should make testing URL parsing
> easier.

Currently protocol handlers are extensible and so parsing is spread 
throughout the tree. I expect that extensible protocol handling is a 
non-goal, and that there are just a few kinds of URI parsing that we 
need to support. Is it your plan to replace extensible parsing with a 
single mechanism?



> Idempotent: Currently Gecko's parser and the URL Standard's parser are
> not idempotent. E.g. http://@/mozilla.org/ becomes
> http:///mozilla.org/ which when parsed becomes http://mozilla.org/
> which is somewhat bad for security. My plan is to change the URL
> Standard to fail parsing empty host names. I'll have to research
> whether there are other cases that are not idempotent.

I don't actually know what this means. Are you saying that 
"http://@/mozilla.org/" sometimes resolves to one URI and sometimes another?




> File URLs: As far as I know in Gecko parsing file URLs is
> platform-specific so the URL object you get back will have
> platform-specific characteristics. In the URL Standard I tried to
> align parsing mostly with Windows, allowing interpretation of the file
> URL up to the platform. This means platform-specific badness is
> exposed, but is risky.


Files are inherently platform-specific. What are the specific risks you 
are trying to mitigate?


--BDS



Replacing Gecko's URL parser

2013-07-01 Thread Anne van Kesteren
I'd like to discuss the implications of replacing/morphing Gecko's URL
parser with/into something that conforms to
http://url.spec.whatwg.org/

The goal is to bring URL parsing to the level of quality of our CSS and
HTML parsers and to converge with other browsers over time, since at
the moment parsing differs considerably between browsers.

I'm interested in hearing what people think. I outlined two issues
below, but I'm sure there are more. By the way, independently of the
parser bit, we are proceeding with implementing the URL API as drafted
in the URL Standard in Gecko, which should make testing URL parsing
easier.


Idempotent: Currently Gecko's parser and the URL Standard's parser are
not idempotent. E.g. http://@/mozilla.org/ becomes
http:///mozilla.org/ which when parsed becomes http://mozilla.org/
which is somewhat bad for security. My plan is to change the URL
Standard to fail parsing empty host names. I'll have to research
whether there are other cases that are not idempotent.

File URLs: As far as I know in Gecko parsing file URLs is
platform-specific so the URL object you get back will have
platform-specific characteristics. In the URL Standard I tried to
align parsing mostly with Windows, allowing interpretation of the file
URL up to the platform. This means platform-specific badness is
exposed, but is risky.


--
http://annevankesteren.nl/


Rendering meeting today

2013-07-01 Thread Milan Sreckovic

The Rendering meeting is about all things Gfx, Image, Layout, and Media.
It takes place every second Monday, alternating between 2:30pm PDT and 5:30pm PDT.

The next meeting will take place today, Monday, July 1st at 2:30 PM US/Pacific.
Please add to the agenda: 
https://wiki.mozilla.org/Platform/GFX/2013-July-1#Agenda

San Francisco - Monday, 2:30pm
Winnipeg - Monday, 4:30pm
Toronto - Monday, 5:30pm
GMT/UTC - Monday, 21:30
Paris - Monday, 11:30pm
Taipei - Tuesday, 5:30am
Auckland - Tuesday, 9:30am

Video conferencing:
Vidyo room Graphics (9366)
https://v.mozilla.com/flex.html?roomdirect.html&key=vu1FKlkBlT29

Phone conferencing:
+1 650 903 0800 x92 Conf# 99366
+1 416 848 3114 x92 Conf# 99366
+1 800 707 2533 (pin 369) Conf# 99366



Re: Code coverage take 2, and other code hygiene tools

2013-07-01 Thread Ted Mielczarek
On 6/24/2013 11:02 PM, Justin Lebar wrote:
> Under what circumstances would you expect the code coverage build to break
> but all our other builds to remain green?
Most of the issues I saw with our old code coverage setup were directly
related to it not matching our normal production builds. We saw
brokenness from the different compiler flags[1], test steps timing
out[2] because they ran as one monolithic build+test job, and tests
simply failing to run because no X DISPLAY was set[3]. All of this was
compounded by the coverage builds not being terribly visible, so nobody
would notice they were broken for long periods of time.

Assuming the new setup more closely matches our existing builds I would
not expect as many problems to crop up. If we have someone committed to
owning the builds it probably wouldn't be that hard to keep them working.

-Ted

1. https://bugzilla.mozilla.org/show_bug.cgi?id=657396
2. https://bugzilla.mozilla.org/show_bug.cgi?id=657631
3. https://bugzilla.mozilla.org/show_bug.cgi?id=657647
