Re: Mapping Rust panics to MOZ_CRASH on non-Rust-created threads

2016-03-23 Thread Brian Smith
Henri Sivonen  wrote:

> I think for release builds, we should have the following:
>  1) Rust panic!() causes a crash that's MOZ_CRASH()-compatible for
> crash-reporting purposes. (See
> https://mxr.mozilla.org/mozilla-central/source/mfbt/Assertions.h#269
> and particularly
> https://mxr.mozilla.org/mozilla-central/source/mfbt/Assertions.h#184 )
>  2) All Rust code in Gecko, even std, is compiled without unwinding
> support.
>  3) The panic!() reason strings should not end up in the binary.
> (Neither reason strings in std nor reason strings from elsewhere.)
>

I agree with all of this. And it would be great if this
configuration—including the non-SSE2 x86 target—were available to non-Gecko
users of Rust. Actually, it would be great if this were the default
configuration for Rust.

Cheers,
Brian


Re: Mapping Rust panics to MOZ_CRASH on non-Rust-created threads

2016-03-22 Thread Brian Smith
On Tue, Mar 22, 2016 at 3:03 AM, Henri Sivonen  wrote:

> It seems that the Rust MP4 parser is run on a new Rust-created thread in
> order to catch panics.
>

Is the Rust MP4 parser using panics for flow control (as is common in JS
and Java with exceptions), or only for "should be impossible" situations
(like MOZ_CRASH in Gecko)?

IMO, panics in Rust should only be used for cases where one would use
MOZ_CRASH, and so you should configure the Rust runtime to abort on panics.

I personally don't expect people to correctly write unwinding-safe
code—especially when mixing non-Rust and Rust—any more than I expect people
to write exception-safe code (i.e., not at all), and so abort-on-panic is
really the only acceptable configuration in which to run Rust code.
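
As a rough sketch (not Gecko's actual wiring): the crate can be compiled with
panic aborting (rustc -C panic=abort, or panic = "abort" in the Cargo
profile) so that no unwinding code or reason strings are emitted, and a
process-wide hook can make any remaining panic behave like an immediate,
crash-reporter-visible fatal error. The hook name below is illustrative:

use std::panic;
use std::process;

fn install_abort_on_panic() {
    panic::set_hook(Box::new(|info| {
        // Report only the panic location (keeping reason strings out of
        // release binaries is a separate build-flag concern), then abort
        // so the crash reporter sees an ordinary fatal fault.
        if let Some(location) = info.location() {
            eprintln!("panic at {}:{}", location.file(), location.line());
        }
        process::abort();
    }));
}

fn main() {
    install_abort_on_panic();
    // Any panic!() from here on terminates the process immediately.
}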

Cheers,
Brian
-- 
https://briansmith.org/


Re: Linux distro readiness for Rust in Gecko

2016-03-20 Thread Brian Smith
Henri Sivonen  wrote:

> An example of this *not* being the case: I expect to have to import
> https://github.com/gz/rust-cpuid into Gecko in order to cater to the
> Mozilla-side policy sadness of having to support Windows XP users
> whose computers don't have SSE2.


With my Rust programmer hat on:

I recommend that you don't do that. Instead, have your Rust code call
Gecko's C/C++ CPUID code. Or set some global variable with the "has SSE2"
flag in your C++ code before the Rust code gets invoked, and have the Rust
code test the SSE2 flag.

It's not worth using unstable features, especially `asm!`, which is
fundamentally flawed and shouldn't be stabilized, just to do something this
simple.
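
For what it's worth, a sketch of that second suggestion, with hypothetical
names (gecko_cpu_has_sse2 is not a real Gecko symbol): the existing C/C++
CPUID code computes the answer once at startup and exposes it over a plain
C ABI, so the Rust side needs neither asm! nor a cpuid crate:

// Hypothetical C ABI helper implemented next to Gecko's existing CPUID
// code; assumed to return nonzero when SSE2 is available.
extern "C" {
    fn gecko_cpu_has_sse2() -> i32;
}

pub fn has_sse2() -> bool {
    // Sound as long as the C++ side initializes its flag before any Rust
    // code that calls this runs.
    unsafe { gecko_cpu_has_sse2() != 0 }
}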

With my Esteemed Mozilla Alumni hat on:

It's absolutely ridiculous that Mozilla lets Debian and Red Hat choose what
tools Mozilla uses to build its software. Mozilla needs to use the best
tools for the job, and (mildly) assist Debian and Red Hat in coping with
whatever difficulties they incur as a result. Not just for Rust, but for
GCC/Clang, NSS, and everything else.

And the same goes for ESR; even reading this sentence cost you more effort
than ESR is worth.

Love,
Brian


Re: C++ Core Guidelines

2016-01-11 Thread Brian Smith
Henri Sivonen <hsivo...@hsivonen.fi> wrote:

> On Wed, Jan 6, 2016 at 9:27 PM, Brian Smith <br...@briansmith.org> wrote:
> > Henri Sivonen <hsivo...@hsivonen.fi> wrote:
> >>
> >> On Thu, Oct 1, 2015 at 9:58 PM, Jonathan Watt <jw...@jwatt.org> wrote:
> >> > For those who are interested in this, there's a bug to consider
> >> > integrating
> >> > the Guidelines Support Library (GSL) into the tree:
> >> >
> >> > https://bugzilla.mozilla.org/show_bug.cgi?id=1208262
> >>
> >> This bug appears to have stalled.
> >>
> >> What should my expectations be regarding getting an equivalent of (at
> >> least single-dimensional) GSL span (formerly array_view;
> >> conceptually Rust's slice) into MFBT?
> >>
> >> > On 30/09/2015 22:00, Botond Ballo wrote:
> >> >> The document is a work in progress, still incomplete in many places.
> >> >> The initial authors are Bjarne Stroustrup and Herb Sutter, two
> members
> >> >> of the C++ Standards Committee, and they welcome contributions via
> >> >> GitHub to help complete and improve it.
> >>
> >> In their keynotes, a template called array_buffer was mentioned. What
> >> happened to it? array_buffer was supposed to be array_view
> >> (since renamed to span) plus an additional size_t communicating
> >> current position in the buffer. Surprisingly, Core Guidelines has an
> >> example of reading up to n items into span but the example doesn't
> >> show how the function would signal how many bytes between 0 and n it
> >> actually read, so the Guidelines themselves don't seem to give a
> >> proper answer to signaling how many items of a span a function read or
> >> wrote.
> >
> >
> > This functionality already exists--in a safer form than the Core C++
> > form--in Gecko: mozilla::pkix::Input and mozilla::pkix::Reader.
>
> I admit I'm not familiar with the nuances of either GSL span or
> mozilla::pkix::Input. What makes the latter safer?
>

mozilla::pkix::Input/Reader will never throw an exception or abort the
process; instead it always returns an explicit success/failure result. It
seems GSL will either abort or throw an exception in many situations. Since
aborting is terrible and exceptions are not allowed in Gecko code, it seems
Input/Reader is safer.

The documentation for the Rust version of Input/Reader [1] attempts to
explain more of the benefits of the Input/Reader approach. The one in
*ring* is better than the one in mozilla::pkix in quite a few respects, but
the idea is mostly the same.
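
To make the contrast concrete, here is a condensed sketch of the
Input/Reader idea in Rust (illustrative names, not the exact *ring* or
mozilla::pkix API): every read is bounds-checked, and failure is reported
as an ordinary return value, so a parser can neither read out of bounds
nor throw or abort:

pub struct Reader<'a> {
    input: &'a [u8],
    pos: usize,
}

#[derive(Debug, PartialEq)]
pub struct Error;

impl<'a> Reader<'a> {
    pub fn new(input: &'a [u8]) -> Self {
        Reader { input, pos: 0 }
    }

    // Returns the next byte or an error value; there is no way to read
    // past the end, and no exception or abort path.
    pub fn read_byte(&mut self) -> Result<u8, Error> {
        match self.input.get(self.pos) {
            Some(&b) => {
                self.pos += 1;
                Ok(b)
            }
            None => Err(Error),
        }
    }

    pub fn at_end(&self) -> bool {
        self.pos == self.input.len()
    }
}

// Example: parse a single version byte and reject trailing garbage.
fn parse_version(input: &[u8]) -> Result<u8, Error> {
    let mut r = Reader::new(input);
    let version = r.read_byte()?;
    if !r.at_end() {
        return Err(Error);
    }
    Ok(version)
}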


> mozilla::pkix::Input seems to be read-only. I'm looking for both
> read-only and writable spans.
>

That's something Input/Reader doesn't do, because it is focused exclusively
on parsing (untrusted) input.

[1] https://briansmith.org/rustdoc/ring/input/index.html

Cheers,
Brian
-- 
https://briansmith.org/


Re: C++ Core Guidelines

2016-01-06 Thread Brian Smith
Henri Sivonen  wrote:

> On Thu, Oct 1, 2015 at 9:58 PM, Jonathan Watt  wrote:
> > For those who are interested in this, there's a bug to consider
> integrating
> > the Guidelines Support Library (GSL) into the tree:
> >
> > https://bugzilla.mozilla.org/show_bug.cgi?id=1208262
>
> This bug appears to have stalled.
>
> What should my expectations be regarding getting an equivalent of (at
> least single-dimensional) GSL span (formerly array_view;
> conceptually Rust's slice) into MFBT?
>
> > On 30/09/2015 22:00, Botond Ballo wrote:
> >> The document is a work in progress, still incomplete in many places.
> >> The initial authors are Bjarne Stroustrup and Herb Sutter, two members
> >> of the C++ Standards Committee, and they welcome contributions via
> >> GitHub to help complete and improve it.
>
> In their keynotes, a template called array_buffer was mentioned. What
> happened to it? array_buffer was supposed to be array_view
> (since renamed to span) plus an additional size_t communicating
> current position in the buffer. Surprisingly, Core Guidelines has an
> example of reading up to n items into span but the example doesn't
> show how the function would signal how many bytes between 0 and n it
> actually read, so the Guidelines themselves don't seem to give a
> proper answer to signaling how many items of a span a function read or
> wrote.
>

This functionality already exists--in a safer form than the Core C++
form--in Gecko: mozilla::pkix::Input and mozilla::pkix::Reader.

Cheers,
Brian
-- 
https://briansmith.org/


Re: Merging comm-central into mozilla-central

2015-10-26 Thread Brian Smith
On Mon, Oct 26, 2015 at 1:45 PM, Joshua Cranmer  <pidgeo...@gmail.com>
wrote:

> FWIW, when Brian Smith made his comments on mozilla.dev.security.policy, I
> did try to find a bug detailing what he was talking about... and I couldn't
> find what he was talking about, which means that our security team is
> finding problems in Thunderbird and not properly notifying any Thunderbird
> developers of them.


Did you try clicking on the links in my emails?

> Here is a good example to show that the security of Thunderbird's
> S/MIME handling is not properly managed:
> https://bugzilla.mozilla.org/show_bug.cgi?id=1178032

> You can see an example of this policy at work at
> https://bugzilla.mozilla.org/show_bug.cgi?id=1114787.

Love,
Brian
-- 
https://briansmith.org/


Re: Intent to not fix: Building with gcc-4.6 for Fx38+

2015-03-11 Thread Brian Smith
Mike Hommey m...@glandium.org wrote:
 Brian Smith wrote:
 It is very inconvenient to have a minimum supported compiler version
 that we cannot even do test builds with using tryserver.

 Why this sudden requirement when our *current* minimum supported
 version is 4.6 and 4.6 is nowhere close to that on try. That is also
 true for older requirements we had for gcc. That is also true for clang
 on OSX, and that was also true for the short period we had MSVC 2012 as
 a minimum on Windows. I'm not saying this is an ideal situation, but I'd
 like to understand why gcc needs to suddenly be treated differently.

The current situation is very inconvenient. To improve it, all
compilers should be treated the same: Code that builds on
mozilla-inbound/central/tryserver is good enough to land, as far as
supported compiler versions are concerned. So, for example, if clang
3.7 is what is used on the builders, then clang 3.6 would be
unsupported. And the same with GCC and MSVC.

Further, it is best to upgrade compiler versions as fast as possible,
so that we can make more use of newer C++ features. I contributed many
patches in bug 1119072 so that MSVC 2015 can become the minimum MSVC
version ASAP. The same should happen with GCC and clang so that we can
write better code using newer C++ features ASAP. (This also requires
replacing STLPort with a reasonable C++ standard library
implementation on Android/B2G.)

 Did any of them state a preference for not going to GCC 4.8? If so,
 what was the reasoning?

 At least for Debian, current stable can't build security updates with
 more than 4.7.

Isn't this a chicken-and-egg problem? If Firefox required GCC 4.9 then
Debian would figure out a way to build security updates using GCC 4.9.
It is easier for Debian to insist on GCC 4.7, so that's what Debian
asks for. But it is better to optimize for Mozilla developer
efficiency than for any Linux distro's efficiency. In particular, things
like minimum compiler versions affect every Mozilla developer's
efficiency, which affects the rate at which we can ship improvements
to 100% of Mozilla's users. But Linux-distro-packaged Firefox makes
up less than 1% of the userbase.

Note that I'm not saying Debian is unimportant. I'm saying that
Mozilla should focus on what's best for developer productivity, and
then help Debian and others cope with whatever inconvenience that
strategy causes them.

Cheers,
Brian


Re: Intent to not fix: Building with gcc-4.6 for Fx38+

2015-03-11 Thread Brian Smith
bo...@mozilla.com wrote:
 Also, from what I can tell of the C++ features that gcc-4.8 enables (from 
 [1]), none of them are available until MSVC 2015.
 It seems likely that we'll be supporting MSVC 2013 until the next ESR, so I 
 don't see that moving to 4.8 gives us any immediate benefits.

 [1] https://developer.mozilla.org/en-US/docs/Using_CXX_in_Mozilla_code

ESR is also an incredibly wasteful drag on Mozilla development. When
ESR was first proposed we agreed, as far as I understand, that we
would NOT hold back improvements to the real Firefox for the sake of
ESR. In fact, it is counterproductive to hold back changes like this
for ESR's benefit because doing so makes it *harder* to backport
changes to ESR. For example, imagine if you had updated the Chromium
code after the next ESR branched, and then 12 months later a serious
and hard-to-fix security bug in the old Chromium code was found*. It
would be much less work to backport the patch if the minimum compiler
version for ESR were similar to the minimum compiler version for the
real Firefox.

In the case of MSVC on Windows, it would be particularly
good to make MSVC 2015 the minimum compiler version ASAP for this
reason, assuming it will be possible at all. That's why I contributed
all the patches in bug 1119072 and elsewhere to facilitate that.

Cheers,
Brian

* Of course, security researchers won't be looking at that old
Chromium code because so few people use Firefox ESR that it isn't
worth doing it. So, users of Firefox ESR will generally be less secure
than users of the real Firefox.


Re: Intent to not fix: Building with gcc-4.6 for Fx38+

2015-03-11 Thread Brian Smith
Ryan VanderMeulen rya...@gmail.com wrote:
 (2) The trychooser tool should be extended to make it possible to
 build with GCC 4.7 on any platforms where it is supported, and
 bootstrap.py be updated to install GCC 4.7 alongside the
 currently-installed compiler.

 All Android and B2G JB/KK emulator builds are on GCC 4.7. Linux Desktop and
 B2G ICS/L emulator builds are GCC 4.8. All of the aforementioned are
 available on Trychooser.

Thanks for sharing that. For the patches that I contribute to Gecko,
that actually works out OK, because it seems like all my code is built
on all those platforms. The overall issue of wasting time supporting
relatively unimportant configurations, which aren't checked during
tryserver/inbound/central builds, still applies, though.

Cheers,
Brian


Re: Intent to Ship: Fetch API

2015-02-19 Thread Brian Smith
nsm.nik...@gmail.com wrote:
 Target release: FF 38 or 39 (feedback requested)
 Currently hidden behind: dom.fetch.enabled.
 Bug to enable by default: https://bugzilla.mozilla.org/show_bug.cgi?id=1133861

Great work!

Is there a test that verifies that fetch is correctly handled by
nsIContentPolicy (for extensions like AdBlock) and by the mixed content
blocker? If not, could you please add one before shipping?

Thanks,
Brian


Re: Proposed W3C Charter: Web Application Security (WebAppSec) Working Group

2015-02-11 Thread Brian Smith
Daniel Veditz dved...@mozilla.com wrote:
 On Thu, Jan 29, 2015 at 10:32 PM, L. David Baron dba...@dbaron.org wrote:

 (1) The Confinement with Origin Web Labels deliverable is described
 in a way that makes it unclear what the deliverable would do.  It
 should be clearer.  Furthermore, the lack of clarity means we
 couldn't evaluate whether we are comfortable with it being in the
 charter.

 Brian's objections seem to be to a different sub-origin proposal from Joel
 Weinberger of Google. COWL is essentially a data-tainting proposal that
 builds on the capabilities of CSP to make it safer to use 3rd party
 libraries and mashups. Having it in the charter is not a commitment that
 Mozilla will implement this, but it's a promising idea and having it in the
 charter means it's in scope for WASWG to discuss it.

Yes, I agree I was mistaken. You can read more about COWL at
http://cowl.ws/. Note, in particular, that the prototype is a
modification of Firefox. Also note this acknowledgement from the
second COWL paper: "We thank Bobby Holley, Blake Kaplan, Ian Melven,
Garret Robinson, Brian Smith, and Boris Zbarsky for helpful
discussions of the design and implementation of COWL."

 (2) The Entry Point Regulation for Web Applications deliverable seems

 to have serious risks of breaking the ability to link.  It's not
 clear that the security benefits of this specification outweigh the
 risks to the abilities of Web users.

 The Working Group is also concerned that we not break the ability to do
 links on the web. We have added that as an explicit requirement in the
 charter. This work item is the most nebulous item in the charter. It has
 some promising ideas that could help prevent CSRF type attacks; it might
 also turn out to be completely unworkable and be dropped. We'd like it to be
 in the charter so we can explore these concepts under the W3 IPR commitments
 of the WG members.

I think it would be good to work out how much of the problem is solved
by same-origin cookies and related things, and how much is left over
for EPR to solve, before the working group dives into EPR. EPR and CSP
pinning look like they have a lot of overlap with app manifests.

 This item was indeed a reference to the Powerful Features spec, which has
 been explicitly added to the deliverables section. The Web Application
 Security WG has been directed by the TAG to document best practices on
 this (http://www.w3.org/2001/tag/doc/web-https). The charter has been
 clarified to note that only the algorithm for determining if a given
 context is sufficiently secure will be normative, and advice on when a
 feature might designate itself as requiring a secure context will be
 non-normative.

I think that Powerful Features is a terrible name, but I support the
webappsec work on it.

Cheers,
Brian


Re: Fwd: [blink-dev] Intent to Ship: Plugin Power Saver Poster Images

2015-02-09 Thread Brian Smith
On Mon, Feb 9, 2015 at 8:03 AM, Benjamin Smedberg benja...@smedbergs.us wrote:
 On 2/7/2015 4:38 AM, Jet Villegas wrote:
 I'm skeptical of the immediate value. We need to focus on Flash hangs and
 also the security issues surrounding Flash 0-days especially as distributed
 by ad networks. Power saving is not our immediate or medium-term focus.

Isn't "power saving" mostly a euphemism for "ad blocker" in Safari?

If you click-to-play all Flash-based ads, you've greatly reduced the
Flash hangs and security issues surrounding Flash 0-days especially as
distributed by ad networks.

Cheers,
Brian


Re: Proposed W3C Charter: Web Application Security (WebAppSec) Working Group

2015-01-30 Thread Brian Smith
L. David Baron dba...@dbaron.org wrote:
 Is the argument you're making that if the site can serve the ads
 from the same hostname rather than having to use a different
 hostname to get same-origin protection, then ad-blocking (or
 tracking-blocking) tools will no longer be able to block the ads?

Yes.

Anyway, my point isn't to suggest that Mozilla should ask for this
item to be removed from the charter. Rather, my point is that this
item has some pretty big, non-obvious ramifications (not just related
to tracking) that Mozilla should understand. I think what you said
about it being described in an unclear way is a good response. Joel
Weinberger from the Chrome Security Team already explained a lot of it
to me privately. I recommend talking to him about it, if you want to
understand it better.

Cheers,
Brian


Re: Proposed W3C Charter: Web Application Security (WebAppSec) Working Group

2015-01-18 Thread Brian Smith
L. David Baron dba...@dbaron.org wrote:
 The W3C is proposing a revised charter for:

   Web Application Security Working Group
   http://www.w3.org/2014/12/webappsec-charter-2015.html
   https://lists.w3.org/Archives/Public/public-new-work/2014Dec/0008.html

 Mozilla has the opportunity to send comments, objections, or support
 through Friday January 30.

 Mozilla is involved in this working group; see membership at
 https://www.w3.org/2000/09/dbwg/details?group=49309public=1order=org .

 Please reply to this thread if you think there's something else we
 should say, or if you think we should support the charter.

Please see the threads at

[1] https://lists.w3.org/Archives/Public/public-webappsec/2014Nov/0179.html
[2] https://groups.google.com/d/topic/mozilla.dev.privacy/Rbm1XdfXX6k/discussion

In particular, although I think the sub-origin work is potentially
very useful, it seems to have some pretty negative unintended
consequences. Even if you don't share my specific concerns about the
potential negative interaction between the sub-origin part of the
proposed charter with respect to Mozilla's Tracking Protection work,
it is still a good idea for Mozilla to spend some time to fully
understand all the intended and unintended consequences of the
sub-origin concept and the specific design being proposed for it.

Cheers,
Brian


Re: Dropping support for MSVC2012

2015-01-02 Thread Brian Smith
Ehsan Akhgari ehsan.akhg...@gmail.com wrote:
 On 2015-01-02 2:03 PM, Brian Smith wrote:
 In this case, the problem is that I wrote a patch to explicitly delete
 (= delete) some members of classes in mozilla::pkix. mozilla::pkix
 cannot depend on MFBT for licensing and build independence reasons
 (e.g. so it can be put into NSS). I don't want to add the equivalent
 of MOZ_DELETE to mozilla::pkix just to make MSVC2012 work.

 = delete currently cannot be used in Mozilla code according to
 https://developer.mozilla.org/en-US/docs/Using_CXX_in_Mozilla_code.

I realize that now. It is very unfortunate that the rules for using
C++ in Mozilla code are not simply "if it passes tryserver, it's OK."
I hope that Mozilla accelerates its deprecation of old compilers (GCC
less than 4.8, in particular, so that enum class can be used safely)
and improves the automation.

 I am
 not sure why you don't want to add the equivalent of MOZ_DELETE given how
 easy that is.

  Our personal opinion about MSVC2012 aside, without a decision
 to drop support for MSVC2012, we cannot say no to fixing the build issues
 specific to that compiler, and such decision has not been made so far.

First, I will back out the offending patch pending a resolution of
this discussion, in the interests of being a good team player while
this discussion unfolds. I've already asked a VS2012 user to review
the patch here:

https://bugzilla.mozilla.org/show_bug.cgi?id=1117003#c3

However, I think my time is better spent arguing for dropping MSVC2012
support (and allowing = delete) than writing another = delete
macro. So, let's try to resolve that it is OK to drop MSVC2012 support
in Firefox 37 now.

 We shouldn't hold people to supporting MSVC2012 without a way to
 verify that MSVC2012 can build the code correctly on tryserver. That
 is, it is unreasonable to require that  we won't check in patches
 that require compiler features that 2012 does not support if MSVC2012
 is not in tryserver. It's especially an unnecessary burden on us
 independent contributors.

 FWIW people fix compiler issues that cannot be tested on try server all the
 time.

Because I develop on Windows and most others develop on Linux, I am
disproportionately involved in these bugs, which usually involve
differences in fail-on-warnings behavior between Windows and newer
clang versions. That's why I'd like us to resolve to increase
minimum compiler versions at a faster pace, because that
seems like an effective and cheap way of reducing the occurrence of
such issues.

Cheers,
Brian


RE: Dropping support for MSVC2012

2015-01-02 Thread Brian Smith
Ehsan wrote:
 Note that MSVC 2012 is supported in the sense that we'd accept
 patches that help fix it, and we won't check in patches that require
 compiler features that 2012 does not support.

In this case, the problem is that I wrote a patch to explicitly delete
(= delete) some members of classes in mozilla::pkix. mozilla::pkix
cannot depend on MFBT for licensing and build independence reasons
(e.g. so it can be put into NSS). I don't want to add the equivalent
of MOZ_DELETE to mozilla::pkix just to make MSVC2012 work.

 It would be much easier to keep it running if there was at least one
 builder that ran VS2012 that failed when someone checks in a compile
 that breaks it. The non-zero cost is mostly fixing there regressions,
 and would be much lower cost if they were caught earlier.

 But what benefit would we get out of doing that?  Keeping MSVC2012
 working should not be a goal to itself.  I can't think of what benefit
 adding official support for MSVC2012 can have.

We shouldn't hold people to supporting MSVC2012 without a way to
verify that MSVC2012 can build the code correctly on tryserver. That
is, it is unreasonable to require that "we won't check in patches
that require compiler features that 2012 does not support" if MSVC2012
is not on tryserver. It's an especially unnecessary burden on us
independent contributors.

The best solution is to just drop MSVC2012 support and officially
allow features like = delete to be used from Gecko 37 onward.

Cheers,
Brian


Fwd: David Keeler is now the module owner of PSM

2014-08-01 Thread Brian Smith
-- Forwarded message --
From: Brian Smith br...@briansmith.org
Date: Fri, Aug 1, 2014 at 9:24 AM
Subject: David Keeler is now the module owner of PSM
To: mozilla-governa...@lists.mozilla.org, mozilla's crypto code
discussion list dev-tech-cry...@lists.mozilla.org, David Keeler
dkee...@mozilla.com


Hi,

Amongst other things, PSM is the part of Gecko (Firefox) that connects
Gecko to NSS and other crypto bits.

David Keeler has taken on most of the responsibility for keeping
things in PSM running smoothly and so it makes sense to have him be
the module owner. After asking the other PSM module peers, I went
ahead and made that change:

https://wiki.mozilla.org/Modules/Core#Security_-_Mozilla_PSM_Glue

Congratulations David!

Cheers,
Brian


Re: Try-based code coverage results

2014-07-07 Thread Brian Smith
On Mon, Jul 7, 2014 at 11:11 AM, Jonathan Griffin jgrif...@mozilla.com
wrote:

 I guess a related question is, if we could run this periodically on TBPL,
 what would be the right frequency?

 We could potentially create a job in buidlbot that would handle the
 downloading/post-processing, which might be a bit faster than doing it on
 an external system.


Ideally, you would be able to trigger it on a try run for specific test
suites or even specific subsets of tests. For example, for certificate
verification changes and SSL changes, it would be great for the reviewer to
be able to insist on seeing code coverage reports on the try run that
preceded the review request, for xpcshell, cppunit, and GTest, without
doing coverage for all test suites.

To minimize the performance impact of it further, ideally it would be
possible to scope the try runs to cppunit, GTest, and xpcshell tests under
the security/ directory in the tree.

This would make code review more efficient, because the reviewers wouldn't
have to spend as much time suggesting missing tests as part of the review.

In PSM, and probably in Gecko generally, people are unlikely to write new
tests for old code that they are not changing, so periodic full reports
would be less helpful than reports for tryserver.

Cheers,
Brian


Re: C++ standards proposals of potential interest, and upcoming committee meeting

2014-06-11 Thread Brian Smith
On Tue, Jun 10, 2014 at 4:19 PM, Botond Ballo bba...@mozilla.com wrote:

  Why put this into core C++? Why not leave it to libraries?

 The standard library is a library :)

 One of the biggest criticisms C++ faces is that its standard library is
 very narrow in scope compared to other languages like Java or C#, and thus
 programmers often have to turn to third-party libraries to accomplish tasks
 that one can accomplish out of the box in other languages. The Committee
 is trying to address this criticism by expanding the scope of the standard
 library.


C++ desperately needs improvements for reducing the occurrence of
use-after-free errors. Rust features for this, like borrowed references,
should be imported into the language. A real-life (but reduced/modified)
example from mozilla::pkix:

Result parse(const std::vector<uint8_t>& der)
{
  Input input(der);
  uint8_t b;
  if (input.Read(b) != Success) {
    return Failure;
  }

  return Success;
}

It should be possible to ensure that, no matter what the Input class does
with the argument in its constructor, there will be no references to the
der parameter that outlive the scope of the function. Right now this is
impossible to specify in C++.
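
For comparison, a minimal Rust sketch of the guarantee being asked for here
(illustrative types, not the mozilla::pkix or *ring* API): the lifetime of
Input is tied to the borrow of der, so the compiler rejects any attempt to
keep a reference to the buffer alive past the call:

struct Input<'a> {
    bytes: &'a [u8],
}

impl<'a> Input<'a> {
    fn new(bytes: &'a [u8]) -> Input<'a> {
        Input { bytes }
    }

    fn first(&self) -> Option<u8> {
        self.bytes.first().copied()
    }
}

fn parse(der: &[u8]) -> Result<(), ()> {
    // `input` borrows `der`; neither it nor any reference derived from it
    // can outlive this function unless the signature says so.
    let input = Input::new(der);
    match input.first() {
        Some(_) => Ok(()),
        None => Err(()),
    }
}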

Discriminated unions (Boost.Variant or better) are another safety feature
that would be very useful to have in Standard C++.

There should be a way to indicate in a switch statement whether you intend
to cover all the cases:

   enum A { a, b, c };

   switch (x) {
     case a:
       return true;
     case b:
       return false;
   }

There needs to be a way to tell the compiler to reject this switch
statement because it doesn't cover all the cases. To work around the lack
of that feature, we end up writing:

   switch (x) {
     case a:
       return true;
     case b:
       return false;
     case c:
       return false;
     default:
       PR_NOT_REACHED("unexpected case"); // or...
       MOZ_CRASH("unexpected case");
   }

But, that's turning a statically-detectable error into a potential runtime
crash, unnecessarily. Currently, we can mess with compiler warning
settings to catch these, but that isn't good enough.

Large parts of the standard library are unusable or nearly unusable when
exceptions are disabled, such as the standard containers. Sometimes they
can be used if the default allocator is changed to abort on out-of-memory,
but often abort-on-OOM is not what you want. Thus, much (most?) Gecko code
cannot use a huge part of the standard library. For example, it should be
made possible to call std::vector::resize() when exceptions are disabled and
without triggering abort-on-OOM, such that the caller can detect when the
resize fails. Similarly, it should be possible to attempt to append an
element to a std::vector without an exception being thrown or the process
being aborted on failure, but with the error still detectable.

std::shared_ptr is mostly unusable in Gecko code because there's no way to
specify whether you need thread-safety or not (usually you don't). There
should be a way to specify whether you want to pay the cost of thread
safety when using it.
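
For contrast, Rust already makes this particular cost explicit: Rc uses
non-atomic reference counting and the compiler refuses to send it across
threads, while Arc opts in to the atomic (thread-safe) cost. A minimal
sketch:

use std::rc::Rc;
use std::sync::Arc;
use std::thread;

fn main() {
    let local = Rc::new(vec![1, 2, 3]);   // cheap, single-threaded only
    let shared = Arc::new(vec![1, 2, 3]); // atomic refcount, Send + Sync

    let shared2 = Arc::clone(&shared);
    thread::spawn(move || println!("{:?}", shared2)).join().unwrap();

    // thread::spawn(move || println!("{:?}", local)); // error: Rc is not Send
    println!("{:?}", local);
}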

The language should define a solution for the build time problems that we
work around with UNIFIED_SOURCES, so that we don't have to use
UNIFIED_SOURCES any more.

In general, I'd rather have the committee focus more on safety and
correctness of code by improving the core language and core libraries,
rather than standardizing APIs like graphics.

Cheers,
Brian


Re: Google announces Chrome builds for Win64

2014-06-04 Thread Brian Smith
On Tue, Jun 3, 2014 at 11:37 AM, Chris Peterson cpeter...@mozilla.com
wrote:

 http://blog.chromium.org/2014/06/try-out-new-64-bit-windows-
 canary-and.html

 What is the status of Firefox builds for Win64? When Mozilla releases
 Win64 builds (again), we'll be seen as reacting to Google when we've
 actually been working on it for a while. :\


Does it make sense to ship 64-bit Firefox before shipping
multi-process/sandboxed Firefox? I worry that 64-bit Firefox will be more
memory hungry than 32-bit Firefox, and if it lands first then it will be
harder to land multi-process Firefox, which is also likely to use more
memory. I think having multi-process sooner is more important than having
64-bit sooner, if there is such a choice to make. IMO, it would be good to
make explicit choices instead of just shipping whichever is done first.

Cheers,
Brian


Re: B2G, email, and SSL/TLS certificate exceptions for invalid certificates

2014-05-29 Thread Brian Smith
On Thu, May 29, 2014 at 2:03 PM, Andrew Sutherland 
asutherl...@asutherland.org wrote:

 This is a good proposal, thank you.  To restate my understanding, I think
 the key points of this versus the proposal I've made here or the variant in
 the https://bugzil.la/874346#c11 ISPDB proposal are:

 * If we don't know the domain should have a valid certificate, let it have
 an invalid certificate.


Right. But, I would make the decision of whether to allow an invalid
certificate only at configuration time, instead of every time you connect
to the server like Thunderbird does. Though you'd have to solve the problem
of dealing with a server that changed from one untrusted certificate to
another.



 * Preload more of the ISPDB on the device or maybe just an efficient
 mechanism for indicating a domain requires a valid certificate.


Right.


 * Do not provide strong (any?) guarantees about the ISPDB being able to
 indicate the current invalid certificate the server is expected to use.


Right. It would be better for us to spend more effort improving security
for secure servers that are trying to do something reasonable, instead of
spending time papering over fundamental security problems with the server.


 It's not clear what decision you'd advocate in the event we are unable to
 make a connection to the ISPDB server.  The attacker does end up in an
 interesting situation where if we tighten up the autoconfig mechanism and
 do not implement subdomain guessing (https://bugzil.la/823640), an
 attacker denying access to the ISPDB ends up forcing the user to perform
 manual account setup.  I'm interested in your thoughts here.


I think guessing is a bad idea in almost any/every situation because it is
easy to guess wrong (and/or get tricked) and really screw up the user's
config.

Maybe it would be better to crowdsource configuration information much like
location services do: get a few users to opt-in to
reporting/verifying/voting on a mapping of MX records to server settings so
that you can build a big centralized database of configuration data for
(basically) every mail server in existence. Then, when users get
auto-configured with that crowdsourced data, have them report back on
whether the automatic configuration worked.

Until we could do that, it seems reasonable to just make sure that ISPDB
has the configuration data for the most common commodity email providers in
the target markets for FirefoxOS, since FirefoxOS is primarily a
consumer-oriented product.



 Implementation-wise I understand your suggestion to be leaning more
 towards a static implementation, although dynamic mechanisms are possible.
  The ISPDB currently intentionally uses static files checked into svn for
 implementation simplicity/security, a decision I agree with.  The exception
 is our MX lookup mechanism at
 https://mx.thunderbird.net/dns/mx/mozilla.com



 I should note that the current policy for the ISPDB has effectively been
 try and get people to host their own autoconfig entries with an advocacy
 angle which includes rejecting submissions.  What's you've suggested here
 (and I on comment 11) implies a substantiative change to that.  This seems
 reasonable to me and when I raised the question about whether such changes
 would be acceptable to Thunderbird last year, people generally seemed to
 either not care or be on board:


It seems like you would be able to answer this as part of the scan of the
internet, by trying to retrieve the self-hosted autoconfig file if it is
available. I suspect you will find that almost nobody is self-hosting it.

I should also note that I think the automation to populate the ISPDB is
 still likely to require sizable engineering effort but is likely to have
 positive externalities in terms of drastically increasing our autoconfig
 coverage and allowing us to reduce the duration of the autoconfig probing
 process.  For example, we could establish direct mappings for all dreamhost
 mail clusters.


Autopopulating all the autoconfig information is a lot of work, I'm sure.
But, it should be possible to create good heuristics for deciding whether
to accept certs issued by untrusted issuers in an email app. For example,
if you don't have the (full) autoconfig data for an MX server, you could
try creating an SMTP connection to the server(s) indicated in the MX
records and then use STARTTLS to switch to TLS. If you successfully
validate the certificate from that SMTP server, then assume that the
IMAP/POP/LDAP/etc. servers use valid certificates too, even if you don't
know what those servers are.

Again, if you made sure that Gmail, Outlook.com, Yahoo Mail, 163.com,
Fastmail, TuffMail, and the major analogs in the B2G markets were all
marked TLS-only-with-valid-certificate, then I think a huge percentage of
users would be fully protected from whatever badness allowing cert error
overrides would cause.

Or, perhaps you could just create a whitelist of servers that are allowed
to have cert error 

Re: B2G, email, and SSL/TLS certificate exceptions for invalid certificates

2014-05-28 Thread Brian Smith
On Wed, May 28, 2014 at 5:13 PM, Andrew Sutherland 
asutherl...@asutherland.org wrote:

 On 05/28/2014 07:16 PM, David Keeler wrote:

 * there is only a single certificate store on the device and therefore
 that all exceptions are device-wide

 This is an implementation detail - it would not be difficult to change
 exceptions to per-principal-per-app rather than just per-principal.


 It's good to know this should be easy, thank you!


IIRC, different apps can share a single HTTPS connection. So, for HTTPS,
you'd also need to modify the HTTP transaction manager so that it doesn't
mix transactions from apps with different cert override settings on the
same connection.


My imagined rationale for why someone would use a self-signed certificate
 amounts to laziness.


We encourage websites and mail servers to use invalid and self-signed
certificates by making it easy to override the cert error.


 A theoretical (but probably not in reality) advantage of only storing one
 per domain:port is that in the event the key A is compromised and a new key
 B is generated, the user would be notified when going back to A from B.


This actually happens regularly in real life. If you accumulate all the
cert error overrides for a host, then you end up permanently letting every
captive portal through which the user has accessed the site MitM the user.


 David Keeler wrote:



  In terms of solving the issue at hand, we have a great opportunity to
 not implement the press this button to MITM yourself paradigm that
 desktop browsers currently use. The much safer option is to ask the user
 for the expected certificate fingerprint. If it matches the certificate
 the server provided, then the exception can safely be added. The user
 will have to obtain that fingerprint out-of-band over a hopefully secure
 channel.


David, I would like to agree with you but even I myself have never checked
the fingerprint of a certificate before adding a cert error override for a
site, and I suspect that implementing the solution you propose would be the
equivalent of doing nothing for the vast majority of cases, due to
usability issues.


  I agree this is a safe approach and the trusted server is a significant
 complication in this whole endeavor.  But I can think of no other way to
 break the symmetry of am I being attacked or do I just use a poorly
 configured mail server?


It would be pretty simple to build a list of mail servers that are known to
be using valid TLS certificates. You can build that list through port
scanning, in conjunction with the auto-config data you already have. That
list could be preloaded into the mail app and/or dynamically
retrieved/updated. Even if we seeded this list with only the most common
email providers, we'd still be protecting a lot more users than by doing
nothing, since email hosting is heavily consolidated and seems to be
becoming more consolidated over time.
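
In its simplest form the list is just a preloaded lookup consulted before
any override UI is offered; a sketch with made-up entries (in practice it
would be generated from the scan plus the autoconfig/ISPDB data):

// Hypothetical preloaded list of mail hosts known to present valid
// certificates; overrides are never offered for these.
fn requires_valid_cert(host: &str) -> bool {
    const PRELOAD: &[&str] = &[
        "imap.gmail.com",
        "imap-mail.outlook.com",
        "imap.mail.yahoo.com",
    ];
    PRELOAD.contains(&host)
}

fn may_offer_cert_override(host: &str) -> bool {
    !requires_valid_cert(host)
}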


 NB: I do think that if we must make it possible to insecurely add a
 certificate exception, then making it harder for users to do so is
 desirable.  My original hope was that we'd just provide a mechanism in the
 settings app to let users add exceptions and we'd never link the user
 directly to this from the email app.  Instead we'd bounce them to a support
 page first which would require a-hassle-but-not-ridiculous steps along the
 lines of the long flow via Thunderbird preferences.  It's unlikely a gmail
 vanity domain user would decide to actively take all those steps to
 compromise their security.


I don't think that making things difficult for the users of our software is
going to improve things too much because users will blame us for being
harder to use than our competitors.

One way to discourage the use of non-trusted certificates is to have a
persistent UI indication that the certificate is bad in the app, along with
a link to more info so that the user can learn why using such certificates
is a bad idea. This way, even if we make adding cert error overrides easy
for users, we're still putting pressure on the server administrator to use
a valid certificate.

Regarding DANE: Any TLD registry can apply to be a trust anchor in
Mozilla's CA program and we'll add them if they meet our requirements. We
can constrain them to issuing certificates that are trusted only for their
own TLDs; we've done this with some CAs in our program already. Any CA can
give away free certificates to any subset of websites (e.g. any website
within a TLD). Consequently, there really isn't much different about the CA
system we already have and DANE, as far as the trust model or costs are
concerned.

Cheers,
Brian


Re: nsRefPtr vs RefPtr

2014-05-13 Thread Brian Smith
On Mon, May 12, 2014 at 9:36 AM, Kyle Huey m...@kylehuey.com wrote:

 We should get rid of RefPtr, just like we did the MFBT refcounting classes.

 The main thing stopping a mechanical search and replace is that the
 two smart pointers have different semantics around
 already_AddRefed/TemporaryRef :(


Nit: Aren't the TemporaryRef semantics better? Seems like replacing
RefPtr-based stuff's use of RefPtr with nsRefPtr would be making things
at least slightly worse here.

PSM (security/certverifier/* and security/manager/*) uses RefPtr because
RefPtr is more consistent with ScopedPtr, which PSM uses extensively to
provide RAII wrappers around NSS types in
security/manager/ssl/src/ScopedNSSTypes.h. IIRC, RefPtr's API is also
closer to std::shared_ptr's API, which I think is a plus. IMO, the PSM code
would be less self-consistent/readable if it were switched to use nsRefPtr,
unless we also replaced ScopedPtr with nsAuto???.

Cheers,
Brian


Re: Policing dead/zombie code in m-c

2014-04-24 Thread Brian Smith
On Thu, Apr 24, 2014 at 4:03 PM, Ehsan Akhgari ehsan.akhg...@gmail.comwrote:

   * Are there obvious places that people should inspect for code that's

 being built but not used? Some libs that got imported for WebRTC
 maybe?


 Nothing big comes to my mind.  Perhaps hunspell on b2g?


https://bugzilla.mozilla.org/show_bug.cgi?id=611781: Project to reduce the
size of NSS libraries included in Firefox distributions. IIRC, this would
cut 500KB of object code.

Cheers,
Brian


Re: Always brace your ifs

2014-02-22 Thread Brian Smith
On Sat, Feb 22, 2014 at 5:06 PM, Neil n...@parkwaycc.co.uk wrote:
 Joshua Cranmer wrote:
 Being serious here, early-return and RTTI (to handle the cleanup prior to
 exit) would have eliminated the need for gotos in the first place.

 I assume you mean RAII. Unfortunately that requires C++. (I was fooled too;
 someone pointed out to me on IRC that update is actually a function pointer
 member. You reap what you sow.)

I agree with Joshua. RAII requires C++ but requiring C++ is no big
deal. RAII and early returns greatly help readability and thus
security. This is exactly why the new certificate verification code is
written in C++.
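
As a small illustration of the principle (in Rust, where the destructor is
a Drop impl), cleanup runs on every early return with no goto bookkeeping;
the types here are made up:

struct Resource;

impl Resource {
    fn acquire() -> Option<Resource> {
        Some(Resource)
    }
}

impl Drop for Resource {
    fn drop(&mut self) {
        // Cleanup runs automatically on every exit path.
        println!("released");
    }
}

fn use_resource(fail_early: bool) -> Result<(), ()> {
    let _r = Resource::acquire().ok_or(())?;
    if fail_early {
        return Err(()); // early return: _r is still released
    }
    Ok(())
}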

Cheers,
Brian
-- 
Mozilla Networking/Crypto/Security (Necko/NSS/PSM)


Re: Cairo being considered as the basis of a standard C++ drawing API

2014-02-09 Thread Brian Smith
On Sun, Feb 9, 2014 at 2:38 PM, Robert O'Callahan rob...@ocallahan.org wrote:
 I've already given my feedback on the cairo mailing list. Summary: Moz2D is
 the right thing for us, and probably for other application frameworks, but
 for applications that just want to draw their stuff on the screen or to
 print, cairo might be a better fit. Anyway ti doesn't really matter to us
 what the C++ people do.

It might matter to us in the context of asm.js. It seems likely that
if something like Moz2D became the standard API then we'd be able to
optimize it more easily than we'd be able to optimize an API that
worked very differently from Moz2D.

Also, if Moz2D seems like the right thing for other application
frameworks too, then that is useful feedback to pass back to the C++
committee.

Cheers,
Brian


Re: Cairo being considered as the basis of a standard C++ drawing API

2014-02-09 Thread Brian Smith
On Sun, Feb 9, 2014 at 2:54 PM, Robert O'Callahan rob...@ocallahan.org wrote:
 On Mon, Feb 10, 2014 at 11:49 AM, Brian Smith br...@briansmith.org wrote:
 It seems likely that if something like Moz2D became the standard API then
 we'd be able to optimize it more easily than we'd be able to optimize an API
 that worked much differently than Moz2D.

 No, because asm.js code must go through Web platform APIs, and the Web
 platform API you would implement cairo bindings on top of is canvas-2D, and
 that's fixed in stone

I don't think it is fixed in stone that asm.js code must go through
Web Platform APIs. I believe the requirement is that it must be
possible to translate asm.js code into Web Platform APIs in a way
where the result works reasonably. AFAICT, there's nothing technically
stopping us from implementing any kind of specially-optimized
passthrough logic for any particular API, and also I think that idea
is compatible politically with our stance on asm.js, compared to
ActiveG.

 --- and we have it implemented on top of Moz2D, and it
 works well, better than when we had canvas-2D implemented on cairo.

Good to know.

Cheers,
Brian


Re: Tagging legitimate main thread I/O

2014-02-07 Thread Brian Smith
On Fri, Feb 7, 2014 at 11:13 AM, David Keeler dkee...@mozilla.com wrote:
 On 02/07/14 10:31, ISHIKAWA, Chiaki wrote:
 Message:
 [10549] WARNING: Security network blocking I/O on Main Thread: file
 /REF-COMM-CENTRAL/comm-central/mozilla/security/manager/ssl/src/nsNSSCallbacks.cpp,
 line 422

David's explanation is mostly correct for Firefox (but see below).
However, for Thunderbird that warning occurs because Thunderbird is
blocking the main thread waiting for network I/O (and disk I/O).
Thunderbird should be fixed so that it stops doing network I/O on the
main thread. Then this warning will go away.

 AddonUpdateChecker.jsm calls CertUtils.checkCert, which traverses the
 peer's certificate chain (in an inefficient way, but that's beside the
 point). Getting a certificate's chain causes a verification to happen,
 which often results in network IO. This is in part due to the legacy
 certificate verification library we're currently hard at work replacing.

Even after insanity::pkix lands, it won't be OK to do certificate
verification on the main thread because OCSP requests would result in
the main thread blocking on network I/O. There is a bug tracking the
removal of main-thread certificate verification:
https://bugzilla.mozilla.org/show_bug.cgi?id=775698.

Cheers,
Brian

On Fri, Feb 7, 2014 at 11:13 AM, David Keeler dkee...@mozilla.com wrote:
 On 02/07/14 10:31, ISHIKAWA, Chiaki wrote:
 Message:
 [10549] WARNING: Security network blocking I/O on Main Thread: file
 /REF-COMM-CENTRAL/comm-central/mozilla/security/manager/ssl/src/nsNSSCallbacks.cpp,
 line 422

 This generally happens when javascript calls a function on an
 nsIX509Cert that attempts to verify it synchronously. If the certificate
 has an OCSP uri, network IO will block the main thread. For instance,
 AddonUpdateChecker.jsm calls CertUtils.checkCert, which traverses the
 peer's certificate chain (in an inefficient way, but that's beside the
 point). Getting a certificate's chain causes a verification to happen,
 which often results in network IO. This is in part due to the legacy
 certificate verification library we're currently hard at work replacing.
 In short, this is not legitimate main thread IO, but it's being fixed.



-- 
Mozilla Networking/Crypto/Security (Necko/NSS/PSM)


Re: A proposal to reduce the number of styles in Mozilla code

2014-01-06 Thread Brian Smith
On Sun, Jan 5, 2014 at 6:34 PM, Nicholas Nethercote
n.netherc...@gmail.com wrote:
 - There is an semi-official policy that the owner of a module can dictate its
   style. Examples: SpiderMonkey, Storage, MFBT.

AFAICT, there are not many rules that module owners are bound by. The
reason module owners can dictate style is because module owners can
dictate everything in their module. I think we should wait until we've
heard from module owners that strongly oppose the style changes and
then decide how to deal with that. Imposing changes on module owners
that the module owners don't agree to goes against the governance
system we have in place. Our governance system is based on the idea
that module owners (and peers) will make good decisions. Implicit in
that is the idea that module owners may need to make decisions that
are sub-optimal for them, but which are optimal for the project in
general.

   There appears to be no good reason for this and I propose we remove it.
   Possibly with the exception of SpiderMonkey (and XPConnect?), due to it 
 being
   an old and large module with its own well-established style.

I guess you are implicitly excepting NSS and NSPR too, which are C code.

As far as PSM is concerned, my main ask is that such reformatting of
security/manager/ssl/src/* be done in February or later, so that the
current urgently-needed big refactoring in that code is not disrupted.

Cheers,
Brian
-- 
Mozilla Networking/Crypto/Security (Necko/NSS/PSM)


Re: On the usefulness of style guides (Was: style guide proposal)

2013-12-19 Thread Brian Smith
On Thu, Dec 19, 2013 at 6:13 PM, Ehsan Akhgari ehsan.akhg...@gmail.comwrote:

 But to address the main point of this paragraph, what's wrong with having
 *one* style that *everybody* follows?  I can't tell if you have something
 against that, or if you just care about a small subset of the tree and are
 happy with the status quo in that subset.


I've asked people in PSM land to follow the Mozilla Coding Style for new
code, except for modifications to pre-existing code. When a group of new
people started working on PSM, I took some time to make a widespread change
throughout many parts of PSM to make it more consistent, coding-style-wise.
However, there are a bunch of rules that I did not enforce as part of that
change. I've been tempted to make another mass change to do so, and I am
open to other people submitting patches in my module to make the code more
consistent with the Mozilla coding style. As a Necko peer, I would welcome
such changes in Necko too. However, it would be great to agree on what
changes are going to be done, before a large amount of effort is spent
doing them.

I don't think that everybody is as agreeable as I am, though. When I've been
asked to review code in other modules, my attempts to get people to follow
the Mozilla coding style, instead of the module peers'/owners' style,
received a lot of pushback. WebRTC comes to mind, though I think the
pushback was probably more out of concern about delaying the landing of the
feature than about style.


  Personally, there are a couple of things I don't like about moz-style
 (though revisions to the central style guide at least have made it better
 than it used to be), but instead of bikeshedding the central style guide,
 we just do our own thing in the code we're responsible for.


 That is not very helpful.  If there is something in the mozilla style
 guide that you think is wrong and needs to change, *please* bring it up.
  If you're right, you'll be benefiting everyone.  And if it's just a matter
 of taste, perhaps you could sacrifice your preferences in the interest of
 the greater good?


In PSM, we created some scoped pointer wrapper types around NSS data
structures (ScopedNSSTypes.h), which are based on the MFBT scoped pointers.
And, consequently, PSM has standardized on MFBT smart pointers throughout
the module (there should be no nsRefPtr in PSM, only RefPtr, for example).
Yet, most code in Gecko is based on the nsCOMPtr-like smart pointers
(nsAutoPtr, nsRefPtr). I don't know how big of a deal this is, but this is
the type of thing that would need to be resolved to have a consistent style
throughout Gecko.


  It's a decent amount of work to restyle the modules well

 That's actually not true.  There are tools which are very good at doing
 this work once we agree that it should be done.


Color me skeptical. I wouldn't want somebody to reformat the code in the
modules I have responsibility for without reviewing the changes. And,
reviewing tens of thousands of lines of changes is a lot of work.


 I don't think that anybody is suggesting that we come up with a set of
 style guides and carve them into stone and never consider anything
 otherwise.  But then again debating where the * in a pointer notation ends
 up with every week isn't the best use of everybody's time.  If and when
 someone finds something wrong in the style guideline they can bring it up
 and get the style modified if they have a good point.  Note that this is
 quite doable, as evidence of other projects which do this well shows.


If somebody submitted a patch to fix the * issue throughout PSM, I would
r+ it, though I don't look forward to spending the time to do it,
especially considering the issue of bitrot. (Please do not write such a
patch before the end of January; it wouldn't get r+d before then, due to
the bitrot issue with pending work.)

 I suppose my counter-question is 'How does standardizing styles across
 modules help us?' In my experience, reviewing (or being reviewed) for style
 takes almost no time.


I agree. With two exceptions, everything style-related seems to be
insignificant regarding how much time I spend on stuff. I just make my code
look like the code around it, and if the reviewer complains about style
issues I generally just do whatever the reviewer wants so I don't need to
argue back-and-forth. Very simple.

However, there are two issues that are non-trivial distractions from real
work:
1. Many parts of NSS use tabs instead of spaces. AFAICT, this is an issue
for which the idea of fixing things is more-or-less agreed upon. But, no
time to actually do it.

2. Not everybody succeeds at making their new code look like the code
around it (or, in some cases, like any other code on Earth). I (and others)
waste a large amount of time during code reviews pointing out style nits.
If there were a tool that people were required to run to self-review their
code before asking for review, and that tool required work to make our code
more 

Re: NSPR logging dropping log messages

2013-12-05 Thread Brian Smith
On Thu, Dec 5, 2013 at 9:46 PM, Robert O'Callahan rob...@ocallahan.org wrote:
 bug 924253

I think we should also be careful that, when we have multiple
processes (which is always, because of e10s-based about:newtab
fetching), that those multiple processes are not clobbering each
other's output, when NSPR_LOG_FILE is used. I am not sure what the
current state of this is.

Cheers,
Brian
-- 
Mozilla Networking/Crypto/Security (Necko/NSS/PSM)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Deciding whether to change the number of unified sources

2013-12-03 Thread Brian Smith
On Tue, Dec 3, 2013 at 8:53 AM, Ted Mielczarek t...@mielczarek.org wrote:
 On 12/2/2013 11:39 PM, Mike Hommey wrote:
 Current setup (16):
   real11m7.986s
   user63m48.075s
   sys 3m24.677s
   Size of the objdir: 3.4GiB
   Size of libxul.so: 455MB

 Just out of curiosity, did you try with greater than 16?

This is what I want to know too, because based on your numbers, it
looks like 16 is the best of those listed.

Also, I would be very interested in seeing size of libxul.so for
fully-optimized (including PGO, where we normally do PGO) builds. Do
unified builds help or hurt libxul size for release builds? Do unified
builds help or hurt performance in release builds?

Cheers,
Brian
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


David Keeler is now a PSM peer

2013-11-21 Thread Brian Smith
Hi all,

Please join me in welcoming David Keeler as a PSM peer! Amongst many
other things, David implemented the HSTS preload list, integrated OCSP
stapling into Firefox, and is current implementing the OCSP
Must-Staple feature, which is a key part of our goal of making
certificate status checking faster and more effective. I've been very
impressed by his work and I know many others have been similarly
impressed.

I also shortened up the list of PSM peers so that it only includes
people who are still actively reviewing patches in PSM. I want to
thank Kai Engert and Bob Relyea for the huge contributions that
they've made in PSM. I still recommend that you ask them, or other NSS
peers, for advice whenever you need help with anything to do with NSS
or PKI. Their knowledge of the how & why in those areas is invaluable.

Cheers,
Brian
-- 
Mozilla Networking/Crypto/Security (Necko/NSS/PSM)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Plug-in feature not available in the web platform. Alternatives?

2013-11-10 Thread Brian Smith
On Fri, Nov 8, 2013 at 1:33 AM, fma spew fmas...@gmail.com wrote:
 We have a npapi-npruntime plug-in that access the Windows certificate store
 via CAPI to provide the end-user with its personal certificates to perform
 different operations.

We can and should switch from using NSS to using the CAPI personal
certificate store on Windows (and analogously for Mac and other
platforms). This would allow SSL client authentication through
Windows-managed client certificates.
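
To make that concrete, here is a minimal sketch (the function name is made up
and error handling is omitted) of the CryptoAPI calls involved in enumerating
the user's personal ("MY") certificate store, which is where such client
certificates live:

    #include <windows.h>
    #include <wincrypt.h>
    // link against crypt32.lib

    void EnumeratePersonalCertificates()
    {
      // Open the current user's personal ("MY") certificate store.
      HCERTSTORE store = CertOpenSystemStoreW(NULL, L"MY");
      if (!store) {
        return;
      }
      PCCERT_CONTEXT cert = nullptr;
      while ((cert = CertEnumCertificatesInStore(store, cert)) != nullptr) {
        // Each cert is a candidate that could be offered for SSL client
        // authentication.
      }
      CertCloseStore(store, 0);
    }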

What other types of operations are you doing?

 4- So, as encouraged by Benjamin Smedberg in the mentioned
 plugin-activation-in-firefox blog post, we post here our question: Can
 you provide some guidance and/or advice? We feel ourselves stuck. Customers
 are asking for the new release and we have difficult to decide how to
 proceed. In the worst case, we will need to drop support for Firefox and
 encourage our customers to use a different browser.

What's the name of your product? Is there any way I can have a go with it?

Cheers,
Brian
-- 
Mozilla Networking/Crypto/Security (Necko/NSS/PSM)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Cost of ICU data

2013-10-17 Thread Brian Smith
On Thu, Oct 17, 2013 at 3:46 AM, Axel Hecht l...@mozilla.com wrote:
 We have issues with disk space, currently. We're already in the situation
 where all our keyboard data doesn't fit on quite a few of the devices out
 there.

Where can one read more about this? This ICU data is not *that* huge.
If we can't afford a couple of megabytes now on B2G then it seems like
we're in for severe problems soon. Isn't Gecko alone growing by
megabytes per year?

Cheers,
Brian
-- 
Mozilla Networking/Crypto/Security (Necko/NSS/PSM)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: devolution of cleartype rendering in Fx chrome

2013-10-16 Thread Brian Smith
On Wed, Oct 16, 2013 at 1:39 PM,  al...@yahoo.com wrote:
 In general, if I understand correctly, it's hard to use native subpixel AA
 in layers that use hardware accelerated compositing.  So in some cases we
 might need to choose between speed and subpixel rendering. (I'm not at all
 an expert in this area, though.)

 This is non accelerated rendering using old, stable, xp era rendering apis.
 There's no question that proper cleartype rendering can be achieved.  This
 all used to work even with the post 4.0 rendering susbsytem, these are
 relatively recent regressions (15, 18, 27).

 It's unreasonable that after a decade of adequate rendering on xp it should
 start falling apart like this.

I agree with the others that correct ClearType rendering is more
important than whatever performance gain we'd get by having poor
ClearType rendering. ClearType is one of the biggest reasons keeping
me using Windows instead of switching to other platforms; I almost
feel like I couldn't work without it. Further, because we generally
have quite small text in our browser chrome, ClearType rendering in
the browser chrome is especially important to get right, at least in the address
bar.

My understanding is that Windows XP is our top platform or
second-to-top platform behind Windows 7. We have more Windows XP users
than we have on Mac, Linux, B2G, and Android combined, right?

Because most people at Mozilla don't run Windows, and especially
because almost nobody at Mozilla runs Windows XP (and nobody should,
inside or outside of Mozilla), it may be a little too easy for us to
marginalize concerns like this, since we have no dogfooding and
because non-Windows users probably have a hard time grasping the
importance of ClearType for many end-users.

Still, I think this issue is one that is more appropriately discussed
on dev-firefox, not dev-platform. The dev-firefox mailing list is
here:
https://mail.mozilla.org/pipermail/firefox-dev/

Note that the dev-firefox mailing list is moderated and I recommend
that you use a more positive tone when posting to that mailing list.
If you have trouble getting your messages through moderation, let me
know and I will help.

Cheers,
Brian
-- 
Mozilla Networking/Crypto/Security (Necko/NSS/PSM)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Cost of ICU data

2013-10-15 Thread Brian Smith
On Tue, Oct 15, 2013 at 9:06 AM, Benjamin Smedberg
benja...@smedbergs.us wrote:
 Do we need this data for any language other than the language Firefox ships
 in? Can we just include the relevant language data in each localized build
 of Firefox, and allow users to get other language data via downloadable
 language packs, similarly to how dictionaries are handled?

My understanding is that web content should not be able to tell which
locale the browser is configured to use, for privacy (fingerprinting)
reasons. If we went the route suggested above, it would be easy to
figure out, for many users, which locale he/she is using.

 I am still working to get better number to quantify the costs in terms of
 lost adoption for additional download weight.

My (naive) understanding is that Windows has its own API that does
what ICU does. I believe that Internet Explorer 11 is an existence
proof of that. If we used the Windows API on Windows, maybe we could
avoid building ICU altogether on Windows. Since that accounts for 90+%
of our users, that would almost make it "problem solved" all on its
own, even if we did nothing else.

Cheers,
Brian
-- 
Mozilla Networking/Crypto/Security (Necko/NSS/PSM)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Cost of ICU data

2013-10-15 Thread Brian Smith
On Tue, Oct 15, 2013 at 10:50 AM, Anne van Kesteren ann...@annevk.nl wrote:
 On Tue, Oct 15, 2013 at 6:45 PM, Benjamin Smedberg
 benja...@smedbergs.us wrote:
 On 10/15/2013 1:18 PM, Brian Smith wrote:
 My understanding is that web content should not be able to tell which
 locale the browser is configured to use, for privacy (fingerprinting)
 reasons.

 I haven't heard this rule before. By default your browser language affects
 the HTTP accept-lang setting, as well as things like default font choices.
 You can certainly customize those back to a non-fingerprintable setting, but
 I'm not convinced that we should worry about this as a fingerprinting
 vector.

 I think preventing fingerprinting at a technical level is something
 we've lost though we should try to avoid introducing new vectors.

I think, at least, we should consider ways to avoid adding new vectors
when we are making decisions. It doesn't have to be *the* deciding
factor.

 As far as JavaScript API features go, I don't think we should vary our
 offering by locale. E.g. for Firefox OS we want changing locale to
 just work and not require a new version of Firefox OS. The same goes
 for a computer in a hotel or hostel or some such. Firefox should work
 for each locale users might have set in Gmail.

I strongly agree with this. No doubt there is a strong correlation
between the UI locale and the locale used for web content, but it is
far from a perfect correlation. Socially, we should be erring on the
side of encouraging a multilingual society instead of discouraging a
multilingual society. Technically, we should minimize the web-facing
differences between different installations of Firefox, because having
a consistent platform for web developers is a good thing. That is why
we create web standards, and that is why making parts of standards
optional is generally a bad thing.

I have no idea how to install a langpack. Presumably it is something
that is done through AMO. I am skeptical that this is easy enough to
make it acceptable to push this task off to the user. We should at
least automate it for them. If this data is too large and contributing
towards aborted installs, why not just split the installation phase
into two parts, and install the locale data in parallel to starting up
the browser?

Cheers,
Brian
-- 
Mozilla Networking/Crypto/Security (Necko/NSS/PSM)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: What platform features can we kill?

2013-10-10 Thread Brian Smith
On Thu, Oct 10, 2013 at 3:43 AM, Till Schneidereit
t...@tillschneidereit.net wrote:
 On Thu, Oct 10, 2013 at 12:00 PM, Gabriele Svelto gsve...@mozilla.com wrote:
 On 10/10/2013 02:36, Zack Weinberg wrote:

 In that vein, I think we should take a hard look at the image decoders.
 Not only is that a significant chunk of attack surface, it is a place
 where it's hard to innovate; image format after image format has died on
 the vine because it wasn't *enough* of an improvement to justify the
 additional glob of compiled code. Web-deliverable JS image decoders
 could open that up.


 Considering the performance profile of some of our low-end platforms (most
 Firefox OS devices, low-end Android devices too) I don't think that would be
 a good idea right now. Image decoding speed has a very measurable impact
 there during page/application startup. The difference between vectorized
 code-paths (NEON on ARM) and plain C is quite significant so moving it to JS
 (even asm.js-enabled JS) would probably lead to pretty bad performance
 regressions.

 Note that we'll have SIMD support in JS in the not-too-distant
 future[1]. Once asm.js supports it, this idea might be more practical.

 [1]: https://bugzilla.mozilla.org/show_bug.cgi?id=904913

I'm not sure. Things like this seem like really good ideas:
http://blogs.msdn.com/b/ie/archive/2013/09/12/using-hardware-to-decode-and-load-jpg-images-up-to-45-faster-in-internet-explorer-11.aspx

Obviously, I am linking to somewhat of an advertisement of a
competitor but the idea sounds great, especially the bit about
significantly lower memory usage.

Cheers,
Brian
-- 
Mozilla Networking/Crypto/Security (Necko/NSS/PSM)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: What platform features can we kill?

2013-10-09 Thread Brian Smith
On Wed, Oct 9, 2013 at 9:01 AM, Gervase Markham g...@mozilla.org wrote:
 * Windows integrated auth

I would love to kill Windows integrated auth. It seems like doing so
would mean almost the same thing as saying we don't care about
intranets though. That's something I would be very interested in
hearing about from the Product team.

We should remove the legacy window.crypto.* (MOZ_DOMCRYPTO_LEGACY)
stuff described at [1]. (Warning: The features mentioned in this
article are proprietary Mozilla extensions, and are not supported in
any other browser.) I am working on sorting out the politics of doing
so on dev-tech-crypto [2].

[1] https://developer.mozilla.org/en-US/docs/JavaScript_crypto
[2] 
https://groups.google.com/d/msg/mozilla.dev.tech.crypto/FRmpYubnan4/DDiAtniVW-0J

Cheers,
Brian
-- 
Mozilla Networking/Crypto/Security (Necko/NSS/PSM)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: What platform features can we kill?

2013-10-09 Thread Brian Smith
On Wed, Oct 9, 2013 at 9:01 AM, Gervase Markham g...@mozilla.org wrote:
 Attack surface reduction works:
 http://blog.gerv.net/2013/10/attack-surface-reduction-works/

 In the spirit of learning from this, what's next on the chopping block?

Master password. The UI is prone to phishing, it causes all sorts of
problems because of how we use the login to the NSS database to
implement it, it causes annoying UX for the people that use it, the
cryptography used is useless (bing FireMaster), there are hardly any
resources to do anything to actually fix any of these problems other
than remove it, and it slows down progress on important security
features.

Cheers,
Brian
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Poll: What do you need in MXR/DXR?

2013-10-03 Thread Brian Smith
On Wed, Oct 2, 2013 at 12:33 PM, Erik Rose e...@mozilla.com wrote:
 What features do you most use in MXR and DXR?

Blame. I wish blame mode was the default (only?) view.

 What keeps you off DXR? (What are the MXR things you use constantly? Or the 
 things which are seldom-used but vital?)

* Linking to a specific line of a specific revision.
* NSPR and NSS repos
* pre-Mercurial CVS history.

 If you're already using DXR as part of your workflow, what could it do to 
 make your work more fun?

* When in blame mode, the revision number of the most recent change to the
line is shown. I would like a link next to every line's revision
number that links to the *previous* revision where the line changed.
That way, I can navigate the change history much easier.

* When I click on the revision number next to a line in blame, I would
like that to navigate me to that line in the side-by-side diff view.
And, I want the side-by-side diff view to ALSO have blame revision
numbers, that allow me to navigate the side-by-side diffs' revision
history in a manner similar to previous point.

* I would like all of these things to be integrated into the editor of
Visual Studio 2012. (Perhaps this is out of scope of your group.)

Cheers,
Brian
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Implementing Pepper since Google is dropping NPAPI for good

2013-09-23 Thread Brian Smith
On Mon, Sep 23, 2013 at 2:41 PM, Benjamin Smedberg benja...@smedbergs.us wrote:

 On 9/23/2013 4:59 PM, Brian Smith wrote:

 Given that Pepper presents little benefit to users,


 Pepper presents a huge benefit to users because it allows the browser to
 sandbox the plugin. Once we have a sandbox in Firefox, NPAPI plugins will
 be the security weak spot in Firefox.

 You're making some assumptions here:

 * That the plugin is only Flash. No other plugin has Pepper or is likely
 to use pepper. And a significant number of users are still using non-Flash
 plugins.


I am making the assumption for now that Flash is the main thing we don't
have a solution for.


 * That we could have a pepper Flash for Firefox in a reasonable timeframe
 (highly unlikely given the engineering costs of Pepper).


I am not making this assumption. I am not saying we should/must do
Pepper. I am saying that it isn't right to say there is little benefit
to Pepper. Even with Flash being the only Pepper plugin, the (potential)
security advantages of Pepper make it very valuable.


 * That Flash is the primary plugin attack vector we should protect
 against. We know *out of date* Flash is an attack vector, but our security
 blocking already aims to protect that segment of the population. Up-to-date
 Flash does not appear to be highly dangerous.


Vulnerabilities are dangerous even when we don't know about them. And, even
when we do know about them, they are dangerous until the user can update to
a version without the vulnerability. My understanding is that if there were
a zero-day exploit in the Flash plugin, and Adobe took a week to ship a
fix, then all of our users would be vulnerable to that zero-day
vulnerability for a week or more.



  We need a story and a timeline for securing plugins. Click-to-play was a
 great start, but it is not enough.

 If our story for securing plugins is to
 drop support for them then we should develop the plan with a timeline for
 that.


 What is your definition of enough? With the change to mark plugins as
 click-to-play by default, they will be at least as secure as Firefox
 extensions, and less attack surface.


Like I said, the click-to-play change is a huge improvement. I can't
emphasize that enough. We don't have a sandbox for Firefox itself yet, so
now is not the time to be super critical of potential weaknesses in Adobe's
sandbox for Flash to argue that the exception for Flash is unreasonable. I
think everybody should feel good with the progress here.

 These are all longer-term items, some of which are still research-y. I
 don't think it's either possible or necessary to develop a plan with a
 timeline in our current situation.


I don't think we necessarily need a detailed timeline for killing plugins
completely. I agree it would likely be impractical to create one even if we
tried.

But, we should be able to create and share plans for what we can accomplish
regarding improving things with respect to plugins in the next year, at
least. For example, in your earlier comments, you said that it didn't seem
realistic to kill NPAPI plugins by the end of 2014. I suppose that
includes, in particular, Flash. I agree with you, though I think there are
some people at Mozilla that disagree. Either way, it seems like we should
develop a more concrete plan for dealing with Flash security issues, at
least, for 2014--e.g. creating a plan to make click-to-play for Flash in
the event of a zero-day in the Flash player a viable alternative. I would
be happy to help create such a plan.

Also, several internal systems within Mozilla Corporation are Flash-based,
including our company-wide videoconferencing system and parts of our
payroll system (IIUC). I think it would be great if we developed a plan for
Mozilla Corporation to be able to dogfood a Flash-player-free Firefox
internally by the end of 2014, at least.

Cheers,
Brian
-- 
Mozilla Networking/Crypto/Security (Necko/NSS/PSM)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Implementing Pepper since Google is dropping NPAPI for good

2013-09-23 Thread Brian Smith
On Mon, Sep 23, 2013 at 3:40 PM, Chris Peterson cpeter...@mozilla.com wrote:

 On 9/23/13 2:41 PM, Benjamin Smedberg wrote:
 Even if Firefox supported the Pepper API, we would still need a Pepper
 version of Flash. And Adobe doesn't have one; Google does.

 When I was an engineer on Adobe's Flash Player team, Google did all
 development and builds of Flash for Pepper. Adobe just verified that
 Google's builds pass a certification test suite.


Just to re-iterate: I am not saying we should/must do a Pepper Flash Player
in Firefox. I am not particularly for or against it.

However, I will say that the people at Google that worked on Chromium's
sandboxing and Pepper have already reached out to us to help us with
sandboxing. We shouldn't assume that they wouldn't help us with the Pepper
Flash player without asking them. It might actually be easier to secure
help from Google than from Adobe.

Cheers,
Brian


 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform




-- 
Mozilla Networking/Crypto/Security (Necko/NSS/PSM)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: You want faster builds, don't you?

2013-09-22 Thread Brian Smith
On Sun, Sep 22, 2013 at 6:20 PM, Mark Hammond mhamm...@skippinet.com.au wrote:

 [I also see a clobber build spend  5 minutes in various configure runs,
 which frustrates me every time I see it - so I minimize the shell ;]


Yep, and the amazing thing is that we basically don't even need to run most
of that junk on Windows (or any platform), because the vast majority of
configure output is fixed per platform--especially on Windows.

Example of wasted time: checking whether the C++ compiler can compile by
compiling a source file that we don't even need. How about we assume the
C++ compiler can compile a program and then fail when we compile a real
source file if/when our assumption is wrong?

On Mac and Linux, configure is so fast that it might not be worth
optimizing it. But, on Windows it is excruciatingly slow and it is worth
short-circuiting as much of it as we can.

Cheers,
Brian




 Mark

 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform




-- 
Mozilla Networking/Crypto/Security (Necko/NSS/PSM)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Removing xml:base

2013-09-16 Thread Brian Smith
On Mon, Sep 16, 2013 at 5:06 PM, Adam Kowalczyk adam-kowalcz...@o2.pl wrote:
 For what it's worth, I find xml:base very useful in my extension. It is a
 feed reader and it displays content from many third-party sources on a
 single page, so there's a need for multiple base URIs in order to resolve
 relative URIs correctly.

 The arguments so far have focused on code simplicity, lack of support in
 other browsers, and Mozilla itself not using the feature. I haven't seen
 anyone address the arguably most important question: is the feature useful
 for the web at large? Perhaps we should improve our implementation and push
 for its adoption, rather than jump on the bandwagon?

 In principle, functionality provided by xml:base seems useful for web
 applications that deal with third-party content. Maybe someone more
 knowledgeable can estimate how much need there is in practice, though.

I think that using xml:base for content aggregation is a good
indication that the application should be reworked to use iframe
sandbox. If one doesn't feel confident enough in the application's
ability to sanitize/rewrite the third-party content so that all the
links become absolute (a good bet for pretty much any application),
then one shouldn't be injecting it into the page, IMO.

Cheers,
Brian
-- 
Mozilla Networking/Crypto/Security (Necko/NSS/PSM)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: On builds getting slower

2013-08-26 Thread Brian Smith
I talked to gps today and told him I would let him know my numbers on my
machine. I will share them with everybody:

My Win32 debug clobber build (mach build after mach clobber) was 39:54.84
today, up from ~33:00 a few months ago. Not sure if it is my system.
Immediate rebuild (no-op) was 10:13.05. A second no-op rebuild was
10:32.36. It looks like every shared library and every executable got
relinked.

This is on Windows7 64-bit, with 32GB of memory, an Intel 520 SSD, and an
Intel i7-3920XM @ 2.90GHz/3.10GHz. (Lenovo W530, plugged in, on Maximum
Performance power setting). I was doing other things (browsing, coding) in
the foreground while these builds ran in the background.

Cheers,
Brian



On Fri, Aug 2, 2013 at 2:13 PM, Gregory Szorc g...@mozilla.com wrote:

 (Cross posting. Please reply to dev.builds.)

 I've noticed an increase in the number of complaints about the build
 system recently. I'm not surprised. Building mozilla-central has gotten
 noticeably slower. More on that below. But first, a request.

 Many of the complaints I've heard have been from overhearing hallway
 conversations, noticing non-directed complaints on IRC, having 3rd parties
 report anecdotes, etc. *Please, please, please voice your complaints
 directly at me and the build peers.* Indirectly complaining isn't a very
 effective way to get attention or to spur action. I recommend posting to
 dev.builds so complaints and responses are public and easily archived. If
 you want a more personal conversation, just get in contact with me and I'll
 be happy to explain things.

 Anyway, on to the concerns.

 Builds are getting slower. http://brasstacks.mozilla.com/gofaster/#/ has
 high-level trends for our automation infrastructure. I've also noticed
 my personal machines taking ~2x longer than they did 2 years ago.
 Unfortunately, I can't give you a precise breakdown over where the
 increases have been because we don't do a very good job of recording these
 things. This is one reason why we have better monitoring on our Q3 goals
 list.

 Now, on to the reasons why builds are getting slower.

 # We're adding new code at a significant rate.

 Here is a breakdown of source file types in the tree by Gecko version.
 These are file types that are directly compiled or go through code
 generation to create a compiled file.

 Gecko 7: 3359 C++, 1952 C, 544 CC, 1258 XPIDL, 110 MM, 195 IPDL
 Gecko 14: 3980 C++, 2345 C, 575 CC, 1268 XPIDL, 272 MM, 197 IPDL, 30 WebIDL
 Gecko 21: 4606 C++, 2831 C, 1392 CC, 1295 XPIDL, 292 MM, 228 IPDL, 231
 WebIDL
 Gecko 25: 5211 C++, 3029 C, 1427 CC, 1268 XPIDL, 262 MM, 234 IPDL, 441
 WebIDL

 That nets totals of:

 7: 7418
 14: 8667
 21: 10875
 25: 11872

 As you can see, we're steadily adding new source code files to the tree.
 mozilla-central today has 60% more source files than Gecko 7! If you assume
 number of source files is a rough approximation for compile time, it's
 obvious why builds are getting slower: we're building more.

 As large new browser features like WebRTC and the ECMAScript
 Internationalization API continue to dump hundreds of new source files in
 the tree, build times will increase. There's nothing we can do about this
 short of freezing browser features. That's not going to happen.

 # Header dependency hell

 We have hundreds of header files that are included in hundreds or even
 thousands of other C++ files. Any time one of these widely-used headers
 changes, the object files get invalidated by the build system dependencies
 and we have to re-invoke the compiler. This also likely invalidates ccache,
 so it's just like a clobber build.

 No matter what we do to the build backend to make clobber builds faster,
 header dependency hell will continue to undermine this progress for
 dependency builds.

 I don't believe the build config group is in a position to tackle header
 dependency hell at this time. We are receptive to good ideas and will work
 with people to land patches. Perhaps an ad-hoc group of Platform developers
 can band together to address this?

 # Increased reliance on C++ language features

 I *suspect* that our increased reliance on C++ language features such as
 templates and new C++11 features is contributing to slower build times.
 It's been long known that templates and other advanced language features
 can blow up the compiler if used in certain ways. I also suspect that
 modern C++11 features haven't been optimized to the extent years-old C++
 features have been. Combine this with the fact compilers are working harder
 than ever to optimize code and it wouldn't surprise me if a CPU cycle
 invested in the compiler isn't giving the returns it used to.

 I would absolutely love for a compiler wizard to sit down and profile
 Gecko C++ in Clang, GCC, and MSVC. If there are things we can do to our
 source or to the compilers themselves to make things faster, that could be
 a huge win.

 Like dependency hell, I don't believe the 

Re: Rethinking build defaults

2013-08-18 Thread Brian Smith
On Fri, Aug 16, 2013 at 2:43 AM, Andreas Gal andreas@gmail.com wrote:

 I would like to propose the opposite approach:

 - Remove all conditional feature configuration from configure. WebRTC et
 al are always on. Features should be disabled dynamically (prefs), if at
 all.
 - Reduce configure settings to choice of OS and release or developer.
 - Require triple super-reviews (hand signed, in blood) for any changes to
 configure.
 - Make parts of the code base more modular and avoid super include files
 cross modules (hello LayoutUtils.h).


I would much rather we try (a milder version of) what Andreas suggests
before we start disabling shipping features by default in developer builds.
If we had closer to 100% test coverage then turning things off by default
might make sense. But, as it is, it is important that any manual
testing we do is testing something as close as possible to what we ship.

I do think we could benefit from a much simpler build system with fewer
options, so that we end up modifying configure.in and other
rebuild-the-world dependencies much less often. I don't think we should
completely eliminate conditionals in our build system. Like jlebar, I have
found there are cases where we can eliminate a lot of dead code on a
per-platform basis. However, we should be able to do said dead-code-removal
conditional compilation without modifying configure.in. Maybe that is as
simple as allowing support for a mozconfig-like addendum mechanism for
configure.in that lets somebody enable off-by-default features in their
local build without first needing to modify configure.in in a way that
would affect everybody else's builds.

If I think back to some of my recent changes to configure.in, they were
often changes that only affected one or two modules, and that logic could
have been done in a whatever.mk file or moz.build file that was included by
the affected modules instead of modifying configure.in. The only reason I
modified configure.in is because that's how we've always done it. For my
next such change, I will try to localize the conditional logic to my module
to avoid touching configure.in and whatever/confvars.sh. Perhaps others can
try the same experiment.

Cheers,
Brian
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Rethinking build defaults

2013-08-18 Thread Brian Smith
On Fri, Aug 16, 2013 at 4:27 AM, Mike Hommey m...@glandium.org wrote:

  - Remove all conditional feature configuration from configure.
  WebRTC et al are always on. Features should be disabled dynamically
  (prefs), if at all.
  - Reduce configure settings to choice of OS and release or developer.

 With my Debian hat on, let me say this is both x86/x86-64/arm and
 Firefox/Firefox-OS centric.


I understand that you see that as a negative, but I see that as a big
positive because it maximizes the benefit:cost ratio of platform support.


 There are features that don't build on non mainstream architectures (hey
 webrtc, i'm looking at you), and while I do understand the horror it can
 be to some people that there could be a Firefox (^WIceweasel) build that
 doesn't support all the web, it's still better to have a browser that
 doesn't support everything than no browser at all (and considering i get
 bug reports from people using or trying to use iceweasel on e.g ppc or
 ia64, yes, there *are* people out there that like or would like to have
 a working browser, even if it doesn't make coffee).

 And Gecko is not used only to build web browsers (for how long?), so it
 makes sense for some features to be disableable at build time.


If it helps at all, I'd be happy for us to stop trying to support any
(product, target platform, build platform, toolchain) combination in
mozilla-central that isn't (planned to be) used in a normal mozilla-central
TBPL run: https://tbpl.mozilla.org/.

Cheers,
Brian
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Standard C/C++ and Mozilla

2013-08-08 Thread Brian Smith
On Wed, Aug 7, 2013 at 6:47 PM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:

 (Sorry for the late reply, please blame it on Canadian statutory holidays,
 and my birthday date!)


Happy birthday!

On Fri, Aug 2, 2013 at 11:09 PM, Brian Smith br...@briansmith.org wrote:

 1. It avoids a phase of mass rewrites s/mozilla::Whatever/std::whatever/.
 (See below).
 2. It is reasonable to expect that std::whatever works as the C++
 standard says it should. It isn't reasonable to expect mozilla::Whatever to
 work exactly like std::whatever. And, often, mozilla::Whatever isn't
 actually the same as std::whatever.


 As Jeff mentioned, I think it's more important that we expect developers
 to read and believe the documentation where it exists.  The MFBT code is
 very well documented, and the documentation is usually in sync with the
 implementation.  That is already a huge improvement over the newer std::*
 stuff.  std::auto_ptr is perhaps the biggest example of people not reading
 documentation about std::* stuff.  ;-)


Still, all things being equal, it is better to help developers use std::*
instead of inventing something new, because it is likely (and increasingly
more likely) that they already know how std::* works. (I note that you also
made this point below.)


 But more importantly, as others mentioned, the fact that something lives
 in the std namespace doesn't mean that it adheres to the C++ standard.  So
 it seems to me like you're assuming that code living in the std namespace
 is bug free, but that's not true.  And when something lives in the std
 namespace, fixing it is very difficult.


I agree it is very difficult to deal with bugs in standard libraries.
Finding a bug in the implementation of std::whatever in a compiler/stdlib
we can't upgrade is a good reason to create mozilla::Whatever. Worrying,
ahead of time, that std::whatever might have a bug in some
implementation, isn't a good reason to create mozilla::Whatever. YAGNI.


 But for whatever it's worth, I think that in general, for the std
 replacement code living in MFBT, it's best for us to try really hard to
 match the C++ standard where it makes sense.  We sometimes go through a
 crazy amount of pain to do that (see my patch in bug 802806 as an
 example!).  But if something doesn't make sense in the C++ standard or is
 not fit for our needs, we should do the right thing, depending on the case
 at hand.


snip


 But then again I don't find this argument very convincing in this
 particular case.  I hope that this discussion has provided some good
 reasons why implementing out mozilla::Whatever sometimes makes sense.  And
 later on when we decide to switch to std::whatever, I'd consider such a
 rewrite a net win because it makes our code easier to approach by people
 familiar with std::whatever.


My argument is not that we should avoid creating mozilla::Whatever at all
costs. My argument is that we should prefer upgrading compilers and/or just
adding/backporting something to STLPort over writing new code in MFBT that is
the same-but-different from what the standard C++ library provides. Obviously,
if there is a major benefit to having our own thing, then we should do so.
(I already gave the example of mozilla/Atomics.h.)


 About the mozilla-build sed issue, that is really really
 surprising/disappointing -- I'd have expected that we ship GNU sed there?
 Even bsd sed on Mac supports -i.  Please file a bug about this with more
 details.


https://bugzilla.mozilla.org/show_bug.cgi?id=373784 (reported 2007-03-13).

Cheers,
Brian
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Standard C/C++ and Mozilla

2013-08-08 Thread Brian Smith
On Wed, Aug 7, 2013 at 6:47 PM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:

 But for whatever it's worth, I think that in general, for the std
 replacement code living in MFBT, it's best for us to try really hard to
 match the C++ standard where it makes sense.  We sometimes go through a
 crazy amount of pain to do that (see my patch in bug 802806 as an
 example!).  But if something doesn't make sense in the C++ standard or is
 not fit for our needs, we should do the right thing, depending on the case
 at hand.


Bug 802806 is about fixing a bug in part of mozilla::TypeTraits, which is
designed to be like std::type_traits. The reason we can't use
std::type_traits (from my reading of bug 900040) is that STLPort's
implementation is incorrect. So, I think this is a good example of where we
disagree. My position is that we should fix STLPort's implementation for
GCC 4.4 ARM Linux (maybe just backport a fixed version) and use
std::type_traits everywhere.

Question:

What, precisely, are the differences in semantics between the
similarly-named functions in mozilla::TypeTraits and std::type_traits? I
think it is hard to definitively answer this question and that's why I'd
like us to consider creating things like mozilla::TypeTraits a last-resort
option behind fixing STLPort for GCC 4.4 ARM Linux or dropping support for
old versions of compilers. (This isn't to say that the implementation or
documentation for mozilla::TypeTraits is bad; it is very impressive. Just,
I wonder if we needed to spend as much effort on it compared to the
alternatives.)
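
To make the question concrete, these are the kinds of compile-time checks in
question (shown here with the std versions; mozilla::TypeTraits exposes
similarly-named traits under Mozilla naming conventions):

    #include <type_traits>

    // All of these are evaluated at compile time. A semantic mismatch between
    // an STL implementation and a reimplementation only shows up in corner
    // cases, which is exactly why the differences are hard to pin down.
    static_assert(std::is_same<int, signed int>::value, "int is signed int");
    static_assert(std::is_integral<char>::value, "char is an integral type");
    static_assert(!std::is_floating_point<int>::value, "int is not floating point");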

Cheers,
Brian
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: reminder: content processes (e10s) are now used by desktop Firefox

2013-08-04 Thread Brian Smith
On Wed, Jul 31, 2013 at 1:10 AM, Gavin Sharp ga...@gavinsharp.com wrote:

 Bug 870100 enabled use of the background thumbnail service in Firefox
 desktop, which uses a browser remote=true to do thumbnailing of pages in
 the background.

 That means that desktop Firefox now makes use of E10S content processes.
 They have a short life time (one page load) and are generally triggered by
 opening about:newtab when thumbnails are missing or out of date (2 days
 old).


Besides the crashes, NSPR logging to a file is messed up because all the
processes write to the same log file. See:
https://developer.mozilla.org/en-US/docs/Mozilla/Debugging/HTTP_logging?redirectlocale=en-USredirectslug=HTTP_Logging#Creating_separate_logs_for_child_processes

I think it is time to make GECKO_SEPARATE_NSPR_LOGS the default.

Cheers,
Brian
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: std::unique_ptr, std::move,

2013-08-02 Thread Brian Smith
On Fri, Aug 2, 2013 at 2:58 PM, Mike Hommey m...@glandium.org wrote:

  Upgrading minimum compiler requirements doesn't imply backporting those
  requirements to Aurora where ESR24 is right now.  Are you opposed to
  updating our minimum supported gcc to 4.7 on trunk when Firefox OS is
 ready
  to switch?

 Not at all, as long as ESR24 keeps building with gcc 4.4. I've even been
 complaining about b2g still using gcc 4.4 on trunk...


This adds too much risk of security patches failing to backport from
mozilla-central to ESR 24. Remember that one of the design goals of ESR is
to minimize the amount of effort we put into it so that ESR doesn't slow
down real Firefox. AFAICT, most people don't even want ESR at all. So, a
constraint to keep ESR 24 compatible with GCC 4.4 needs to include some
resources for doing the backports.

Cheers,
Real Brian
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: std::unique_ptr, std::move,

2013-08-02 Thread Brian Smith
On Fri, Aug 2, 2013 at 1:36 AM, Joshua Cranmer  pidgeo...@gmail.com wrote:

 On 8/1/2013 5:46 PM, Brian Smith wrote:

 FWIW, I talked about this issue with a group of ~10 Mozillians here in
 Berlin and all of them (AFAICT) were in favor of requiring that the latest
 versions of GCC be used, or even dropping GCC support completely in favor
 of clang, if it means that we can use more C++ language features and if it
 means we can avoid wasting time writing polyfills. Nobody saw installing a
 new version of GCC as part of the build environment as being a significant
 impediment.


 And how many of them have actually tried to install new versions of gcc
 from scratch? As someone who works in compiler development, I can tell you
 firsthand that setting up working toolchains is an intricate dance of
 getting several tools to work together--the libc headers, the standard C++
 library headers, debugger, linker, and compiler are all maintained by
 different projects, and a version mismatch between any two of these can
 foul up getting things to work that requires a lot of time and patience to
 fix even by people who know what they're doing. Look, for example, at some
 of the struggles we have had to go through to get Clang-on-Linux working on
 the buildbots.


We have mozilla-build for Windows. From what you say, it sounds like we
should have mozilla-build for Linux too that would include a pre-built GCC
or Clang or whatever we choose as *the* toolchain for desktop Linux.


 Also, the limiting factor in using new C++ features right now is b2g,
 which builds with g++-4.4. If we fixed that, the minimum version per this
 policy would be g++-4.7. the limiting factor would either be STLport (which
 is much slower to adopt C++11 functionality than other libraries tied
 primarily to one compiler) or MSVC, which has yet to implement several
 C++11 features.


Moving to GCC 4.7 is one of the requirements for the B2G system security
project so I hope that happens soon anyway. Also, the set of code that is
compiled for B2G is different from (though obviously overlapping with) the
set of code that is compiled for desktop. In fact, if my understanding of bug
854389 is correct, we could ALREADY be building Gecko with GCC 4.7 on B2G
if we did one of two things: (1) Add a one-line patch to some Android
header file, or (2) compile gonk with GCC 4.4 and compile Gecko with GCC
4.7 (or clang). If we have any more delays in upgrading to Jelly Bean then
we should consider one or both of these options.


 Instead of arguing right now about whether or not the minimum version
 policy suggested by glandium and I is too conservative, perhaps we should
 wait until someone proposes a feature whose need for polyfilling would
 depend on that policy comes up.


That sounds reasonable to me. So, based on that then, let's get back to my
original question that motivated the discussion of the policy: If we add
std::move, std::forward, and std::unique_ptr to STLPort for Android and
B2G, can we start using std::move, std::forward, and std::unique_ptr
throughout Gecko?
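
For anybody who hasn't used them yet, here is a tiny illustration (the names
are hypothetical) of what they buy us: ownership of a heap allocation is
handed from one place to another with no reference counting and no chance of
a double delete.

    #include <memory>
    #include <utility>

    struct ParsedFrame { /* ... */ };

    // The factory returns ownership to the caller; no raw owning pointers.
    std::unique_ptr<ParsedFrame> ParseFrame()
    {
      return std::unique_ptr<ParsedFrame>(new ParsedFrame());
    }

    // Taking a unique_ptr by value documents "this function takes ownership".
    void ConsumeFrame(std::unique_ptr<ParsedFrame> aFrame)
    {
      // aFrame (and the ParsedFrame it owns) is destroyed when this returns.
    }

    void Example()
    {
      std::unique_ptr<ParsedFrame> frame = ParseFrame();
      ConsumeFrame(std::move(frame)); // frame is now null; ownership moved
    }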

Cheers,
Brian
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Standard C/C++ and Mozilla

2013-08-02 Thread Brian Smith
On Wed, Jul 31, 2013 at 7:41 PM, Joshua Cranmer  pidgeo...@gmail.com wrote:

 implementation, libc++, libstdc++, and stlport. Since most nice charts of
 C++11 compatibility focus on what the compiler needs to do, I've put
 together a high-level overview of the major additions to the standard
 library [3]:
 * std::function/std::bind -- Generalization of function pointers


Note that Eric Rescorla implemented his own std::bind polyfill when he was
working on WebRTC. I also have some new code I am working on where
std::bind is extremely helpful.
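
For anybody who hasn't used it, this is the kind of thing it makes convenient
(hypothetical names, not actual WebRTC code): a member function plus some of
its arguments can be packaged into a callable and invoked later, without
writing a one-off functor class.

    #include <functional>

    struct Resolver
    {
      void OnResolved(int aStatus, const char* aHost) { /* ... */ }
    };

    void Example()
    {
      Resolver resolver;
      // Bind the receiver and the status now; the host is supplied later.
      std::function<void(const char*)> callback =
          std::bind(&Resolver::OnResolved, &resolver, 0, std::placeholders::_1);
      callback("example.org"); // calls resolver.OnResolved(0, "example.org")
    }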


 Now that you have the background for what is or will be in standard C++,
 let me discuss the real question I want to discuss: how much of this should
 we be using in Mozilla?



 For purposes of discussion, I think it's worth breaking down the C++ (and
 C) standard library into the following components:
 * Containers--vector, map, etc.
 * Strings
 * I/O
 * Platform support (threading, networking, filesystems, locales)
 * Other helpful utilities (std::random, std::tuple, etc.)

 The iostream library has some issues with using (particularly static
 constructors IIRC), and is not so usable for most of the things that Gecko
 needs to do.


It is very useful for building a logging interface that is safer and more
convenient than NSPR's printf-style logging. Note that, again, Eric
Rescorla already built a (partial) iostream-based wrapper around NSPR for
WebRTC. I would say that, if there is no additional overhead, then we
should consider making iostream-based logging the default way of doing
things in Gecko because it is so much less error-prone.
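
Roughly what I mean (this is a sketch, not Eric's actual wrapper; the module
and macro names are made up) is a front end that formats its arguments with
operator<< before handing the result to PR_LOG, so there is no printf format
string to get out of sync with the argument types:

    #include <sstream>
    #include "prlog.h" // PR_LOG, PR_LOG_DEBUG, PRLogModuleInfo (NSPR)

    // Assumed to be initialized elsewhere with PR_NewLogModule("example").
    extern PRLogModuleInfo* gExampleLog;

    #define EXAMPLE_LOG(level, args)                                  \
      do {                                                            \
        std::ostringstream ss_;                                       \
        ss_ << args;                                                  \
        PR_LOG(gExampleLog, (level), ("%s", ss_.str().c_str()));      \
      } while (0)

    // Usage: EXAMPLE_LOG(PR_LOG_DEBUG, "handshake took " << ms << " ms");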


 Even if fully using the standard library is untenable from a performance
 perspective, usability may be enhanced if we align some of our APIs which
 mimic STL functionality with the actual STL APIs. For example, we could add
 begin()/end()/push_back()/etc. methods to nsTArray to make it a fairly
 drop-in replacement for std::vector, or at least close enough to one that
 it could be used in other STL APIs (like std::sort, std::find, etc.).
 However, this does create massive incongruities in our API, since the
 standard library prefers naming stuff with this_kind_of_convention whereas
 most Mozilla style guides prefer ThisKindOfConvention.


Perhaps a more annoying issue--though not a showstopper--is that
unique_ptr::release() means something quite different from what
nsXXXPtr::Release() means.


 With all of that stated, the questions I want to pose to the community at
 large are as follows:
 1. How much, and where, should we be using standard C++ library
 functionality in Mozilla code?


We should definitely prefer using the standard C++ library over writing any
new code for MFBT, *unless* there is consensus that the new thing we'd do
in MFBT is substantially clearer. (For example, I think some people
successfully argued that we should have our own atomic types because our
interface is clearly better than std::atomic.)

Even in the case where MFBT or XPCOM stuff is generally better, We should
*allow* using the standard C++ library anywhere that has additional
constraints that warrant a different tradeoff; e.g. needing to be built
separately from Gecko and/or otherwise needing to minimize Gecko
dependencies.


 3. How should we handle bridge support for standardized features not yet
 universally-implemented?


Generally, I would much rather we implement std::whatever ourselves than
implement mozilla::Whatever, all other things being equal. This saves us
from the massive rewrites later to s/mozilla::Whatever/std::whatever/;
while such rewrites are generally a net win, they are still disruptive
enough to warrant trying to avoid them when possible. In the case where it
is just STLPort being behind, we should just add the thing to STLPort (and
try to upstream it). In the case where the lack of support for a useful
standard library feature is more widespread, we should still implement
std::whatever if the language support we have enables us to do so. I am not
sure where such implementations should live.


 4. When should we prefer our own implementations to standard library
 implementations?


It is a judgement call. The default should be to use standard library
functions, but we shouldn't be shy about using our own stuff if it is
clearly better. On the other side, we shouldn't be shy about replacing uses
of same-thing-but-different Mozilla-specific libraries with uses of the
standard libraries, all things being equal.


 5. To what degree should our platform-bridging libraries
 (xpcom/mfbt/necko/nspr) use or align with the C++ standard library?


I am not sure why you include Necko in that list. Did you mean NSS? For
NSPR and NSS, I would like to include some very basic utilities like
ScopedPRFileDesc that are included directly in NSPR/NSS, so that we can use
them in GTest-based tests, even if NSPR and NSS otherwise stick with C.
But, I don't know if the module owners of those modules will accept them.


 6. Where support for an API 

Re: std::unique_ptr, std::move,

2013-08-02 Thread Brian Smith
On Sat, Aug 3, 2013 at 12:51 AM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:

 This adds too much risk of security patches failing to backport from

 mozilla-central to ESR 24. Remember that one of the design goals of ESR
 is to minimize the amount of effort we put into it so that ESR doesn't
 slow down real Firefox. AFAICT, most people don't even want ESR at all.
 So, a constraint to keep ESR 24 compatible with GCC 4.4 needs to include
 some resources for doing the backports.


 How does this add too much risk?  Patches that we backport to ESR are
 usually fairly small, and there is already some risk involved as the
 codebases diverge, of course.


There are two kinds of risks: The risk that any developer would need to
waste time on ESR just to support a product that isn't even Firefox on a
platform that virtually nobody uses, and the risk that comes with making
any changes to the security fix that you are trying to backport. The ideal
case (assuming we can't just kill ESR) is that your backport consists of
hg graft and hg push and you're done. That is what we should optimize
for, as far as supporting ESR is concerned. You are right, of course, that
ESR and mozilla-central diverge as mozilla-central is improved and there
are likely to be merge conflicts. But, we should not contribute to that
divergence unnecessarily.

How many developers are even insisting on building Firefox on a Linux
distro that insists on using GCC 4.4, who are unwilling to upgrade their
compiler? We're talking about a very, very small minority of people,
AFAICT. I know one of those people is Mike, who is a very, very important
Mozillian whom I definitely do not intend to insult. But, it really does
seem to me that instead of us trying to bending to the desires of the most
conservative distros, the rational decision is to ask those distros who
insist on using very old tools for very long periods of time to solve the
problem that they've caused themselves with their choices. I think we could
still feel really good about how Linux-friendly we are even if we
shifted more of these kinds of burdens onto the distros.

Again, no offense intended for Mike or any other maintainer of any Linux
distro. I have nothing against Debian or any group.

Cheers,
Brian
-- 
Mozilla Networking/Crypto/Security (Necko/NSS/PSM)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: On indirect feedback

2013-08-02 Thread Brian Smith
On Sat, Aug 3, 2013 at 1:32 AM, Robert O'Callahan rob...@ocallahan.org wrote:

 On Sat, Aug 3, 2013 at 9:13 AM, Gregory Szorc g...@mozilla.com wrote:

  Many of the complaints I've heard have been from overhearing hallway
  conversations, noticing non-directed complaints on IRC, having 3rd
 parties
  report anecdotes, etc. *Please, please, please voice your complaints
  directly at me and the build peers.* Indirectly complaining isn't a very
  effective way to get attention or to spur action.
 

 Yes! Indirect feedback is antisocial and destructive.

 http://robert.ocallahan.org/2013/05/over-time-ive-become-increasingly.html
 FWIW.

 Even if you're just the recipient of indirect feedback, you can help, by
 refusing to hear it until direct feedback has been given.


Rob,

I think some people may interpret what you say in that last paragraph the
opposite of how you intend. I am pretty sure you mean something like "If
somebody starts to complain to you about somebody else, then stop them and
ask them to first talk to the person they were trying to complain about."

I recommend that, when you hear that people are giving indirect feedback
about you or your work to others, that you seek them out in person (or
video calling, if there's too much distance). I've also found that people
often assume that I'm going to be difficult to talk with because of the
direct way I write; seeking people out for face-to-face discussions seems
to have had the side-effect of making it easier for people to read my email
with the correct tone. For the same reason, I highly recommend showing up
at that person's desk over emailing them, if at all possible.

Cheers,
Brian
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: std::unique_ptr, std::move,

2013-08-02 Thread Brian Smith
On Sat, Aug 3, 2013 at 12:50 AM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:

 On 2013-08-02 4:49 PM, Brian Smith wrote:

 That sounds reasonable to me. So, based on that then, let's get back to my
 original question that motivated the discussion of the policy: If we add
 std::move, std::forward, and std::unique_ptr to STLPort for Android and
 B2G, can we start using std::move, std::forward, and std::unique_ptr
 throughout Gecko?


 Yes, if they're available in all of our environments, I don't see why not.
  What we want to be careful with is how the STLport changes would work (we
 don't want to make builds fail if you just grab an Android NDK).


I am not quite sure what you mean. Here is the workflow that I was
envisioning for solving this problem:

1. Add std::move, std::forward, and std::unique_ptr to STLPort (backporting
them from STLPort's git master, with as few changes as possible).
2. Write a patch that changes something in Gecko to use std::move,
std::forward, and std::unique_ptr.
3. Push that patch to try (try: -b o -p all -u all -t none).
4. If all the builds build, and all the tests pass, then ask for review.
5. After r+, land on mozilla-inbound. If all the builds build, and all the
tests pass, then anybody/everybody is free to use std::move, std::forward,
and std::unique_ptr.

To me, this is the most (only?) reasonable way to decide when enough
configurations support a language feature/library we are considering using.

Cheers,
Brian



 Ehsan




-- 
Mozilla Networking/Crypto/Security (Necko/NSS/PSM)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Standard C/C++ and Mozilla

2013-08-02 Thread Brian Smith
On Sat, Aug 3, 2013 at 12:47 AM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:

 On 2013-08-02 5:21 PM, Brian Smith wrote:

 3. How should we handle bridge support for standardized features not yet
 universally-implemented?


 Generally, I would much rather we implement std::whatever ourselves than
 implement mozilla::Whatever, all other things being equal.


 Yes, but it's still not clear to me why you prefer this.


1. It avoids a phase of mass rewrites s/mozilla::Whatever/std::whatever/.
(See below).
2. It is reasonable to expect that std::whatever works as the C++ standard
says it should. It isn't reasonable to expect mozilla::Whatever to work
exactly like std::whatever. And, often, mozilla::Whatever isn't actually
the same as std::whatever.



  This saves us
 from the massive rewrites later to s/mozilla::Whatever/std::whatever/;
 while such rewrites are generally a net win, they are still disruptive
 enough to warrant trying to avoid them when possible.


 Disruptive in what sense?  I recently did two of these kinds of
 conversions and nobody complained.


You have to rebase all your patches in your patch queue and/or run scripts
on your patches (that, IIRC, don't run on Windows because mozilla-build
doesn't have sed -i). I'm not complaining about the conversions you've
done, because they are net wins. But, it's still less disruptive to avoid
unnecessary rounds of rewrites when possible, and
s/mozilla::Whatever/std::whatever/ seems unnecessary to me when we could
have just named mozilla::Whatever std::whatever to start with.

Cheers,
Brian
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: std::unique_ptr, std::move,

2013-08-01 Thread Brian Smith
On Wed, Jul 31, 2013 at 2:19 PM, Mike Hommey m...@glandium.org wrote:

 On Wed, Jul 31, 2013 at 01:06:27PM +0200, Brian Smith wrote:
  On Wed, Jul 31, 2013 at 12:34 PM, Mike Hommey m...@glandium.org wrote:
 
   I strongly oppose to any requirement that would make ESR+2 (ESR31)
   not build on the current Debian stable (gcc 4.7) and make ESR+1
   (ESR24) not build on the old Debian stable (gcc 4.4). We're not
   going to change the requirements for the latter. And b2g still
   requires gcc 4.4 (with c++11) support anyways. Until they switch to
   the same toolchain as android, which is 4.7.
 
  Why are you so opposed? I feel like I can give a lot of good reasons
  why such constraints are a net loss for us, but I am not sure what is
  driving the imposition of such constraints on us.

 Because Mozilla is not the only entity that builds and distributes
 Gecko-derived products, including Firefox, and that we can't demand
 everyone to be using the latest shiny compiler.


You are not answering the question. You are just making your assertion in a
different way.

First of all, when we created ESR, there was the understanding that
ESR-related concerns would not hold back the mainline development. Any
discussion about ESR in the context of what we use for *mozilla-central* is
going against our original agreements for ESR.

FWIW, I talked about this issue with a group of ~10 Mozillians here in
Berlin and all of them (AFAICT) were in favor of requiring that the latest
versions of GCC be used, or even dropping GCC support completely in favor
of clang, if it means that we can use more C++ language features and if it
means we can avoid wasting time writing polyfills. Nobody saw installing a
new version of GCC as part of the build environment as being a significant
impediment.

Everybody using Windows as their development environment has to
download and install a multitude of programs and libraries in order to
build Gecko. I've never heard of a justification for why Linux needs to be
different. And, in fact, except for the compiler/linker/etc., Linux isn't
different; that's why we have bootstrap.py that downloads and installs a
bunch of stuff for Linux too. Why should only the compiler (including
linker, etc.) only on Linux be treated specially? What justifies the
reduced productivity that results from us wasting time writing unnecessary
polyfills and/or writing worse code to avoid language features that aren't
supported on some particular Linux distribution's version of GCC? How many
developers working on Firefox are even using Debian as their development
platform? What percentage of Firefox users are using Firefox on Debian?

My position is that we should be doing everything we can to improve
developer productivity, and that means using the best possible tools we
have available to us. I have a hard time seeing how any Linux
distributions' policies could possibly be more important than our
productivity.

Cheers,
Brian
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: std::unique_ptr, std::move,

2013-08-01 Thread Brian Smith
On Wed, Jul 31, 2013 at 8:09 PM, Joshua Cranmer  pidgeo...@gmail.comwrote:


 More generally, nobody should be reasonably expected to write code that
 builds with any combination that isn't used on mozilla-central's TBPL. So,
 (clang, MSVC) is not really something to consider, for example.


 clang + MSVC is not a combination I expect us to support anytime soon. My
 main intent was to point out that library polyfilling is much harder than
 it is for compiler features. Look at the mess that is determining when we
 can use <atomic>.


I agree with you. That is exactly why I suggested using the latest versions
of every compiler whenever possible, and otherwise reducing the number of
compiler+library combinations we need to deal with.



  functionality. The only time we should be requiring less than the latest
 version of any compiler on any platform is when that compiler is the
 compiler used for official builds on that platform and the latest version
 doesn't work well enough.


 I disagree. My baseline recommendation is that we should support the
 newest compiler present on a stable distribution (I assume Debian stable
 for a given ESR). This amounts to gcc 4.7 in practice on Linux at the
 moment. Windows and OS X compiler support is harder to gauge, but I think
 we should at least support the last two released versions of a compiler at
 any given time. Clang releases roughly every 6 months and MSVC is moving to
 a roughly yearly release schedule. This means that we should generally
 expect to support any compiler version released in the last two or three
 years.


I am fine with you and Mike and others disagreeing with me. But, it is
frustrating that you are saying that we should/must do these things,
without providing any explanation of the reasoning behind your suggestions.
Please explain why you are suggesting these things.


 I think we need a single polyfill for C++ standard library features. NSPR
 was that for C and POSIX, but as we get increasingly powerful things in
 standard C++, it makes less sense to be using it for base platform support
 (asynchronous I/O and sockets are planned for a networking TS). I've been
 assuming that this C++ polyfill is MFBT, but it may make sense to separate
 the C++ polyfill (+ arguably some stuff like Assertions.h/Attributes.h)
 from the assorted other ADT stuff (like BloomFilter, SplayTree).


That sounds reasonable to me. But, I'd rather us upgrade compiler
requirements than have us write any new polyfills for MFBT that would be
unnecessary in the face of upgraded compiler requirements. Then the great
developers that are often writing this code for MFBT could be writing great
code to do something else. That isn't to say that we need to convert
everything that currently uses MFBT things to use standard library things,
if we think that the MFBT equivalent is substantially better than what the
standard library offers.

Cheers,
Brian
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: std::unique_ptr, std::move,

2013-07-31 Thread Brian Smith
On Wed, Jul 31, 2013 at 6:53 AM, Joshua Cranmer  pidgeo...@gmail.comwrote:

 On 7/30/2013 10:39 PM, Brian Smith wrote:

 Yes: Then we can use std::unique_ptr in parts of Gecko that are intended
 to
 be buildable without MFBT (see below).


 One thing I want to point out is that, while compiler features are
 relatively easy to select based on catching macro versions, the C++
 standard library is not, since compiler versions don't necessarily
 correlate with standard library versions. We basically support 4 standard
 libraries (MSVC, libstdc++, stlport, and libc++); under the right
 conditions, clang could be using any of those four versions. This means
 it's hard to tell when #include'ing a standard header will give us the
 feature or not. The C++ committee is actively working on a consensus
 solution to this issue, but it would not be rolled out to production
 compilers until 2014 at the earliest.


Basically, I'm proposing that we add std::unique_ptr, std::move,
std::forward, and some nullptr polyfill to STLPort with the intention that
we can assume these features work. That is, if some (compiler, standard
library) combination doesn't have these features then it would be an
unsupported combination.
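
To give an idea of the scale of what we'd be importing: the core of the
move/forward backport is just a few header-only templates along these lines
(a simplified sketch, not the exact STLPort code, and it assumes the
compiler itself already supports rvalue references):

  namespace std {

  template <typename T> struct remove_reference      { typedef T type; };
  template <typename T> struct remove_reference<T&>  { typedef T type; };
  template <typename T> struct remove_reference<T&&> { typedef T type; };

  // move: unconditionally cast to an rvalue reference.
  template <typename T>
  inline typename remove_reference<T>::type&& move(T&& t) {
    return static_cast<typename remove_reference<T>::type&&>(t);
  }

  // forward: preserve the value category of the original argument.
  template <typename T>
  inline T&& forward(typename remove_reference<T>::type& t) {
    return static_cast<T&&>(t);
  }
  template <typename T>
  inline T&& forward(typename remove_reference<T>::type&& t) {
    return static_cast<T&&>(t);
  }

  }  // namespace std

unique_ptr is more code than that, but it is still just a header-only
template, which is why backporting it from STLPort's git master should be
cheap.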

More generally, nobody should be reasonably expected to write code that
builds with any combination that isn't used on mozilla-central's TBPL. So,
(clang, MSVC) is not really something to consider, for example.


 One of the goals of MFBT is to bridge over the varying support of
 C++11/C++14 in current compilers, although it also includes useful data
 structures that are not necessary for C++ compatibility. Since we have an
 increasing number of semi-autonomous C++ projects in mozilla-central, it
 makes sense that we should have a smallish (header-only, if possible?)
 compatibility bridge library, but if that is not MFBT, then I don't know
 what it is or should be. As it stands, we have a fair amount of duplication
 right now.


We should be more aggressive in requiring newer compiler versions whenever
practical, and we should choose to support as few compiler/library
combinations as we can get away with. That way we can use as many C++11/14
features (not just library features, but also language features) as
possible without any portability shims, and we can save developer effort by
avoiding adding code to MFBT that duplicates standard library
functionality. The only time we should be requiring less than the latest
version of any compiler on any platform is when that compiler is the
compiler used for official builds on that platform and the latest version
doesn't work well enough.

Anyway, it would be easier to swallow the dependency on MFBT if it wasn't
so large (over 100 files now), if it tried to be (just) a polyfill for
missing standard library features, and if it could easily be used
independently of the Gecko build system. But, none of those constraints is
reasonable to place on MFBT, so that means MFBT isn't a good choice for
most things that need to also be able to be built independently of Gecko.

Cheers,
Brian
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: std::unique_ptr, std::move,

2013-07-31 Thread Brian Smith
On Wed, Jul 31, 2013 at 12:34 PM, Mike Hommey m...@glandium.org wrote:

 I strongly oppose to any requirement that would make ESR+2 (ESR31) not
 build on the current Debian stable (gcc 4.7) and make ESR+1 (ESR24) not
 build on the old Debian stable (gcc 4.4). We're not going to change the
 requirements for the latter. And b2g still requires gcc 4.4 (with c++11)
 support anyways. Until they switch to the same toolchain as android,
 which is 4.7.


Why are you so opposed? I feel like I can give a lot of good reasons why
such constraints are a net loss for us, but I am not sure what is driving
the imposition of such constraints on us.

Cheers,
Brian
--
Mozilla Networking/Crypto/Security (Necko/NSS/PSM)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal: requiring build peer review for Makefile.in changes

2013-07-30 Thread Brian Smith
On Fri, Jul 26, 2013 at 9:36 PM, Brad Lassey blas...@mozilla.com wrote:

 On 7/26/13 9:30 AM, Ehsan Akhgari wrote:

 I've written up the review policy at [1] and filed bug 898089 [2] to
 enforce/communicate this policy via Mercurial hooks.


 While I supported the review policy change here, I'm fairly strongly
 opposed to the idea of the commit hook enforcing it.  I've commented on
 the
 bug.

 + 1
 I also think a commit hook is a bit of overkill


I also agree that a commit hook seems like overkill.

I have found the build system team to be very responsive and helpful
regarding my more involved build system change review requests, so in
general I agree with the idea of the proposed policy given the current
state of things, though I think it is worded more strictly than necessary.
For example, it is OK to make something build conditionally based on a flag
when previously it was always built. Similarly, if I'm just adding a new
subdirectory of code or tests or whatever to an existing module, or
re-arranging the files across directories in a module, it is hardly worth
anybody's time to get a build system peer to review it; if even that is so
prone to being problematic, then that is a bug in the build system that
should be corrected.

I think there is something more important than publishing a policy that you
could do: publish the guidelines for modifying the build system. I.e.
document the things that you'd say in a code review so that if/when I ask
you for a review, we're not rehashing stuff you have told 1,000 people
1,000 times.

Cheers,
Brian
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


std::unique_ptr, std::move, and std::forward (was Re: Using C++0x auto)

2013-07-30 Thread Brian Smith
On Fri, Jul 19, 2013 at 4:46 AM, Mike Hommey m...@glandium.org wrote:

 Note that STL is another story. We're not using libstdc++ that comes
 with the compiler on android and b2g. We use STLport instead, and STLport
 has, afaik, no support for C++11 STL types. So, while we can now fix
 nsAutoPtr to use move semantics instead of copy semantics, we can't use
 std::unique_ptr.


I saw bug 896100 [1] wants to add mozilla::Move and mozilla::Forward.
Obviously, that is a clear improvement that we can build on.

But, shouldn't we just name these std::move and std::forward and use these
implementations only when we're using STLPort? I know we're not supposed to
add stuff to the std:: namespace normally, but that's exactly what STLPort
is doing.

And, more to the point, shouldn't we just add std::unique_ptr to STLPort
for Android so we can use std::unique_ptr everywhere? And/or just backport
the libstdc++ version to GCC 4.4. Isn't it all just templates?

Cheers,
Brian

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=896100
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: std::unique_ptr, std::move, and std::forward (was Re: Using C++0x auto)

2013-07-30 Thread Brian Smith
On Wed, Jul 31, 2013 at 4:50 AM, Ehsan Akhgari ehsan.akhg...@gmail.comwrote:

 On Tue, Jul 30, 2013 at 7:40 PM, Brian Smith br...@briansmith.orgwrote:

 But, shouldn't we just name these std::move and std::forward and use these
 implementations only when we're using STLPort? I know we're not supposed
 to
 add stuff to the std:: namespace normally, but that's exactly what STLPort
 is doing.


 We've avoided doing this so far in MFBT for everything in the language or
 the standard library that we had to reimplement ourselves.  I'm not aware
 of any practical problems in putting things in the std namespace (besides
 watching out for name clashes, which most standard library implementations
 avoid by either using nested namespaces for their implementation helpers,
 or symbols with underscore at the beginning of their name which are
 supposed to be reserved for implementations -- but real code in the while
 violates that all the time.)  But it still feels a bit unclean to put
 things into namespace std.  Is there a good reason why we should do that in
 this case?


Yes: Then we can use std::unique_ptr in parts of Gecko that are intended to
be buildable without MFBT (see below).



 (Also remember that STLport is an STL implementation, so it is entirely ok
 for them to put things into namespace std!)


To be clear, I am not proposing that we add std::move/forward/unique_ptr to
MFBT. I am suggesting that we add them to STLPort. We could even eventually
upstream them.

EDIT: I just saw Mike's post that STLPort upstream already has
unique_ptr/move/forward. Perhaps we can backport them into our STLPort tree.

FWIW, we have created a new certificate verification library written in
C++. One of my goals is to eventually make it so that it can be embedded in
server-side software to support things like OCSP stapling, short-lived
certificates, and auto-fixing of certificate chains in servers, which are
things that make SSL faster and easier to deploy. Basically, the idea is
that the server can (pre-)build/verify their certificate chain exactly as
Firefox would. There are also some security researchers interested in using
a real browser's certificate processing logic in studies they are doing.
This kind of research directly benefits my work on Gecko and I'm intending
to share this library with them so they can use it in their research.

For this sub-project, I've been trying to avoid any Gecko (including MFBT)
dependencies and I will be cutting down (removing?) the NSPR and NSS
dependencies over time. In order to avoid the MFBT dependency, I created my
own ScopedPtr class and cviecco added a hack for GCC 4.4's nullptr. We also
have been doing the typical hacky/dangerous stuff to deal with a world
without std::move()/forward() and without <cstdint>. Now we can use
<cstdint> (or at least <stdint.h>?), and I'm eager to fix these last mile
issues.

Besides that, in general I'd like to continue making Gecko's code less
foreign to C++ coders. In particular, I'd like to get rid of nsAutoPtr<T>
and mozilla::ScopedPtr<T> completely.

Cheers,
Brian
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Making proposal for API exposure official

2013-06-26 Thread Brian Smith
Andrew Overholt wrote:
 On 25/06/13 10:11 AM, Brian Smith wrote:
  In the document, instead of creating a blacklist of web technologies to
  which the new policy would not apply (CSS, WebGL, WebRTC, etc.), please
  list the modules to which the policy would apply.
 
 I started building up a list of modules to which the policy would apply
 but it grew quickly and there are a lot of modules in Core.

There is an easy way to build this list. When a module owner agrees to change 
the decision making process for his/her module to incorporate the policy, 
he/she can add his module to the list himself.

 How do you feel about refining the definition of web-exposed feature
 and then saying it applies to all modules but the module owner has veto
 power for applicability?

Module owners choose how to make decisions in their modules, though they can be 
overridden by Brendan.

I **highly** recommend that you re-read this:
http://www.mozilla.org/hacking/module-ownership.html

And, in particular: "We do not have an elaborate set of rules or procedures for 
how module owners manage their modules. If it works and the community is 
generally happy, great. If it doesn't, let's fix it and learn."

I understand that Brendan would like to have more/all web-facing functionality 
covered by some kind of guidelines similar to what you propose. I am not 
against that idea. However, I don't think the rules you write work very well 
for the modules I work in. For example, I don't think this part makes sense for 
networking or PKI: The Mozilla API review team will consist of Mozillians who 
have experience designing JS APIs and will have at least one representative 
from the JS team at all times. After the quarter is over, I am willing to 
spend time working with you to try to define a policy that might work better 
for modules that aren't DOM/JS-related.

Cheers,
Brian
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Making proposal for API exposure official

2013-06-25 Thread Brian Smith
Robert O'Callahan wrote:
 On Tue, Jun 25, 2013 at 3:08 PM, Brian Smith bsm...@mozilla.com wrote:
 
  At the same time, I doubt such a policy is necessary or helpful for the
  modules that I am owner/peer of (PSM/Necko), at least at this time. In
  fact, though I haven't thought about it deeply, most of the recent evidence
  I've observed indicates that such a policy would be very harmful if applied
  to network and cryptographic protocol design and deployment, at least.
 
 
 I think you should elaborate, because I think we should have consistent
 policy across products and modules.

I don't think that you or I should try to block this proposal on the grounds 
that it must be reworked to be sensible to apply to all modules, especially 
when the document already says that that is a non-goal and already explicitly 
calls out some modules to which it does not apply: "Note that at this time, we 
are specifically focusing on new JS APIs and not on CSS, WebGL, WebRTC, or 
other existing features/properties."

Somebody clarified privately that many DOM/JS APIs don't live in the DOM 
module. So, let me rework my request a little bit. In the document, instead of 
creating a blacklist of web technologies to which the new policy would not 
apply (CSS, WebGL, WebRTC, etc.), please list the modules to which the policy 
would apply.

It seems (from the subject line on this thread, the title of the proposal, and 
the text of the proposal) that the things I work on are probably intended to be 
out of scope of the proposal. That's the thing I want clarification on. If it 
is intended that the stuff I work on (networking protocols, security protocols, 
and network security protocols) be covered by the policy, then I will 
reluctantly debate that after the end of the quarter. (I have many things to 
finish this week to Q2 goals.)

Cheers,
Brian
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Making proposal for API exposure official

2013-06-25 Thread Brian Smith
Henri Sivonen wrote:
 On Tue, Jun 25, 2013 at 6:08 AM, Brian Smith bsm...@mozilla.com wrote:
  At the same time, I doubt such a policy is necessary or helpful for the
  modules that I am owner/peer of (PSM/Necko), at least at this time.
  In fact, though I haven't thought about it deeply, most of the recent
  evidence I've observed indicates that such a policy would be very
  harmful if applied to network and cryptographic protocol design and
  deployment, at least.
 
 It seems to me that HTTP headers at least could use the policy. Consider:
 X-Content-Security-Policy
 Content-Security-Policy
 X-WebKit-CSP
 :-(
 
 In retrospect, it should have been Content-Security-Policy from the
 moment it shipped on by default on the release channel and the X-
 variants should never have existed.
 
 Also: https://tools.ietf.org/html/rfc6648

I understand how X-Content-Security-Policy, et al., seem concerning to people, 
especially people who have had to deal with the horrors of web API prefixing 
and CSS prefixing. If people are concerned about HTTP header prefixes then we 
can handle policy for that separately from Andrew's proposal, in a much more 
lightweight fashion. For example, we could just put "Let's all follow the advice 
of RFC6648 whenever practical." on https://wiki.mozilla.org/Networking. Problem 
solved.

I am less concerned about the policy of prefixing or not prefixing HTTP headers 
and similar small things, than I am about the potential for this proposal to 
restrict the creation and development of new networking protocols like SPDY and 
the things that will come after SPDY.

Cheers,
Brian
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Ordering shutdown observers?

2013-05-16 Thread Brian Smith
Ehsan Akhgari wrote:
 On 2013-05-15 5:18 PM, Vladan Djeric wrote:
  I'd like to know if these use-cases are sufficiently rare that we
  should just add new shutdown events when needed (e.g. we added
  profile-before-change2 for Telemetry in bug 844331), or if we
  should come up with a general way to impose order of shutdown
  based on dependencies?
 
 Do you have use cases besides these two?

Many things (and an increasing number) depend on PSM/NSS and the PSM team (a 
long time ago) implemented its own shutdown event registration scheme 
(nsNSSShutDownObject in nsNSSShutDown.h). There seems like there is at least 
one race due to NSS being shut down while things are still using NSS which is 
causing a crash or worse (presumably because there is not enough awareness of 
the need to implement nsNSSShutDownObject and/or it is too error-prone to do 
so). Also, NSS must be shut down in profile-before-change because it may write 
to the profile directory.

So, basically the nsNSSShutDownObject scheme is a variant of the explicit 
dependencies scheme, and not a very successful one. Perhaps there are other 
variants of explicit dependency schemes that would be less error prone, but I 
am skeptical. In general, generic dependency schemes of the upstart variety 
seem like very complicated solutions, considering we have global knowledge of 
all the components of Firefox that we could just hard-code in, if we can assume 
that addons do not affect the ordering.

Cheers,
Brian
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: We should drop MathML

2013-05-06 Thread Brian Smith
Benoit Jacob wrote:
 Can we focus on the other conversation now: should the Web have a
 math-specific markup format at all? I claim it shouldn't; I mostly
 mentioned TeX as a if we really wanted one side note and let it go
 out of hand.
 
 How many specific domains will want to have their own domain-specific
 markup language next? Chemistry? Biology? Electronics? Music? Flow
 charts? Calligraphy?

I hope that all those subjects develop their own domain-specific markup 
languages. In fact, many of them have: there's MusicXML for music and OpenType 
for calligraphy, for example.
meaning of information to each other and that can give machines the necessary 
assistance to understand that information are generally good.

I think the more important issue is whether browsers should have built-in 
support for all these things. I think we should make the platform flexible 
enough and powerful enough that web pages can render, edit, and manipulate the 
information without any built-in knowledge of the markup from the browser. 
However, unless/until we ship that, I don't think there should be a rush to 
remove MathML.

I mean no disrespect to the people who worked on pdf.js, but I have to admit 
that many frustrating experiences with pdf.js have convinced me that it is even 
more important than I originally thought to get people publishing scientific 
and technical writing *natively* in HTML as soon as possible. Simply, we are 
not there yet as far as "render and edit it with your own JS code" goes. 
Until we are there, IMO we have to get the web publishing content natively in 
HTML. That means we should be aiming for high-fidelity (perfect) and 
high-performance dvi-to-html (and even docx-to-html and xlsx-to-html) 
conversion at a minimum. (For all the good things about pdf.js, high fidelity 
and high performance do not describe it, in my experience.)

 start saying no, and at that point, the exception made for math
 will seem unjustified.

I think eventually we could say the same thing about SVG (why not just have JS 
code render Adobe Illustrator drawings using canvas or even WebGL?) and quite a 
few other things we've built into the platform. We definitely should do what 
you suggest and improve the core parts of the platform to make such specialized 
built-in interpreters unnecessary. But, that seems quite far off; we want the 
web platform to be competitive with various native apps sooner than we can 
demonstrate success with that strategy.

 If tomorrow a competing browser solves these problems, and renders
 MathJax's HTML output fast, we will obviously have to follow. That
 can easily happen, especially as neither of our two main competitors
 is supporting MathML.

Sure. Nobody's arguing that we shouldn't make MathJax fast. I would argue, 
though, that we shouldn't remove MathML until there's a viable (equally-usable, 
equally-round-trippable, equally-performing) replacement.

 School children are only on the reading end of math typesetting, so
 for them, AFAICS, it doesn't matter that math is rendered with MathML
 or with MathJax's HTML+CSS renderer.

School children traditionally have been on the reading end of math typesetting 
because they get punished for writing in their math books. However, I fully 
expect that scribbling in online books will be highly encouraged going forward. 
School children are not going to write MathML or TeX markup. Instead they will 
use graphical WYSIWYG math editors. The importance of MathML vs. alternatives, 
then, will have to be judged by what those WYSIWYG editors end up using. WYSIWYG 
editing of even basic wiki pages is still almost completely unusable right now, 
so I don't think we're even close to knowing what's optimal as far as editing 
non-trivial mathematics goes.

Cheers,
Brian
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal for an inbound2 branch

2013-05-03 Thread Brian Smith
L. David Baron wrote:
 On Saturday 2013-04-27 08:26 +1000, Nicholas Nethercote wrote:
  If I have a patch ready to land when inbound closes, what would be
  the sequence of steps that I need to do to land it on inbound2?
  Would I need to have an up-to-date inbound2 clone and transplant
  the patch across?  Or is it possible to push from an inbound clone?
 
 For what it's worth, what I'd do is qpop the patch, pull the tip of
 inbound2 into my inbound clone and update to tip, qpush and qfin the
 patch, and then hg out -rtip inbound2 (and after checking it's
 right), hg push -rtip inbound2.

I would:

hg pull inbound2
hg update -c tip (not always correct, but usually)
hg graft my-changesets

The horrible thing about qpop and qpush is that they deal with conflicts very, 
very poorly, whereas hg graft, hg rebase, and hg pull --rebase allow me to use 
my three-way merge tool to resolve conflicts. That ends up being a big time 
saver and IMO it is also much less error-prone.

Also, I am not sure -rtip is the safest thing to use when you have a 
multi-headed local repo. I always use -r . myself.
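
Putting it all together, the whole sequence I have in mind is roughly this
("inbound2" is whatever path/alias your clone uses, and "my-changesets" is a
placeholder for the revisions to graft):

hg pull inbound2        # pull inbound2 into my existing inbound clone
hg update -c tip
hg graft my-changesets  # conflicts get resolved with a real 3-way merge tool
hg out -r . inbound2    # sanity-check what is about to be pushed
hg push -r . inbound2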

Cheers,
Brian
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Fwd: NSPR/NSS/JSS migrated to HG and updated directory layout

2013-04-04 Thread Brian Smith
In addition to this change from CVS to Mercurial, the following changes will be 
made in mozilla-central the next time we update NSS:

dbm/ will be moved to security/nss/lib/dbm/
security/coreconf/ will be moved to security/nss/coreconf/

This should reduce some confusion about what parts of the tree belong to NSS.

Cheers,
Brian

- Forwarded Message -
From: Kai Engert k...@kuix.de
To: nss-dev nss-...@mozilla.org
Cc: dev-tech-nspr dev-tech-n...@lists.mozilla.org
Sent: Thursday, March 21, 2013 4:17:16 PM
Subject: NSPR/NSS/JSS migrated to HG and updated directory layout

To all users of the NSPR, NSS and JSS libraries,

we would like to announce a few technical changes, that will require you
to adjust how you obtain and build the code.

We are no longer using Mozilla'a CVS server, but have migrated to
Mozilla's HG (Mercurial) server.

Each project now lives in its own separate space, they can be found at:
  https://hg.mozilla.org/projects/nspr/
  https://hg.mozilla.org/projects/nss/
  https://hg.mozilla.org/projects/jss/

We used this migration as an opportunity to change the layout of the
source directories.

For NSPR, mozilla/nsprpub has been removed from the directory
hierarchy, all files now live in the top directory of the NSPR
repository.

Likewise for NSS and JSS, mozilla/security has been removed and files
now live at the top level. In addition for NSS, we have merged the
contents of directories mozilla/dbm and mozilla/security/dbm into the
new directory lib/dbm.

Besides the new layout, the build system hasn't changed. Most parts of
the NSS build instructions remain valid, especially the instructions
about setting environment variables.

Updated instructions for building NSS with NSPR can be found at:
https://developer.mozilla.org/en-US/docs/NSS_reference/Building_and_installing_NSS/Build_instructions

It's best to refer to the above document to learn about the various
environment variables that you might have to set to build on your 
platform (this part hasn't changed).

However, below is a brief summary that shows how to checkout the 
source code and build both NSPR and NSS:
  mkdir workarea
  cd workarea
  hg clone https://hg.mozilla.org/projects/nspr
  hg clone https://hg.mozilla.org/projects/nss
  cd nss
  # set USE_64=1 on 64 bit architectures
  # set BUILD_OPT=1 to get an optimized build
  # on Windows set OS_TARGET=WIN95
  make nss_build_all

Note that the JSS project has been given a private copy of the former
mozilla/security/coreconf directory, allowing it to remain stable,
and only update its build system as necessary.

Because of the changes described above, we have decided to use a new 
series of (minor) version numbers. The first releases using the new code
layout will be NSPR 4.10 and NSS 3.15

Regards
Kai on behalf of the NSPR and NSS teams


__
nss-dev mailing list
nss-...@mozilla.org
https://mail.mozilla.org/listinfo/nss-dev
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Accessing @mozilla.org/xmlextras/xmlhttprequest;1 from content

2013-02-22 Thread Brian Smith
- Original Message -
 From: Matthew Gertner matt...@salsitasoft.com
 To: dev-platform@lists.mozilla.org
 Sent: Friday, February 22, 2013 7:02:40 AM
 Subject: Accessing @mozilla.org/xmlextras/xmlhttprequest;1 from content
 
 I have an extension that loads an HTML file into a hidden browser
 and runs script in the context of the hidden browser window. That
 script needs to be able to make crossdomain XHR requests to
 chrome:// and resource:// URLs that are apparently now blocked in
 Firefox 19 (they weren't blocked in Firefox 18).

I believe that the Addon SDK (a.k.a. JetPack) has special provisions for this; 
see [1], section "Content Scripts". In particular, I think that if you inject a 
content script into the browser then the content script will be able to 
make cross-origin requests like you propose. At least, I know that the Addon 
SDK required an extension to the nsIPrincipal interface to support multi-origin 
principals for this case.

I am particularly interested in whether this strategy would work for you and other 
addon developers.

Cheers,
Brian

[1] 
https://addons.mozilla.org/en-US/developers/docs/sdk/1.12/dev-guide/guides/xul-migration.html
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: LOAD_ANONYMOUS + LOAD_NOCOOKIES

2013-02-22 Thread Brian Smith
bernhardr...@gmail.com wrote:
 i'm willing to fix
 https://bugzilla.mozilla.org/show_bug.cgi?id=836602
 
 Summary: The rest api should not send cookies and thus now uses the
 LOAD_ANONYMOUS flag. But this flag also denies (client side)
 authentication like my custom firefox sync requires.
 therefore firefox sync is broken for me since = F18.

Which modes of authentication does the Sync team wish to support in the product?

Currently it supports and requires (I think) HTTP authentication without 
cookies and without SSL client certificates.

The proposal (I think) is to support SSL client certificates with HTTP 
authentication. But, if you are already doing SSL client authentication then 
do you really need HTTP authentication too? Should that mode of operation be, 
instead, SSL client authentication without HTTP authentication and without 
cookies?

How would the Sync client decide whether to use SSL client certificates or HTTP 
authentication? Would there be some new UI?

I am willing to help with things (e.g. reviewing the tests) but it is up to the 
Sync team to decide on the prioritization of the work and decide what the 
testing requirements are. IMO, writing tests for this will be difficult as 
there's no framework for SSL client cert testing.

 i'm planing to add 2 new constants:
 
 const unsigned long LOAD_NOCOOKIES = 1  15;
 const unsigned long LOAD_NOAUTH  = 1  16;
 the second constant would be the fix for
 https://bugzilla.mozilla.org/show_bug.cgi?id=646686

I don't see a problem with adding these. But, we should be clear on what the 
final goal of this work is.

Cheers,
Brian
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Cycle collection for workers

2013-02-13 Thread Brian Smith
Kyle Huey wrote:
1. Dealing with the different ownership model on worker threads
(no cycle collector, all owning references go through JS).
2. Dealing with things that are not available off the main thread
(no necko, no gfx APIs, etc).

FWIW, I think the networking team has a goal of allowing nsHttpChannel to be 
used off of the main thread, for performance reasons. Not sure on the timeline 
for that though.

- *Does this mean cycle collected objects can be multithreaded?*
No-ish.
All cycle collected objects will belong to one and only one
thread, and can only be AddRefed/Released on that thread.

At what point during XPCOM shutdown are workers destroyed?

This seems like a great tool for implementing the worker part of the W3C 
WebCrypto API. But, we have to deal with the fact that during XPCOM shutdown, 
we have to ensure all the crypto objects are destroyed, and that has to happen 
on the main thread. (We're notified about shutdown on the main thread and we 
have to finish destroying all the objects before we return from the 
notification.) So, it seems like we may need some kind of "Destroy all the 
workers" API that deals with this, if we don't have it already.

Cheers,
Brian
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Cycle collection for workers

2013-02-13 Thread Brian Smith
Kyle Huey wrote:
 Brian Smith  bsm...@mozilla.com  wrote:
 
 At what point during XPCOM shutdown are workers destroyed?
 
 xpcom-shutdown-threads

NSS gets shut down way before then, because it can write to the profile. Same 
with Necko.

Cheers,
Brian
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: The future of PGO on Windows

2013-02-04 Thread Brian Smith
Ehsan Akhgari wrote:
 Brian Smith wrote:
  2. AFAICT, we did not seriously investigate the possibility of
  splitting things out of libxul more. So far we've tried cutting
  things off the top of the dependency tree. Maybe now we need to try
  cutting things off the bottom of the dependency tree.
 
 Can you please give some examples? Let's remember the days before
 libxul. It's hard to always make sure that you're accessing things
 that are properly exported from the target library, deal with
 internal and external APIs, etc.

Any of the non-XPCOM code (most of it imported?), like ipc/ and a huge chunk of the 
underlying code for WebRTC. (I know we already are planning to split out that 
WebRTC code from libxul.)

Unfortunately, I am mostly a post-libxul Mozillian, so it would be better to 
have the XPCOM old-timers weigh in on the difficulty of having multiple 
libraries in Gecko. I guess the problem with splitting is that almost 
everything depends on XPCOM which is at the bottom of the libxul dependency 
tree. So, if we try to split things from the bottom, the first thing we'd have 
to move is XPCOM. But then we have to deal with internal vs. external XPCOM 
interfaces. Because I didn't have to deal with that issue before (which was 
pre-libxul), I cannot estimate how much effort and/or how much performance cost 
(e.g. relocation overhead) that would have. Also, I am not sure how much of the 
internal vs. external issue was about the need for a stable ABI vs. solving 
other problems like relocation overhead. We definitely wouldn't need the stable 
ABI requirement for a split libxul, so perhaps the internal vs. external issue 
wouldn't be as painful as before? Also, from looking at a few parts of the 
code, it looks like we're already having to deal with this internal vs. 
external API problem to a certain extent for WebRTC. I do agree 
that this would be a non-trivial project.

Cheers,
Brian
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: The future of PGO on Windows

2013-02-04 Thread Brian Smith
Ehsan Akhgari wrote:
 On 2013-02-04 11:44 AM, Ehsan Akhgari wrote:
  3. What is the performance difference between Visual Studio
  2012 PGO builds and Visual Studio 2010 builds? IMO, before
  we decide whether to disable PGO on Windows, we need to get
  good benchmark results for Visual Studio **2012** PGO builds,
  to make sure we're not throwing away wins that could come
  just solving this problem in a different way + upgrading
  the compiler.
 
 
  That's something that we should probably measure as well.  Filed
  bug 837724 for that.
 
 Note that I misread this and thought you're talking about VS2010 PGO
 builds versus VS2012 non-PGO builds, and that's what bug 837724 is
 about.  As I've already said in this thread, VS2012 uses more memory
 for PGO compilations than VS2010, so upgrading to that for PGO builds
 is out of the question.

That seems to be assuming that there is nothing reasonable we can do to make 
VS2012 PGO builds work. However, in order to know what is a reasonable amount 
of effort, you have to know what the benefits would be. For example, let's say 
we lived in a magical alternate universe where VS2012 PGO builds cut Firefox's 
memory usage by 50% and made everything twice as fast compared to VS2010 PGO 
builds. Then, we would consider even man-years of effort to be reasonable. On 
the other hand, if Firefox were twice as slow when built with VS2012 PGO 
builds, then no amount of effort would be reasonable. So, you have to know the 
performance difference between VS2012 PGO builds and VS2010 PGO builds before 
we can reject the possibility of VS2012 PGO.

Also, I want to echo khuey's comment: It seems like a lot of the argument 
against PGO is that, while our benchmarks are faster, users won't actually 
notice any difference. If that is true, then I agree with khuey that that is a 
massive systemic failure; we shouldn't be guiding development based on 
benchmarks that don't correlate positively with user-visible improvement. If 
all of our benchmarks showing the benefits of PGO are useless and there really 
isn't any difference between PGO and non-PGO builds, then I'm not going to push 
for us to continue doing PGO builds any more. But, in that case I hope we also 
come up with a plan for making better benchmarks.

And, also, if PGO doesn't have a significant positive performance difference, I 
would be very curious as to why not. Is PGO snake oil in general? Is there 
something about our codebase that is counter-productive to PGO? And, if the 
latter, then is there anything we can do to undo that counter-productivity?

Cheers,
Brian
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Let's never, ever, shut down NSS -- even in debug builds

2013-01-29 Thread Brian Smith
Benjamin Smedberg wrote:
 On 1/28/2013 6:39 PM, Brian Smith wrote:
  This will greatly simplify lots of code--not just code in
  security/manager, but also code in WebRTC, DOM (DOMCrypt), Toolkit
  (toolkit/identity), and Necko (netwerk/). We already have a
  significant amount of code that is already running in Firefox
  every day, but which doesn't handle NSS shutdown correctly.

 Could you elaborate on the kinds of things that client code currently
 shouldn't do?

Basically, you cannot call any NSS function without (a) having ensured that you 
have initialized the psm;1 component, and (b) having acquired the 
nsNSSShutdownPreventionLock, and (c) checking that NSS hasn't already been shut 
down. I am not trying to solve any problems related to (a), as that's generally 
handled correctly.

Looking at the code, I have seen the following problematic patterns:

1. Implementations of nsNSSShutdownObject that call NSS functions in their 
destructor without acquiring the nsNSSShutdownPreventionLock--basically, every 
implementation except one.
2. Quite a few PSM functions that acquire the nsNSSShutdownPreventionLock and 
then fail to check if NSS has been shut down before calling any NSS functions.
3. Code that is not destroying all its references to NSS objects during 
shutdown (i.e. not implementing nsNSSShutdownObject / CryptoTask correctly), or 
which is calling into NSS completely oblivious to NSS shutdown.
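
To make the expected pattern concrete, every consumer is supposed to do
roughly the following (a schematic sketch with simplified stand-in classes,
not the real PSM declarations):

  // Stand-ins so the shape of the pattern is visible without Gecko headers.
  class nsNSSShutdownPreventionLock { /* acquires/releases the shutdown lock */ };
  class nsNSSShutDownObject {
  protected:
    bool isAlreadyShutDown() const { return false; /* real one checks NSS state */ }
  };

  class MyNSSConsumer : public nsNSSShutDownObject {
  public:
    bool DoCrypto() {
      nsNSSShutdownPreventionLock locker;  // (b) hold the lock while calling NSS
      if (isAlreadyShutDown()) {           // (c) bail out if NSS is already gone
        return false;
      }
      // ... only here is it safe to call NSS functions ...
      return true;
    }
    // The object must also release all of its NSS resources when it is
    // notified of shutdown; that part (and the destructor) is what most
    // implementations get wrong.
  };

Getting every one of those details right, in every consumer, is exactly the
burden I want to eliminate.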

 Is this related to network activity, or something else?
 As far as I understand things, the network gets shut down before we
 get to NSS shutdown, so there really shouldn't be any clients calling
 into PSM via the network by the time we get to that point anyway.

Should we be requiring networking to be shut down at all? Or, should networking 
be made oblivious to shutdown (besides putting the HTTP cache in a read-only 
state, perhaps) and run all the way up to exit(0)?

Many (most?) of the crashes we've seen during NSS shutdown clearly show network 
activity happening concurrently on another thread. If we really need to make 
sure networking is shut down cleanly before we exit(0), then we can fix the 
networking code to not do that. That's actually what prompted this 
investigation in the first place. But, if the only reason we want networking to 
shut down cleanly is so that it can handle NSS shutdown properly, then I'd 
rather just avoid the issue entirely by keeping NSS available all the time.

 If there are other APIs (cert management or whatnot) that are getting
 called late, wouldn't it be ok to just make them fail?

That's what's already supposed to happen. The problem is that many of the very 
many implementations of the "just make them fail" behavior are missing or incorrect.

 This is indeed a concern. We have already given up on leak reporting
 in release builds, but we do still track leaks from debug builds and
 even back out patches that cause leaks. I would be concerned about
 patches that make it impossible to track leaks.

I share that concern. On the other hand, having perfect leak detection means 
writing and maintaining a *lot* of otherwise-unnecessary code.

  2. Because NSS reads and writes to files in the profile directory,
  the profile directory must be readable and writable up until
  process exit. The current rules for XPCOM shutdown say that
  services must stop doing disk I/O well before then; we would need
  to change the rules. It is safe to do so?
 Why is NSS doing this? We should be done with activities like
 certificate management well before... can we ask NSS to sync the
 important data to disk and then stop touching the disk? If NSS is
 actually touching the disk at process exit, doesn't that leave us in
 a possibly-inconsistent state?

One example: we write certificates that we received from NSS to the NSS 
certificate database as we parse them from the SSL handshake. So, as long as it 
is possible for an SSL handshake to occur, we will write to the database. Also, 
like Bob noted, the dbm-based NSS database we use seems to like to write to the 
database at shutdown just for fun; this is something that could be fixed.

Obviously we need to make sure that we don't corrupt the NSS databases at 
shutdown by exiting in the middle of a write. It is something that will need to 
be investigated going forward. Presumably switching to the SQLite-based 
database would solve this problem.

Cheers,
Brian
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Investigation undergoing on the future of PGO

2013-01-28 Thread Brian Smith
Ehsan Akhgari wrote:
 I have started an effort to gather some information on what options
 we have with regard to using PGO on Windows in the longer term[.]

 If you have ideas
 which are not covered by the bugs on file, please do let me know.

Minimizing startup time is one of the biggest reasons we combine as much as 
possible into libxul. But, if we look at NSPR and NSS, on their own they 
account for 12 separate DLLs, all(?) of which are loaded at startup time.

It should be very, very simple to combine these DLLs together:

plc4.dll + plds4.dll + nspr4.dll = combined-nspr4.dll
nss3.dll + ssl3.dll  + smime3.dll = combined-nss3.dll
softokn3.dll + nssdbm3.dll + freebl3.dll = softokn3.dll

With just a little more work, we could combine things further:

nssutil3.dll + combined-nspr4.dll = combined-nspr4-and-nssutil3.dll

With just a little work, then, we'd have reduced the number of NSPR and NSS 
DLLs from 12 to 4. That should be a perf win for cold startup (especially on 
spinning rust disks).

If we're willing to spend some of that perf win to solve this problem, then we 
could factor out some of the bottom-most parts of libxul (xpcom/, parts of 
ipc/, parts of netwerk/, maybe parts of security/manager, and other things with 
no/few dependencies) into a separate DLL. We'd have to measure the startup time 
impact very carefully, and the negative impact from the cross-library calls, 
but if we can find enough stuff to throw into that split-off DLL then it might 
be a big enough win to hold us over for a while. Possible negatives: relocation 
overhead may increase and inter-module inlining may decrease.

Cheers,
Brian
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Let's never, ever, shut down NSS -- even in debug builds

2013-01-28 Thread Brian Smith
Hi all,

After seeing many, many bugs about the difficulty of writing code that properly 
handles NSS shutdown during XPCOM profile teardown, I think the only reasonable 
way forward is to simply make it so that NSS never shuts down--including in 
debug builds.

This will greatly simplify lots of code--not just code in security/manager, but 
also code in WebRTC, DOM (DOMCrypt), Toolkit (toolkit/identity), and Necko 
(netwerk/). We already have a significant amount of code that is already 
running in Firefox every day, but which doesn't handle NSS shutdown correctly.

Changing PSM so that NSS never shuts down is an easy thing for me to do. But it 
would have the following negative implications:

1. We will need to mask out lots of memory leaks in 
memleak-reporting builds, because we'll have a lot of memory allocated that 
would only get freed during NSS_Shutdown.

2. Because NSS reads and writes to files in the profile directory, the profile 
directory must be readable and writable up until process exit. The current 
rules for XPCOM shutdown say that services must stop doing disk I/O well before 
then; we would need to change the rules. Is it safe to do so?

3. Some buggy PKCS#11 modules (such as smart card readers) may need to be 
updated to support this. (A PKCS#11 module should gracefully handle the process 
exiting without proper shutdown, because it has to deal with programs 
crashing, but I've heard rumors that some don't.)

The positive benefits:

1. Code that uses NSS would avoid race conditions at shutdown.

2. Code that uses NSS would be much easier to write (especially code that runs 
off the main thread). No more nsNSSShutDownObject, no more 
nsNSSShutdownPreventionLock, etc.

3. Code that uses NSS, including our SSL stack in particular, would be much 
more efficient, because we could avoid using synchronization primitives (mutexes, 
semaphores) just to check for and prevent NSS shutdown.

4. Shutdown will be faster.

5. We will likely resolve a lot of shutdown-related bugs very easily.

Please let me know if you have any questions or objections to doing this. FWIW, 
we would not be the first browser based on NSS to take this road.

Cheers,
Brian
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Let's never, ever, shut down NSS -- even in debug builds

2013-01-28 Thread Brian Smith
[+taras]

Kyle Huey wrote:
 2. Because NSS reads and writes to files in the profile directory,
 the profile directory must be readable and writable up until process
 exit. The current rules for XPCOM shutdown say that services must
 stop doing disk I/O well before then; we would need to change the
 rules. It is safe to do so?
 
 This is almost certainly incompatible with the perf team's plans to
 shutdown via exit(0).

My understanding is that the perf team wants to prevent things from writing 
during shutdown so that they become confident that it is safe to shut down 
early.

But, AFAICT, it is OK to do I/O during shutdown as long as you don't care 
whether you get interrupted or not.

Now, one question is whether it is safe to let an exit(0) on the main thread 
interrupt an NSS database write that happens off the main thread. That's 
something I will look into.

Cheers,
Brian
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Supporting the Windows Certificate Store

2013-01-27 Thread Brian Smith
Joshua Toon wrote:
 I know that there are probably well thought out reasons that this
 isn't a features already...BUT! Lot's of US Government users can't
 use Firefox because it doesn't use the Windows certificate store.

Please explain why NSS's trusted root store doesn't work for them. Is it 
because Microsoft's builtin root store has some CAs that we don't? Or, is it 
because the US Government uses Windows' group policy stuff to add their own 
custom CAs to every PC, and we don't pick up those custom CAs?

 Would anyone be totally opposed to adding this feature and having it
 enabled via group policy? That would allow some IT shops to roll it
 out with their preferred smart card middleware...like ActivClient.

Or, is the problem that these users cannot use their smartcards (doing client 
authentication)?

The most controversial thing would be to support using Microsoft's builtin root 
CA list instead of NSS's, even as an option. The compatibility problems due to 
our set not matching Microsoft's are painful but also people will object to the 
idea of switching to Microsoft's root list wholesale, because it hurts 
Mozilla's position at the negotiating table to improve CA-related policy stuff. 
That is something that is best discussed on dev.security.policy.

I would very much welcome any assistance in getting better support for 
administrator-added root certificates into Firefox. I am not sure how we can, 
using Microsoft's APIs, distinguish roots that are trusted because they are 
built in Microsoft's built-in list from roots that are trusted because a user 
or sysadmin explicitly added then. If there is a way to make such a 
distinction, then I would gladly help with a feature that allowed us to 
seamlessly trust the sysadmin-/user-added roots in the Windows certificate 
database.

I also think it would be *great* and (almost) totally non-controversial to add 
support for using CAPI/CNG instead of NSS for smartcard authentication on 
Windows, and I would welcome the patches and help push them along. (Chromium 
already has patches to allow NSS's libssl to do client authentication using 
CAPI smartcards, IIRC, and I would be glad to help integrate them into NSS 
upstream if there is somebody that wants to help with the Firefox UI 
integration with CAPI/CNG.)

Cheers,
Brian
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: C++11 atomics in Mozilla

2013-01-27 Thread Brian Smith
Joshua Cranmer wrote:
 On 1/27/2013 11:48 PM, Brian Smith wrote:
  FWIW, in cases like this, I would rather we just use the C++11 API
  directly even if it means dropping support for common but
  out-of-date compilers like gcc 4.4 and VS2010.
 
 I personally prefer an API style where the memory ordering
 constraints of a variable are part of the type declaration as
 opposed to an optional parameter on the access methods
 (which means operator overloading will only ever give you
 sequentially consistent).

OK, I didn't realize you were proposing the new API as a permanent 
replacement/wrapper of the C++11 API.  If the Mozilla Coding Style Guide is 
going to discourage the use of the C++11 API permanently anyway even after all 
supported compilers provide it, then it doesn't really matter how the 
Mozilla-specific API is implemented.

Cheers,
Brian
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Integrating ICU into Mozilla build

2012-12-07 Thread Brian Smith
Jean-Marc Desperrier wrote:
 ICU is a massive, huge juggernaut. It fits the bill in professional
 application that have no download size constraints, and no
 requirement to support the low end of installed memory size. OS
 support is incredibly more efficient.

Jean-Marc, I don't agree with everything you said, but I do agree with this
part, which I think people might be glossing over too easily. I don't
understand the fixation on ICU as *the* solution to this problem. If the
ECMAScript specification is so complicated and so unusual in its design that
it cannot be easily implemented using widely-deployed system i18n APIs, then
IMO that spec is broken. But, I highly suspect that it is quite possible to
implement that spec with OS-provided libraries. Windows and Mac have
extensive internationalization APIs. Why not use them?

I am also unsure about the comments that imply that even stock ICU is not a
good choice for implementing this API. Is it *required* to modify ICU to
implement the JS API, or is it just inconvenient or (slightly?) inefficient
to use the stock ICU API?

We can ship ICU as a system library on B2G. Some Linux distributions apparently 
ship ICU as a system library so we may be able to make an ICU system library a 
runtime prerequisite for Firefox on Linux, or we could just make Firefox on 
Linux 20% bigger (I don't think Linux users are that particular about the 
download size).

According to previous messages in this thread, Android has ICU as a system
library; it just isn't exposed as an official NDK library. However, I've read
that it is possible to dlopen the system libraries and use them; you should
just be extra-careful about handling the case where the libraries are
different or missing (e.g. renamed). I think it is worth exploring doing
this, and falling back to "no JS i18n support" or "we must download a bunch
of ICU data" when things fail. Also, Android is similar to an open-source
project. Perhaps we could contribute the glue to provide a usable system ICU
to NDK applications as a long-term solution. Then the pain and uncertainty
for Android would be somewhat bounded in time.
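
To be concrete about the "extra-careful" part, here is roughly what I have in
mind (just a sketch; the library name and the version-suffixed symbol names
are guesses that would have to be probed per Android release):

  // Untested sketch: borrow the system ICU on Android via dlopen/dlsym and
  // fail gracefully if the library is missing, renamed, or exports a
  // different (version-suffixed) set of symbols than we expect.
  #include <dlfcn.h>
  #include <cstdio>

  static void* LoadSystemICU() {
    void* handle = dlopen("libicuuc.so", RTLD_LAZY | RTLD_LOCAL);
    if (!handle) {
      // Missing or renamed: fall back to "no JS i18n support" or to
      // downloading our own ICU data.
      fprintf(stderr, "system ICU not available: %s\n", dlerror());
      return nullptr;
    }
    // ICU exports version-suffixed symbols (u_strToLower_44, _46, ...), so
    // probe a range of plausible suffixes rather than hard-coding one.
    static const char* const kProbes[] = {
      "u_strToLower_50", "u_strToLower_48",
      "u_strToLower_46", "u_strToLower_44",
    };
    for (const char* const name : kProbes) {
      if (dlsym(handle, name)) {
        // Looks usable; a real implementation would record which suffix
        // worked so later lookups can reuse it.
        return handle;
      }
    }
    dlclose(handle);
    return nullptr;  // unrecognized ICU build: treat it as not present
  }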

Granted, the above ideas are a lot more work than just using ICU everywhere.
I don't know *how* much more work it would be. But, I think that if an
engineer came to us and said "give me one year and I will reduce your
download size by 20%", then I hope we'd consider hiring him to do that. So,
IMO, the extra work to save download size is justifiable if the feature
itself is really a high priority.

We may be able to just take the 20% hit on download size on Mac too without
being too concerned. We didn't/aren't implementing the stub installer on Mac,
right? And, we've been shipping universal binaries on Mac (did we stop that
yet?). Those two things indicate to me that we're less concerned about
download size on Mac. If so, then we may be able to get away with just two
implementations: one ICU, and one Windows API.

Even if we decided that ICU is the only choice for all platforms, there is a
middle ground between "must block the startup of Firefox during installation
on the download of ICU data" and "delay downloading the ICU data until a web
page requests it". We could add an updater for the ICU data that
downloads/installs/updates the ICU data into the Firefox profile separately
from Firefox installation and update (so the user doesn't have to wait on the
ICU data to download in order to use Firefox during installation or update).
Note that we already do (did?) something similar, downloading ~45MB of safe
browsing data on first use. (Actually, I think that we could maybe do
something like this for WebRTC stuff too, which IIRC is about ~1.5MB of
object code.)

Cheers,
Brian


Re: Proposal: Remove Linux PGO Testing

2012-10-12 Thread Brian Smith
David Anderson wrote:
 It's still unclear to me what our Linux PGO builds mean. Do
 distributions use them? If not, are they using the exact same
 compiler version and PGO environment data? If not, then they have a
 different configuration that we haven't tested.

I agree that we should make sure that we are testing the configuration(s)
that users are using, and that Linux distros might be using a different
configuration than what we're testing. That is a separate issue from whether
the right configuration is PGO or not. If PGO is the fastest and Linux
distributors are not distributing PGO builds, then we should help the distros
start doing PGO builds.

IMO we should help Linux distributors use the configuration we're testing,
and (only) when we're successful at that should the result be called
"Firefox". I don't mean that we should dictate a configuration; rather, we
should work together with the Linux packagers so we all agree on the optimal,
supported build configuration, and then incorporate that into the trademark
rules.

Cheers,
Brian


Why we avoid making private modifications to NSPR and NSS (was Re: Imported code)

2012-10-11 Thread Brian Smith
Randell quoted:
 Ehsan wrote:
 It is entirely unreasonable to render ourselves unable to modify
 our imported code (just look at the current situation with NSPR
 which causes developers to go through huge pain in order to work
  around things which would be very simply dealt with if only we
  had the option of fixing NSPR...).

First, I think that a lot of Mozillians' concerns about how we can(not)
change NSPR and NSS are based on old events and circumstances that do not
apply now. For example, the time when NSS 3.whatever was being FIPS validated
seemed to cause a lot of unfortunate misunderstandings all around. But, that
was literally years ago. I encourage people to be more optimistic about
things NSPR- and NSS-related. And, please keep in mind that there is already
progress being made (if not a definitive agreement) on the Mercurial vs. CVS
issue.

There are very practical reasons for avoiding making private changes to NSPR
and NSS. The most obvious reason is that it makes it easy to merge changes
between upstream and our codebase. However, the more critical reason is that
we support --use-system-nss and --use-system-nspr, and some Linux developers
build their Firefox packages with those options. As long as we support
--use-system-nss and --use-system-nspr, we need to make sure the upstream
NSPR and NSS contain every bugfix and every feature that we require.

If we were to give up on the idea of supporting --use-system-nspr and 
--use-system-nss, then we could gain more flexibility in how we change NSPR 
and/or NSS in mozilla-central. However, at least in theory, --use-system-nss 
and --use-system-nspr should be superior performance-wise, on Linux, because 
usually the system NSPR and NSS libraries are already loaded before Firefox 
starts. Thus, our startup time should be lower. Also, according to some 
conversation on dev-tech-crypto, the system NSS may eventually provide better 
platform integration regarding centralized certificate and smartcard handling, 
allowing us to share various security-related features with other applications 
(between Firefox and Thunderbird, or Firefox and 
whatever-the-Gnome-email-app-is). So, I think there are still advantages to 
supporting system NSS.

In the event that we really do need to make private changes to NSPR and NSS, we 
should be able to do so. I think there's been a lot of unnecessary 
misunderstanding about this. If you *need* a change made to NSPR or NSS before 
we're ready to make a NSPR or NSS release, please make sure that the NSPR and 
NSS teams are aware of that, so we can help. And, whenever possible, try to 
avoid creating emergency situations. I have found everybody to be quite 
reasonable about it, especially when my request didn't involve me needing them 
to drop their work to do a code review for me. Usually we upstream those 
changes into NSS and NSPR first, because we need NSPR and NSS peers to do the 
review anyway. We try to avoid making temporary fixes only in 
mozilla-central, because, well, what happens if they don't get accepted 
upstream? Then we've broken --use-system-nspr and --use-system-nss. Still, I 
think there is more flexibility here than people realize.

In general, it is harder to get changes made to NSS and NSPR than it is to get 
changes made to the rest of Gecko. One reason is that the reviewing standards 
are different/stricter in these modules than they are in some/many (but not 
all) Gecko modules. I actually prefer the stricter NSPR/NSS reviews and I hope 
that doesn't change. Another reason is that, generally mozilla-central is 
primarily geared towards Mozilla products (especially Firefox), whereas NSPR 
and NSS are shared between us, Chrome, and all Linux distributions. NSPR and 
NSS are part of the Linux Standard Base, which means that it is difficult to 
make compatibility-breaking changes to them.

Sometimes when people suggest changes to NSPR and/or NSS, it isn't clear how 
urgent that change is. Definitely, I have sat on a review for too long
because I didn't realize it was actually as urgent as the other work I was
doing. Because
many of the NSPR and NSS peers do not work for Mozilla, and do not work on 
Mozilla stuff full-time, it definitely isn't as obvious to them what is a 
high-priority request and what isn't. And, also, because they have their own 
jobs and their own schedules, sometimes schedules are not aligned as well as we 
would like/need them to be. IMO, the solution to that is to have more MoCo 
employees and other Mozillians become peers on the NSPR and NSS projects, so 
that we can help with the code reviews in these projects. I know that on the
NSS side, we're definitely trying to make progress in getting more Mozilla
people involved.

For NSPR, my understanding is that we're generally migrating away from using 
NSPR in Gecko, except for networking. Lots of stuff that's in NSPR already has 
replacements in mfbt and/or in ISO C/C++. One reason for doing this is that we 
can make use of 

Re: Proposal: Remove Linux PGO Testing

2012-10-11 Thread Brian Smith
Zack Weinberg wrote:
 Link-time optimization is described as an experimental new feature in
 the GCC 4.5.0 release notes[1].  The 4.6.0 release notes[2] say that
 it has now stabilized to the point of being usable, and the 4.7.0
 release notes[3] describe it as still further improved both in
 reliability and code quality.  If we're trying to use the 4.5 LTO,
 I'm not at all surprised to hear it's causing more trouble than it's
 worth.
 
 PGO is not the same thing as LTO, of course, but GCC's PGO was kind
 of an unloved stepchild until they got serious about LTO, so that,
 too, is likely to be much improved in 4.7.

I think it is important to give Linux users the fastest browser we can give 
them, because:

1. Linux users tend to be disproportionately influential in the markets we care 
the most about (web developers, techies)
2. Linux is the foundation of B2G and Firefox for Android, where we 
*definitely* must deliver the fastest product we can

Now, if it were up to me, I'd try to reproduce this on a build built with the
latest stable GCC or the latest stable clang, and if that fixes the issue,
I'd consider this a big motivation for upgrading from GCC 4.5 to a better
compiler, which we need to do anyway for language feature support.
Definitely, I don't think we should be adding hacks to our code to work
around GCC problems that are already fixed in later releases of GCC. It's
better to just make the build fail when the user attempts to use one of those
older GCC releases.

Now, if PGO doesn't result in the fastest browser, then of course we should 
disable PGO.

Or, if there is no better compiler available yet, then yes, I think it makes
sense to disable PGO temporarily until there is one. (And/or, help fix the
compiler, either by contributing a patch or by commissioning somebody else to
contribute one.)

Cheers,
Brian


Re: Moving Away from Makefile's

2012-08-24 Thread Brian Smith
Gregory Szorc wrote:
 4. Native support for list and maps. Make files only support strings.
 The hacks this results in are barely tolerable.

 5. Ability to handle conditionals. We need to be able to
 conditionally define things based on the presence or value of certain
  variables.
 e.g. if the current OS is Linux, append this value to this list. I
 quote variables because there may not be a full-blown variable
 system here, just magic values that come from elsewhere and are
 addressed by some convention.

 6. Ability to perform ancillary functionality, such as basic string
 transforms. I'm not sure exactly what would be needed here. Looking
 at make's built-in functions might be a good place to start. We may
 be able to work around this by shifting functionality to side-effects
 from specially named variables, function calls, etc. I really don't
 know.

 7. Evaluation must be free from unknown side-effects. If there are
 unknown side-effects from evaluation, this could introduce race
 conditions, order dependency, etc. We don't want that. Evaluation
 must either be sandboxed to ensure nothing can happen or must be able
 to be statically analyzed by computers to ensure it doesn't do anything
 it isn't supposed to.

...

 On the other end of the spectrum, we could have the build manifest
 files be Python scripts. This solves a lot of problems around
 needing functionality in the manifest files. But, it would be a
 potential foot gun. See requirement #7.

I do not think it is reasonable to require support for alternate build systems 
for all of Gecko/Firefox.

But, let's say were were to divide the build into three phases:
1. Generate any generated C/C++ source files.
2. Build all the C/C++ code into libraries and executables
3. Do everything else (build omnijar, etc.)

(I imagine phase 3 could probably run 100% concurrently with the first two 
phases).

It would be very nice if phase #2 ONLY could support msbuild (building with 
Visual Studio project files, basically), because this would allow smart 
editors'/IDEs' code completion and code navigation features to work very well, 
at least for the C/C++ source code. I think this would also greatly simplify 
the deployment of any static analysis tools that we would develop.

In addition, potentially it would allow Visual Studio's Edit and Continue 
feature to work. (Edit and Continue is a feature that allows you to make 
changes to the C++ source code and relink those changes into a running 
executable while execution is paused at a breakpoint, without restarting the 
executable.)

I think that if you look at the limitations of gyp, some (all?) of them are at 
least partially driven by the desire to provide such support. I am sure the 
advanced features that you list in (4), (5), (6), (7) are helpful, but they may 
make it difficult to support these secondary use cases.

That said, getting the build system to build as fast as it can is much more 
important.

Cheers,
Brian


Re: Proposed policy change: reusability of tests by other browsers

2012-08-24 Thread Brian Smith
Aryeh Gregor wrote:
 1) Decide on guidelines for whether a test is internal or reusable.
 As a starting point, I suggest that all tests that are regular
 webpages that don't use any Mozilla-specific features should be
 candidates for reuse.  Examples of internal tests would be tests
 written in XUL and unit tests.  In particular, I think we should
 write
 tests for reuse if they cover anything that other browsers implement
 or might implement, even if there's currently no standard for it.
 Other browsers should still be able to run these tests, even if they
 might decide not to follow them.  Also, tests that currently use
 prefixed web-exposed properties should still be made reusable, since
 the properties should eventually be unprefixed.

Which other browser makers are going to follow these guidelines, so that we
benefit from them? Generally, this is a great idea if it makes it faster and
easier to improve Firefox. But, like Asa, I also interpreted this proposal
along the lines of "spend resources, and slow down Firefox development, to
help other browsers". That seems totally in line with our values, but it
doesn't seem great as far as competitiveness is concerned.

Also, are you saying "if you are going to write a mochitest, then try to
write a reusable test" or "if you are going to write a test, write a reusable
test"? The reason I ask is that we're supposed to write xpcshell tests in
preference to mochitests when possible, and I'd hate for the preference to
change to be in favor of mochitests, because xpcshell tests are much more
convenient (and faster) to write and run.

Thanks,
Brian