Re: Policing dead/zombie code in m-c

2014-05-28 Thread Trevor Saunders
  This is at least in part due to bug 915735; you can go and read the bug to 
  see that I did the best I could there, given the constraints I had (not 
  understanding how the magic of the interaction of ICU and our build system 
  worked, not understanding which parts of ICU we actually want to use, the 
  performance and code quality limitations of MSVC's PGO compiler, the fact 
  that increasing the amount of time we allow our builds to take is rocket 
  science, etc.).
 
 I don’t have numbers for this change specifically, but during my 
 investigation I found that on Mac statically linking ICU into libmozjs 
 reduced the binary size by almost 600KB, compared to building ICU as a 
 separate DLL.

 We should probably be able to make that difference smaller with a bit
 of work.  We just need to get ICU to build in a configuration where
 the library only exports what we need; of course, it would be better to
 just not build what we don't need.

 On Apr 25, 2014, at 0:31 , Henri Sivonen hsivo...@hsivonen.fi wrote:
 
  Therefore, it looks like we should turn off (if we haven't already):
  * The ICU LayoutEngine.
  * Ustdio
  * ICU encoding converters and their mapping tables.
  * ICU break iterators and their data.
  * ICU transliterators and their data.
 
 All done, except for a few remaining encoding converters:
 
  However, that flag is misdesigned in the sense that it considers
  US-ASCII, ISO-8859-1, UTF-7, UTF-32, CESU-8, SCSU and BOCU-1 as
  non-legacy, even though, frankly, those are legacy, too. (UTF-16 is
  legacy also, but it's legacy we need, since both ICU and Gecko are
  UTF-16 legacy code bases!)
  http://mxr.mozilla.org/mozilla-central/source/intl/icu/source/common/unicode/uconfig.h#267
 
 UTF-16 may be legacy from the point of view of encoding HTML pages, but 
 otherwise it’s an essential part of most of today’s computing environments – 
 and the alternative in many cases would be UTF-32. But at least four of the 
 above are clearly obsolete.

For talking to the Win32 API, sure, but Win32 is a crazy beast, and we
aren't using ICU for that anyway.
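
For reference, the knobs discussed above are compile-time defines in ICU's
uconfig.h.  A sketch of the kind of configuration meant here (these macro
names are from uconfig.h, but whether and where we actually set them is a
build-system detail, not something I'm asserting):

/* Trim ICU features at compile time via uconfig.h defines. */
#define UCONFIG_NO_LEGACY_CONVERSION 1  /* legacy charset converters */
#define UCONFIG_NO_BREAK_ITERATION 1    /* break iterators and their data */
#define UCONFIG_NO_TRANSLITERATION 1    /* transliterators and their data */
#define UCONFIG_NO_IDNA 1               /* ICU's IDNA implementation */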

  Also, if the ICU build system is configurable enough, I think we
  should consider identifying what parts of ICU we can delete even
  though the build system doesn't let us, and then automate the
  deletion as a script so that it can be repeated with future imports of
  ICU.
 
 That seems worth investigating, but getting the build system to strip out as 
 much as possible might be more effective – see the next item.

Easier, probably; better, no: think of the hundreds of people who are stuck
building it all the time.

 On Apr 25, 2014, at 8:12 , Trevor Saunders trev.saund...@gmail.com wrote:
 
  Well, it really depends how you approach the problem of finding code
  that's dead.  One way is to decide code really should be dead by reading
  it and knowing that what it accomplishes doesn't matter.  Another is to
  prove to yourself that code is unreachable and so can be removed
  without affecting anything.  Both of these require a pretty good bit of
  knowledge and skill.  On the other hand, the question below prompts
  another approach.
 
 Is there a way to give the linker a list of functions that you want to have 
 as public entry points of a dynamically linked library, and have it strip out 
 everything that can’t be reached from these functions? That’s essentially 
 what happens when you statically link a library into another, and the list of 
 ICU functions that Mozilla code calls wouldn’t be excessively long. The bulk 
 of them can be found in this stubs section:
 http://mxr.mozilla.org/mozilla-central/source/js/src/builtin/Intl.cpp#62

yes, though how hard they are to use with the ICU build system is left
as an exercise for the reader.
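
(On the GNU toolchain such a list takes the form of a linker version
script.  An untested sketch, with illustrative glob patterns rather than
the real set of ICU entry points:

/* icu.map: only the listed symbols stay exported. */
ICU_EXPORTS {
  global:
    ucol_*;        /* collation entry points */
    unum_*;        /* number formatting */
  local:
    *;             /* everything else becomes non-exported */
};

g++ -shared -Wl,--version-script=icu.map ... -o libicui18n.so

Combined with -ffunction-sections/-fdata-sections and --gc-sections, code
reachable only through the now-local symbols can then be dropped at link
time.)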

Trev

 On Apr 28, 2014, at 5:59 , Henri Sivonen hsivo...@hsivonen.fi wrote:
 
  I haven't looked into Ustdio.
 
 That’s disabled by the --enable-icuio=no flag in
 http://mxr.mozilla.org/mozilla-central/source/build/autoconf/icu.m4#213
 
  However, I noticed that we aren't turning off ICU IDNA but we should:
  https://bugzilla.mozilla.org/show_bug.cgi?id=1002435
 
 Thanks for finding and fixing this issue, as well as 1002437!
 
 
 On Apr 25, 2014, at 0:31 , Henri Sivonen hsivo...@hsivonen.fi wrote:
 
  Using system ICU seems wrong in terms of correctness. That's the
  reason why we don't use system ICU on Mac and desktop Linux, right?
 
 Correctness from the user’s point of view – quality of locale and time zone 
 data – is clearly an issue (conformance with the Ecma standard is not, as 
 Jeff pointed out). In addition, a system ICU isn’t always available to us: On 
 Mac, it’s not public API; on some Linux versions, it’s built with version 
 numbers baked into function names, which we can’t link against.
 
 
 On May 1, 2014, at 12:25 , Jeff Walden jwalden+...@mit.edu wrote:
 
  On 04/28/2014 05:59 AM, Henri Sivonen wrote:
  Hopefully we didn't remove collation rules, since that's the part we
  are supposed to 

Re: Policing dead/zombie code in m-c

2014-05-28 Thread Nathan Froyd
- Original Message -
 Is there a way to give the linker a list of functions that you want to have
 as public entry points of a dynamically linked library, and have it strip
 out everything that can’t be reached from these functions? That’s
 essentially what happens when you statically link a library into another,
 and the list of ICU functions that Mozilla code calls wouldn’t be
 excessively long. The bulk of them can be found in this stubs section:
 http://mxr.mozilla.org/mozilla-central/source/js/src/builtin/Intl.cpp#62

Assuming that ICU is already compiled with the moral equivalent of GCC's 
-ffunction-sections -fdata-sections or MSVC's /Gy, then statically linking ICU 
into libxul should already strip out all the unneeded ICU bits (when using the 
appropriate linker option).  I see that intl/icu/source/configure.ac has bits 
to turn those on for Linux, but not for Mac or Windows.  I can't tell whether 
we enable all the flags to convince ICU to use those options, though we might 
be passing some of Gecko's compilation flags into ICU's build process, in which 
case this is a moot point.

For the dynamic library case, if you're using symbol maps (Linux) or export 
lists (Windows), using -ffunction-sections -fdata-sections or equivalent should 
give you the same effect, assuming you trim down your mapfile appropriately.
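
As a concrete illustration of the section-based stripping (generic GCC
usage, not flags taken from our build files):

// strip_demo.cpp: 'unused' is referenced by nothing that is kept.
extern "C" int used() { return 1; }
extern "C" int unused() { return 2; }
int main() { return used(); }

// Each function gets its own section, so the linker can discard the
// unreferenced ones:
//   g++ -ffunction-sections -fdata-sections -c strip_demo.cpp
//   g++ -Wl,--gc-sections strip_demo.o -o strip_demo
//   nm strip_demo | grep unused    # gone once --gc-sections is used
// MSVC's rough equivalents are /Gy at compile time and /OPT:REF at link
// time.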

-Nathan


Re: OMTC on Windows

2014-05-28 Thread Vladimir Vukicevic
(Note: I have not looked into the details of CART/TART and their interaction 
with OMTC)

It's entirely possible that (b) is true *now* -- the test may have been good 
and proper for the previous environment, but the environment characteristics 
have changed such that the test now needs tweaks.  Empirically, I have not 
seen any regressions on any of my Windows machines (which is basically all of 
them); tab animations and the like have started feeling smoother even after a 
long-running browser session with many tabs.  I realize this is not the same 
as cold hard numbers, but it does suggest to me that we need to take another 
look at the tests now.

- Vlad

- Original Message -
 From: Gijs Kruitbosch gijskruitbo...@gmail.com
 To: Bas Schouten bschou...@mozilla.com, Gavin Sharp 
 ga...@gavinsharp.com
 Cc: dev-tech-...@lists.mozilla.org, mozilla.dev.platform group 
 dev-platform@lists.mozilla.org, release-drivers
 release-driv...@mozilla.org
 Sent: Thursday, May 22, 2014 4:46:29 AM
 Subject: Re: OMTC on Windows
 
 Looking on from m.d.tree-management, on Fx-Team, the merge from this
 change caused a 40% CART regression, too, which wasn't listed in the
  original email. Was this unforeseen, and if not, why was this
 considered acceptable?
 
 As gavin noted, considering how hard we fought for 2% improvements (one
 of the Australis folks said yesterday 1% was like Christmas!) despite
 our reasons of why things were really OK because of some of the same
 reasons you gave (e.g. running in ASAP mode isn't realistic, TART is
 complicated, ...), this hurts - it makes it seem like (a) our
 (sometimes extremely hacky) work was done for no good reason, or (b) the
 test is fundamentally flawed and we're better off without it, or (c)
 when the gfx team decides it's OK to regress it, it's fine, but not when
 it happens to other people, quite irrespective of reasons given.
 
 All/any of those being true would give me the sad feelings. Certainly it
 feels to me like (b) is true if this is really meant to be a net
 perceived improvement despite causing a 40% performance regression in
 our automated tests.
 
 ~ Gijs
 
 On 18/05/2014 19:47, Bas Schouten wrote:
  Hi Gavin,
 
  There have been several e-mails on different lists, and some communication
  on some bugs. Sadly the story is at this point not anywhere in a condensed
  form, but I will try to highlight a couple of core points, some of these
  will be updated further as the investigation continues. The official bug
  is bug 946567 but the numbers and the discussion there are far outdated
  (there's no 400% regression ;)):
 
  - What OMTC does to TART scores differs wildly per machine; on some
  machines we saw up to 10% improvements, on others up to 20% regressions.
  There also seems to be somewhat more of a regression on Win7 than there is
  on Win8. What the average is for our users is very hard to say, frankly I
  have no idea.
  - One core cause of the regression is that we're now dealing with two D3D
  devices when using Direct2D since we're doing D2D drawing on one thread,
  and D3D11 composition on the other. This means we have DXGI locking
  overhead to synchronize the two. This is unavoidable.
  - Another cause is that we're now having two surfaces in order to do double
  buffering; this means we need to initialize more resources when new layers
  come into play. This, again, is unavoidable.
  - Yet another cause is that for some tests we composite 'ASAP' to get
  interesting numbers, but this causes some contention scenarios which are
  less likely to occur in real-life usage, since the double buffer might
  copy the area validated in the last frame from the front buffer to the
  backbuffer in order to prevent having to redraw much more. If the
  compositor is compositing all the time, this can block the main thread's
  rasterization. I have some ideas on how to improve this, but I don't know
  how much they'll help TART; in any case, some cost here will be
  unavoidable as a natural additional consequence of double buffering.
  - The TART number story is complicated; sometimes it's hard to know what
  exactly the tests do and don't measure (which might be different with and
  without OMTC) and how that affects practical performance. I've been told
  this by Avi and it matches my practical experience with the numbers. I
  don't know the exact reasons and Avi is probably a better person to talk
  about this than I am :-).
 
  These are the core reasons that we were able to identify from profiling.
  Other than that the things I said in my previous e-mail still apply. We
  believe we're offering significant UX improvements with async video and
  are enabling more significant improvements in the future. Once we've fixed
  the obvious problems we will continue to see if there's something that can
  be done, either through tiling or through other improvements, particularly
  in the last point I mentioned there might be some, not 'too' complex
 

Re: OMTC on Windows

2014-05-28 Thread Gavin Sharp
Who's responsible for looking into the test/regression? Bas? Does the
person looking into it need help from the performance or desktop teams?

What bug is tracking that work?

Gavin



Uncaught rejections in xpcshell will now cause orange

2014-05-28 Thread David Rajchenbach-Teller
Dear everyone,

  After weeks of tracking and fixing blockers, we are in the process of
attempting to land bug 976205. If this sticks, it will change the
behavior of xpcshell tests with respect to uncaught asynchronous errors:
uncaught promise rejections using Promise.jsm will cause
TEST-UNEXPECTED-FAIL.


Rationales:
1. Uncaught exceptions have always been treated as errors; there is no
reason to assume that promise rejections, which implement the same
behavior as exceptions, are different.
2. Experience has shown that ignoring promise rejections can very
easily hide a real bug.


Precautions:
In order to make sure that all uncaught rejections are reported, and to
make it easier to pinpoint their sources, we had to make a single
assumption: namely, that if a call to `add_task` leaves a rejected
promise without any error handler, the rejection will remain uncaught.
So far, this heuristic hasn't caused any false positives.


We intend to progressively extend this policy to:
- mochitests (bug 1016387);
- addon-sdk tests (bug 998277);
- DOM Promise (bug 989960).

Cheers,
 David

-- 
David Rajchenbach-Teller, PhD
 Performance Team, Mozilla





PSA: Refcounted classes should have a non-public destructor & should be MOZ_FINAL where possible

2014-05-28 Thread Daniel Holbert
Hi dev-platform,

PSA: if you are adding a concrete class with AddRef/Release
implementations (e.g. via NS_INLINE_DECL_REFCOUNTING), please be aware
of the following best-practices:

 (a) Your class should have an explicitly-declared non-public
destructor. (should be 'private' or 'protected')

 (b) Your class should be labeled as MOZ_FINAL (or, see below).


WHY THIS IS A GOOD IDEA
===
We'd like to ensure that refcounted objects are *only* deleted via their
::Release() methods.  Otherwise, we're potentially susceptible to
double-free bugs.

We can go a long way towards enforcing this rule at compile-time by
giving these classes non-public destructors.  This prevents a whole
category of double-free bugs.

In particular: if your class has a public destructor (the default), then
it's easy for you or someone else to accidentally declare an instance on
the stack or as a member-variable in another class, like so:
MyClass foo;
This is *extremely* dangerous. If any code wraps 'foo' in a nsRefPtr
(say, if some function that we pass 'foo' or '&foo' into declares a
nsRefPtr to it for some reason), then we'll get a double-free. The
object will be freed when the nsRefPtr goes out of scope, and then again
when the MyClass instance goes out of scope. But if we give MyClass a
non-public destructor, then it'll make it a compile error (in most code)
to declare a MyClass instance on the stack or as a member-variable.  So
we'd catch this bug immediately, at compile-time.

So, that explains why a non-public destructor is a good idea. But why
MOZ_FINAL?  If your class isn't MOZ_FINAL, then that opens up another
route to trigger the same sort of bug -- someone can come along and add
a subclass, perhaps not realizing that they're subclassing a refcounted
class, and the subclass will (by default) have a public destructor,
which means then that anyone can declare
  MySubclass foo;
and run into the exact same problem with the subclass.  A MOZ_FINAL
annotation will prevent that by keeping people from naively adding
subclasses.
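
To make the pattern concrete, a minimal sketch (made-up class name;
assumes Gecko's nsISupportsImpl.h is in scope for the real
NS_INLINE_DECL_REFCOUNTING and MOZ_FINAL macros):

class MyWidget MOZ_FINAL
{
public:
  NS_INLINE_DECL_REFCOUNTING(MyWidget)

private:
  // Non-public: only Release() may destroy us.
  ~MyWidget() {}
};

// nsRefPtr<MyWidget> ok = new MyWidget();  // the supported pattern
// MyWidget onTheStack;  // compile error: ~MyWidget is private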

BUT WHAT IF I NEED SUBCLASSES
=
First, if your class is abstract, then it shouldn't have AddRef/Release
implementations to begin with.  Those belong on the concrete subclasses
-- not on your abstract base class.

But if your class is concrete and refcounted and needs to have
subclasses, then:
 - Your base class *and each of its subclasses* should have virtual,
protected destructors, to prevent the "MySubclass foo;" problem
mentioned above.
 - Your subclasses themselves should also probably be declared as
MOZ_FINAL, to keep someone from naively adding another subclass
without recognizing the above.
 - Your subclasses should definitely *not* declare their own
AddRef/Release methods. (They should share the base class's methods &
refcount; see the sketch just below.)
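
A sketch of that shape (again with made-up names and the same macro
assumptions as above):

class Shape
{
public:
  NS_INLINE_DECL_REFCOUNTING(Shape)  // subclasses share this refcount

protected:
  virtual ~Shape() {}                // virtual *and* non-public
};

class Circle MOZ_FINAL : public Shape
{
protected:
  virtual ~Circle() {}               // still non-public; no AddRef/Release
};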

For more information, see
https://bugzilla.mozilla.org/show_bug.cgi?id=984786 , where I've fixed
this sort of thing in a bunch of existing classes.  I definitely didn't
catch everything there, so please feel encouraged to continue this work
in other bugs. (And if you catch any cases that look like potential
double-frees, mark them as security-sensitive.)

Thanks!
~Daniel


Re: PSA: Refcounted classes should have a non-public destructor & should be MOZ_FINAL where possible

2014-05-28 Thread Benoit Jacob
Awesome work!

By the way, I just figured out a way that you could static_assert so that, at
least on supporting C++11 compilers, we would automatically catch this.

The basic C++11 tool here is std::is_destructible from <type_traits>, but
it has a problem: it only returns false if the destructor is deleted; it
doesn't return false if the destructor is private. However, the example
below shows how we can still achieve what we want by wrapping the
class that we are interested in as a member of a helper templated struct:



#include <type_traits>
#include <iostream>

class ClassWithDeletedDtor {
  ~ClassWithDeletedDtor() = delete;
};

class ClassWithPrivateDtor {
  ~ClassWithPrivateDtor() {}
};

class ClassWithPublicDtor {
public:
  ~ClassWithPublicDtor() {}
};

template <typename T>
class IsDestructorPrivateOrDeletedHelper {
  T x;
};

template <typename T>
struct IsDestructorPrivateOrDeleted
{
  static const bool value =
    !std::is_destructible<IsDestructorPrivateOrDeletedHelper<T>>::value;
};

int main() {
#define PRINT(x) std::cerr << #x << " = " << (x) << std::endl;

  PRINT(std::is_destructible<ClassWithDeletedDtor>::value);
  PRINT(std::is_destructible<ClassWithPrivateDtor>::value);
  PRINT(std::is_destructible<ClassWithPublicDtor>::value);

  std::cerr << std::endl;

  PRINT(IsDestructorPrivateOrDeleted<ClassWithDeletedDtor>::value);
  PRINT(IsDestructorPrivateOrDeleted<ClassWithPrivateDtor>::value);
  PRINT(IsDestructorPrivateOrDeleted<ClassWithPublicDtor>::value);
}


Output:


std::is_destructible<ClassWithDeletedDtor>::value = 0
std::is_destructible<ClassWithPrivateDtor>::value = 0
std::is_destructible<ClassWithPublicDtor>::value = 1

IsDestructorPrivateOrDeleted<ClassWithDeletedDtor>::value = 1
IsDestructorPrivateOrDeleted<ClassWithPrivateDtor>::value = 1
IsDestructorPrivateOrDeleted<ClassWithPublicDtor>::value = 0


If you also want to require classes to be final, <type_traits> also
has std::is_final for that (strictly speaking, that one arrived in C++14).

Cheers,
Benoit


2014-05-28 16:24 GMT-04:00 Daniel Holbert dholb...@mozilla.com:

 Hi dev-platform,

 PSA: if you are adding a concrete class with AddRef/Release
 implementations (e.g. via NS_INLINE_DECL_REFCOUNTING), please be aware
 of the following best-practices:

  (a) Your class should have an explicitly-declared non-public
 destructor. (should be 'private' or 'protected')

  (b) Your class should be labeled as MOZ_FINAL (or, see below).


 WHY THIS IS A GOOD IDEA
 ===
 We'd like to ensure that refcounted objects are *only* deleted via their
 ::Release() methods.  Otherwise, we're potentially susceptible to
 double-free bugs.

 We can go a long way towards enforcing this rule at compile-time by
 giving these classes non-public destructors.  This prevents a whole
 category of double-free bugs.

 In particular: if your class has a public destructor (the default), then
 it's easy for you or someone else to accidentally declare an instance on
 the stack or as a member-variable in another class, like so:
 MyClass foo;
 This is *extremely* dangerous. If any code wraps 'foo' in a nsRefPtr
 (say, if some function that we pass 'foo' or 'foo' into declares a
 nsRefPtr to it for some reason), then we'll get a double-free. The
 object will be freed when the nsRefPtr goes out of scope, and then again
 when the MyClass instance goes out of scope. But if we give MyClass a
 non-public destructor, then it'll make it a compile error (in most code)
 to declare a MyClass instance on the stack or as a member-variable.  So
 we'd catch this bug immediately, at compile-time.

 So, that explains why a non-public destructor is a good idea. But why
 MOZ_FINAL?  If your class isn't MOZ_FINAL, then that opens up another
 route to trigger the same sort of bug -- someone can come along and add
 a subclass, perhaps not realizing that they're subclassing a refcounted
 class, and the subclass will (by default) have a public destructor,
 which means then that anyone can declare
   MySubclass foo;
 and run into the exact same problem with the subclass.  A MOZ_FINAL
 annotation will prevent that by keeping people from naively adding
 subclasses.

 BUT WHAT IF I NEED SUBCLASSES
 =
 First, if your class is abstract, then it shouldn't have AddRef/Release
 implementations to begin with.  Those belong on the concrete subclasses
 -- not on your abstract base class.

 But if your class is concrete and refcounted and needs to have
 subclasses, then:
  - Your base class *and each of its subclasses* should have virtual,
 protected destructors, to prevent the MySubclass foo; problem
 mentioned above.
  - Your subclasses themselves should also probably be declared as
 MOZ_FINAL, to keep someone from naively adding another subclass
 without recognizing the above.
  - Your subclasses should definitely *not* declare their own
 AddRef/Release methods. (They should share the base class's methods 
 refcount.)

 For more information, see
 https://bugzilla.mozilla.org/show_bug.cgi?id=984786 , where I've fixed
 this sort of thing in 

Re: PSA: Refcounted classes should have a non-public destructor & should be MOZ_FINAL where possible

2014-05-28 Thread Benoit Jacob
Actually, that test program contradicts what I said: my
IsDestructorPrivateOrDeleted produces exactly the same result as
!is_destructible, and is_destructible does return 0 for the class with a
private destructor. So you could just use that!

Benoit



Re: PSA: Refcounted classes should have a non-public destructor & should be MOZ_FINAL where possible

2014-05-28 Thread Daniel Holbert
Nice!  So it sounds like we could use std::is_destructible to harden our
refcounting-impl macros (like NS_INLINE_DECL_REFCOUNTING), by having
those macros include a static_assert which enforces that they're only
invoked by classes with private/protected destructors.

(I'll bet that this static_assert wouldn't transfer to subclasses - e.g.
if class Foo has NS_INLINE_DECL_REFCOUNTING and Foo has a subclass Bar,
I'll bet the static_assert would only check Foo, and not Bar.
Fortunately, it's rare to have concrete refcounted classes with
subclasses, so this shouldn't be a huge source of trouble.)

For now, our code isn't clean enough for this sort of static_assert to
be doable. :-/ And we have at least one instance of a refcounted class
that's semi-intentionally (albeit carefully) declared on the stack:
gfxContext, per https://bugzilla.mozilla.org/show_bug.cgi?id=742100

Still, the static_assert could be a good way of finding (in a local
build) all the existing refcounted classes that want a non-public
destructor, I suppose.
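
For instance, the standalone form could look like this (made-up class
name; needs C++11, and the assert must come after the class definition so
the type is complete):

#include <type_traits>

class RefCountedThing {
public:
  // In Gecko, NS_INLINE_DECL_REFCOUNTING(RefCountedThing) would go here.
private:
  ~RefCountedThing() {}
};

// Fails to compile if someone later makes the destructor public:
static_assert(!std::is_destructible<RefCountedThing>::value,
              "refcounted classes should have a non-public destructor");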

~Daniel


B2G, email, and SSL/TLS certificate exceptions for invalid certificates

2014-05-28 Thread Andrew Sutherland
tl;dr: We need to figure out how to safely allow for invalid 
certificates to be used in the Firefox OS Gaia email app.  We do want 
all users to be able to access their email, but not by compromising the 
security of all users.  Read on if you work in the security field / care 
about certificates / invalid certificates.



== Invalid Certificates in Email Context

Some mail servers out there use self-signed or otherwise invalid SSL 
certificates.  Since dreamhost replaced their custom CA with valid 
certificates 
(http://www.dreamhoststatus.com/2013/05/09/secure-certificate-changes-coming-for-imap-and-pop-on-homiemail-sub4-and-homiemail-sub5-email-clusters-on-may-14th/) 
and StartCom started offering free SSL certificates 
(https://www.startssl.com/?app=1), the incidence of invalid certificates 
has decreased.  However, there are still servers out there with invalid 
certificates.  With deeply regrettable irony, a manufacturer of Firefox 
OS devices and one of the companies that certifies Firefox OS devices 
both run mail servers with invalid certificates and are our existing 
examples of the problem.


The Firefox OS email app requires encrypted connections to servers. 
Unencrypted connections are only legal in our unit tests or to 
localhost.  This decision was made based on a risk profile of devices 
where we assume untrusted/less than 100% secure wi-fi is very probable 
and the cellular data infrastructure is only slightly more secure 
because there's a higher barrier to entry to setting up a fake cell 
tower for active attacks.


In general, other email apps allow both unencrypted connections and 
adding certificate exceptions with a friendly/dangerous flow.  I can 
speak to Thunderbird as an example.  Historically, Thunderbird and its 
predecessors allowed certificate exceptions.  Going into Thunderbird 
3.0, Firefox overhauled its exception mechanism and for a short time 
Thunderbird's process required significantly greater user intent to add 
an exception.  (Preferences, Advanced, Certificates, View Certificates, 
Servers, Add Exception.)  At this time DreamHost had invalid 
certificates, free certificates were not available, invalid certificates 
were fairly common, Thunderbird's autoconfig security model already 
assumed a reliable network connection, Thunderbird could only run on 
desktops/laptops which were more likely to have a secure network, etc.  
We relented and Thunderbird ended up where it is now.  Thunderbird 
immediately displays the "Add Security Exception" UI; the user only 
needs to click "Confirm Security Exception".  (Noting that Thunderbird's 
autoconfig process is significantly more multi-step.)



== Certificate Exception Mechanisms in Platform / Firefox OS

Currently, the only UI affordance to add certificate exceptions is 
exposed by the browser app/subsystem for HTTPS sites.  I assert that this 
is a different risk profile and we wouldn't want to follow it blindly 
and don't actually want to follow it at all[1].


There are general bugs filed on being able to import a new CA or 
certificate at https://bugzil.la/769183 and https://bugzil.la/867899.  
Users with adb push access can also manually import 
certificates from the command line; see 
https://groups.google.com/forum/#!msg/mozilla.dev.b2g/B57slgVO3TU/G5TA-PiFI_EJ


It is my understanding that:
* there is only a single certificate store on the device, and therefore 
all exceptions are device-wide
* only one exception can exist per domain at a time
* the exception is per-domain, not per-port, so if we add an exception 
for port 993 (imaps), that would also impact https.


And it follows from the above points that exceptions added by the email 
app/on behalf of the email app affect, and therefore constitute a 
risk to, all other apps on the device.  This is significant because 
creating an email account may result in us wanting to talk to a 
different domain than the user's email address suggests, because of the 
autoconfiguration process, vanity domains, etc.



== The email app situation

In bug https://bugzil.la/874346 the requirement that is coming from 
partners is that:

- we need to imminently address the certificate exception problem
- the user needs to be able to add the exception from the account setup 
flow.  (As opposed to requiring the user to manually go to the settings 
app and add an exception.  At least I think that's the request.)


Taking this as a given, our goal then becomes to allow users to connect 
to servers using invalid certificates without compromising the security 
of the users who do use servers with valid certificates or of other apps 
on the phone.


There are two main problems that we need solutions to address:

1) Helping the user make an informed and safe decision about whether to 
add an exception and what exception to add.  I strongly assert that in 
order to do this we need to be able to tell the user with some 
confidence whether we believe the server actually has an 

Re: B2G, email, and SSL/TLS certificate exceptions for invalid certificates

2014-05-28 Thread David Keeler
Regarding the current certificate exception mechanism:

 * there is only a single certificate store on the device, and therefore
 all exceptions are device-wide

This is an implementation detail - it would not be difficult to change
exceptions to per-principal-per-app rather than just per-principal.

 * only one exception can exist per domain at a time

In combination with point 3, is this a limitation? Do we want to support
this? If so, again, it wouldn't be that hard.

 * the exception is per-domain, not per-port, so if we add an exception
 for port 993 (imaps), that would also impact https.

I don't think this is the case. Either way, it shouldn't be the case.
In summary, it would not be difficult to ensure that the certificate
exception service operates on a per-principal-per-app basis, which would
allow for what we want, I believe (e.g. exceptions for
{email-app}/example.com:993 would not affect {browser-app}/example.com:443).

In terms of solving the issue at hand, we have a great opportunity to
not implement the "press this button to MITM yourself" paradigm that
desktop browsers currently use. The much safer option is to ask the user
for the expected certificate fingerprint. If it matches the certificate
the server provided, then the exception can safely be added. The user
will have to obtain that fingerprint out-of-band over a hopefully secure
channel.
I would be wary of implementing a more involved scheme that involves
remote services.
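
To illustrate just the comparison step (a sketch only; computing the
server certificate's fingerprint and scoping the stored exception are the
real work, and none of these names is an existing Gecko API):

#include <algorithm>
#include <cctype>
#include <string>

// Normalize "ab:cd:ef"-style input to "ABCDEF" so a user-typed,
// out-of-band fingerprint compares reliably against a computed one.
std::string NormalizeFingerprint(std::string s) {
  s.erase(std::remove_if(s.begin(), s.end(),
                         [](unsigned char c) {
                           return c == ':' || std::isspace(c);
                         }),
          s.end());
  std::transform(s.begin(), s.end(), s.begin(),
                 [](unsigned char c) { return std::toupper(c); });
  return s;
}

// Add the exception only when the user-supplied fingerprint matches the
// one computed from the certificate the server actually presented.
bool FingerprintsMatch(const std::string& userEntered,
                       const std::string& fromServerCert) {
  return NormalizeFingerprint(userEntered) ==
         NormalizeFingerprint(fromServerCert);
}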

Cheers,
David


Re: B2G, email, and SSL/TLS certificate exceptions for invalid certificates

2014-05-28 Thread Karl Tomlinson
Thanks for the overview of a real problem, Andrew.
(I recall having to add an exception for a Mozilla Root CA to
access email at one time.)

Andrew Sutherland writes:

 I propose that we use a certificate-observatory-style mechanism to
 corroborate any invalid certificates by attempting the connection
 from 1 or more trusted servers whose identity can be authenticated
 using the existing CA infrastructure.

Although this can identify a MITM between the mail client and the
internet, I assume it won't identify one between the mail server
and the internet.

 *** it looks like you are behind a corporate firewall that MITMs
 you, you should add the firewall's CA to your device.  Send the
 user to a support page to help walk them through these steps if
 that seems right.
 *** it looks like the user is under attack

I wonder how to distinguish these two situations and whether they
really should be distinguished.


Re: OMTC on Windows

2014-05-28 Thread jmaher

https://bugzilla.mozilla.org/show_bug.cgi?id=1013262 tracks all the Talos 
performance adjustments.


Re: B2G, email, and SSL/TLS certificate exceptions for invalid certificates

2014-05-28 Thread Karl Dubost
Andrew,

On May 29, 2014, at 09:13, Andrew Sutherland asutherl...@asutherland.org 
wrote:
 My imagined rationale for why someone would use a self-signed certificate 
 amounts to laziness.

Being one of those persons using a self-signed certificate, let me enrich your 
use cases list ;)
I use a self-signed certificate because the server I'm managing is used by 
a handful of people who are part of a community. This community can be friends 
and/or family. The strong link here is the *human trust* between people, 
which is higher than the trust of a third party.


-- 
Karl Dubost, Mozilla
http://www.la-grange.net/karl/moz



Re: B2G, email, and SSL/TLS certificate exceptions for invalid certificates

2014-05-28 Thread Andrew Sutherland

On 05/28/2014 07:22 PM, Karl Tomlinson wrote:

(I recall having to add an exception for a Mozilla Root CA to
access email at one time.)


It's fairly common that there exist multiple aliases to access a mail 
server but the server does not have certificates available for all of 
them.  In the specific Mozilla case, this was probably 
https://bugzil.la/815771.



Andrew Sutherland writes:

I propose that we use a certificate-observatory-style mechanism to
corroborate any invalid certificates by attempting the connection
from 1 or more trusted servers whose identity can be authenticated
using the existing CA infrastructure.

Although this can identify a MITM between the mail client and the
internet, I assume it won't identify one between the mail server
and the internet.


I understand your meaning to be that we won't detect if the mail 
server's outbound SMTP connections to other domains and inbound SMTP 
connections from other SMTP servers either support, strongly request, or 
require use of TLS (likely via STARTTLS upgrade).


I confirm the above and that the issue is somewhat orthogonal.  This is 
something we would probably want to do as in-app advocacy via 
extension/opt-in either by scraping transmission headers or downloading 
a prepared database and cross-checking.



*** it looks like you are behind a corporate firewall that MITMs
you, you should add the firewall's CA to your device.  Send the
user to a support page to help walk them through these steps if
that seems right.
*** it looks like the user is under attack

I wonder how to distinguish these two situations and whether they
really should be distinguished.


What I imagined here was that the certificate would identify itself as 
allegedly originating from the given vendor.  We could treat that as a 
sufficient hint using RegExps, or analyze the entire chain to cover 
cases where the vendor uses their own trust root that we can add to a 
small database.  In the very bad cases where all of the vendor's devices 
use the same certificate, that's also easy to identify.


I think it's a meaningful distinction to make since we are able to tell 
the user: "You should be able to talk privately with the mail server, but 
the network you are using won't let you and wants to hear everything you 
say.  Your options are to use a different network or configure your 
device to use the proxy.  For example, you might want to use cellular 
data rather than wi-fi or pick a different wi-fi network, like a guest 
network."


I'm not sure it's a must-have-on-first-landing feature, especially since 
I don't think Firefox OS devices are particularly enterprise friendly 
right now.  For example, in order to actually authenticate to MoCo's 
non-guest wi-fi you need to be able to import the certificate (or add an 
exception! :), which is what two of those bugs I linked to are about.  
But I'd want to make sure we could evolve towards supporting that 
direction better.


Andrew


Re: B2G, email, and SSL/TLS certificate exceptions for invalid certificates

2014-05-28 Thread Joshua Cranmer 

On 5/28/2014 5:30 PM, Andrew Sutherland wrote:
tl;dr: We need to figure out how to safely allow for invalid 
certificates to be used in the Firefox OS Gaia email app. We do want 
all users to be able to access their email, but not by compromising 
the security of all users.  Read on if you work in the security field 
/ care about certificates / invalid certificates.


Certificate verification failures, I think, can happen for one of the 
following reasons, sorted by rough order that I expect they happen:

* Untrusted CA/self-signed certificate
* Domain name mismatch (e.g., a certificate for foobar.mozilla.org is 
used on site foobar.allizom.org)

* Expired certificate
* Insufficiently secure certificate (e.g., certificates that violate 
CA/Browser Forum rules or the like. I don't know if we actually consider 
this a failure right now, but it's a reasonable distinct failure class IMHO)

* Explicitly revoked
* Malformed certificate

It seems to me that some of these are more tolerable than others. There 
is a much different risk profile to accepting a certificate that expired 
two days ago versus one that fails an OCSP validation check.
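
Purely for illustration, that taxonomy as the kind of enum a verification
API might report (names made up, not an actual Gecko interface):

enum class CertFailure {
  UntrustedIssuer,       // untrusted CA / self-signed
  DomainMismatch,        // certificate issued for a different host
  Expired,
  InsufficientlySecure,  // violates CA/Browser Forum-style rules
  Revoked,
  Malformed
};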


* the exception is per-domain, not per-port, so if we add an exception 
for port 993 (imaps), that would also impact https.


It's per-host:port. I recently had to add a cert override for an NNTPS 
server by hand.


In bug https://bugzil.la/874346 the requirement that is coming from 
partners is that:

- we need to imminently address the certificate exception problem
- the user needs to be able to add the exception from the account 
setup flow.  (As opposed to requiring the user to manually go to the 
settings app and add an exception.  At least I think that's the request.)


I know you and I discussed this in another context not too long ago, and 
the consensus we reached then was that requiring non-automatic 
manual labor to set up the exception was ideal. Requiring it from the 
account setup flow is... slightly disconcerting to me.



== Proposed solution for informed, safe decision-making


This is a problem I've been thinking about in a slightly different 
situation (specifically, for S/MIME certificates). Unfortunately, my 
blog post containing these thoughts is still in the process of being 
written, so I can't refer you to that post yet.


We have an excellent chance to try to rethink CA infrastructure in this 
process beyond the notion of a trusted third-party CA system (which is 
already more or less broken, but that's beside the point). My own view 
on this matter is that the most effective notion of trust is some sort 
of key pinning: using a different key is a better indicator of an attack 
than having a valid certificate; under this model the CA system is 
largely information about how to trust a key you've never seen before. 
There is a minor gloss point here in that there are legitimate reasons 
to need to re-key servers (e.g., Heartbleed or the Debian OpenSSL 
entropy issue), and I don't personally have the security experience to 
be able to suggest a solution here.
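
For concreteness, a minimal trust-on-first-use sketch of the pinning idea
(the storage and hashing plumbing are hypothetical, and a legitimate
re-key is exactly the case it cannot distinguish from an attack):

#include <map>
#include <string>

// Map "host:port" to the SHA-256 hash of the server's public key (SPKI),
// recorded the first time we successfully connect.
std::map<std::string, std::string> gPinnedKeys;

enum class PinResult { FirstUse, Match, Mismatch };

PinResult CheckPin(const std::string& hostPort, const std::string& spkiHash) {
  auto it = gPinnedKeys.find(hostPort);
  if (it == gPinnedKeys.end()) {
    gPinnedKeys[hostPort] = spkiHash;  // trust on first use: pin the key
    return PinResult::FirstUse;
  }
  // A changed key is a stronger attack signal than an invalid certificate,
  // but a legitimate re-key lands here too.
  return it->second == spkiHash ? PinResult::Match : PinResult::Mismatch;
}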


I'm not necessarily asking that we immediately find the best solution 
right now (particularly given partners' demands for immediacy), but 
rather that we think about a solution that can eventually be proposed 
for standardization in appropriate standards bodies, and also that we 
design a solution that ought to require mostly technical fixes as 
opposed to architectural redesigns. I know the IETF is currently working 
on an extension for key pinning in HTTP that may be relevant here: 
https://datatracker.ietf.org/doc/draft-ietf-websec-key-pinning/.


Another relevant fact for the design future is the eventual introduction 
of DANE and DNSSEC, which allows the DNS records to indicate the 
intended keys of TLS connections. That's not feasible for the near 
future, though, especially as I don't think Mozilla has even started 
implementing DANE.


I propose that we use a certificate-observatory-style mechanism to 
corroborate any invalid certificates by attempting the connection from 
1 or more trusted servers whose identity can be authenticated using 
the existing CA infrastructure.


Doesn't the EFF's SSL Observatory already track the SSL certificates to 
indicate potential MITMs?


For example, if the email app contacts sketchy.example.com and finds 
the certificate does not validate, I propose that:


*  TCPSocket/XHR with mozSystem return information on the 
certificate/chain that we received.


I think that, in any case, the exact reason for certificate errors [in 
terms of the taxonomy I described above] should be returned if 
validation fails.
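
As a sketch, such a taxonomy could be as simple as an enumeration (the 
names here are invented; they merely echo the failure classes discussed 
above):

  // Hypothetical taxonomy of certificate validation failures. The risk
  // profiles differ substantially, so the API should report which one
  // occurred rather than a generic "invalid certificate".
  enum CertErrorReason {
    Expired,       // validity period has passed
    NotYetValid,   // validity period has not yet begun
    SelfSigned,    // chain terminates in an untrusted, self-issued cert
    UnknownIssuer, // issuing CA is not in the root store
    NameMismatch,  // certificate is not valid for the host we contacted
    OcspFailure,   // revocation check failed or reported "revoked"
  }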


[ I should also point out that I'm planning on asking the WebCrypto 
working group about standardizing some certificate functionality 
--particularly validation--for S/MIME, since I really, really, really 
don't want to try to write an OCSP verifier in JS. The discussion of how 
to identify certificate details may be relevant for that effort as 
well. ]

Re: B2G, email, and SSL/TLS certificate exceptions for invalid certificates

2014-05-28 Thread Andrew Sutherland

On 05/28/2014 08:37 PM, Karl Dubost wrote:

On May 29, 2014 at 09:13, Andrew Sutherland asutherl...@asutherland.org wrote:

My imagined rationale for why someone would use a self-signed certificate 
amounts to laziness.

As one of those people using a self-signed certificate, let me enrich your 
use cases list ;)
I use a self-signed certificate because the server I'm managing is used by 
a handful of people who are part of a community. This community can be 
friends and/or family. The strong link here is the *human trust* between 
people, which is higher than trust in a third party.


Trusting you as a human doesn't translate into protecting the users of 
your server from man-in-the-middle attacks.  How do you translate the 
human trust into the technical trust infrastructure supported by Firefox 
and Thunderbird and the rest of the Internet?


Andrew


Re: B2G, email, and SSL/TLS certificate exceptions for invalid certificates

2014-05-28 Thread Karl Dubost
Andrew,

On May 29, 2014 at 09:50, Andrew Sutherland asutherl...@asutherland.org wrote:
 Trusting you as a human doesn't translate into protecting the users of your 
 server from man-in-the-middle attacks.
  How do you translate the human trust into the technical trust infrastructure 
 supported by Firefox and Thunderbird and the rest of the Internet?

I was replying to the "self-signed certificate == laziness" claim.
What I'm saying is that if you have 4 users on your server, you may decide to 
create a certificate yourself, educate your users about what message the 
certificate will send to their mail client, and teach them why they can accept 
the warning message in this case.


-- 
Karl Dubost, Mozilla
http://www.la-grange.net/karl/moz



Re: B2G, email, and SSL/TLS certificate exceptions for invalid certificates

2014-05-28 Thread Joshua Cranmer 

On 5/28/2014 7:13 PM, Andrew Sutherland wrote:
My imagined rationale for why someone would use a self-signed 
certificate amounts to laziness.  (We've been unable to determine what 
the actual rationale is for using invalid certificates in these cases as 
of yet.)  For example, they install dovecot on Debian/Ubuntu, it 
generates a self-signed certificate, and they're fine with that.  Or 
they created a self-signed certificate years ago, before free 
certificates were available, and don't want to update it now. Under this 
model, it's very unlikely that there's a server farm of servers each 
using different self-signed certificates, which would be the case where 
we'd want to store multiple certificates.  (Such a multi-exception 
scenario would also not work with my proposed trusted server thing.)


Two more possible rationales:
1. The administrator is unwilling to pay for an SSL certificate and 
unaware of low-cost or free SSL certificate providers.
2. The administrator has philosophical objections to CAs, or to the CA 
trust model in general, and is unwilling to participate in it -- 
neglecting the fact that encouraging click-through behavior in users can 
only weaken that trust model further.


[ Discovered in the course of reading a few CACert root certificate 
request bugs. ]
[ Secondary note: most of my thoughts on X.509 certificates are geared 
towards its relation to S/MIME, which shares similar but not quite 
identical concerns. ]


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist



Re: B2G, email, and SSL/TLS certificate exceptions for invalid certificates

2014-05-28 Thread David Keeler
On 05/28/2014 06:01 PM, Karl Dubost wrote:
 Andrew,
 
 On May 29, 2014 at 09:50, Andrew Sutherland asutherl...@asutherland.org wrote:
 Trusting you as a human doesn't translate into protecting the users of your 
 server from man-in-the-middle attacks.
  How do you translate the human trust into the technical trust 
 infrastructure supported by Firefox and Thunderbird and the rest of the 
 Internet?
 
 I was replying to the "self-signed certificate == laziness" claim.
 What I'm saying is that if you have 4 users on your server, you may decide to 
 create a certificate yourself, educate your users about what message the 
 certificate will send to their mail client, and teach them why they can accept 
 the warning message in this case.

But without verifying that the certificate they received is the
certificate you created, those users are open to attack. On desktop we
unfortunately allowed this to become common. We have an opportunity here
to not make the same mistake on mobile.


Re: B2G, email, and SSL/TLS certificate exceptions for invalid certificates

2014-05-28 Thread Brian Smith
On Wed, May 28, 2014 at 5:13 PM, Andrew Sutherland 
asutherl...@asutherland.org wrote:

 On 05/28/2014 07:16 PM, David Keeler wrote:

 * there is only a single certificate store on the device and therefore
 that all exceptions are device-wide

 This is an implementation detail - it would not be difficult to change
 exceptions to per-principal-per-app rather than just per-principal.


 It's good to know this should be easy, thank you!


IIRC, different apps can share a single HTTPS connection. So, for HTTPS,
you'd also need to modify the HTTP transaction manager so that it doesn't
mix transactions from apps with different cert override settings on the
same connection.


My imagined rationale for why someone would use a self-signed certificate
 amounts to laziness.


We encourage websites and mail servers to use invalid and self-signed
certificates by making it easy to override the cert error.


 A theoretical (but probably not in reality) advantage of only storing one
 per domain:port is that in the event the key A is compromised and a new key
 B is generated, the user would be notified when going back to A from B.


This actually happens regularly in real life. If you accumulate all the
cert error overrides for a host, then you end up permanently allowing
every captive portal through which the user has ever accessed the site
to MitM the user.


 David Keeler wrote:



  In terms of solving the issue at hand, we have a great opportunity to
 not implement the "press this button to MITM yourself" paradigm that
 desktop browsers currently use. The much safer option is to ask the user
 for the expected certificate fingerprint. If it matches the certificate
 the server provided, then the exception can safely be added. The user
 will have to obtain that fingerprint out-of-band over a hopefully secure
 channel.


David, I would like to agree with you but even I myself have never checked
the fingerprint of a certificate before adding a cert error override for a
site, and I suspect that implementing the solution you propose would be the
equivalent of doing nothing for the vast majority of cases, due to
usability issues.
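
For what it's worth, the mechanical part of the fingerprint check is 
trivial -- the usability problem lies entirely in getting the user to 
obtain and enter the expected value. A sketch, using today's Web Crypto 
API purely for illustration (real B2G code would go through whatever 
certificate API the platform actually exposes):

  // Sketch: compare a user-entered expected fingerprint against the
  // SHA-256 hash of the DER-encoded certificate the server presented.
  async function fingerprintMatches(certDer: ArrayBuffer,
                                    expected: string): Promise<boolean> {
    const digest = await crypto.subtle.digest("SHA-256", certDer);
    const actual = Array.from(new Uint8Array(digest))
      .map(b => b.toString(16).padStart(2, "0"))
      .join("");
    // Normalize the user's input: strip separators, ignore case.
    const canon = (s: string) => s.replace(/[^0-9a-f]/gi, "").toLowerCase();
    return actual === canon(expected);
  }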


  I agree this is a safe approach and the trusted server is a significant
 complication in this whole endeavor.  But I can think of no other way to
 break the symmetry of "am I being attacked?" versus "do I just use a
 poorly configured mail server?"


It would be pretty simple to build a list of mail servers that are known to
be using valid TLS certificates. You can build that list through port
scanning, in conjunction with the auto-config data you already have. That
list could be preloaded into the mail app and/or dynamically
retrieved/updated. Even if we seeded this list with only the most common
email providers, we'd still be protecting a lot more users than by doing
nothing, since email hosting is heavily consolidated and seems to be
becoming more consolidated over time.
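
As a sketch, the client side of that list would be nearly free (the list 
contents and shape here are invented for illustration):

  // Hypothetical sketch: a preloaded (and periodically refreshed) set of
  // mail servers known to serve valid TLS certificates. If a host on this
  // list presents an invalid certificate, refuse outright instead of
  // offering an exception -- on these hosts an invalid cert means attack,
  // not a sloppy admin.
  const KNOWN_VALID_TLS_HOSTS: Set<string> = new Set([
    "imap.example-bigmail.com", // illustrative entries only
    "smtp.example-bigmail.com",
  ]);

  function exceptionAllowed(host: string): boolean {
    return !KNOWN_VALID_TLS_HOSTS.has(host);
  }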


 NB: I do think that if we must make it possible to insecurely add a
 certificate exception, then making it harder for users to do so is
 desirable.  My original hope was that we'd just provide a mechanism in the
 settings app to let users add exceptions and we'd never link the user
 directly to this from the email app.  Instead we'd bounce them to a support
 page first which would require a-hassle-but-not-ridiculous steps along the
 lines of the long flow via Thunderbird preferences.  It's unlikely a gmail
 vanity domain user would decide to actively take all those steps to
 compromise their security.


I don't think that making things difficult for the users of our software is
going to improve things much, because users will blame us for being
harder to use than our competitors.

One way to discourage the use of non-trusted certificates is to have a
persistent UI indication that the certificate is bad in the app, along with
a link to more info so that the user can learn why using such certificates
is a bad idea. This way, even if we make adding cert error overrides easy
for users, we're still putting pressure on the server administrator to use
a valid certificate.

Regarding DANE: Any TLD registry can apply to be a trust anchor in
Mozilla's CA program, and we'll add them if they meet our requirements. We
can constrain them to issuing certificates that are trusted only for their
own TLDs; we've done this with some CAs in our program already. And any CA
can give away free certificates to any subset of websites (e.g. any website
within a TLD). Consequently, there really isn't much difference between the
CA system we already have and DANE, as far as the trust model or costs are
concerned.

Cheers,
Brian


Re: B2G, email, and SSL/TLS certificate exceptions for invalid certificates

2014-05-28 Thread Karl Dubost

On May 29, 2014 at 10:25, David Keeler dkee...@mozilla.com wrote:
 But without verifying that the certificate they received is the
 certificate you created, those users are open to attack.

agreed. 
My intent in the discussion is NOT "Let's not verify the certificate is 
valid" but to allow the scenario "This self-signed certificate is from 
blah and we checked it."

Basically, to have mechanisms where trust is not a question of 
centralization. 
Centralized trust systems have their own set of weaknesses and consequences 
for the infrastructure.


-- 
Karl Dubost, Mozilla
http://www.la-grange.net/karl/moz



Re: Style guide clarity on C++-isms

2014-05-28 Thread Nicholas Nethercote
On Tue, Jan 7, 2014 at 12:53 PM, Robert O'Callahan rob...@ocallahan.org wrote:

 We have a lot of places where we write void Method() { ... } all on one
 line for trivial setters and getters. I wonder if we should permit that.

The conclusion for this question was "no", but I will ask for it to be
reconsidered.

In bug 1014377 I'm converting MFBT to Gecko style. Here are some real
one-line functions that I would have to convert to four lines each:

 static T inc(T* aPtr) { return IntrinsicAddSub<T>::add(aPtr, 1); }
 static T dec(T* aPtr) { return IntrinsicAddSub<T>::sub(aPtr, 1); }

 static T or_( T* aPtr, T aVal) { return __sync_fetch_and_or(aPtr, aVal); }
 static T xor_(T* aPtr, T aVal) { return __sync_fetch_and_xor(aPtr, aVal); }
 static T and_(T* aPtr, T aVal) { return __sync_fetch_and_and(aPtr, aVal); }

 static ValueType inc(ValueType* aPtr) { return add(aPtr, 1); }
 static ValueType dec(ValueType* aPtr) { return sub(aPtr, 1); }

When it comes to questions of style, IMO clarity should trump almost
everything else, and spotting typos in functions like this is *much*
harder when they're multi-line.

Furthermore, one-liners like this are actually pretty common, and
paving the cowpaths is often a good thing to do.

Thoughts?

Nick


Re: Style guide clarity on C++-isms

2014-05-28 Thread Bobby Holley
On Wed, May 28, 2014 at 9:03 PM, Nicholas Nethercote n.netherc...@gmail.com
 wrote:

 Furthermore, one-liners like this are actually pretty common, and
 paving the cowpaths is often a good thing to do.

 Thoughts?

 Nick


+1. In a lot of cases, it seems pretty ridiculous to use 4 lines. See
things like the OwningCompileOptions setters in jsapi.h:
http://mxr.mozilla.org/mozilla-central/source/js/src/jsapi.h#3656


Re: Style guide clarity on C++-isms

2014-05-28 Thread L. David Baron
On Wednesday 2014-05-28 21:03 -0700, Nicholas Nethercote wrote:
  static T inc(T* aPtr) { return IntrinsicAddSub<T>::add(aPtr, 1); }
  static T dec(T* aPtr) { return IntrinsicAddSub<T>::sub(aPtr, 1); }
 
  static T or_( T* aPtr, T aVal) { return __sync_fetch_and_or(aPtr, aVal); }
  static T xor_(T* aPtr, T aVal) { return __sync_fetch_and_xor(aPtr, aVal); }
  static T and_(T* aPtr, T aVal) { return __sync_fetch_and_and(aPtr, aVal); }
 
  static ValueType inc(ValueType* aPtr) { return add(aPtr, 1); }
  static ValueType dec(ValueType* aPtr) { return sub(aPtr, 1); }
 
 When it comes to questions of style, IMO clarity should trump almost
 everything else, and spotting typos in functions like this is *much*
 harder when they're multi-line.
 
 Furthermore, one-liners like this are actually pretty common, and
 paving the cowpaths is often a good thing to do.

I'd be happy for this to be allowed; I've used this style quite a
bit, for similar reasons (clarity and density).

I also like a similar two-line style for code that doesn't fit on
one line, as in:
  bool WidthDependsOnContainer() const
    { return WidthCoordDependsOnContainer(mWidth); }
  bool MinWidthDependsOnContainer() const
    { return WidthCoordDependsOnContainer(mMinWidth); }
  bool MaxWidthDependsOnContainer() const
    { return WidthCoordDependsOnContainer(mMaxWidth); }
from 
http://hg.mozilla.org/mozilla-central/file/9506880e4879/layout/style/nsStyleStruct.h#l1321
but I think that variant may be less widely used.

-David

-- 
𝄞   L. David Baron http://dbaron.org/   𝄂
𝄢   Mozilla  https://www.mozilla.org/   𝄂
 Before I built a wall I'd ask to know
 What I was walling in or walling out,
 And to whom I was like to give offense.
   - Robert Frost, Mending Wall (1914)

