Re: OMTC on Windows

2014-05-29 Thread Bas Schouten
Hi Gavin,

I initially responded to Gijs' e-mail a while ago, before it got through 
dev-platform moderation. Since then I've discovered one bug which I believe is 
related. This one is tracked in bug 1017298. In particular this occurs when a 
user first has more tabs open than the window can fit, i.e. the tab strip 
becomes 'scrollable', and then starts closing them, without interruption, to 
the point where they do fit, and they start 'growing'. This causes layers to 
rapidly change size (every animation frame), since the tab strip in this 
scenario 'remains actively layerized' for a little while (unlike when it was 
never overflowing in the first place, and therefore never got layerized).

There are reasons to believe the new architecture performs somewhat worse than 
the old architecture when resizing layers. Usually this is a rare situation, but 
I believe the situation described above -might- be exactly what happens during 
the TART tab close tests. In general I don't think many users will run into 
this, but it might explain part of that. Matt is looking into whether something 
can be done here.

I'll continue looking into what might affect our performance numbers. If either 
people from the performance teams or the desktop teams are interested in 
investigating our tests and how well they measure practical performance, I 
think that would be very valuable, and it would help us a lot in identifying 
which sorts of interactions we need to investigate for performance.

In addition, if the performance or desktop teams have good ideas about other 
interactions (besides tab close) which seem to have regressed a lot in 
particular, that will also help our investigations. My knowledge of TART is 
limited.

Bug 1013262 is the general tracking bug but there's not too much info there to 
be honest.

Thanks!

Bas

- Original Message -
From: Gavin Sharp ga...@gavinsharp.com
To: Vladimir Vukicevic vladi...@mozilla.com
Cc: Gijs Kruitbosch gijskruitbo...@gmail.com, Bas Schouten 
bschou...@mozilla.com, dev-tech-...@lists.mozilla.org, mozilla.dev.platform 
group dev-platform@lists.mozilla.org, release-drivers 
release-driv...@mozilla.org
Sent: Wednesday, May 28, 2014 7:15:09 PM
Subject: Re: OMTC on Windows

Who's responsible for looking into the test/regression? Bas? Does the
person looking into it need help from the performance or desktop teams?

What bug is tracking that work?

Gavin


On Wed, May 28, 2014 at 12:12 PM, Vladimir Vukicevic
vladi...@mozilla.com wrote:

 (Note: I have not looked into the details of CART/TART and their
 interaction with OMTC)

 It's entirely possible that (b) is true *now* -- the test may have been
 good and proper for the previous environment, but now the environment
 characteristics were changed such that the test needs tweaks.  Empirically,
 I have not seen any regressions on any of my Windows machines (which is
 basically all of them); things like tab animations and the like have
 started feeling smoother even after a long-running browser session with
 many tabs.  I realize this is not the same as cold hard numbers, but it
 does suggest to me that we need to take another look at the tests now.

 - Vlad

 - Original Message -
  From: Gijs Kruitbosch gijskruitbo...@gmail.com
  To: Bas Schouten bschou...@mozilla.com, Gavin Sharp 
 ga...@gavinsharp.com
  Cc: dev-tech-...@lists.mozilla.org, mozilla.dev.platform group 
 dev-platform@lists.mozilla.org, release-drivers
  release-driv...@mozilla.org
  Sent: Thursday, May 22, 2014 4:46:29 AM
  Subject: Re: OMTC on Windows
 
  Looking on from m.d.tree-management, on Fx-Team, the merge from this
  change caused a 40% CART regression, too, which wasn't listed in the
  original email. Was this unforeseen, and if not, why was this
  considered acceptable?
 
  As gavin noted, considering how hard we fought for 2% improvements (one
  of the Australis folks said yesterday 1% was like Christmas!) despite
  our reasons of why things were really OK because of some of the same
  reasons you gave (e.g. running in ASAP mode isn't realistic, TART is
  complicated, ...), this hurts - it makes it seem like (a) our
  (sometimes extremely hacky) work was done for no good reason, or (b) the
  test is fundamentally flawed and we're better off without it, or (c)
  when the gfx team decides it's OK to regress it, it's fine, but not when
  it happens to other people, quite irrespective of reasons given.
 
  All/any of those being true would give me the sad feelings. Certainly it
  feels to me like (b) is true if this is really meant to be a net
  perceived improvement despite causing a 40% performance regression in
  our automated tests.
 
  ~ Gijs
 
  On 18/05/2014 19:47, Bas Schouten wrote:
   Hi Gavin,
  
   There have been several e-mails on different lists, and some
 communication
   on some bugs. Sadly the story is at this point not anywhere in a
 condensed
   form, but I will try to highlight a couple of core points, some of
 

Re: error running mochitest-browser

2014-05-29 Thread Gijs Kruitbosch

Hi,

I would agree with the folks in #introduction that it's OK if it passes 
on try, and that you can generally run the tests relevant to your work 
if you're doing things locally. When in doubt about test coverage, you 
should always be able to ask the person mentoring you and/or ask in the 
bug in question.


If you feel like it, you could try to figure out why the error in 
question happens in the automated test run on your machine - it might be 
a real problem, but with the information we have so far it's hard to be 
sure what the issue is. If you want to dive into this further, it'd 
probably be useful to file a bug (product: Firefox, component: Toolbars 
& Customization) and provide more details, such as if you're just 
building Firefox desktop, or Fx OS simulator, or something else, whether 
this is an opt or debug build, if you can reproduce the issue when 
running just one or two tests, what the stack for the error is, and 
anything else you think is relevant. If you decide to do this, please 
feel free to CC/needinfo me and we can take it from there.


Hope that helps,
Gijs

On 28/05/2014 17:49, thills wrote:

Hi Gijs,

I'm still seeing this even when I don't use the sudo.

I also asked about this in #introduction a few days ago and it was 
mentioned that if this was a real issue, it would be something seen 
also on the try servers.  The advice they gave me was to just try 
running the specific tests needed to test my work vs. the whole 
suite.  This is a bit concerning to me as a newbie as I want to make 
sure I've got all the proper tests chosen.


Thanks,

-tamara

On 5/26/14, 8:44 AM, Gijs Kruitbosch wrote:

On 22/05/2014 14:56, thills wrote:

Hi Gijs:
Thanks for your response.  Please see inline.
On 5/22/14, 9:46 AM, Gijs Kruitbosch wrote:

Some questions:

What platform is this on?

This is on Mac OSX Mavericks

How are you invoking the test suite?

I'm doing a sudo ./mach mochitest-browser from the command line.

Don't use sudo. :-)



Which test is running when this error happens?

The test that executes right before is this one:
 6:17.02 TEST-PASS | 
chrome://mochitests/content/browser/browser/components/customizableui/test/browser_932928_show_notice_when_palette_empty.js 
| The empty palette notice should not be shown when there are items 
in the palette.


After that executes, it gets stuck in this loop just repetitively 
printing out that error with the type error below.

Are you using mozilla-central, fx-team, mozilla-inbound, ...?

I'm using mozilla-central.

Can you paste a copy of your .mozconfig configuration?

It's empty


Please let us know (on the newsgroup/mailing list as well - I've 
looped them back in) if you're still seeing this if not using sudo.


~ Gijs




___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Style guide clarity on C++-isms

2014-05-29 Thread Ehsan Akhgari

On 2014-05-29, 1:20 AM, L. David Baron wrote:

On Wednesday 2014-05-28 21:03 -0700, Nicholas Nethercote wrote:

static T inc(T& aPtr) { return IntrinsicAddSub<T>::add(aPtr, 1); }
static T dec(T& aPtr) { return IntrinsicAddSub<T>::sub(aPtr, 1); }



static T or_(T& aPtr, T aVal) { return __sync_fetch_and_or(&aPtr, aVal); }
static T xor_(T& aPtr, T aVal) { return __sync_fetch_and_xor(&aPtr, aVal); }
static T and_(T& aPtr, T aVal) { return __sync_fetch_and_and(&aPtr, aVal); }



static ValueType inc(ValueType& aPtr) { return add(aPtr, 1); }
static ValueType dec(ValueType& aPtr) { return sub(aPtr, 1); }


When it comes to questions of style, IMO clarity should trump almost
everything else, and spotting typos in functions like this is *much*
harder when they're multi-line.

Furthermore, one-liners like this are actually pretty common, and
paving the cowpaths is often a good thing to do.


I'd be happy for this to be allowed; I've used this style quite a
bit, for similar reasons (clarity and density).

I also like a similar two-line style for code that doesn't fit on
one line, as in:
   bool WidthDependsOnContainer() const
 { return WidthCoordDependsOnContainer(mWidth); }
   bool MinWidthDependsOnContainer() const
 { return WidthCoordDependsOnContainer(mMinWidth); }
   bool MaxWidthDependsOnContainer() const
 { return WidthCoordDependsOnContainer(mMaxWidth); }
from 
http://hg.mozilla.org/mozilla-central/file/9506880e4879/layout/style/nsStyleStruct.h#l1321
but I think that variant may be less widely used.


OK, both sound fine.  Let's add them to the styles.



Re: Uncaught rejections in xpcshell will now cause orange

2014-05-29 Thread Paolo Amadini
On 5/28/2014 8:30 PM, David Rajchenbach-Teller wrote:
 uncaught promise rejections using Promise.jsm will cause
 TEST-UNEXPECTED-FAIL.

Fantastic! Promise.jsm rocks!

 We intend to progressively extend this policy to:
 - DOM Promise (bug 989960).

Excellent, this will be a step forward in resolving the dependencies of
bug 939636 that will allow us to use DOM Promise instead of Promise.jsm
throughout the codebase in the future.

For now, I wanted to remind everyone to use Promise.jsm instead of DOM Promise
in back-end code that is exercised by tests, because doing otherwise will
make it more difficult for us to implement the aforementioned bug 989960,
since DOM Promise can't currently detect asynchronous errors.
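To illustrate the pattern the new check enforces, here is a minimal sketch using standard Promises (not the Promise.jsm API itself; `readPref` is a made-up helper): a rejection that no `.catch()` ever observes is what the xpcshell harness now reports as TEST-UNEXPECTED-FAIL.

```javascript
// Hypothetical helper: rejects for unknown preference names.
function readPref(name) {
  return name === "browser.theme"
    ? Promise.resolve("dark")
    : Promise.reject(new Error("unknown pref: " + name));
}

// Bad: calling readPref("bogus") and dropping the returned promise leaves
// the rejection unobserved, which would now turn the test orange.

// Good: every rejection path is explicitly handled.
function readPrefWithDefault(name, fallback) {
  return readPref(name).catch(() => fallback);
}
```

The same discipline applies to test code itself: chain every promise into the harness (or a `.catch()`) so no rejection is silently dropped.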

Cheers,
Paolo


Re: B2G, email, and SSL/TLS certificate exceptions for invalid certificates

2014-05-29 Thread Mike Hoye

On 2014-05-28, 9:07 PM, Joshua Cranmer  wrote:


Two more possible rationales:
1. The administrator is unwilling to pay for an SSL certificate and 
unaware of low-cost or free SSL certificate providers.
2. The administrator has philosophical beliefs about CAs, or the CA 
trust model in general, and is unwilling to participate in it. 
Neglecting the fact that encouraging click-through behavior of users 
can only weaken the trust model.


3. The administrator doesn't actually believe SSL certs protect you from 
any real harm, and is generating a cert using the least effort possible 
to make a user-facing dialog box go away.


It's become clear in the last few months that the overwhelmingly most 
frequent users of MITM attacks are state actors with privileged network 
positions either obtaining or coercing keys from CAs, using attacks that 
the CA model effectively endorses, using tech you can buy off the shelf. 
In that light, it's not super-obvious what SSL certs protect you from 
apart from some jackass sniffing the coffeeshop wifi.


- mhoye


Re: B2G, email, and SSL/TLS certificate exceptions for invalid certificates

2014-05-29 Thread avihal
On Thursday, May 29, 2014 1:30:20 AM UTC+3, somb...@gmail.com wrote:
 We do want
all users to be able to access their email, but not by compromising the
security of all users. ...

 This decision was made based on a risk profile of ...

So it looks like we know well enough what the best approach should be in 
general.


 ... With deeply regrettable irony, a manufacturer of Firefox 
 OS devices and one of the companies that certifies Firefox OS devices 
 both run mail servers with invalid certificates and are our existing 
 examples of the problem.

 In bug https://bugzil.la/874346 the requirement that is coming from 
 partners is that:
 - we need to imminently address the certificate exception problem
 - the user needs to be able to add the exception from the account setup 
 flow.  (As opposed to requiring the user to manually go to the settings 
 app and add an exception.  At least I think that's the request.)


I'd interpret it as follows: the partners which we cherish say that the current 
behavior is beyond a red line of theirs. They'd prefer that it never showed any 
warning, but they're willing to live with it if the warning is part of the flow.


So combining those two, it looks like the highest priority is satisfying the 
partners, and second priority is to maybe improve the general flow of adding 
exceptions.

I think it would help to focus on one issue, and for now it seems like that's 
the partner's issue.

As for solutions, while contacting a trusted server every time there's an 
exception is an option, it does make the process more complex.

What if there was a trusted server which the app contacted periodically (where 
the first time happens on first run etc.), and downloaded from it a list of:

1. Certificates to trust.
2. Certificates to revoke.

It might be overall simpler to design and implement, because it separates the 
exception management (done periodically with a trusted server) from the 
sensitive flow where users need to approve exceptions.
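The separation described above can be sketched as a pure merge step; all names here are illustrative, not an existing Gecko/Gaia API, and the fingerprint strings stand in for whatever certificate identifiers the list would carry.

```javascript
// Apply a periodically fetched {trust, revoke} list to the device's
// stored set of certificate exceptions.
function applyCertList(storedExceptions, list) {
  const result = new Set(storedExceptions);
  for (const fp of list.trust) result.add(fp);     // centrally vetted certs
  for (const fp of list.revoke) result.delete(fp); // centrally revoked certs
  return result;
}
```

Because this runs in the background against a trusted server, no user-facing warning flow is involved when the list changes.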

- avih


Re: B2G, email, and SSL/TLS certificate exceptions for invalid certificates

2014-05-29 Thread Camilo Viecco


Seems like we will have to implement some sort of "allow invalid certs" 
mechanism (it makes me really sad that the system administrators and/or 
managers of TCL and Telefonica seem slow to understand the risks of allowing 
MITM for user credentials).

I like Brian Smith's proposal to add some visual indicator when using
a non-secure connection and I also agree that making users type the
fingerprint (on a mobile device) is also a non-solution.
I propose making the user type something like:

'I understand that my password to use $mailprovider cannot be protected by
firefoxOS'

And then allow the 'one click' override. (ditto for non SSL connections).

Also, in all these variations we have not discussed what should happen
if the untrusted cert changes; my guess is that we should show some delta
of the two certs (once we have detected we are not behind a captive portal).

Camilo



On 5/28/14, 4:16 PM, David Keeler wrote:

Regarding the current certificate exception mechanism:


* there is only a single certificate store on the device, and therefore
all exceptions are device-wide

This is an implementation detail - it would not be difficult to change
exceptions to per-principal-per-app rather than just per-principal.


* only one exception can exist per domain at a time

In combination with point 3, is this a limitation? Do we want to support
this? If so, again, it wouldn't be that hard.


* the exception is per-domain, not per-port, so if we add an exception
for port 993 (imaps), that would also impact https.

I don't think this is the case. Either way, it shouldn't be the case.
In summary, it would not be difficult to ensure that the certificate
exception service operates on a per-principal-per-app basis, which would
allow for what we want, I believe (e.g. exceptions for
{email-app}/example.com:993 would not affect {browser-app}/example.com:443).

In terms of solving the issue at hand, we have a great opportunity to
not implement the "press this button to MITM yourself" paradigm that
desktop browsers currently use. The much safer option is to ask the user
for the expected certificate fingerprint. If it matches the certificate
the server provided, then the exception can safely be added. The user
will have to obtain that fingerprint out-of-band over a hopefully secure
channel.
I would be wary of implementing a more involved scheme that involves
remote services.

Cheers,
David

On 05/28/2014 03:30 PM, Andrew Sutherland wrote:

tl;dr: We need to figure out how to safely allow for invalid
certificates to be used in the Firefox OS Gaia email app.  We do want
all users to be able to access their email, but not by compromising the
security of all users.  Read on if you work in the security field / care
about certificates / invalid certificates.


== Invalid Certificates in Email Context

Some mail servers out there use self-signed or otherwise invalid SSL
certificates.  Since dreamhost replaced their custom CA with valid
certificates
(http://www.dreamhoststatus.com/2013/05/09/secure-certificate-changes-coming-for-imap-and-pop-on-homiemail-sub4-and-homiemail-sub5-email-clusters-on-may-14th/)
and StartCom started offering free SSL certificates
(https://www.startssl.com/?app=1), the incidence of invalid certificates
has decreased.  However, there are still servers out there with invalid
certificates.  With deeply regrettable irony, a manufacturer of Firefox
OS devices and one of the companies that certifies Firefox OS devices
both run mail servers with invalid certificates and are our existing
examples of the problem.

The Firefox OS email app requires encrypted connections to servers.
Unencrypted connections are only legal in our unit tests or to
localhost.  This decision was made based on a risk profile of devices
where we assume untrusted/less than 100% secure wi-fi is very probable
and the cellular data infrastructure is only slightly more secure
because there's a higher barrier to entry to setting up a fake cell
tower for active attacks.

In general, other email apps allow both unencrypted connections and
adding certificate exceptions with a friendly/dangerous flow.  I can
speak to Thunderbird as an example.  Historically, Thunderbird and its
predecessors allowed certificate exceptions.  Going into Thunderbird
3.0, Firefox overhauled its exception mechanism and for a short time
Thunderbird's process required significantly greater user intent to add
an exception.  (Preferences, Advanced, Certificates, View Certificates,
Servers, Add Exception.)  At this time DreamHost had invalid
certificates, free certificates were not available, invalid certificates
were fairly common, Thunderbird's autoconfig security model already
assumed a reliable network connection, Thunderbird could only run on
desktops/laptops which were more likely to have a secure network, etc.
We relented and Thunderbird ended up where it is now.  Thunderbird
immediately displays the "Add Security Exception" UI; the user only
needs to click "Confirm Security Exception". 

Re: B2G, email, and SSL/TLS certificate exceptions for invalid certificates

2014-05-29 Thread Andrew Sutherland

On 05/28/2014 09:30 PM, Brian Smith wrote:

On Wed, May 28, 2014 at 5:13 PM, Andrew Sutherland

  I agree this is a safe approach and the trusted server is a significant
complication in this whole endeavor.  But I can think of no other way to
break the symmetry of "am I being attacked, or do I just use a poorly
configured mail server?"


It would be pretty simple to build a list of mail servers that are known to
be using valid TLS certificates. You can build that list through port
scanning, in conjunction with the auto-config data you already have. That
list could be preloaded into the mail app and/or dynamically
retrieved/updated. Even if we seeded this list with only the most common
email providers, we'd still be protecting a lot more users than by doing
nothing, since email hosting is heavily consolidated and seems to be
becoming more consolidated over time.
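Brian's preloaded-list idea reduces to a small gate at exception time. A sketch, with made-up names and an example list (the real list would come from the port scan / auto-config data he describes):

```javascript
// Domains known (from scanning/auto-config data) to serve valid TLS certs.
const KNOWN_VALID_TLS = new Set(["gmail.com", "outlook.com", "yahoo.com"]);

// Never offer the cert-exception flow for a provider known to have a
// valid certificate: a cert error there means an attack or a serious
// misconfiguration, not a self-signed-cert mail host.
function mayOfferCertOverride(serverDomain) {
  return !KNOWN_VALID_TLS.has(serverDomain.toLowerCase());
}
```

Even a short list of major providers covers most users, precisely because email hosting is so consolidated.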


This is a good proposal, thank you.  To restate my understanding, I 
think the key points of this versus the proposal I've made here or the 
variant in the https://bugzil.la/874346#c11 ISPDB proposal are:


* If we don't know the domain should have a valid certificate, let it 
have an invalid certificate.


* Preload more of the ISPDB on the device or maybe just an efficient 
mechanism for indicating a domain requires a valid certificate.


* Do not provide strong (any?) guarantees about the ISPDB being able to 
indicate the current invalid certificate the server is expected to use.


It's not clear what decision you'd advocate in the event we are unable 
to make a connection to the ISPDB server.  The attacker does end up in 
an interesting situation where if we tighten up the autoconfig mechanism 
and do not implement subdomain guessing (https://bugzil.la/823640), an 
attacker denying access to the ISPDB ends up forcing the user to perform 
manual account setup.  I'm interested in your thoughts here.


Implementation-wise I understand your suggestion to be leaning more 
towards a static implementation, although dynamic mechanisms are 
possible.  The ISPDB currently intentionally uses static files checked 
into svn for implementation simplicity/security, a decision I agree 
with.  The exception is our MX lookup mechanism at 
https://mx.thunderbird.net/dns/mx/mozilla.com


I should note that the current policy for the ISPDB has effectively been 
"try to get people to host their own autoconfig entries", with an 
advocacy angle which includes rejecting submissions.  What you've 
suggested here (and I in comment 11) implies a substantive change to 
that.  This seems reasonable to me, and when I raised the question about 
whether such changes would be acceptable to Thunderbird last year, 
people generally seemed to either not care or be on board:


https://mail.mozilla.org/pipermail/tb-planning/2013-August/002884.html
https://mail.mozilla.org/pipermail/tb-planning/2013-September/002887.html
https://mail.mozilla.org/pipermail/tb-planning/2013-September/002891.html

I should also note that I think the automation to populate the ISPDB is 
still likely to require sizable engineering effort but is likely to have 
positive externalities in terms of drastically increasing our autoconfig 
coverage and allowing us to reduce the duration of the autoconfig 
probing process.  For example, we could establish direct mappings for 
all dreamhost mail clusters.


Andrew


Re: B2G, email, and SSL/TLS certificate exceptions for invalid certificates

2014-05-29 Thread Brian Smith
On Thu, May 29, 2014 at 2:03 PM, Andrew Sutherland 
asutherl...@asutherland.org wrote:

 This is a good proposal, thank you.  To restate my understanding, I think
 the key points of this versus the proposal I've made here or the variant in
 the https://bugzil.la/874346#c11 ISPDB proposal are:

 * If we don't know the domain should have a valid certificate, let it have
 an invalid certificate.


Right. But, I would make the decision of whether to allow an invalid
certificate only at configuration time, instead of every time you connect
to the server like Thunderbird does. Though you'd have to solve the problem
of dealing with a server that changed from one untrusted certificate to
another.



 * Preload more of the ISPDB on the device or maybe just an efficient
 mechanism for indicating a domain requires a valid certificate.


Right.


 * Do not provide strong (any?) guarantees about the ISPDB being able to
 indicate the current invalid certificate the server is expected to use.


Right. It would be better for us to spend more effort improving security
for secure servers that are trying to do something reasonable, instead of
spending time papering over fundamental security problems with the server.


 It's not clear what decision you'd advocate in the event we are unable to
 make a connection to the ISPDB server.  The attacker does end up in an
 interesting situation where if we tighten up the autoconfig mechanism and
 do not implement subdomain guessing (https://bugzil.la/823640), an
 attacker denying access to the ISPDB ends up forcing the user to perform
 manual account setup.  I'm interested in your thoughts here.


I think guessing is a bad idea in almost any/every situation because it is
easy to guess wrong (and/or get tricked) and really screw up the user's
config.

Maybe it would be better to crowdsource configuration information much like
location services do: get a few users to opt-in to
reporting/verifying/voting on a mapping of MX records to server settings so
that you can build a big centralized database of configuration data for
(basically) every mail server in existence. Then, when users get
auto-configured with that crowdsourced data, have them report back on
whether the automatic configuration worked.

Until we could do that, it seems reasonable to just make sure that ISPDB
has the configuration data for the most common commodity email providers in
the target markets for FirefoxOS, since FirefoxOS is primarily a
consumer-oriented product.



 Implementation-wise I understand your suggestion to be leaning more
 towards a static implementation, although dynamic mechanisms are possible.
  The ISPDB currently intentionally uses static files checked into svn for
 implementation simplicity/security, a decision I agree with.  The exception
 is our MX lookup mechanism at
 https://mx.thunderbird.net/dns/mx/mozilla.com



 I should note that the current policy for the ISPDB has effectively been
 "try to get people to host their own autoconfig entries", with an advocacy
 angle which includes rejecting submissions.  What you've suggested here
 (and I in comment 11) implies a substantive change to that.  This seems
 reasonable to me, and when I raised the question about whether such changes
 would be acceptable to Thunderbird last year, people generally seemed to
 either not care or be on board:


It seems like you would be able to answer this as part of the scan of the
internet, by trying to retrieve the self-hosted autoconfig file if it is
available. I suspect you will find that almost nobody is self-hosting it.

I should also note that I think the automation to populate the ISPDB is
 still likely to require sizable engineering effort but is likely to have
 positive externalities in terms of drastically increasing our autoconfig
 coverage and allowing us to reduce the duration of the autoconfig probing
 process.  For example, we could establish direct mappings for all dreamhost
 mail clusters.


Autopopulating all the autoconfig information is a lot of work, I'm sure.
But, it should be possible to create good heuristics for deciding whether
to accept certs issued by untrusted issuers in an email app. For example,
if you don't have the (full) autoconfig data for an MX server, you could
try creating an SMTP connection to the server(s) indicated in the MX
records and then use STARTTLS to switch to TLS. If you successfully
validate the certificate from that SMTP server, then assume that the
IMAP/POP/LDAP/etc. servers use valid certificates too, even if you don't
know what those servers are.

Again, I think if you made sure that GMail, Outlook.com, Yahoo Mail, 163.com,
Fastmail, TuffMail, and the major analogs in the B2G markets were all
marked TLS-only-with-valid-certificate then I think a huge percentage of
users would be fully protected from whatever badness allowing cert error
overrides would cause.

Or, perhaps you could just create a whitelist of servers that are allowed
to have cert error 

Re: OMTC on Windows

2014-05-29 Thread avihal
So, wrt TART, I now took the time to carefully examine tab animation visually 
on one system.

TL;DR:
- I think OMTC introduces a clearly visible regression with tab animation 
compared to without OMTC.
- I _think_ it regresses more with tab close than with tab open animation.
- The actual throughput regression is probably bigger than indicated by TART 
numbers.


The reason for the negative bias is that the TART results are an average of 10 
different animations, but only one of those is close to pure graphics perf 
numbers, and when you look only on this test, the regression is bigger than 
50-100% (more like 100-400%).

The details:

System: Windows 8.1 x64, i7-4500u, using Intel's iGPU (HD4400), and with 
official Firefox nightly 32bit (2014-05-29).

First, visually: both with and without ASAP mode, to my eyes, tab animation 
with OMTC is less smooth, and seems to have lower frame rate than without OMTC.

As for what TART measures, of all the TART subtests, there are 3 which are most 
suitable for testing pure graphics performance - they test the css fade-in and 
fade-out (that's the close/open animation) of a tab without actually opening or 
closing a browser tab, so whatever performance it has, the limit comes only 
from the animation itself and it doesn't include other overheads.

These tests are the ones which have "fade" in their name, and only one of them 
is enabled by default in talos - the other two are available only when running 
TART locally and then manually selecting animations to run.

I'll focus on a single number, which is the average frame interval of the entire 
animation (these are the ".all" numbers), for the "fade" animation at default DPI 
(which is 1 on my system - so the most common).

What TART measures locally on my system:

OMTC without ASAP mode (as out of the box config as it gets):
iconFade-close-DPIcurrent.allAverage (5): 18.91 stddev: 0.86
iconFade-open-DPIcurrent.all Average (5): 17.61 stddev: 0.78

OMTC with ASAP:
iconFade-close-DPIcurrent.allAverage (5): 18.47 stddev: 0.46
iconFade-open-DPIcurrent.all Average (5): 10.08 stddev: 0.46

While this is an average of only 5 runs, stddev shows it's reasonably 
consistent, and the results are also consistent when I tried more.

We can already tell that close animation just doesn't get below ~18.5ms/frame 
on this system, ASAP doesn't affect it at all. We can also see that the open 
animation is around 60fps without ASAP (17.6 can happen with our inaccurate 
interval timers) and with ASAP it goes down to about 10ms/frame.

Without OMTC and without ASAP:
iconFade-close-DPIcurrent.allAverage (5): 16.54 stddev: 0.16
iconFade-open-DPIcurrent.all Average (5): 16.52 stddev: 0.12

Without OMTC and with ASAP:
iconFade-close-DPIcurrent.allAverage (5): 5.53 stddev: 0.07
iconFade-open-DPIcurrent.all Average (5): 6.37 stddev: 0.08

The results are _much_ more stable (stddev), and quite lower (in ASAP) and 
closer to 16.7 in normal mode.

While I obviously can't visually notice differences when the frame rate is 
higher than my screen's 60hz, from what I've seen so far, both visually and in 
the numbers, I think TART is no less reliable than before, it doesn't look to 
me as if ASAP introduces a very bad bias (I couldn't detect any), and OMTC does 
seem to regress tab animations meaningfully.

- avih


Re: OMTC on Windows

2014-05-29 Thread Andreas Gal

I think we should shift the conversation to how we actually animate here. 
Animating by trying to reflow and repaint at 60fps is just a bad idea. This 
might work on very high end hardware, but it will cause poor performance on the 
low-end Windows notebooks people buy these days. In other words, I am pretty 
sure our animation here was bad for a lot of our users pre-OMTC.

OMTC enables us to do smooth 60fps animations even under high load and even on 
very low end hardware, as long as we do the animation right. So let's focus on 
that and figure out how to draw a tab strip that doesn't hit pathological 
repainting paths.

I see two options here: either we change our UX such that we can execute a 
smooth animation on the compositor (transforms, opacity changes, filter 
effects, etc.), or we draw the tab strip with canvas, which is more suitable 
for complex custom animations than reflow.
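To make the first option concrete: the compositor can only animate a small 
whitelist of properties without involving main-thread layout. A hedged 
sketch of that distinction (the property set and helper below are 
illustrative, not Gecko's actual implementation):

```javascript
// Illustrative whitelist of properties a compositor can animate without
// main-thread reflow/repaint. (Assumption: not Gecko's real list.)
const COMPOSITOR_ANIMATABLE = new Set(["transform", "opacity", "filter"]);

// True if every keyframe touches only compositor-animatable properties,
// i.e. the animation could run entirely off the main thread.
function canRunOnCompositor(keyframes) {
  return keyframes.every(frame =>
    Object.keys(frame).every(prop => COMPOSITOR_ANIMATABLE.has(prop)));
}

// Animating width forces a reflow on every frame:
canRunOnCompositor([{ width: "200px" }, { width: "0px" }]); // false
// A transform/opacity animation can stay on the compositor:
canRunOnCompositor([{ transform: "scaleX(1)", opacity: 1 },
                    { transform: "scaleX(0)", opacity: 0 }]); // true
```

The UX change, then, amounts to expressing the tab open/close animation 
purely in terms of the first kind of property.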

Andreas

On May 29, 2014, at 10:14 PM, avi...@gmail.com wrote:

 So, wrt TART, I now took the time to carefully examine tab animation visually 
 on one system.
 
 TL;DR:
 - I think OMTC introduces a clearly visible regression with tab animation 
 compared to without OMTC.
 - I _think_ it regresses more with tab close than with tab open animation.
 - The actual throughput regression is probably bigger than indicated by TART 
 numbers.
 
 
 The reason for the negative bias is that the TART results are an average of 
 10 different animations, but only one of those is close to a pure graphics 
 perf number, and when you look only at this test, the regression is bigger 
 than 50-100% (more like 100-400%).
 
 The details:
 
 System: Windows 8.1 x64, i7-4500u, using Intel's iGPU (HD4400), and with 
 official Firefox nightly 32bit (2014-05-29).
 
 First, visually: both with and without ASAP mode, to my eyes, tab animation 
 with OMTC is less smooth, and seems to have lower frame rate than without 
 OMTC.
 
 As for what TART measures, of all the TART subtests, there are 3 which are 
 most suitable for testing pure graphics performance - they test the css 
 fade-in and fade-out (that's the close/open animation) of a tab without 
 actually opening or closing a browser tab, so whatever performance it has, 
 the limit comes only from the animation itself and it doesn't include other 
 overheads.
 
 These tests are the ones which have "fade" in their name, and only one of 
 them is enabled by default in talos - the other two are available only when 
 running TART locally and then manually selecting animations to run.
 
 I'll focus on a single number, which is the average frame interval of the 
 entire animation (these are the ".all" numbers), for the "fade" animation at 
 the default DPI (which is 1 on my system - so the most common).
 
 What TART measures locally on my system:
 
 OMTC without ASAP mode (as out-of-the-box a config as it gets):
 iconFade-close-DPIcurrent.all  Average (5): 18.91 stddev: 0.86
 iconFade-open-DPIcurrent.all   Average (5): 17.61 stddev: 0.78
 
 OMTC with ASAP:
 iconFade-close-DPIcurrent.all  Average (5): 18.47 stddev: 0.46
 iconFade-open-DPIcurrent.all   Average (5): 10.08 stddev: 0.46
 
 While this is an average of only 5 runs, stddev shows it's reasonably 
 consistent, and the results are also consistent when I tried more.
 
 We can already tell that the close animation just doesn't get below 
 ~18.5ms/frame on this system; ASAP doesn't affect it at all. We can also see 
 that the open animation runs at around 60fps without ASAP (17.6 can happen 
 with our inaccurate interval timers), and with ASAP it goes down to about 
 10ms/frame.
 
 Without OMTC and without ASAP:
 iconFade-close-DPIcurrent.all  Average (5): 16.54 stddev: 0.16
 iconFade-open-DPIcurrent.all   Average (5): 16.52 stddev: 0.12
 
 Without OMTC and with ASAP:
 iconFade-close-DPIcurrent.all  Average (5): 5.53 stddev: 0.07
 iconFade-open-DPIcurrent.all   Average (5): 6.37 stddev: 0.08
 
 The results are _much_ more stable (lower stddev), noticeably lower in ASAP 
 mode, and closer to 16.7ms in normal mode.
 
 While I obviously can't visually notice differences when the frame rate is 
 higher than my screen's 60Hz, from what I've seen so far, both visually and 
 in the numbers, I think TART is no less reliable than before, it doesn't 
 look to me as if ASAP introduces a very bad bias (I couldn't detect any), 
 and OMTC does seem to regress tab animations meaningfully.
 
 - avih



Re: B2G, email, and SSL/TLS certificate exceptions for invalid certificates

2014-05-29 Thread Andrew Sutherland

On 05/29/2014 07:12 PM, Brian Smith wrote:

On Thu, May 29, 2014 at 2:03 PM, Andrew Sutherland 
asutherl...@asutherland.org wrote:

It seems like you would be able to answer this as part of the scan of the
internet, by trying to retrieve the self-hosted autoconfig file if it is
available. I suspect you will find that almost nobody is self-hosting it.


I agree with your premise that the number of people self-hosting 
autoconfig entries is so low that it's not a concern, beyond not 
breaking them and allowing them to act as an override mechanism for the ISPDB.


Also, https://scans.io/ has a number of useful internet scans we can use 
already, so I don't think we need to do the scan ourselves for our first 
round.  While the port 993/995 scans at https://scans.io/study/sonar.cio 
are somewhat out of date (2013-03-30), the DNS dumps and port 443 scans 
are current and should be sufficient to achieve a fairly comprehensive 
database, especially if we make the simplifying assumption that all 
relevant mail servers have been operational at the same domain name 
since at least then.  (Obviously the IP addresses may have changed, so 
we'll need to use a reverse DNS dump from the appropriate time period.)




Autopopulating all the autoconfig information is a lot of work, I'm sure.
But, it should be possible to create good heuristics for deciding whether
to accept certs issued by untrusted issuers in an email app. For example,
if you don't have the (full) autoconfig data for an MX server, you could
try creating an SMTP connection to the server(s) indicated in the MX
records and then use STARTTLS to switch to TLS. If you successfully
validate the certificate from that SMTP server, then assume that the
IMAP/POP/LDAP/etc. servers use valid certificates too, even if you don't
know what those servers are.
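Roughly, the decision step of that heuristic could look like this (a 
hedged sketch; `probeResults` stands in for the outcome of STARTTLS 
probes against the domain's MX hosts, and all names are hypothetical):

```javascript
// Sketch of the heuristic: if any of the domain's MX servers presents a
// CA-valid certificate over STARTTLS, assume the domain's mail setup is
// well-configured and do NOT silently accept untrusted issuers for its
// IMAP/POP servers. Field names are invented for illustration.
function shouldOfferCertException(probeResults) {
  const someMxValidates = probeResults.some(r => r.certValidatesAgainstCAs);
  // A validating MX suggests an invalid cert elsewhere is a real problem;
  // no validating MX suggests a self-signed-everything deployment.
  return !someMxValidates;
}

shouldOfferCertException([
  { host: "mx1.example.com", certValidatesAgainstCAs: true },
]); // false: treat an invalid IMAP cert as suspicious

shouldOfferCertException([
  { host: "mx1.example.com", certValidatesAgainstCAs: false },
]); // true: an exception flow may be reasonable
```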


Very interesting idea on this!  Thanks!

Andrew


Re: Intent to Implement: Encrypted Media Extensions

2014-05-29 Thread Jonas Sicking
Given the number of Firefox users that are choosing to use equivalent
technologies through Flash today, I believe this is the right thing to
do. I definitely think we should have some form of UI that gives users
a choice and provides an opportunity for education, though.

So go for it!

/ Jonas

On Tue, May 27, 2014 at 7:44 PM, Chris Pearce cpea...@mozilla.com wrote:
 Summary:

 Encrypted Media Extensions specifies a JavaScript interface for interacting
 with plugins that can be used to facilitate playback of DRM protected media
 content. We will also be implementing the plugin interface itself. We will
 be working in partnership with Adobe who are developing a compatible DRM
 plugin; the Adobe Access CDM.

 For more details:
 https://hsivonen.fi/eme/
 https://hacks.mozilla.org/2014/05/reconciling-mozillas-mission-and-w3c-eme/
 https://blog.mozilla.org/blog/2014/05/14/drm-and-the-challenge-of-serving-users/


 Bug:

 Main tracking bug:
 https://bugzilla.mozilla.org/show_bug.cgi?id=1015800


 Link to standard:

 https://dvcs.w3.org/hg/html-media/raw-file/default/encrypted-media/encrypted-media.html

 This spec is being implemented in IE, Safari, and Chrome.
 MS and Google are actively participating in the W3C working group;
 public-html-media.
 Blink's Intent To Implement: http://bit.ly/1mnELkX


 Platform coverage:

 Firefox on desktop.


 Estimated or target release:

 Firefox 36.


 Preference behind which this will be implemented:

 media.eme.enabled


Re: B2G, email, and SSL/TLS certificate exceptions for invalid certificates

2014-05-29 Thread Andrew Sutherland

On 05/28/2014 06:30 PM, Andrew Sutherland wrote:

== Proposed solution for exceptions / allowing connections

There are a variety of options here, but I think one stands above the 
others.  I propose that we make TCPSocket and XHR with mozSystem take 
a dictionary that characterizes one or more certificates that should 
be accepted as valid regardless of CA validation state. Ideally we 
could allow pinning via this mechanism (by forbidding all certificates 
but those listed), but that is not essential for this use-case.  Just 
a nice side-effect that could help provide tighter security guarantees 
for those who want it.
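Concretely, the acceptance logic such a dictionary implies might look 
like the following (a speculative sketch: the option shape and field 
names are invented, and nothing like this exists in TCPSocket today):

```javascript
// Speculative sketch: decide whether to accept a server certificate given
// a caller-supplied list of exceptions. An entry with `pin: true` turns
// the list into a whitelist (the pinning side-effect mentioned above).
// All field names are invented for illustration.
function certAccepted(serverCert, caValidated, exceptions) {
  const listed = exceptions.some(
    e => e.sha256Fingerprint === serverCert.sha256Fingerprint);
  if (listed) return true; // explicitly accepted regardless of CA state
  const pinned = exceptions.some(e => e.pin);
  return caValidated && !pinned; // pinning forbids all unlisted certs
}

const exceptions = [{ sha256Fingerprint: "AA:BB:CC", pin: false }];
certAccepted({ sha256Fingerprint: "AA:BB:CC" }, false, exceptions); // true
certAccepted({ sha256Fingerprint: "DD:EE:FF" }, true, exceptions);  // true
```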


Note: I've sent an email to the W3C sysapps list (the group 
standardizing http://www.w3.org/2012/sysapps/tcp-udp-sockets/) about 
this.  It can be found in the archive at 
http://lists.w3.org/Archives/Public/public-sysapps/2014May/0033.html


Andrew


Re: OMTC on Windows

2014-05-29 Thread Matt Woodrow

Thanks Avi!

I can reproduce a regression like this (~100% slower on 
iconFade-close-DPIcurrent.all) with my machine forced to use the Intel 
GPU, but not with the Nvidia one.


This suggests it's very much a driver/hardware-specific problem, rather 
than a general regression with OMTC, which matches Bas' original 
observations.


Doing some profiling using my Intel GPU suggests that my specific 
regression has to do with uploading and drawing shadows. I'm seeing ~45% 
of the OMTC profile [1] in nsDisplayBoxShadowOuter::Paint vs. ~8% in the 
non-OMTC profile [2]. It's hard to tell exactly where the slowdown is 
because samples within the driver are breaking our unwinding code, but I 
suspect it's probably the upload to the GPU not handling the 
threads/contention well. I suspect a simple box-shadow cache would work 
wonders here.
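For what it's worth, the "simple box-shadow cache" could be as little as 
memoizing the rasterized shadow surface by its drawing parameters, so the 
expensive rasterize-and-upload happens once per distinct shadow instead of 
every frame. A sketch under that assumption (names are illustrative, not 
Gecko code):

```javascript
// Memoize rasterized shadows by their parameters so repeated frames
// reuse the same surface instead of re-rasterizing/re-uploading.
const shadowCache = new Map();

function getShadowSurface(params, rasterize) {
  const key = JSON.stringify(params); // e.g. { blur, color, width, height }
  let surface = shadowCache.get(key);
  if (surface === undefined) {
    surface = rasterize(params); // the expensive path, now taken once
    shadowCache.set(key, surface);
  }
  return surface;
}

// Two frames with identical shadow params hit the rasterizer only once:
let calls = 0;
const rasterize = p => { calls++; return { surfaceFor: p }; };
getShadowSurface({ blur: 4, color: "#000" }, rasterize);
getShadowSurface({ blur: 4, color: "#000" }, rasterize);
// calls === 1
```

A real cache would of course need eviction and invalidation, but even this 
shape would avoid the per-frame upload cost in the profile above.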


I don't know how this will translate to the TBPL results though; does 
anyone know what GPUs and drivers we are running there?


- Matt


On 30/05/14 2:14 pm, avi...@gmail.com wrote:

[full quote of avih's earlier message snipped]




Re: OMTC on Windows

2014-05-29 Thread Matt Woodrow

On 30/05/14 5:22 pm, Matt Woodrow wrote:


Doing some profiling using my intel GPU suggests that my specific 
regression has to do with uploading and drawing shadows. I'm seeing 
~45% of the OMTC profile [1] in nsDisplayBoxShadowOuter::Paint vs ~8% 
in the non-OMTC profile [2]. It's hard to tell exactly where the 
slowdown is because samples within the driver are breaking our 
unwinding code, but I suspect it's probably the upload to the GPU not 
handling the threads/contention well. I suspect a simple box-shadow 
cache would work wonders here.




Oops,

[1] 
http://people.mozilla.org/~bgirard/cleopatra/#report=87718d90b7d4d4cea6714a2c6de3458151e467b3
[2] 
http://people.mozilla.org/~bgirard/cleopatra/#report=506e206b173801970896fb3a3dc7fb2974755dcd
