Re: Intent to implement: WebGL 2.0

2014-05-08 Thread Anne van Kesteren
On Wed, May 7, 2014 at 6:19 PM, Benoit Jacob jacob.benoi...@gmail.com wrote:
 The "Implementations are free to return a context that implements a higher
 version" part violates the above requirement 1. in your email, "The WebGL
 working group wants web pages to opt in to the WebGL2 specific parts of the
 functionality explicitly."

 For that reason, a necessary modification to this plan is to remove the
 sentence, "Implementations are free to return a context that implements a
 higher version [...]".

 I agree with everything else.

It seems like you want to be able to do that going forward so you
don't have to maintain a large matrix forever, but at some point say
you drop the idea that people will want 1 and simply return N if they
ask for 1.


-- 
http://annevankesteren.nl/


Re: Intent to implement: WebGL 2.0

2014-05-08 Thread Benoit Jacob
2014-05-08 5:53 GMT-04:00 Anne van Kesteren ann...@annevk.nl:

 It seems like you want to be able to do that going forward so you
 don't have to maintain a large matrix forever, but at some point say
 you drop the idea that people will want 1 and simply return N if they
 ask for 1.


Yes, that's what we agreed on in the last conversation mentioned by Ehsan
yesterday. In the near future (for the next decade), there will be
webgl-1-only devices around, so allowing getContext("webgl") to
automatically give webgl2 would create accidental compatibility problems.
But in the longer term, there will (probably) eventually be a time when
webgl-1-only devices won't exist anymore, and then we could decide to
allow that.

2014-05-08 5:53 GMT-04:00 Anne van Kesteren ann...@annevk.nl:

 Are we forever going to mint new version strings or are we going to
 introduce a version parameter which is observed (until we decide to
 prune the matrix a bit), this time around?


Agreed: if we still think that a version parameter would have been
desirable if not for the issue noted above, then now would be a good time
to fix it.

 If we're doing the latter,
 maybe we should call the context id "3d" this time around...


WebGL is low-level and generalistic enough that it is not specifically a
3d graphics API. I prefer to call it a low-level or generalistic graphics
API.

(*plug*) this might be useful reading:
https://hacks.mozilla.org/2013/04/the-concepts-of-webgl/

Benoit


Re: Intent to implement: WebGL 2.0

2014-05-08 Thread Anne van Kesteren
On Thu, May 8, 2014 at 12:56 PM, Benoit Jacob jacob.benoi...@gmail.com wrote:
 WebGL is low-level and generalistic enough that it is not specifically a
 3d graphics API. I prefer to call it a low-level or generalistic graphics
 API.

Fair, forgot about that argument. "webgles" or some such might be
better then. Or enshrine "webgl2" forever, but that seems rather
weird.


 (*plug*) this might be useful reading:
 https://hacks.mozilla.org/2013/04/the-concepts-of-webgl/

Ta


-- 
http://annevankesteren.nl/


Re: Intent to implement: WebGL 2.0

2014-05-08 Thread Gervase Markham
On 08/05/14 12:56, Benoit Jacob wrote:
 (*plug*) this might be useful reading:
 https://hacks.mozilla.org/2013/04/the-concepts-of-webgl/

Comedy. I just read that article, and thought "this article is awesomely
useful". I then looked at the comments, and it turned out that the first
comment is from me a year ago, saying "this article is awesomely
useful". :-)

Gerv



Re: Disabling strict warnings as errors in xpcshell

2014-05-08 Thread Gavin Sharp
 I think what we should do is confirm that strict warnings as errors is
 actually turned on for xpcshell, and if so, where this happens.

Indeed. Can you set a breakpoint in toggleWerror and see what trips it?

Gavin

On Thu, May 8, 2014 at 7:02 AM, Eddy Bruel ejpbr...@mozilla.com wrote:
 To be clear, I don't actually know where the werror flag for xpcshell tests
 is set.

 What I've observed is:
 -If we Cu.import a JSM that requires module X, and then also require
 module X from head_dbg.js after that, all is well.
 -If we require module X from head_dbg.js first, and then also require
 module X from the JSM after that, we get strict errors.

 What I've gathered from reading the code is:
 -Each JSContext has a werror flag, which is set to false by default.
 -Each JSScript inherits the werror flag from the JSContext in which it
 is compiled.
 -loadSubScript uses the JSContext of its caller.
 -Cu.import loads JSM's in a separate JSContext, which doesn't have the
 flag set.
 -Our CommonJS loader uses loadSubScript under the hood, and caches
 modules.

 The conclusion I drew from that is that apparently, the werror flag is set
 for the context in which head_dbg.js is compiled: if the module gets loaded
 from that context first, it inherits the werror flag, causing strict
 warnings to be turned into errors. Conversely, if the JSM gets to load the
 module first, it gets loaded from the JSM's context, which doesn't have the
 flag set, and everything is fine.
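For reference, the two load paths just described look roughly like this in an xpcshell test (a sketch only; the resource:// paths are placeholders, while the subscript-loader contract id, Cu.import, and loadSubScript are the real mechanisms):

  // loadSubScript evaluates the file on the caller's JSContext (so it
  // inherits the test's strict-warnings-as-errors bit), while Cu.import
  // compiles the JSM in its own context.
  const { utils: Cu, classes: Cc, interfaces: Ci } = Components;

  Cc["@mozilla.org/moz/jssubscript-loader;1"]
    .getService(Ci.mozIJSSubScriptLoader)
    .loadSubScript("resource://example/module-x.js", this); // caller's JSContext

  Cu.import("resource://example/ModuleX.jsm", this);         // separate JSContext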

 This is the best explanation I could come up with based on the behavior I've
 observed. It could be that I missed something, and the werror flag isn't
 actually set for xpcshell tests, but if that's the case, the errors are still
 coming from *somewhere*, and this only strengthens my argument that it's
 extremely confusing what causes them to be enabled. So my main point
 remains, and this is still an issue for us.

 I think what we should do is confirm that strict warnings as errors is
 actually turned on for xpcshell, and if so, where this happens.

 CC'ing bholley, because he probably knows where to look.


 On 08/05/14 03:45, Gavin Sharp wrote:

 To elaborate:
 - Bug 524781 is still open
 - I don't see any reference to -werror or -S in runxpcshelltests.py

 Gavin

 On Wed, May 7, 2014 at 4:48 PM, Gavin Sharp ga...@gavinsharp.com wrote:

 When xpcshell tests are run, they flip a bit on the initial JSContext, one
 that's off by default and that tells spidermonkey to make the strict warning
 messages into error messages.

 Do you have a pointer to where this happens? I've never heard of this,
 and couldn't find it MXRing.

 Gavin


 On Wed, May 7, 2014 at 4:39 PM, Fitzgerald, Nick
 nfitzger...@mozilla.com wrote:

 On 5/7/14, 4:21 PM, Gavin Sharp wrote:

 What does "get rid of strict warnings as errors for xpcshell tests"
 mean in practice?

 It means that our non-standard spidermonkey strict mode (not the actual
 strict mode) console warnings would continue to simply be console warning
 messages rather than console error messages in xpcshell tests.


 I don't understand how you're getting into the situation of
 "accidentally turn[ing] on strict warnings as errors".

 Eddy can explain this better than me because he's been deep in these
 trenches the last couple weeks, but I'll give it a shot.

 When xpcshell tests are run, they flip a bit on the initial JSContext, one
 that's off by default and that tells spidermonkey to make the strict warning
 messages into error messages. Depending on how you load JS code, you might
 share the JSContext or you might not; for example, loadSubScript shares the
 JSContext, while Cu.import doesn't. Eddy has been making changes to the
 debugger server so that it will run in workers so we can debug workers. He
 has been replacing Cu.import calls with calls to a module loader that uses
 loadSubScript underneath the hood. So now code that used to be evaluated
 with this bit flipped off (because it is off by default and it was getting
 its own JSContext) is being evaluated with the bit on (because it is
 inheriting the JSContext from the xpcshell test). The result is error
 messages which cause devtools tests to fail.




Re: Intent to implement: WebGL 2.0

2014-05-08 Thread Vladimir Vukicevic
On Thursday, May 8, 2014 5:25:49 AM UTC-4, Henri Sivonen wrote:
 Making the Web little-endian may indeed have been the right thing.
 Still, at least from the outside, it looks like the WebGL group didn't
 make an intentional wise decision to make the Web little-endian but
 instead made a naive decision that coupled with the general Web
 developer behavior and the dominance of little-endian hardware
 resulted in the Web becoming little-endian.
 
 http://www.khronos.org/registry/typedarray/specs/latest/#2.1 still
  says "The typed array view types operate with the endianness of the
  host computer." instead of saying "The typed array view types operate
  in the little-endian byte order. Don't build big endian systems
  anymore."
 
 *Maybe* that's cunning politics to get a deliberate
 little-endianization pass without more objection, but from the spec
 and a glance at the list archives it sure looks like the WebGL group
 thought that it's reasonable to let Web developers deal with the API
 behavior differing on big-endian and little-endian computers, which
 isn't at all a reasonable expectation given everything we know about
 Web developers.

This is a digression, and I'm happy to discuss the endianness of typed 
arrays/webgl in a separate thread, but this decision was made because it made 
the most sense, both from a technical perspective (even for big endian 
machines!) and from an implementation perspective.
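For readers following along, a minimal illustration of the observable difference under discussion, using only the standard typed array and DataView APIs (nothing here is WebGL-specific):

  // The multi-byte typed array views expose the host's byte order, which is
  // the behavior the quoted spec text describes; DataView takes an explicit flag.
  const buf = new ArrayBuffer(4);
  new Uint32Array(buf)[0] = 0x11223344;   // stored in host byte order
  const bytes = new Uint8Array(buf);
  // little-endian host: bytes[0..3] == 0x44, 0x33, 0x22, 0x11
  // big-endian host:    bytes[0..3] == 0x11, 0x22, 0x33, 0x44

  new DataView(buf).setUint32(0, 0x11223344, true); // explicitly little-endian,
                                                    // same bytes on any host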

You seem to have a really low opinion of Web developers.  That's unfortunate, 
but it's your opinion.  It's not one that I share.  The Web is a complex 
platform.  It lets you do simple things simply, and it makes complex/difficult 
things possible.  You need to have some development skill to do the 
complex/difficult things.  I'd rather have that than make those things 
impossible.

- Vlad


Re: Intent to implement: WebGL 2.0

2014-05-08 Thread Ehsan Akhgari

On 2014-05-08, 5:51 AM, Anne van Kesteren wrote:

On Wed, May 7, 2014 at 6:19 PM, Benoit Jacob jacob.benoi...@gmail.com wrote:

The "Implementations are free to return a context that implements a higher
version" part violates the above requirement 1. in your email, "The WebGL
working group wants web pages to opt in to the WebGL2 specific parts of the
functionality explicitly."

For that reason, a necessary modification to this plan is to remove the
sentence, "Implementations are free to return a context that implements a
higher version [...]".

I agree with everything else.


It seems like you want to be able to do that going forward so you
don't have to maintain a large matrix forever, but at some point say
you drop the idea that people will want 1 and simply return N if they
ask for 1.


Yeah, this is the intent I think.

Cheers,
Ehsan



update-timer runs during mochitests, causing intermittent failures. We should disable it.

2014-05-08 Thread Irving Reid
Diagnosing bug 1006075 led me to believe that our mochitest-browser
framework instantiates and runs the update-timer service
(http://dxr.mozilla.org/mozilla-central/source/toolkit/mozapps/update/nsIUpdateTimerManager.idl,
http://dxr.mozilla.org/mozilla-central/source/toolkit/mozapps/update/nsUpdateTimerManager.js).

Just my luck: the fx-team TBPL test runners for Linux/Debug take just
the right amount of time to run the test suite, so the update timer
service usually invokes the Add-on Manager's background update at the
same time as the test suite is trying to test it, leading to
unpredictable behaviour.
sanity check (since backed out) to throw an error if a background update
starts while another is in progress.

As far as I can tell, very few tests depend on the update manager; there
is a small test suite specifically for the update manager, but I don't
see anything else that should indirectly depend on having it run.


I propose that we disable the update timer manager in at least the
mochitest-browser test suite and xpcshell (if it isn't already); I'm
open to suggestions for other suites where it is on and shouldn't be.
I'm also interested in suggestions for the best way to implement the
disable.
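One possible shape for the disable, sketched here only as a starting point (it assumes setting prefs in the suite's profile setup is the right place, and that app.update.enabled and extensions.update.enabled are the relevant knobs; the actual mechanism is exactly what this thread is asking about):

  // Sketch: keep background update checks quiet in a test profile.
  // Whether these prefs are sufficient, and where to set them, is the open question.
  Components.utils.import("resource://gre/modules/Services.jsm");
  Services.prefs.setBoolPref("app.update.enabled", false);        // application update checks
  Services.prefs.setBoolPref("extensions.update.enabled", false); // add-on background updates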

 - irving -


Re: Intent to implement: WebGL 2.0

2014-05-08 Thread Ehsan Akhgari

On 2014-05-08, 8:55 AM, Anne van Kesteren wrote:

On Thu, May 8, 2014 at 12:56 PM, Benoit Jacob jacob.benoi...@gmail.com wrote:

WebGL is low-level and generalistic enough that it is not specifically a
3d graphics API. I prefer to call it a low-level or generalistic graphics
API.


Fair, forgot about that argument. webgles or some such might be
better then. Or enshrine webgl2 forever, but that seems rather
weird.


One issue which we discussed at the meeting yesterday is how we should
handle the fact that a version dictionary member would be optional
anyway, and what the default should be.  I think it would be very hard
to pick a sane default when it is not passed in, because of the goal of
preventing content from accidentally working, so I think we might need to end
up with "webgl" + N forever...  :/  Which is not great, but if this ends
up being the only bad part of the solution, I think I would be happy
with it.
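For concreteness, the alternative being weighed would look something like this from script (a hypothetical sketch; "version" is not an existing context creation attribute, which is exactly the default-value problem described above):

  // Hypothetical: a single "webgl" id plus an opt-in "version" creation attribute.
  const gl2 = canvas.getContext("webgl", { version: 2 }); // explicit opt-in to WebGL 2
  const gl1 = canvas.getContext("webgl");                 // what should the default be?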


Cheers,
Ehsan



Test failures in mochitest-browser caused by doing work inside DOM event handlers

2014-05-08 Thread Irving Reid
I've recently fought my way through a bunch of intermittent test
failures in the Add-on Manager mochitest-browser suite, and there's a
common anti-pattern where tests receive a Window callback, usually
unload, and proceed to do significant work inside that event handler
(e.g. opening/closing/focusing other windows; see one detailed case at
https://bugzilla.mozilla.org/show_bug.cgi?id=608820#c281).

The 'unload' event is signalled before the window is completely
unloaded; I don't know the fine details but the stack is in a state
where other window operations sometimes fail.

Specifically, I found it a bit surprising that the run_next_test()
function in the async test harness starts the next test immediately on
top of the current JS stack, inside whatever callbacks are currently in
progress.

I just filed bug 1007906 proposing that we modify the run_next_test
function in the mochitest framework to always schedule the next test for
a future spin of the event loop, to allow the stack to unwind - the
xpcshell test harness already does this, see
http://dxr.mozilla.org/mozilla-central/source/testing/xpcshell/head.js#1443.
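A sketch of what the proposed change amounts to (executeSoon is the existing helper for posting to a future spin of the event loop; the test queue and the finish() call are simplified placeholders for the real harness code):

  // Start the next test only after the current stack has unwound, instead of
  // calling it directly from inside whatever callback invoked run_next_test().
  function run_next_test() {
    executeSoon(function () {
      let test = gTestQueue.shift();   // gTestQueue: placeholder for the harness's queue
      if (test) {
        test();
      } else {
        finish();                      // end of the browser-chrome test
      }
    });
  }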

We should also update the MDN documentation about writing mochitests to
strongly advise making all DOM and Window callback listeners as small as
possible; my preference is to advocate using Promises as callback
listeners, because the stack is always unwound before the .next handler is
invoked.

 - irving -


Re: Removing telemetry probes while keeping them reported for ESR

2014-05-08 Thread Vladan Djeric

On 07/05/2014 9:51 AM, Henri Sivonen wrote:

I would like to retire some telemetry probes in Firefox 32, but at the
same time, I would like the Telemetry Dashboard to keep reporting data
for those probes throughout the lifetime of Thunderbird 31 ESR. To
make sure that the probes don't disappear from the Telemetry Dashboard
during the ESR lifetime, should I leave the probe definitions in
Histograms.json in m-c and set their expires_in_version to 32 instead
of just removing the entries from Histograms.json in m-c?


Right, the best way to do this is to set the expires_in_version to 32
and not delete the histogram from Histograms.json until the histogram
is no longer needed anywhere. The Telemetry server also uses the definitions
from this file.


Note that if you set expires_in_version to 32, you won't collect any 
data from 32 / 32.0a1 / 32.0b1 etc.
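As an illustration, a retiring probe's entry would stay in Histograms.json looking roughly like this (the probe name is hypothetical; the kind and description are whatever the probe already uses):

  "MY_RETIRED_PROBE": {
    "expires_in_version": "32",
    "kind": "boolean",
    "description": "Hypothetical probe kept in the file so the dashboard and server still know about it"
  }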


Also from what I gather, Thunderbird ESRs have been replaced with 
regular Thunderbird releases.



Re: Intent to implement: WebGL 2.0

2014-05-08 Thread Jonas Sicking
On Thu, May 8, 2014 at 1:40 PM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:
 On 2014-05-08, 8:55 AM, Anne van Kesteren wrote:

 On Thu, May 8, 2014 at 12:56 PM, Benoit Jacob jacob.benoi...@gmail.com
 wrote:

 WebGL is low-level and generalistic enough that it is not specifically a
 3d graphics API. I prefer to call it a low-level or generalistic
 graphics
 API.


 Fair, forgot about that argument. webgles or some such might be
 better then. Or enshrine webgl2 forever, but that seems rather
 weird.


 One issue which we discussed at the meeting yesterday is how we should
 handle the fact that a version dictionary member would be optional anyway,
 and what the default should be.  I think it would be very hard to pick a
 sane default when that is not passed in because of the goal of preventing
 content accidentally working, so I think we might need to end up with
 webgl + N forever...  :/  Which is not great, but if this ends up being
 the only bad part of the solution, I think I would be happy with it.

Based on what has been discussed in this thread so far, specifically that:
* We expect that UAs and hardware will make these features available
in chunks rather than a feature here and a feature there.
* We want devs to opt in to getting access to new features so that
they don't accidentally depend on a particular chunk. (FWIW, this
sounds like an awesome thing to me given the hardware dependency
here).
* We expect each new version of WebGL to be fully backwards compatible.

I don't see any particular problems with the
createContext("webgl"/"webgl2"/"webgl3") approach.

A well written page will detect what level is supported and fall back
to older versions as needed.
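In getContext() terms (the canvas entry point the earlier messages use), that fallback is just a few lines; a sketch, assuming the proposed "webgl2" id alongside the existing "webgl" one:

  function getBestGLContext(canvas) {
    // Try the newest version first, then fall back; callers can check which
    // level they actually got.
    for (let id of ["webgl2", "webgl"]) {
      let gl = canvas.getContext(id);
      if (gl) {
        return gl;
      }
    }
    return null; // no WebGL at all
  }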

A poorly written page will just do createContext("webgl2") and not
fall back to webgl1. Whatever we do we can't get such a page to run
properly on a device which doesn't support webgl2, no matter what
syntax we use.

A really poorly written page might do createContext("webgl2") and then
only use the webgl1 feature set. Given the constraint that we want
people to opt-in to additional functionality over the base webgl1
profile, there's always a risk that people will attempt to opt in to
more than they need and then refuse to work when those opt-ins fail.

What bad code patterns is it that people are worried that this
proposal will lead to?

The only thing that I could see which in some sense leads to a
materially simpler API is to simply do createContext("webgl") and ask
developers to do feature detection on the returned context. However
that abandons the opt-in design which I think would be worse.
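Concretely, that alternative would look something like this (a sketch, again via canvas.getContext(); createVertexArray is assumed here as a WebGL2-only probe):

  // Single context id; the page probes the returned object for newer features.
  let gl = canvas.getContext("webgl");
  let hasWebGL2Features = gl && typeof gl.createVertexArray === "function";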

No matter what, I think the situation here is dramatically different
from CSS quirks mode and JS "use strict". In both those situations
there are backwards incompatible modes. That does not appear to be the
case here.

/ Jonas


Proposed W3C Charter: CSS Working Group

2014-05-08 Thread L. David Baron
The W3C is proposing a revised charter for:

  Cascading Style Sheets (CSS) Working Group
  http://lists.w3.org/Archives/Public/public-new-work/2014May/.html
  http://www.w3.org/Style/2013/css-charter.html
  deadline for comments: May 29

Mozilla has the opportunity to send comments or objections through
May 29.  Please reply to this thread if you think there's something
we should say.

As with charters for other long-lived groups, the details in the
list of deliverables largely matter only in that they might
constrain what the group is allowed to work on without a new
charter; the dates tend to be largely fiction.

My intent (unless I get other comments) is to support the charter,
but point to comments that have been made on the working group
mailing list since the charter was sent out for review (some of
which are about documents omitted from the list) so that the W3C has
a chance to incorporate those comments (which it probably wouldn't
if no AC representative mentioned them).

-David

-- 
𝄞   L. David Baron http://dbaron.org/   𝄂
𝄢   Mozilla  https://www.mozilla.org/   𝄂
 Before I built a wall I'd ask to know
 What I was walling in or walling out,
 And to whom I was like to give offense.
   - Robert Frost, Mending Wall (1914)


Proposal to revise W3C Process

2014-05-08 Thread L. David Baron
W3C recently published the following proposal to revise the W3C
process, which is effectively in the Proposed Recommendation stage
right now:
  http://www.w3.org/2014/05/Process-20140506/

The document contains a description of the major changes in the
Status of this Document section.

The most substantive change is to merge the Last Call and Candidate
Recommendation stages of the process, and begin the Advisory
Committee review at Candidate Recommendation.  There are also
substantive changes to the rules for what constitutes adequate
implementation experience.

It has been developed mostly in public, in a community group:
http://www.w3.org/community/w3process/
(I'm nominally a member, but didn't participate much.)

Mozilla has the opportunity to send comments as a W3C member through
June 13.  Comments can also be made on the public-w3process mailing
list.  The document will also be under discussion at the Advisory
Committee meeting on June 9-10, which I am attending, so if there
are comments you think Mozilla should make, it's best that I hear
about them prior to that meeting.

This is an incremental change to the process.  It's making some
rather substantive changes to the stages W3C specifications go
through, and mostly leaving other areas of the process alone.  I'd
prefer not to drag other (unrevised) areas of the process document
into this process cycle in the hopes that there will be further
revisions of the process document to improve other areas of the
process (and thus not go another 9 years without a revision to the
process), although I concede I haven't talked to others about that
idea.

If there are things you think Mozilla should raise in its comments,
please bring them up in this thread.

-David

-- 
𝄞   L. David Baron http://dbaron.org/   𝄂
𝄢   Mozilla  https://www.mozilla.org/   𝄂
 Before I built a wall I'd ask to know
 What I was walling in or walling out,
 And to whom I was like to give offense.
   - Robert Frost, Mending Wall (1914)




Re: Proposed W3C Charter: CSS Working Group

2014-05-08 Thread Jonas Sicking
I'd really like to see the CSS WG spend some time on properties that
allowed more control over scrolling and zooming. Also something that
addresses the complexity involved in building long scrollable lists.

Right now a lot of websites implement their own scrolling behavior in
JS by listening to UI events, cancelling their default behavior, and
then setting various scroll related values through the DOM. UIs that
snap scrolling to certain elements or that implement edge effects
can't be done any other way.
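The pattern being described looks roughly like this (an illustrative sketch; the element id and the 50px snap interval are made up):

  // Take over wheel scrolling on the main thread and snap to 50px rows.
  let list = document.getElementById("list");
  list.addEventListener("wheel", function (event) {
    event.preventDefault();  // cancel the default scroll...
    let target = list.scrollTop + event.deltaY;
    list.scrollTop = Math.round(target / 50) * 50;  // ...and re-implement it with snapping
  });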

However this doesn't work well at all as we are moving to a world
where scrolling happens off the main thread.

Likewise, being able to implement a UI where a user can do zooming
currently requires using JS to catch touch/mousewheel events,
cancelling them, and then using the DOM to implement a zooming UI.

UIs that allow zooming of a part of a webpage, for example zooming in
on a picture or an email while leaving surrounding UI intact, can't be
implemented right now without main-thread roundtrips.

The touch-action CSS property defined in pointer-events attempts to
address some of this, but falls far short. The obvious first problem
is that it's touch specific (which is sort of ironic given that it's a
spec that attempts to remove the separation between touch and mouse
input).

Finally there's the issue of how to use the DOM and CSS to create a
lazily populated list. Currently lazily populating the DOM from a
database leads to a lot of churn in reflows of content that's already
on the page.

These issues are *the* top difficulties on the layout side in creating
a performant UI on low end mobile devices. I don't see any
deliverables listed in the charter that attempt to address this.
Should we ask that this be added, or is it covered by any of the existing
ones?

/ Jonas

On Thu, May 8, 2014 at 5:22 PM, L. David Baron dba...@dbaron.org wrote:
 The W3C is proposing a revised charter for:

   Cascading Style Sheets (CSS) Working Group
   http://lists.w3.org/Archives/Public/public-new-work/2014May/.html
   http://www.w3.org/Style/2013/css-charter.html
   deadline for comments: May 29

 Mozilla has the opportunity to send comments or objections through
 May 29.  Please reply to this thread if you think there's something
 we should say.

 As with charters for other long-lived groups, the details in the
 list of deliverables largely matter only in that they might
 constrain what the group is allowed to work on without a new
 charter; the dates tend to be largely fiction.

 My intent (unless I get other comments) is to support the charter,
 but point to comments that have been made on the working group
 mailing list since the charter was sent out for review (some of
 which are about documents omitted from the list) so that the W3C has
 a chance to incorporate those comments (which it probably wouldn't
 if no AC representative mentioned them).

 -David

 --
 𝄞   L. David Baron http://dbaron.org/   𝄂
 𝄢   Mozilla  https://www.mozilla.org/   𝄂
  Before I built a wall I'd ask to know
  What I was walling in or walling out,
  And to whom I was like to give offense.
- Robert Frost, Mending Wall (1914)


Re: Time to revive the require SSE2 discussion

2014-05-08 Thread Jonas Sicking
I'm curious how much of that 1% is on old versions of Firefox that
aren't updating anyway?

/ Jonas

On Thu, May 8, 2014 at 5:42 PM,  matthew.br...@gmail.com wrote:
 On Tuesday, January 3, 2012 4:37:53 PM UTC-8, Benoit Jacob wrote:
 2012/1/3 Jeff Muizelaar jmuizel...@mozilla.com:

  On 2012-01-03, at 2:01 PM, Benoit Jacob wrote:

  2012/1/2 Robert Kaiser ka...@kairo.at:

  Jean-Marc Desperrier schrieb:

  According to https://bugzilla.mozilla.org/show_bug.cgi?id=594160#c6 ,
  the Raw Dump tab on crash-stats.mozilla.com shows the needed
  information; you need to sort out from the info on the second line (CPU
  maker, family, model, and stepping) whether SSE2 is there or
  not (with a little search, I can find that info again; bug 593117 gives
  a formula that's correct for most of the cases).

  https://crash-analysis.mozilla.com/crash_analysis/ holds
  *-pub-crashdata.csv.gz files that have that info from all Firefox
  desktop/mobile crashes on a given day, you should be able to analyze that
  for this info - with a bias, of course, as it's only people having crashes
  that you see there. No idea if the less biased telemetry samples have that
  info as well.

  On yesterday's crash data, assuming that AuthenticAMD\ family\
  [1-6][^0-9] is the proper way to identify these old AMD CPUs (I
  didn't check that very well), I get these results:

  The measurement I have used in the past was:

  CPUs have sse2 if:
  if vendor == AuthenticAMD and family >= 15
  if vendor == GenuineIntel and family >= 15 or (family == 6 and (model == 9
  or model > 11))
  if vendor == CentaurHauls and family >= 6 and model >= 10

 Thanks.

 AMD and Intel CPUs amount to 296362 crashes:

 bjacob@cahouette:~$ egrep AuthenticAMD\|GenuineIntel 20120102-pub-crashdata.csv | wc -l
 296362

 Counting SSE2-capable CPUs:

 bjacob@cahouette:~$ egrep GenuineIntel\ family\ 1[5-9] 20120102-pub-crashdata.csv | wc -l
 58490
 bjacob@cahouette:~$ egrep GenuineIntel\ family\ [2-9][0-9] 20120102-pub-crashdata.csv | wc -l
 0
 bjacob@cahouette:~$ egrep GenuineIntel\ family\ 6\ model\ 9 20120102-pub-crashdata.csv | wc -l
 792
 bjacob@cahouette:~$ egrep GenuineIntel\ family\ 6\ model\ 1[2-9] 20120102-pub-crashdata.csv | wc -l
 52473
 bjacob@cahouette:~$ egrep GenuineIntel\ family\ 6\ model\ [2-9][0-9] 20120102-pub-crashdata.csv | wc -l
 103655
 bjacob@cahouette:~$ egrep AuthenticAMD\ family\ 1[5-9] 20120102-pub-crashdata.csv | wc -l
 59463
 bjacob@cahouette:~$ egrep AuthenticAMD\ family\ [2-9][0-9] 20120102-pub-crashdata.csv | wc -l
 8120

 Total SSE2-capable CPUs:

 58490 + 792 + 52473 + 103655 + 59463 + 8120 = 282993

 1 - 282993 / 296362 = 0.045

 So the proportion of non-SSE2-capable CPUs among crash reports is 4.5 %.

 Just for the record, I coded this analysis up here: 
 https://gist.github.com/matthew-brett/9cb5274f7451a3eb8fc0

 The proportion of non-SSE2 CPUs is apparently now at about one percent:

 20120102-pub-crashdata.csv.gz: 4.53
 20120401-pub-crashdata.csv.gz: 4.24
 20120701-pub-crashdata.csv.gz: 2.77
 20121001-pub-crashdata.csv.gz: 2.83
 20130101-pub-crashdata.csv.gz: 2.66
 20130401-pub-crashdata.csv.gz: 2.59
 20130701-pub-crashdata.csv.gz: 2.20
 20131001-pub-crashdata.csv.gz: 1.92
 20140101-pub-crashdata.csv.gz: 1.86
 20140401-pub-crashdata.csv.gz: 1.12

 Cheers,

 Matthew
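For readability, the SSE2 rule quoted in this thread as a small function (a sketch; the field names are illustrative, and the actual CSV parsing lives in the linked gist):

  // The SSE2 predicate quoted earlier, over the CPU fields found in the
  // crash CSV (vendor, family, model).
  function hasSSE2(vendor, family, model) {
    if (vendor === "AuthenticAMD")
      return family >= 15;
    if (vendor === "GenuineIntel")
      return family >= 15 || (family === 6 && (model === 9 || model > 11));
    if (vendor === "CentaurHauls")
      return family >= 6 && model >= 10;
    return false; // unknown vendor: assume no SSE2
  }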


Re: Enabling new HTTP cache on nightly (browser only, not automated tests) soon

2014-05-08 Thread Nicholas Nethercote
On Thu, May 8, 2014 at 3:48 PM, Vladan Djeric vdje...@mozilla.com wrote:
 Why didn't TP5 report a regression in memory usage?

Because TP5's memory measurements are meagre and usually fail to
detect even obvious regressions. And this leak only occurred in
unusual circumstances; AWSY is much better than TP5 at detecting
regressions (though still far from perfect) and it missed this case
too.

Nick