Re: packaged apps and origins

2013-04-26 Thread Anne van Kesteren
On Fri, Apr 26, 2013 at 1:34 AM, Ben Adida benad...@mozilla.com wrote:
 Currently, packaged apps run in an origin that is newly minted for each
 device installation, effectively a GUID that differs from device to device.
 This works up until the point where the rest of the Web expects a stable
 origin across devices, e.g. OAuth and OpenID flows, and Persona. Since
 origins are so critical to the Web, I expect to see many more failures over
 time.

What is origin used for? Can Persona not use object-capabilities instead?


--
http://annevankesteren.nl/
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


EOL of XP = EOL of GDI?

2013-04-26 Thread papalowa
Greetings,

it might be a little early, considering that XP hasn't reached its EOL yet 
and even then will probably be far from out of favor, but I was wondering 
whether, once the time has come, it will also be time to move away from all 
GDI code paths for font rendering on Windows. In return, the configuration 
of running DirectWrite in a non-HA-D2D context could probably be supported 
officially, though it would need some improvements; that is something which 
could be worked on even before GDI is really abandoned for font rendering.

The reason I ask: I would like to have DirectWrite rendering everywhere, but 
one of my machines has only a non-WDDM-1.1+ GPU, and if I force DirectWrite 
rendering (non-D2D) it looks horrible, in that it's grey-scale anti-aliased 
most of the time (and that on a display that has not much DPI to begin 
with). I know this is an exotic setup, I am probably in the minority, and 
for using an unsupported configuration all these inelegances serve me right, 
but I thought that following the path I outlined might help me, save some 
code and, not to forget, make the Windows ecosystem more consistent.

What do you think?

Thanks,
Peter


Re: Improving Mac OS X 10.6 test wait times by reducing 10.7 load

2013-04-26 Thread jmaher
On Thursday, April 25, 2013 4:12:16 PM UTC-4, Ed Morley wrote:
 On 25 April 2013 20:14:10, Justin Lebar wrote:
  Is this what you're saying?
  * 10.6 opt tests - per-checkin (no change)
  * 10.6 debug tests - reduced
  * 10.7 opt tests - reduced
  * 10.7 debug tests - reduced

  * reduced -- m-c, m-a, m-b, m-r, esr17

  Yes.

  Now that I think about this more, maybe we should go big or go home:
  change 10.6 opt tests to reduced as well, and see how it goes.  We can
  always change it back.

  If it goes well, we can try to do the same thing with the Windows tests.

  We should get the sheriffs to sign off.

 Worth a shot, we can always revert :-) Only thing I might add, is that
 we'll need a way to opt into 10.6 test jobs on Try, in case someone has
 to debug issues found on mozilla-central (eg using sfink's undocumented
 OS version specific syntax).

 Ed

Just this Wednesday I had to revert a talos change on inbound due to 
failures that occurred only on 10.6. The cause was a different version of 
Python on 10.6 :(

-Joel


Re: packaged apps and origins

2013-04-26 Thread Ben Adida

On 4/26/13 3:02 AM, Anne van Kesteren wrote:
 What is origin used for? Can Persona not use object-capabilities instead?

Do you mean that we should completely revamp the Persona protocol, 
including assertions to an origin and the way we present the login UI to 
users, because packaged apps don't conform to the way other web apps work?


That would also mean asking OpenID and OAuth to change what they do.

That seems backwards to me. Reestablishing real origins is a better path 
forward to leverage existing web architecture.


-Ben


Re: Improving Mac OS X 10.6 test wait times by reducing 10.7 load

2013-04-26 Thread jmaher
On Friday, April 26, 2013 9:49:18 AM UTC-4, Armen Zambrano G. wrote:
 Maybe we can keep one of the talos jobs around? (until releng fixes the
 various python versions' story)

 IIUC this was more of an infra issue rather than a Firefox testing issue.

It was infra-related, but it was specific to the 10.6 platform. Even knowing 
that, I fully support the proposed plan. We could have easily determined the 
root cause of the 10.6-specific failure a day later on a different branch.


Re: Improving Mac OS X 10.6 test wait times by reducing 10.7 load

2013-04-26 Thread Phil Ringnalda
On 4/25/13 1:12 PM, Ed Morley wrote:
 On 25 April 2013 20:14:10, Justin Lebar wrote:
 Is this what you're saying?
 * 10.6 opt tests - per-checkin (no change)
 * 10.6 debug tests- reduced
 * 10.7 opt tests - reduced
 * 10.7 debug tests - reduced

 * reduced -- m-c, m-a, m-b, m-r, esr17

 Yes.

 Now that I think about this more, maybe we should go big or go home:
 change 10.6 opt tests to reduced as well, and see how it goes.  We can
 always change it back.

 If it goes well, we can try to do the same thing with the Windows tests.

 We should get the sheriffs to sign off.
 
 Worth a shot, we can always revert :-) Only thing I might add, is that
 we'll need a way to opt into 10.6 test jobs on Try, in case someone has
 to debug issues found on mozilla-central (eg using sfink's undocumented
 OS version specific syntax).

So what we're saying is that we are going to completely reverse our
previous tree management policy?

Currently, m-c is supposed to be the tree that's safely unbroken, and we
know it's unbroken because the tests that we run on it have already been
run on the tree that merged into it, and you should almost never push
directly to it unless you're in a desperate hurry to hit a nightly.

This change would mean that we expect to have merges of hundreds of
csets from inbound sometimes break m-c with no idea which one broke it,
that we expect to sometimes have permaorange on it for days, and that
it's better to push your widget/cocoa/ pushes directly to m-c than to
inbound.


Re: Improving Mac OS X 10.6 test wait times by reducing 10.7 load

2013-04-26 Thread Justin Lebar
 So what we're saying is that we are going to completely reverse our
 previous tree management policy?

Basically, yes.

Although, due to coalescing, do you always have a full run of tests on
the tip of m-i before merging to m-c?

A better solution would be to let you trigger a full set of tests (w/o
coalescing) on m-i before merging to m-c.  We've been asking for a
similar feature for tryserver (letting us add new jobs to an existing
push) for a long time.  Perhaps if we made this change, we could get
releng to implement that feature sooner rather than later, particularly
if this change caused pain to other teams who pull from a broken m-c.

I am not above effecting a sense of urgency in order to get bugs fixed.  :)

 Currently, m-c is supposed to be the tree that's safely unbroken, and we
 know it's unbroken because the tests that we run on it have already been
 run on the tree that merged into it, and you should almost never push
 directly to it unless you're in a desperate hurry to hit a nightly.

 This change would mean that we expect to have merges of hundreds of
 csets from inbound sometimes break m-c with no idea which one broke it,
 that we expect to sometimes have permaorange on it for days, and that
 it's better to push your widget/cocoa/ pushes directly to m-c than to
 inbound.


Re: Improving Mac OS X 10.6 test wait times by reducing 10.7 load

2013-04-26 Thread Ryan VanderMeulen

On 4/26/2013 11:11 AM, Justin Lebar wrote:
  So what we're saying is that we are going to completely reverse our
  previous tree management policy?

 Basically, yes.

 Although, due to coalescing, do you always have a full run of tests on
 the tip of m-i before merging to m-c?

Yes. Note that we generally aren't merging inbound tip to m-c - we're 
taking a known-green cset (including PGO tests).



Re: Some data on mozilla-inbound

2013-04-26 Thread Wesley Johnston
Maybe. I started to avoid it if possible around then, but almost 4 hours for 
results still is basically unusable.

- Wes

- Original Message -
From: Phil Ringnalda philringna...@gmail.com
To: dev-platform@lists.mozilla.org
Sent: Friday, April 26, 2013 8:01:25 AM
Subject: Re: Some data on mozilla-inbound

On 4/25/13 4:47 PM, Wesley Johnston wrote:
 Requesting one set of tests on one platform is a 6-10 hour turnaround for me.

That's surprising. https://tbpl.mozilla.org/?tree=Try&rev=9d1daf69061d
was a midday -b do -p all -u all with a 3 hour 40 minute end-to-end.

Or did you mean, as a great many people do while discussing try these
days, back in February when I stopped using try because it was so awful
then, requesting one set of tests...?



Re: Some data on mozilla-inbound

2013-04-26 Thread Phil Ringnalda
On 4/26/13 8:25 AM, Wesley Johnston wrote:
 Maybe. I started to avoid it if possible around then, but almost 4 hours for 
 results still is basically unusable.

Tell me about it - that's actually the same as the end-to-end on
inbound/central. Unfortunately, engineering is totally indifferent to
things like having doubled the cycle time for Win debug browser-chrome
since last November.



Re: Improving Mac OS X 10.6 test wait times by reducing 10.7 load

2013-04-26 Thread Phil Ringnalda
On 4/26/13 8:11 AM, Justin Lebar wrote:
 So what we're saying is that we are going to completely reverse our
 previous tree management policy?
 
 Basically, yes.
 
 Although, due to coalescing, do you always have a full run of tests on
 the tip of m-i before merging to m-c?

It's not just coincidence that the tip of most m-i -> m-c merges is a
backout - for finding a mergeable cset in the daytime, you're usually
looking at the last backout during a tree closure, when we sat and
waited to get tests run on it. Otherwise, you pick one that looks
possible, and then figure out what got coalesced up and see how that did
where it got coalesced.


Re: Improving Mac OS X 10.6 test wait times by reducing 10.7 load

2013-04-26 Thread Armen Zambrano G.

Would we be able to go back to where we disabled 10.7 altogether?
Product (Asa, in a separate thread) and release drivers (Akeybl) were OK 
with the compromise of removing version-specific test coverage completely.

Side note: adding Mac PGO would increase the build load (besides this, we 
have to do a large PO, as we expect Mac wait times to show up as general 
load increases).

Not all load-reduction approaches are easy to implement (due to the way 
that buildbot is designed), and they do not ensure that we would reduce 
the load enough. It's expensive enough to support 3 different versions of 
Mac as it is, without bringing 10.9 to the table. We have to cut things 
at times.

One compromise that would be easy to implement and *might* reduce the 
load is to disable all debug jobs for 10.7.

cheers,
Armen

On 2013-04-26 11:29 AM, Justin Lebar wrote:
 As a compromise, how hard would it be to run the Mac 10.6 and 10.7
 tests on m-i occasionally, like we run the PGO tests?  (Maybe we could
 trigger them on the same csets as we run PGO; it seems like that would
 be useful.)



Re: Improving Mac OS X 10.6 test wait times by reducing 10.7 load

2013-04-26 Thread Armen Zambrano G.
Just disabling debug and talos jobs for 10.7 should reduce more than 50% 
of the load on 10.7. That might be sufficient for now.


Any objections on this plan?
We can re-visit later on if we need more disabled.

cheers,
Armen



Re: Improving Mac OS X 10.6 test wait times by reducing 10.7 load

2013-04-26 Thread Justin Lebar
 Would we be able to go back to where we disabled 10.7 altogether?

On m-i and try only, or everywhere?

On Fri, Apr 26, 2013 at 12:10 PM, Armen Zambrano G. arme...@mozilla.com wrote:
 Just disabling debug and talos jobs for 10.7 should reduce more than 50% of
 the load on 10.7. That might be sufficient for now.

 Any objections on this plan?
 We can re-visit later on if we need more disabled.

 cheers,
 Armen




Re: packaged apps and origins

2013-04-26 Thread Ben Adida

On 4/25/13 10:34 PM, jsmith.mozi...@gmail.com wrote:
 1. It's way too late for this work for v1.01 (i.e. v1.01 OOS)

I want to emphasize that the current architecture is not just 
inconvenient, it breaks a ton of things, including all login solutions 
for packaged apps. This is a major problem.

 2. We're past feature freeze for v1.1 (i.e. likely v1.1 OOS)

This is a major bug, not a feature request.

 3. I recall talking with Fabrice that this was a non-trivial amount
 of work for fixing this,

I'm having trouble seeing how that is. We can stage the feature in a 
couple of ways, first by letting marketplace packaged apps claim an 
origin. That's one line in the manifest and a change to the app's origin 
from app://guid to app://facebook.com. The self-hosted use case can be 
fixed later.
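As a hedged sketch of what that staging step could look like: the `origin` manifest field and the `appOrigin` helper below are purely illustrative names invented for this example, not a shipped manifest spec.

```javascript
// Hypothetical manifest for a marketplace packaged app. The "origin"
// field is the illustrative one-line addition; the other fields are a
// typical manifest shape.
const manifest = {
  name: "Example App",
  launch_path: "/index.html",
  origin: "app://facebook.com" // claimed stable origin
};

// Sketch of the fallback: without a claimed origin, keep today's
// behavior of minting a per-install GUID origin.
function appOrigin(manifest, installGuid) {
  return manifest.origin || "app://" + installGuid;
}

console.log(appOrigin(manifest, "ignored")); // app://facebook.com
console.log(appOrigin({}, "3f2b1c9e"));      // app://3f2b1c9e
```

With a claimed origin, OAuth/OpenID/Persona flows would see the same origin on every device instead of a freshly minted GUID.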


-Ben


Re: Automatic tree clobbering is coming

2013-04-26 Thread Gregory Szorc
Auto clobber is now opt in on mozilla-central.

You will need to add |mk_add_options AUTOCLOBBER=1| to your mozconfig to
enable auto clobber. The in-tree mozconfigs (used by automation and
possibly some developers) have auto clobber enabled.

Thank you for the patch, Ed Morley!

On 4/17/2013 4:12 PM, Gregory Szorc wrote:
 I agree that we should consider a compromise regarding the UI/UX of
 auto clobber. I have filed bug 863091.

 I would like to say that I view the object directory as a cache of the
 output of the build system. Since it's a cache, cache rules apply and
 data may disappear at any time. This analogy works well for my
 developer workflow - I never put anything not derived from the build
 system in my object directory. But, I don't know what other people are
 doing. Could the anti-auto-clobberers please explain where this
 analogy falls apart for your workflow?

 On 4/17/13 3:36 PM, Justin Lebar wrote:
 I think the possibility of deleting user data should be taken
 seriously.  Exactly who is doing the deletion (configure vs. make) is
 immaterial.  It's also not right to argue that since a majority of
 users don't expect to lose data, it's OK to silently delete data for a
 minority of them.

 I think we should either opt in to auto-clobbering or prompt to
 clobber by default and allow users to opt out of the prompt.

 On Thu, Apr 18, 2013 at 12:18 AM, Ralph Giles gi...@mozilla.com wrote:
 On 13-04-17 12:36 PM, Gregory Szorc wrote:

 It /could/, sure. However, I consider auto clobbering a core build
 system feature (sheriffs were very vocal about wanting it). As
 such, it
 needs to be part of client.mk. (Please correct me if I am wrong.)

 Ok. A makefile deleting things is less surprising than a configure
 script doing so.

   -r


Re: Improving Mac OS X 10.6 test wait times by reducing 10.7 load

2013-04-26 Thread Armen Zambrano G.


On 2013-04-26 12:14 PM, Justin Lebar wrote:
  Would we be able to go back to where we disabled 10.7 altogether?

 On m-i and try only, or everywhere?


The initial proposal was for disabling everywhere.

We could leave 10.7 opt jobs running everywhere as a compromise and 
re-visit after I re-purpose the first batch of machines.


best regards,
Armen



On Fri, Apr 26, 2013 at 12:10 PM, Armen Zambrano G. arme...@mozilla.com wrote:

Just disabling debug and talos jobs for 10.7 should reduce more than 50% of
the load on 10.7. That might be sufficient for now.

Any objections on this plan?
We can re-visit later on if we need more disabled.

cheers,
Armen


On 2013-04-26 11:50 AM, Armen Zambrano G. wrote:


Would we be able to go back to where we disabled 10.7 altogether?
Product (Asa in separate thread) and release drivers (Akeybl) were OK to
the compromise of version specific test coverage being removed completely.

Side note: adding Mac PGO would increase the build load (Besides this we
have to do a large PO as we expect Mac wait times to be showing up as
general load increases).

Not all reducing load approaches are easy to implement (due to the way
that buildbot is designed) and it does not ensure that we would reduce
it enough. It's expensive enough to support 3 different versions of Mac
as is without bringing 10.9 into the table. We have to cut things at
times.

One compromise that would be easy to implement and *might* reduce the
load is to disable all debug jobs for 10.7.

cheers,
Armen

On 2013-04-26 11:29 AM, Justin Lebar wrote:


As a compromise, how hard would it be to run the Mac 10.6 and 10.7
tests on m-i occasionally, like we run the PGO tests?  (Maybe we could
trigger them on the same csets as we run PGO; it seems like that would
be useful.)

On Fri, Apr 26, 2013 at 11:19 AM, Ryan VanderMeulen rya...@gmail.com
wrote:


On 4/26/2013 11:11 AM, Justin Lebar wrote:



So what we're saying is that we are going to completely reverse our
previous tree management policy?




Basically, yes.

Although, due to coalescing, do you always have a full run of tests on
the tip of m-i before merging to m-c?



Yes. Note that we generally aren't merging inbound tip to m-c - we're
taking
a known-green cset (including PGO tests).

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform





___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: packaged apps and origins

2013-04-26 Thread Fabrice Desré
On Fri, 26 Apr 2013 09:31:30 -0700, Ben Adida wrote:

 3. I recall talking with Fabrice that this was a non-trivial amount of 
 work for fixing this,
 
 I'm having trouble seeing how that is. We can stage the feature in a
 couple of ways, first by letting marketplace packaged apps claim an
 origin. That's one line in the manifest and a change to the app's origin
 from app://guid to app://facebook.com. The self-hosted use case can be
 fixed later.

Because my understanding was that people wanted packaged apps to have 
http(s) origins to allow them to retrieve data from remote sites without 
using CORS or SystemXHR. If this is just about choosing the app://XXX 
origin, this is much simpler indeed. But it doesn't really solve the 
issue for e.g. some OAuth providers that only accept http(s) redirection 
URIs.

The origin of hosted apps is already the origin of the manifest, so I 
don't think we have anything to change there.

Fabrice


Re: Improving Mac OS X 10.6 test wait times by reducing 10.7 load

2013-04-26 Thread Justin Lebar
I don't think I'm comfortable disabling this platform across the
board, or even disabling debug-only runs across the board.

As jmaher pointed out, there are platform differences here.  If we
disable this platform entirely, we lose visibility into rare but, we
seem to believe, possible events.

It seems like the only reason to disable everywhere instead of only on
m-i/try (or running less frequently on m-i, like we do with PGO) is
that the former is easier to implement.  It seems like we're proposing
taking a lot of risk here to work around our own failings...

On Fri, Apr 26, 2013 at 1:03 PM, Armen Zambrano G. arme...@mozilla.com wrote:

 On 2013-04-26 12:14 PM, Justin Lebar wrote:

 Would we be able to go back to where we disabled 10.7 altogether?


 On m-i and try only, or everywhere?


 The initial proposal was for disabling everywhere.

 We could leave 10.7 opt jobs running everywhere as a compromise and re-visit
 after I re-purpose the first batch of machines.

 best regards,
 Armen





Storage in Gecko

2013-04-26 Thread Gregory Szorc
I'd like to start a discussion about the state of storage in Gecko.

Currently when you are writing a feature that needs to store data, you
have roughly 3 choices:

1) Preferences
2) SQLite
3) Manual file I/O

Preferences are arguably the easiest. However, they have a number of
setbacks:

a) Poor durability guarantees. See bugs 864537 and 849947 for real-life
issues. tl;dr writes get dropped!
b) Integers limited to 32 bit (JS dates overflow b/c milliseconds since
Unix epoch).
c) I/O is synchronous.
d) The whole method for saving them to disk is kind of weird.
e) The API is awkward. See Preferences.jsm for what I'd consider a
better API.
f) Doesn't scale for non-trivial data sets.
g) Clutters about:config (not all preferences are config options).
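Setback (b) is easy to demonstrate with plain JS (nothing Gecko-specific):

```javascript
// Preference integers are 32-bit signed, so the largest storable value
// is 2^31 - 1 = 2147483647. Date.now() returns milliseconds since the
// Unix epoch, which passed that limit about 24.8 days into 1970.
const INT32_MAX = 2147483647;
const now = Date.now(); // well over 1.3e12 by 2013

console.log(now > INT32_MAX); // true: the timestamp cannot round-trip
                              // through an integer pref

// Common workaround: store the value as a string pref instead.
const stored = String(now);
console.log(Number(stored) === now); // true
```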

We have SQLite. You want durability: it's your answer. However, it too
has setbacks:

a) It eats I/O operations for breakfast. Multiple threads. Lots of
overhead compared to prefs. (But hard to lose data.)
b) By default it's not configured for optimal performance (you need to
enable the WAL, muck around with other PRAGMA).
c) Poor schemas can lead to poor performance.
d) It's often overkill.
e) Storage API has many footguns (use Sqlite.jsm to protect yourself).
f) Lots of effort to do right. Auditing 3rd-party extension code that
uses SQLite shows many of them aren't doing it right.

And if one of those pre-built solutions doesn't offer what you need, you
can roll your own with file I/O. But that also has setbacks:

a) You need to roll your own. (How often do I flush? Do I use many small
files or fewer large files? Different considerations for mobile (slow
I/O) vs desktop?)
b) You need to roll your own. (Listing it twice because it's *really*
annoying, especially for casual developers that just want to implement
features - think add-on developers.)
c) Easy to do wrong (excessive flushing/fsyncing, too many I/O
operations, inefficient appends, poor choices for mobile, etc).
d) Wheel reinvention. Atomic operations/transactions. Data marshaling. etc.

I believe there is a massive gap between the
easy-but-not-ready-for-prime-time preferences and
the-massive-hammer-solving-the-problem-you-don't-have-and-introducing-many-new-ones
SQLite. Because this gap is full of unknowns, I'm arguing that
developers tend to avoid it and use one of the extremes instead. And,
the result is features that have poor durability and/or poor
performance. Not good. What's worse is many developers (including
myself) are ignorant of many of these pitfalls. Yes, we have code review
for core features. But code review isn't perfect and add-ons likely
aren't subjected to the same level of scrutiny. The end result is the
same: Firefox isn't as awesome as it could be.

I think there is an opportunity for Gecko to step in and provide a
storage subsystem that is easy to use, somewhere between preferences and
SQLite in terms of durability and performance, and just works. I don't
think it matters how it is implemented under the hood. If this were to
be built on top of SQLite, I think that would be fine. But, please don't
make consumers worry about things like SQL, schema design, and PRAGMA
statements. So, maybe I'm advocating a generic key-value store. Maybe
something like DOM Storage? Maybe SQLite 4 (which is emphasizing
key-value storage and speed)? Just... something. Please.
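To make the shape of that gap concrete, here is a hedged sketch of the kind of consumer-facing API I mean. The class name and methods are invented for illustration, and the in-memory Map stands in for whatever durable backing store (SQLite, a log-structured file) would actually sit underneath:

```javascript
// Illustrative async key-value store: consumers never see SQL, schemas,
// PRAGMA statements, or flush scheduling.
class KeyValueStore {
  constructor() {
    // A real implementation would persist durably; Map is a stand-in.
    this._map = new Map();
  }
  async get(key, defaultValue = null) {
    return this._map.has(key) ? this._map.get(key) : defaultValue;
  }
  async set(key, value) {
    // A real implementation would batch writes and flush safely.
    this._map.set(key, value);
  }
  async delete(key) {
    this._map.delete(key);
  }
}

// Consumer code stays this simple regardless of the backing store:
(async () => {
  const store = new KeyValueStore();
  await store.set("lastSync", Date.now());
  console.log(await store.get("lastSync")); // the stored timestamp
  console.log(await store.get("nope", 0)); // 0 (the default)
})();
```

The point of the sketch is the surface area: two or three async methods, durability handled below the API line, nothing for a casual add-on developer to get wrong.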

Anyway, I just wanted to see if others have thought about this. Do
others feel it is a concern? If so, can we formulate a plan to address
it? Who would own this?

Gregory


Re: Storage in Gecko

2013-04-26 Thread Kyle Huey
On Fri, Apr 26, 2013 at 11:17 AM, Gregory Szorc g...@mozilla.com wrote:

 easy-but-not-ready-for-prime-time preferences and

 the-massive-hammer-solving-the-problem-you-don't-have-and-introducing-many-new-ones
 SQLite. Because this gap is full of unknowns, I'm arguing that
 developers tend to avoid it and use one of the extremes instead. And,
 the result is features that have poor durability and/or poor
 performance. Not good. What's worse is many developers (including
 myself) are ignorant of many of these pitfalls. Yes, we have code review
 for core features. But code review isn't perfect and add-ons likely
 aren't subjected to the same level of scrutiny. The end result is the
 same: Firefox isn't as awesome as it could be.

 I think there is an opportunity for Gecko to step in and provide a
 storage subsystem that is easy to use, somewhere between preferences and
 SQLite in terms of durability and performance, and just works. I don't
 think it matters how it is implemented under the hood. If this were to
 be built on top of SQLite, I think that would be fine. But, please don't
 make consumers worry about things like SQL, schema design, and PRAGMA
 statements. So, maybe I'm advocating a generic key-value store. Maybe
 something like DOM Storage? Maybe SQLite 4 (which is emphasizing
 key-value storage and speed)? Just... something. Please.

 Anyway, I just wanted to see if others have thought about this. Do
 others feel it is a concern? If so, can we formulate a plan to address
 it? Who would own this?

 Gregory


Have you explored using IndexedDB?

- Kyle


Re: Improving Mac OS X 10.6 test wait times by reducing 10.7 load

2013-04-26 Thread Armen Zambrano G.

On 2013-04-26 1:31 PM, Justin Lebar wrote:

I don't think I'm comfortable disabling this platform across the
board, or even disabling debug-only runs across the board.

As jmaher pointed out, there are platform differences here.  If we
disable this platform entirely, we lose visibility into rare but, we
seem to believe, possible events.


That was a Python issue related to Talos.
It was not a Firefox issue that would have failed only on a specific 
version of Mac OS X.



It seems like the only reason to disable everywhere instead of only on
m-i/try (or running less frequently on m-i, like we do with PGO) is
that the former is easier to implement.  It seems like we're proposing
taking a lot of risk here to work around our own failings...

Yes, it is a lot of work to change the way that buildbot works in order 
to optimize a non-standard mode of operation.
Running jobs only on PGO builds rather than on every checkin would also 
make the 10.7 platform less consistent than the other versions.


I could also have not even started the thread about improving our wait 
times for 10.6, and when one day someone complained about wait times on 
rev4 I would have to say we cannot buy more machines.


Just a little earlier in the thread you were saying "go big or go home" 
and asking to disable even the 10.6 debug tests. I'm confused by the 
mixed messages.




On Fri, Apr 26, 2013 at 1:03 PM, Armen Zambrano G. arme...@mozilla.com wrote:


On 2013-04-26 12:14 PM, Justin Lebar wrote:


Would we be able to go back to where we disabled 10.7 altogether?



On m-i and try only, or everywhere?



The initial proposal was for disabling everywhere.

We could leave 10.7 opt jobs running everywhere as a compromise and re-visit
after I re-purpose the first batch of machines.

best regards,
Armen




On Fri, Apr 26, 2013 at 12:10 PM, Armen Zambrano G. arme...@mozilla.com
wrote:


Just disabling debug and talos jobs for 10.7 should reduce more than 50%
of
the load on 10.7. That might be sufficient for now.

Any objections on this plan?
We can re-visit later on if we need more disabled.

cheers,
Armen


On 2013-04-26 11:50 AM, Armen Zambrano G. wrote:



Would we be able to go back to where we disabled 10.7 altogether?
Product (Asa in separate thread) and release drivers (Akeybl) were OK to
the compromise of version specific test coverage being removed
completely.

Side note: adding Mac PGO would increase the build load (besides this, we 
would have to do a large purchase order, as we expect Mac wait times to 
show up as general load increases).

Not all load-reduction approaches are easy to implement (due to the way 
that buildbot is designed), and they do not ensure that we would reduce 
load enough. It's expensive enough to support three different versions of 
Mac OS X as it is without bringing 10.9 to the table. We have to cut 
things at times.

One compromise that would be easy to implement and *might* reduce the
load is to disable all debug jobs for 10.7.

cheers,
Armen

On 2013-04-26 11:29 AM, Justin Lebar wrote:



As a compromise, how hard would it be to run the Mac 10.6 and 10.7
tests on m-i occasionally, like we run the PGO tests?  (Maybe we could
trigger them on the same csets as we run PGO; it seems like that would
be useful.)

On Fri, Apr 26, 2013 at 11:19 AM, Ryan VanderMeulen rya...@gmail.com
wrote:



On 4/26/2013 11:11 AM, Justin Lebar wrote:




So what we're saying is that we are going to completely reverse our
previous tree management policy?





Basically, yes.

Although, due to coalescing, do you always have a full run of tests
on
the tip of m-i before merging to m-c?



Yes. Note that we generally aren't merging inbound tip to m-c - we're
taking
a known-green cset (including PGO tests).














Re: Storage in Gecko

2013-04-26 Thread Andreas Gal

Preferences are as the name implies intended for preferences. There is no sane 
use case for storing data in preferences. I would give any patch I come across 
doing that an automatic sr- for poor taste and general insanity.

SQLite is definitely not cheap, and we should look at more suitable backends 
for our storage needs, but done right off the main thread, it's definitely the 
saner way to go than (1).

While (2) is a foot-gun, (3) is a guaranteed foot-nuke. While it's easy to use 
SQLite wrong, it's almost guaranteed that you'll get your own atomic storage 
file handling wrong, across our N platforms.

Chrome is working on replacing SQLite with LevelDB for IndexedDB and most of 
their storage needs. Last time we looked it wasn't ready for prime time. Maybe 
it is now. This might be the best option.

Andreas

On Apr 26, 2013, at 11:17 AM, Gregory Szorc g...@mozilla.com wrote:

 I'd like to start a discussion about the state of storage in Gecko.
 
 Currently when you are writing a feature that needs to store data, you
 have roughly 3 choices:
 
 1) Preferences
 2) SQLite
 3) Manual file I/O
 
 Preferences are arguably the easiest. However, they have a number of
 setbacks:
 
 a) Poor durability guarantees. See bugs 864537 and 849947 for real-life
 issues. tl;dr writes get dropped!
 b) Integers limited to 32 bit (JS dates overflow b/c milliseconds since
 Unix epoch).
 c) I/O is synchronous.
 d) The whole method for saving them to disk is kind of weird.
 e) The API is awkward. See Preferences.jsm for what I'd consider a
 better API.
 f) Doesn't scale for non-trivial data sets.
 g) Clutters about:config (all preferences aren't config options).
 
 We have SQLite. You want durability: it's your answer. However, it too
 has setbacks:
 
 a) It eats I/O operations for breakfast. Multiple threads. Lots of
 overhead compared to prefs. (But hard to lose data.)
 b) By default it's not configured for optimal performance (you need to
 enable the WAL, muck around with other PRAGMA).
 c) Poor schemas can lead to poor performance.
 d) It's often overkill.
 e) Storage API has many footguns (use Sqlite.jsm to protect yourself).
 f) Lots of effort to do right. Auditing code for 3rd party extensions
 using SQLite, many of them aren't doing it right.
 
 And if one of those pre-built solutions doesn't offer what you need, you
 can roll your own with file I/O. But that also has setbacks:
 
 a) You need to roll your own. (How often do I flush? Do I use many small
 files or fewer large files? Different considerations for mobile (slow
 I/O) vs desktop?)
 b) You need to roll your own. (Listing it twice because it's *really*
 annoying, especially for casual developers that just want to implement
 features - think add-on developers.)
 c) Easy to do wrong (excessive flushing/fsyncing, too many I/O
 operations, inefficient appends, poor choices for mobile, etc).
 d) Wheel reinvention. Atomic operations/transactions. Data marshaling. etc.
 
 I believe there is a massive gap between the
 easy-but-not-ready-for-prime-time preferences and
 the-massive-hammer-solving-the-problem-you-don't-have-and-introducing-many-new-ones
 SQLite. Because this gap is full of unknowns, I'm arguing that
 developers tend to avoid it and use one of the extremes instead. And,
 the result is features that have poor durability and/or poor
 performance. Not good. What's worse is many developers (including
 myself) are ignorant of many of these pitfalls. Yes, we have code review
 for core features. But code review isn't perfect and add-ons likely
 aren't subjected to the same level of scrutiny. The end result is the
 same: Firefox isn't as awesome as it could be.
 
 I think there is an opportunity for Gecko to step in and provide a
 storage subsystem that is easy to use, somewhere between preferences and
 SQLite in terms of durability and performance, and just works. I don't
 think it matters how it is implemented under the hood. If this were to
 be built on top of SQLite, I think that would be fine. But, please don't
 make consumers worry about things like SQL, schema design, and PRAGMA
 statements. So, maybe I'm advocating a generic key-value store. Maybe
 something like DOM Storage? Maybe SQLite 4 (which is emphasizing
 key-value storage and speed)? Just... something. Please.
 
 Anyway, I just wanted to see if others have thought about this. Do
 others feel it is a concern? If so, can we formulate a plan to address
 it? Who would own this?
 
 Gregory



Re: Improving Mac OS X 10.6 test wait times by reducing 10.7 load

2013-04-26 Thread Armen Zambrano G.

After re-reading, I'm happy to disable 10.7 on just m-i/try for now.

Modifying buildbot to trigger *some* jobs on m-i, though, would be a 
decent amount of work (adding Mac PGO builders), would still differ from 
normal operations, and would increase the 10.6/10.8 test load.


On 2013-04-26 1:31 PM, Justin Lebar wrote:

I don't think I'm comfortable disabling this platform across the
board, or even disabling debug-only runs across the board.

As jmaher pointed out, there are platform differences here.  If we
disable this platform entirely, we lose visibility into rare but, we
seem to believe, possible events.

It seems like the only reason to disable everywhere instead of only on
m-i/try (or running less frequently on m-i, like we do with PGO) is
that the former is easier to implement.  It seems like we're proposing
taking a lot of risk here to work around our own failings...

On Fri, Apr 26, 2013 at 1:03 PM, Armen Zambrano G. arme...@mozilla.com wrote:


On 2013-04-26 12:14 PM, Justin Lebar wrote:


Would we be able to go back to where we disabled 10.7 altogether?



On m-i and try only, or everywhere?



The initial proposal was for disabling everywhere.

We could leave 10.7 opt jobs running everywhere as a compromise and re-visit
after I re-purpose the first batch of machines.

best regards,
Armen




On Fri, Apr 26, 2013 at 12:10 PM, Armen Zambrano G. arme...@mozilla.com
wrote:


Just disabling debug and talos jobs for 10.7 should reduce more than 50%
of
the load on 10.7. That might be sufficient for now.

Any objections on this plan?
We can re-visit later on if we need more disabled.

cheers,
Armen


On 2013-04-26 11:50 AM, Armen Zambrano G. wrote:



Would we be able to go back to where we disabled 10.7 altogether?
Product (Asa in separate thread) and release drivers (Akeybl) were OK to
the compromise of version specific test coverage being removed
completely.

Side note: adding Mac PGO would increase the build load (besides this, we 
would have to do a large purchase order, as we expect Mac wait times to 
show up as general load increases).

Not all load-reduction approaches are easy to implement (due to the way 
that buildbot is designed), and they do not ensure that we would reduce 
load enough. It's expensive enough to support three different versions of 
Mac OS X as it is without bringing 10.9 to the table. We have to cut 
things at times.

One compromise that would be easy to implement and *might* reduce the
load is to disable all debug jobs for 10.7.

cheers,
Armen

On 2013-04-26 11:29 AM, Justin Lebar wrote:



As a compromise, how hard would it be to run the Mac 10.6 and 10.7
tests on m-i occasionally, like we run the PGO tests?  (Maybe we could
trigger them on the same csets as we run PGO; it seems like that would
be useful.)

On Fri, Apr 26, 2013 at 11:19 AM, Ryan VanderMeulen rya...@gmail.com
wrote:



On 4/26/2013 11:11 AM, Justin Lebar wrote:




So what we're saying is that we are going to completely reverse our
previous tree management policy?





Basically, yes.

Although, due to coalescing, do you always have a full run of tests
on
the tip of m-i before merging to m-c?



Yes. Note that we generally aren't merging inbound tip to m-c - we're
taking
a known-green cset (including PGO tests).














Re: Storage in Gecko

2013-04-26 Thread Justin Lebar
 The current level of flakiness in the IndexedDB test suite (especially on
 OSX) makes me concerned about what to expect if it starts getting heavier
 use across the various platforms.

Is that just in the OOP tests, or everywhere?


Re: Storage in Gecko

2013-04-26 Thread Benjamin Smedberg

On 4/26/2013 2:50 PM, Gavin Sharp wrote:

On Fri, Apr 26, 2013 at 11:36 AM, Andreas Gal g...@mozilla.com wrote:

Preferences are as the name implies intended for preferences. There is no sane 
use case for storing data in preferences. I would give any patch I come across 
doing that an automatic sr- for poor taste and general insanity.

As Greg suggests, that ship has kind of already sailed. In practice
preferences often ends up being the best choice for storing some
small amounts of data. Which is a sad state of affairs, to be sure -
so I'm glad we have this thread to explore alternatives.

The key problem with expanding this is that the pref API is designed to 
be synchronous because it controls a bunch of behavior early in startup. 
Our implementation is therefore to read all the prefs in (synchronously) 
and operate on them in-memory. That strategy only continues to work as 
long as the set of data in prefs is tightly constrained.


I really hope the outcome of this discussion is that we end up storing 
everything that isn't a true preference in some other datastore, and 
that is an async-by-default datastore ;-)

I have little experience actually trying to use indexedDB, so grain of 
salt etc., but my impression is that it's somewhat overkill for use 
cases currently addressed by preferences or custom JSON (e.g. a simple 
key-value store). 

With a pretty simple JSM wrapper, indexeddb could be a very good 
solution for saving JSON or JSON-like things (you don't even need JSON, 
because indexeddb does structured cloning). It can of course be used for 
more complex things as well, but if we want a durable key-value store, 
it could be as simple as:


ChromeData.get('key', function(value) {
  // null if unset
});

ChromeData.set('key', value [, function()]); // asynchronous

Or maybe there's a better syntax using promises, but in any case it 
could probably be this simple.
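As a concrete illustration of the shape sketched above — with `ChromeData` remaining a hypothetical name, and a plain Map standing in for the persistent backing store so the example is self-contained — the wrapper could be as small as:

```javascript
// Hypothetical ChromeData wrapper with the API shape sketched above.
// A plain Map stands in for the backing store; a real implementation
// would persist through an IndexedDB object store and complete the
// callbacks asynchronously from request.onsuccess handlers.
const ChromeData = (() => {
  const backing = new Map(); // stand-in for an IDB object store
  return {
    get(key, callback) {
      // Callback receives null if the key was never set.
      callback(backing.has(key) ? backing.get(key) : null);
    },
    set(key, value, callback) {
      // Real implementation: put() inside a readwrite transaction.
      backing.set(key, value);
      if (callback) callback();
    },
  };
})();

ChromeData.set("migration-version", 7);
ChromeData.get("migration-version", (value) => {
  // value is 7 here; asking for an unset key would yield null
});
```

Structured cloning means consumers could store plain objects directly, without ever serializing to JSON themselves.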


Does anyone use indexeddb in chrome right now?

--BDS


Re: Storage in Gecko

2013-04-26 Thread Kyle Huey
Resending to list.

On Fri, Apr 26, 2013 at 12:02 PM, Gregory Szorc g...@mozilla.com wrote:

 On 4/26/2013 11:52 AM, Kyle Huey wrote:

 Could you please point me at a good implementation of a Gecko consumer
 of IndexedDB? If you don't know which are good, an MXR search URL will
 suffice :)


I haven't looked at any of them closely but there are lots of uses at
http://mxr.mozilla.org/gaia/search?string=indexeddb


 I'm looking at

 https://mxr.mozilla.org/mozilla-central/source/addon-sdk/source/lib/sdk/indexed-db.js
 .
 Is all that principal magic necessary? Is there an MDN page documenting
 all this?


I think that's just necessary for separating jetpack addons from one
another.  If you're ok with it being possible for some other piece of
chrome to access your database and just use a unique name for it it's
unnecessary, AIUI.

- Kyle


Re: Storage in Gecko

2013-04-26 Thread Kyle Huey
On Fri, Apr 26, 2013 at 11:57 AM, Ryan VanderMeulen rya...@gmail.comwrote:

 The current level of flakiness in the IndexedDB test suite (especially on
 OSX) makes me concerned about what to expect if it starts getting heavier
 use across the various platforms.


Of the 24 open intermittent failure bugs in the IndexedDB component at
least 9 are IPC related.  Another 6 or 7 are all the same bug and are being
dealt with.  And those two account for the high frequency oranges.  The
remainder are all pretty low frequency from what I can tell.

- Kyle


Re: Storage in Gecko

2013-04-26 Thread Ryan VanderMeulen

On 4/26/2013 3:07 PM, Justin Lebar wrote:

The current level of flakiness in the IndexedDB test suite (especially on
OSX) makes me concerned about what to expect if it starts getting heavier
use across the various platforms.


Is that just in the OOP tests, or everywhere?



Mostly IPC.


Proposal for an inbound2 branch

2013-04-26 Thread Ryan VanderMeulen
As has been discussed at length in the various infrastructure meetings, 
one common point of frustration for developers is frequent tree closures 
due to bustage on inbound. While there are other issues and ideas for 
how to improve the inbound bustage situation, one problem I'm seeing is 
that multiple different issues with different solutions are being lumped 
together into one discussion, which makes it hard to gain traction on 
getting anything done. For that reason, I would like to specifically 
separate out the specific issue of inbound closures negatively affecting 
developer productivity and offer a more fleshed-out solution that can be 
implemented now independent of any other ideas on the table.


Specific goals:
-Offer an alternative branch for developers to push to during extended 
inbound closures

-Avoid patch pile-up after inbound re-opens from a long closure

Specific non-goals:
-Reducing infrastructure load
-Changing pushing strategies from the widely-accepted status quo (i.e. 
multi-headed approach)
-Creating multiple integration branches that allow for simultaneous 
pushing (i.e. inbound-b2g, inbound-gfx, etc)


My proposal:
-Create an inbound2 branch identically configured to mozilla-inbound.
-Under normal circumstances (i.e. m-i open), inbound2 will be CLOSED.
-In the event of a long tree closure, the last green changeset from m-i 
will be merged to inbound2 and inbound2 will be opened for checkins.
---It will be a judgment call for sheriffs as to how long of a closure 
will suffice for opening inbound2.
-When the bustage on m-i is resolved and it is again passing tests, 
inbound2 will be closed again.

-When all pending jobs on inbound2 are completed, it will be merged to m-i.
-Except under extraordinary circumstances, all merges to mozilla-central 
will continue to come from m-i ONLY.
-If bustage lands on inbound2, then both trees will be closed until 
resolved. Tough. We apparently can't always have nice things.


As stated above, I believe that this will solve one of the biggest 
pain points of long tree closures without adding tons of extra complexity 
to what we're already doing and what developers are used to. The effect 
on infrastructure load should be close to neutral since at any given 
time, patches will only be getting checked into one branch. This 
proposal also has the advantage of being easy to implement since it's 
simply a clone of an existing repo, with a little extra sheriff 
overhead. It also helps to mitigate the pile-up of pushes we tend to see 
after a long closure, which increases the likelihood of another long 
closure in the event of any bustage due to a high level of coalescing.


To be clear, this proposal is NOT intended to solve all of the various 
problems that have been raised with respect to infrastructure load, good 
coding practices, bustage minimization, good Try usage, etc. This is 
only looking to reduce the impact such issues have on developer workflow 
and make sheriffing easier after the tree reopens.


Feedback?


Re: Proposal for an inbound2 branch

2013-04-26 Thread Justin Lebar
I like that inbound2 would be open only when inbound is closed.  That
way you don't have to make a decision wrt which tree to push to.

sgtm.

On Fri, Apr 26, 2013 at 3:17 PM, Ryan VanderMeulen rya...@gmail.com wrote:
 As has been discussed at length in the various infrastructure meetings, one
 common point of frustration for developers is frequent tree closures due to
 bustage on inbound. While there are other issues and ideas for how to
 improve the inbound bustage situation, one problem I'm seeing is that
 multiple different issues with different solutions are being lumped together
 into one discussion, which makes it hard to gain traction on getting
 anything done. For that reason, I would like to specifically separate out
 the specific issue of inbound closures negatively affecting developer
 productivity and offer a more fleshed-out solution that can be implemented
 now independent of any other ideas on the table.

 Specific goals:
 -Offer an alternative branch for developers to push to during extended
 inbound closures
 -Avoid patch pile-up after inbound re-opens from a long closure

 Specific non-goals:
 -Reducing infrastructure load
 -Changing pushing strategies from the widely-accepted status quo (i.e.
 multi-headed approach)
 -Creating multiple integration branches that allow for simultaneous pushing
 (i.e. inbound-b2g, inbound-gfx, etc)

 My proposal:
 -Create an inbound2 branch identically configured to mozilla-inbound.
 -Under normal circumstances (i.e. m-i open), inbound2 will be CLOSED.
 -In the event of a long tree closure, the last green changeset from m-i will
 be merged to inbound2 and inbound2 will be opened for checkins.
 ---It will be a judgment call for sheriffs as to how long of a closure will
 suffice for opening inbound2.
 -When the bustage on m-i is resolved and it is again passing tests, inbound2
 will be closed again.
 -When all pending jobs on inbound2 are completed, it will be merged to m-i.
 -Except under extraordinary circumstances, all merges to mozilla-central
 will continue to come from m-i ONLY.
 -If bustage lands on inbound2, then both trees will be closed until
 resolved. Tough. We apparently can't always have nice things.

 As stated above, I believe that this will solve one of the biggest
 pain points of long tree closures without adding tons of extra complexity to
 what we're already doing and what developers are used to. The effect on
 infrastructure load should be close to neutral since at any given time,
 patches will only be getting checked into one branch. This proposal also has
 the advantage of being easy to implement since it's simply a clone of an
 existing repo, with a little extra sheriff overhead. It also helps to
 mitigate the pile-up of pushes we tend to see after a long closure, which
 increases the likelihood of another long closure in the event of any bustage
 due to a high level of coalescing.

 To be clear, this proposal is NOT intended to solve all of the various
 problems that have been raised with respect to infrastructure load, good
 coding practices, bustage minimization, good Try usage, etc. This is only
 looking to reduce the impact such issues have on developer workflow and make
 sheriffing easier after the tree reopens.

 Feedback?


Re: Storage in Gecko

2013-04-26 Thread Gavin Sharp
On Fri, Apr 26, 2013 at 12:10 PM, Benjamin Smedberg
benja...@smedbergs.us wrote:
 I really hope the outcome of this discussion is that we end up storing
 everything that isn't a true preference in some other datastore, and that is
 an async-by-default datastore ;-)

 With a pretty simple JSM wrapper, indexeddb could be a very good solution
 for saving JSON or JSON-like things (you don't even need JSON, because
 indexeddb does structured cloning). It can of course be used for more
 complex things as well, but if we want a durable key-value store, it could
 be as simple as:

 ChromeData.get('key', function(value) {
   // null if unset
 });

 ChromeData.set('key', value [, function()]); // asynchronous

 Or maybe there's a better syntax using promises, but in any case it could
 probably be this simple.

OK, sounds like we should do this. I filed
https://bugzilla.mozilla.org/show_bug.cgi?id=866238.

 Does anyone use indexeddb in chrome right now?

The patch in bug 789348 does (though that's actually running in
content). I don't know of any existing users in code that runs on
desktop (metro seems to use it, some core b2g-related code might).

Gavin


Re: Storage in Gecko

2013-04-26 Thread Gregory Szorc
On 4/26/2013 12:10 PM, Benjamin Smedberg wrote:
 On 4/26/2013 2:50 PM, Gavin Sharp wrote:
 On Fri, Apr 26, 2013 at 11:36 AM, Andreas Gal g...@mozilla.com wrote:
 Preferences are as the name implies intended for preferences. There
 is no sane use case for storing data in preferences. I would give
 any patch I come across doing that an automatic sr- for poor taste
 and general insanity.
 As Greg suggests, that ship has kind of already sailed. In practice
 preferences often ends up being the best choice for storing some
 small amounts of data. Which is a sad state of affairs, to be sure -
 so I'm glad we have this thread to explore alternatives.
 The key problem with expanding this is that the pref API is designed
 to be synchronous because it controls a bunch of behavior early in
 startup. Our implementation is therefore to read all the prefs in
 (synchronously) and operate on them in-memory. That strategy only
 continues to work as long as the set of data in prefs is tightly
 constrained.

Perhaps this should be advertised more, especially to the add-on
community. Looking at about:config of my main profile, about 2/3 of my
preferences are user set. There are hundreds of preferences apparently
being used for key-value storage by add-ons (not to pick on one, but
HTTPS Everywhere has a few hundred prefs).

This shouldn't be surprising: Preferences quacks like a generic
key-value store. In the absence of something similar and just as easy to
use, people will use (and abuse) it for storage needs.

IMO we can't just say don't use Preferences for that without offering
something equivalent. If we do, we'll have SQLite/raw I/O and we're no
better off.

 With a pretty simple JSM wrapper, indexeddb could be a very good
 solution for saving JSON or JSON-like things (you don't even need
 JSON, because indexeddb does structured cloning). It can of course be
 used for more complex things as well, but if we want a durable
 key-value store, it could be as simple as:

 ChromeData.get('key', function(value) {
   // null if unset
 });

 ChromeData.set('key', value [, function()]); // asynchronous

 Or maybe there's a better syntax using promises, but in any case it
 could probably be this simple.

I strongly believe a simple wrapper would go a long way. The current
pitfalls of storage have bitten me enough times that I'm tentatively
volunteering to add one to Toolkit.

However, before that happens, I'd like some consensus that IndexedDB is
the best solution here. I'd especially like to hear what Performance
thinks: I don't want to start creating a preferred storage solution
without their blessing. If they have suggestions for specific ways we
should use IndexedDB (or some other solution) to minimize perf impact,
we should try to enforce these through the preferred/wrapper API.

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Storage in Gecko

2013-04-26 Thread Reuben Morais
We use IndexedDB extensively in a lot of the WebAPIs, see Contacts, Settings, 
SMS, MMS, Push, NetworkStats…

Right now there's a lot of boilerplate[1] involved in setting up IndexedDB, and 
people end up duplicating a lot of the boilerplate code. It'd be great to see a 
more polished wrapper around it. The callback chains of death involved in 
writing IDB code are also not very pleasant to read and write, so bonus points 
if we could have a syntax like Task.jsm, where you can do |result = yield 
objStore.get("foo");|
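That Task.jsm-style syntax can be approximated with a small generator runner: it pumps the generator, resolving each yielded promise and feeding the result back in. `spawn` and `objStore` below are hand-rolled stand-ins for illustration, not Task.jsm or a real IDB object store.

```javascript
// Minimal generator runner: drives a generator that yields promises,
// so async code reads as |result = yield promise|.
function spawn(generatorFn) {
  return new Promise((resolve, reject) => {
    const gen = generatorFn();
    function step(value) {
      let next;
      try {
        next = gen.next(value);
      } catch (e) {
        return reject(e); // generator threw: reject the whole task
      }
      if (next.done) return resolve(next.value);
      Promise.resolve(next.value).then(step, reject);
    }
    step(undefined);
  });
}

// Fake asynchronous store standing in for an IDB object store.
const objStore = {
  get: key => Promise.resolve(key === "foo" ? "bar" : undefined),
};

spawn(function* () {
  const result = yield objStore.get("foo"); // reads like synchronous code
  console.log(result); // prints "bar"
});
```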

I don't know how much of this overlaps with the work to expose a simpler 
KV-store like API for saving snippets of data, but I figured I'd mention that 
this is also a problem for consumers who need all the functionality of IDB.

[1] http://mxr.mozilla.org/mozilla-central/source/dom/base/IndexedDBHelper.jsm

-- reuben


Re: Storage in Gecko

2013-04-26 Thread Andrew Sutherland

On 04/26/2013 03:21 PM, Dirkjan Ochtman wrote:

Also, I wonder if SQLite 4 (which is more like a key-value store)


SQLite 4 is not actually more like a key-value store.  The underlying 
storage model used by the SQL-interface-that-is-the-interface changed 
from being a page-centric btree structure to a key-value store that is 
more akin to a log-structured merge implementation, but which will still 
seem very familiar to anyone familiar with the page-centric vfs 
implementation that preceded it.  Specifically, it does not look like 
IndexedDB's model; it still does a lot of fsync's in order to maintain 
the requisite SQL ACID semantics.


Unless we exposed that low level key-value store, SQLite 4 would look 
exactly the same to consumers.  The main difference would be that 
because records would actually be stored in their (lexicographic) 
PRIMARY KEY order, performance should improve in general, especially on 
traditional (non-SSD) hard drives.  Our IndexedDB implementation, for 
one, could probably see a good performance boost from a switch to SQLite4.


Andrew


Re: Storage in Gecko

2013-04-26 Thread Andrew Sutherland

On 04/26/2013 03:30 PM, Gregory Szorc wrote:

However, before that happens, I'd like some consensus that IndexedDB is
the best solution here. I'd especially like to hear what Performance
thinks: I don't want to start creating a preferred storage solution
without their blessing. If they have suggestions for specific ways we
should use IndexedDB (or some other solution) to minimize perf impact,
we should try to enforce these through the preferred/wrapper API.


I'm not on the performance team, but I've done some extensive 
investigation into SQLite performance[1] and a lot of thinking about how 
to efficiently do disk I/O for various workloads from my work with 
mozStorage for Thunderbird's global database.


I would say that the IndexedDB API has a very good API[2] that allows 
for very efficient back-end implementations.  Our existing 
implementation could do a lot of things to go faster, especially on 
non-SSDs.  But that can be done as an enhancement and does not need to 
happen yet.  I think LevelDB broadly has the right idea, although 
Chrome's IndexedDB implementation has some surprising limitations (no 
File/Blob storage) that suggests it's not there yet.


The API can indeed be a bit heavy-weight for simple needs; shims over 
IndexedDB like gaia's asyncStorage helper are the way to go:

https://github.com/mozilla-b2g/gaia/blob/master/shared/js/async_storage.js
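The linked shim exposes a localStorage-like, callback-based API (`getItem`/`setItem`). A rough in-memory approximation of that shape (the real shim is backed by IndexedDB, so values survive restarts and can be structured clones):

```javascript
// In-memory approximation of an asyncStorage-style API. Callbacks are
// deferred with setTimeout to preserve the asynchronous contract.
const asyncStorageSketch = {
  _store: new Map(),
  getItem(key, callback) {
    const value = this._store.has(key) ? this._store.get(key) : null;
    setTimeout(() => callback(value), 0); // null if the key is unset
  },
  setItem(key, value, callback) {
    this._store.set(key, value);
    if (callback) setTimeout(callback, 0);
  },
};

asyncStorageSketch.setItem("count", 1, () => {
  asyncStorageSketch.getItem("count", value => console.log(value)); // prints 1
});
```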

Andrew

1: 
http://www.visophyte.org/blog/2010/04/06/performance-annotated-sqlite-explaination-visualizations-using-systemtap/


2: The only enhancement I would like is non-binding hinting of desired 
batches so that IndexedDB could pre-fetch data that the consumer knows 
it is going to want anyways in order to avoid ping-ponging fetch 
requests back and forth to the async thread and the main thread.  (Right 
now mozGetAll can be used to accomplish similar, if non-transparent and 
dangerously foot-gunny results.)



New mfbt header: mozilla/PodOperations.h

2013-04-26 Thread Jeff Walden
For anyone who's not reading planet (or hasn't read it in the last fifteen 
minutes ;-) ), I recently landed a new mfbt header that exposes slightly safer 
versions of memset(..., 0, ...), memcpy, and memcmp for use on C++ types, 
particularly ones where sizeof(T) > 1.

http://mxr.mozilla.org/mozilla-central/source/mfbt/PodOperations.h

The problem is that it's easy to forget to multiply the appropriate parameter 
in these methods by sizeof(T) when necessary.  Doing so can lead to various 
issues like incompletely-initialized values, issues causing security 
vulnerabilities in the past.  PodOperations.h throws some C++ template methods 
at the problem, to eliminate the need to remember to add sizeof(T).  This 
stuff's been used by the JS engine for a while -- I just moved it out of there 
and to a centralized place for everyone to use.

More details here if you want them:

http://whereswalden.com/2013/04/26/mozillapodoperations-h-functions-for-zeroing-assigning-to-copying-and-comparing-plain-old-data-objects/

Jeff


Re: Some data on mozilla-inbound

2013-04-26 Thread Gavin Sharp
Bug 864085

On Fri, Apr 26, 2013 at 2:06 PM, Kartikaya Gupta kgu...@mozilla.com wrote:
 On 13-04-26 11:37 , Phil Ringnalda wrote:

  Unfortunately, engineering is totally indifferent to
 things like having doubled the cycle time for Win debug browser-chrome
 since last November.


 Is there a bug filed for this? I just cranked some of the build.json files
 through some scripts and got the average time (in seconds) for all the jobs
 run on the mozilla-central_xp-debug_test-mochitest-browser-chrome builders,
 and there is in fact a significant increase since November. This makes me
 think that we need a resource usage regression alarm of some sort too.

 builds-2012-11-01.js: 4063
 builds-2012-11-15.js: 4785
 builds-2012-12-01.js: 5311
 builds-2012-12-15.js: 5563
 builds-2013-01-01.js: 6326
 builds-2013-01-15.js: 5706
 builds-2013-02-01.js: 5823
 builds-2013-02-15.js: 6103
 builds-2013-03-01.js: 5642
 builds-2013-03-15.js: 5187
 builds-2013-04-01.js: 5643
 builds-2013-04-15.js: 6207

 kats



Re: Some data on mozilla-inbound

2013-04-26 Thread Gregory Szorc
On 4/26/2013 2:06 PM, Kartikaya Gupta wrote:
 On 13-04-26 11:37 , Phil Ringnalda wrote:
  Unfortunately, engineering is totally indifferent to
 things like having doubled the cycle time for Win debug browser-chrome
 since last November.


 Is there a bug filed for this? I just cranked some of the build.json
 files through some scripts and got the average time (in seconds) for
 all the jobs run on the
 mozilla-central_xp-debug_test-mochitest-browser-chrome builders, and
 there is in fact a significant increase since November. This makes me
 think that we need a resource usage regression alarm of some sort too.

 builds-2012-11-01.js: 4063
 builds-2012-11-15.js: 4785
 builds-2012-12-01.js: 5311
 builds-2012-12-15.js: 5563
 builds-2013-01-01.js: 6326
 builds-2013-01-15.js: 5706
 builds-2013-02-01.js: 5823
 builds-2013-02-15.js: 6103
 builds-2013-03-01.js: 5642
 builds-2013-03-15.js: 5187
 builds-2013-04-01.js: 5643
 builds-2013-04-15.js: 6207

Well, wall time will [likely] increase as we write new tests. I'm
guessing (OK, really hoping) the number of mochitest files has increased
in rough proportion to the wall time? Also, aren't we executing some
tests on virtual machines now? On any virtual machine (and especially on
EC2), you don't know what else is happening on the physical machine, so
CPU and I/O steal are expected to cause variations and slowness in
execution time.

Speaking of resource usage, I've filed bug 859573 to have system
resource counters reported as part of jobs. That way, we can have a
high-level handle on whether our CPU efficiency is increasing/decreasing
over time. I'd argue that we should strive for 100% CPU saturation on
every slave (for most jobs) otherwise those CPU cycles are lost forever
and we've wasted capacity. But, that's arguably a conversation for
another thread.

While I don't have numbers off hand, one of the things I noticed was the
wall time of the various test chunks isn't as balanced as it should be.
In particular, bc tests seem to be a long pole. Perhaps we should split
them into bc-1 and bc-2? Along that vein, perhaps we could combine some
of the regular mochitest jobs, as they don't seem to take too long to
execute. Who makes these kinds of decisions?

On the subject of mochitests, I think we should really pound home the
message that mochitests should be avoided if possible. If you can move
more business logic into JSMs and test with xpcshell tests and only
write mochitests for the code that exists in the browser, that's a net
win (xpcshell tests are lighter weight and easier to run in parallel).
This would likely involve a huge shift in the way FX Team (and others)
write code and tests, so I don't expect it will be an easy sell. But,
it's a discussion we should have because the impact on test execution
times could be drastic.


Re: Storage in Gecko

2013-04-26 Thread Justin Dolske

On 4/26/13 11:17 AM, Gregory Szorc wrote:


But, please don't
make consumers worry about things like SQL, schema design, and PRAGMA
statements.


Ideally, yes. But I suspect there will never be a one-size-fits all 
solution, and so we should probably be clear about what it's 
appropriate/intended for (see: prefs today!).



Anyway, I just wanted to see if others have thought about this. Do
others feel it is a concern? If so, can we formulate a plan to address
it? Who would own this?


I'd really like to see a simple, standard way to mirror a JS object 
to/from disk. There would barely be any API; it Just Works. E.g., 
something like:


  Cu.import("jsonator2000.jsm");
  jsonator.init("myfile.json", onready);
  var o;
  function onready(aVerySpecialProxyObject) {
o = aVerySpecialProxyObject;
  }
  ...
  o.foo = 42;
  if (o.flagFromLastSession) { ... }

With a backend that takes care of file creation, error handling, 
flushing to disk, doing writes OMT/async, handling writes-on-shutdown, 
etc. Maybe toss in optional compression, checksumming, or other goodies.
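That "very special proxy object" could be built on an ES Proxy that traps property writes and coalesces them into one asynchronous flush. Everything below, `makePersistedObject` and the pluggable `writer` included, is an illustrative sketch rather than an existing module; a real backend would serialize to disk off the main thread.

```javascript
// Wrap a plain object in a Proxy so that every write schedules one
// coalesced, asynchronous call to `writer` with the serialized state.
function makePersistedObject(initial, writer) {
  const target = Object.assign({}, initial);
  let flushScheduled = false;

  function scheduleFlush() {
    if (flushScheduled) return;
    flushScheduled = true;
    // A burst of writes in one turn collapses into a single flush.
    Promise.resolve().then(() => {
      flushScheduled = false;
      writer(JSON.stringify(target));
    });
  }

  return new Proxy(target, {
    set(obj, prop, value) {
      obj[prop] = value;
      scheduleFlush();
      return true;
    },
    deleteProperty(obj, prop) {
      delete obj[prop];
      scheduleFlush();
      return true;
    },
  });
}

// Mutations look like ordinary property writes.
const writes = [];
const o = makePersistedObject({ flagFromLastSession: true },
                              json => writes.push(json));
o.foo = 42;     // this write and the next one
o.bar = "baz";  // coalesce into a single flush
```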


It's been on my maybe-some-weekend list for a while. :)

Justin


Re: Proposal for an inbound2 branch

2013-04-26 Thread Nicholas Nethercote
On Sat, Apr 27, 2013 at 5:17 AM, Ryan VanderMeulen rya...@gmail.com wrote:

 -In the event of a long tree closure, the last green changeset from m-i will
 be merged to inbound2 and inbound2 will be opened for checkins.

If I have a patch ready to land when inbound closes, what would be the
sequence of steps that I need to do to land it on inbound2?  Would I
need to have an up-to-date inbound2 clone and transplant the patch
across?  Or is it possible to push from an inbound clone?

Nick


Re: Proposal for an inbound2 branch

2013-04-26 Thread John O'Duinn
hi RyanVM;

I really like this proposal because it gives Mozilla a way for
developers to continue doing checkins even during a prolonged
mozilla-inbound closure. The experiment with birch as b2g-inbound has
been really successful so far, which is good.

To easily experiment to see if the proposal helps, I have reserved
cypress and filed bug#866314 to setup cypress running all the same
builds/tests as inbound. Once cypress is ready, let's use cypress as
inbound2... and try this proposal to see how it goes.

Unless anyone has any concerns/objections, let's have sheriffs use
cypress-as-mozilla-inbound2 when we next hit a prolonged mozilla-inbound
closure.


(and yes, if we find this experiment helps, we can setup branches with
real names instead of using birch, cypress!)

tc
John.

On 4/26/13 12:17 PM, Ryan VanderMeulen wrote:
 As has been discussed at length in the various infrastructure meetings,
 one common point of frustration for developers is frequent tree closures
 due to bustage on inbound. While there are other issues and ideas for
 how to improve the inbound bustage situation, one problem I'm seeing is
 that multiple different issues with different solutions are being lumped
 together into one discussion, which makes it hard to gain traction on
 getting anything done. For that reason, I would like to specifically
 separate out the specific issue of inbound closures negatively affecting
 developer productivity and offer a more fleshed-out solution that can be
 implemented now independent of any other ideas on the table.
 
 Specific goals:
 -Offer an alternative branch for developers to push to during extended
 inbound closures
 -Avoid patch pile-up after inbound re-opens from a long closure
 
 Specific non-goals:
 -Reducing infrastructure load
 -Changing pushing strategies from the widely-accepted status quo (i.e.
 multi-headed approach)
 -Creating multiple integration branches that allow for simultaneous
 pushing (i.e. inbound-b2g, inbound-gfx, etc)
 
 My proposal:
 -Create an inbound2 branch identically configured to mozilla-inbound.
 -Under normal circumstances (i.e. m-i open), inbound2 will be CLOSED.
 -In the event of a long tree closure, the last green changeset from m-i
 will be merged to inbound2 and inbound2 will be opened for checkins.
 ---It will be a judgment call for sheriffs as to how long of a closure
 will suffice for opening inbound2.
 -When the bustage on m-i is resolved and it is again passing tests,
 inbound2 will be closed again.
 -When all pending jobs on inbound2 are completed, it will be merged to m-i.
 -Except under extraordinary circumstances, all merges to mozilla-central
 will continue to come from m-i ONLY.
 -If bustage lands on inbound2, then both trees will be closed until
 resolved. Tough. We apparently can't always have nice things.


 
 As stated above, I believe that this will solve one of the biggest
 pain points of long tree closures without adding tons of extra complexity
 to what we're already doing and what developers are used to. The effect
 on infrastructure load should be close to neutral since at any given
 time, patches will only be getting checked into one branch. This
 proposal also has the advantage of being easy to implement since it's
 simply a clone of an existing repo, with a little extra sheriff
 overhead. It also helps to mitigate the pile-up of pushes we tend to see
 after a long closure which increase the likelihood of another long
 closure in the event of any bustage due to a high level of coalescing.
 
 To be clear, this proposal is NOT intended to solve all of the various
 problems that have been raised with respect to infrastructure load, good
 coding practices, bustage minimization, good Try usage, etc. This is
 only looking to reduce the impact such issues have on developer workflow
 and make sheriffing easier after the tree reopens.
 
 Feedback?


Re: Storage in Gecko

2013-04-26 Thread Mounir Lamouri
On 26/04/13 11:17, Gregory Szorc wrote:
 Anyway, I just wanted to see if others have thought about this. Do
 others feel it is a concern? If so, can we formulate a plan to address
 it? Who would own this?

As others, I believe that we should use IndexedDB for Gecko internal
storage. I opened a bug regarding this quite a while ago:
https://bugzilla.mozilla.org/show_bug.cgi?id=766057

We could easily imagine an XPCOM component that would expose a simple
key/value storage available from JS or C++ using IndexedDB in the backend.

--
Mounir


Re: Improving Mac OS X 10.6 test wait times by reducing 10.7 load

2013-04-26 Thread Matt Brubeck

On 4/26/2013 9:10 AM, Armen Zambrano G. wrote:

Just disabling debug and talos jobs for 10.7 should reduce more than 50%
of the load on 10.7. That might be sufficient for now.


I'd be happy for us to disable all Talos jobs on 10.7, on all trees. 
I've been keeping track of Talos stuff recently and I have not seen any 
genuine regressions that are 10.7-specific, so I don't think it's 
providing us much benefit to run these benchmarks on three Mac platforms 
simultaneously.


In terms of tracking regressions, it would be better to have more 
complete data on 10.6 alone than to have incomplete data (due to 
coalescing) on 10.6 and 10.7.



Re: Storage in Gecko

2013-04-26 Thread Andreas Gal
We filed a bug for this and I am working on the patch.

Andreas

Sent from Mobile.

On Apr 26, 2013, at 16:06, Mounir Lamouri mou...@lamouri.fr wrote:

 On 26/04/13 11:17, Gregory Szorc wrote:
 Anyway, I just wanted to see if others have thought about this. Do
 others feel it is a concern? If so, can we formulate a plan to address
 it? Who would own this?

 As others, I believe that we should use IndexedDB for Gecko internal
 storage. I opened a bug regarding this quite a while ago:
 https://bugzilla.mozilla.org/show_bug.cgi?id=766057

 We could easily imagine an XPCOM component that would expose a simple
 key/value storage available from JS or C++ using IndexedDB in the backend.

 --
 Mounir


Re: Some data on mozilla-inbound

2013-04-26 Thread Chris AtLee

On 14:29, Fri, 26 Apr, Gregory Szorc wrote:

On 4/26/2013 2:06 PM, Kartikaya Gupta wrote:

On 13-04-26 11:37 , Phil Ringnalda wrote:

 Unfortunately, engineering is totally indifferent to
things like having doubled the cycle time for Win debug browser-chrome
since last November.



Is there a bug filed for this? I just cranked some of the build.json
files through some scripts and got the average time (in seconds) for
all the jobs run on the
mozilla-central_xp-debug_test-mochitest-browser-chrome builders, and
there is in fact a significant increase since November. This makes me
think that we need a resource usage regression alarm of some sort too.

builds-2012-11-01.js: 4063
builds-2012-11-15.js: 4785
builds-2012-12-01.js: 5311
builds-2012-12-15.js: 5563
builds-2013-01-01.js: 6326
builds-2013-01-15.js: 5706
builds-2013-02-01.js: 5823
builds-2013-02-15.js: 6103
builds-2013-03-01.js: 5642
builds-2013-03-15.js: 5187
builds-2013-04-01.js: 5643
builds-2013-04-15.js: 6207


Well, wall time will [likely] increase as we write new tests. I'm
guessing (OK, really hoping) the number of mochitest files has increased
in rough proportion to the wall time? Also, aren't we executing some
tests on virtual machines now? On any virtual machine (and especially on
EC2), you don't know what else is happening on the physical machine, so
CPU and I/O steal are expected to cause variations and slowness in
execution time.


Those tests are still on exactly the same hardware. philor points out in 
https://bugzilla.mozilla.org/show_bug.cgi?id=864085#c0 that the 
time increase is disproportionate for win7. It would be interesting to 
look at all the other suites too.


Perhaps a regular report of how much our wall-clock times for builds and 
different test suite has changed week-over-week would be useful?


That aside, how do we cope with an ever-increasing runtime requirement 
of tests? Keep adding more chunks?




Experimental Technology in Gecko (Re: Storage in Gecko)

2013-04-26 Thread Matt Brubeck

On 4/26/2013 11:43 AM, Gregory Szorc wrote:

Have you explored using IndexedDB?


Not seriously. The this is an experimental technology warning on MDN
is off-putting.


The largest audience for MDN is web developers, so we put that warning 
on anything that's not ready for widespread use on the public web, 
including most things that are prefixed in current browsers.


Here are some other things with the same experimental technology 
warning on their MDN pages:


* JavaScript for...of loops
* CSS transform, transition, animation
* WebSocket
* Set, Map, WeakMap

Obviously we have no qualms against using these ourselves.  When an 
experimental technology is one that *we* are promoting as part of the 
development platform *we* are building, then of course we should be using 
it in our own code.  In fact we should be early adopters, because if 
there are issues that prevent us from using our own APIs, then they will 
often affect other developers on our platform, so we need to know about 
those and fix them.



Re: Proposal for an inbound2 branch

2013-04-26 Thread Matt Brubeck

On 4/26/2013 4:14 PM, Justin Lebar wrote:

If I have a patch ready to land when inbound closes, what would be the
sequence of steps that I need to do to land it on inbound2?  Would I
need to have an up-to-date inbound2 clone and transplant the patch
across?


I think mbrubeck or someone knows how to maintain multiple hg branches
in one repo, but I've never figured that out...


Yes, having been a git user for years before I started with Mercurial, I 
simply treat Mercurial as a slightly crippled version of git.  :)


hg clone https://hg.mozilla.org/mozilla-central/
cd mozilla-central
# hack on some stuff
# do some builds and tests
hg qnew my-patch

# time to push to inbound:
hg qpop
hg pull https://hg.mozilla.org/integration/mozilla-inbound/
hg up -c
hg qpush && hg qfin my-patch
hg push -r tip https://hg.mozilla.org/integration/mozilla-inbound/

# Now, let's backport something to Aurora!
hg pull https://hg.mozilla.org/releases/mozilla-aurora/
hg export -r a3c55bdbe37d | hg qimport -P -n patch-to-uplift
hg push -r tip https://hg.mozilla.org/releases/mozilla-aurora/

# After a good night's sleep, back to work!
# hg pull -u won't work across branches, so:
hg pull https://hg.mozilla.org/mozilla-central/
hg up -c
# do a build
# start hacking again!

This sort of workflow is of course much more natural in git, which makes 
it easy to track the state of the remote repo(s).  The bookmark workflow 
that gps added to MDN basically emulates part of the functionality of 
git remote tracking branches.


I'm actually astounded that Mercurial doesn't have better support for 
this built in; I see Mozilla developers doing crazy time-consuming 
things all the time because of Mercurial's poor support for working with 
remote repositories.  :(



Re: Storage in Gecko

2013-04-26 Thread bent
IndexedDB is our answer for this for JS... C++ folks are still pretty
much on their own!

IndexedDB handles indexing (hence the rather awkward name),
transactions with abort/rollback, object-graph serialization (not just
JSON), usage from multiple tabs/windows/components/processes
simultaneously, data integrity guarantees, and easy single-copy file/
blob support. It's also completely asynchronous (and read-only
transactions run in parallel!), and one day soon it will be available
in Workers too. We're using it extensively in B2G (it's the only
storage option, really) and it's easily usable from chrome too (with a
two-line bit of Cc/Ci init code). IE and Chrome both implement it and
we all have it available without a prefix because the API (v1) is
pretty much frozen.

What we've heard from a lot of JS developers (gaia folks included),
though, is that this feature set is more than some want or need. They
don't want to worry about indexes or transactions or serializing
complex objects. Luckily we anticipated this! Our aim was to provide a
sufficiently powerful tool such that we could build complex apps (e.g.
B2G's email app, ping asuth for details!) as well as simple key-value
stores (e.g. gaia's async_storage, mostly written by dflanagan I
think). Someone even implemented an early version of Chrome's
filesystem API on IndexedDB...

Nevertheless, I (and others) think it's clear that the big thing we
screwed up on is that we didn't release a simple storage wrapper
alongside IndexedDB. I think we expected this sort of thing to appear
on its own, but so far it hasn't. Sorry :(

So now we're working on wrappers. https://github.com/mounirlamouri/storage.js
is one in-progress example. I think there is another as well but the
link escapes me at the moment. In any case it should be trivial to do
something very similar as a JSM. (We already have an
IndexedDBHelper.jsm for more complicated databases that do actually
want control over tables and indexes.)

Hopefully IndexedDB meets most people's needs... That's what we tried
to build here, after all. The need to sprinkle some sugar here and
there is completely expected and very much encouraged. Please feel
free to ping me over irc or email or here on the list if anything is
unclear or difficult. (Of course there are definitely other folks that
are involved here but I don't want to volunteer them without their
consent!)

-bent

P.S. The experimental mark in MDN is outdated, and very unfortunate.
We should remove that ASAP.