Re: Proposal for an inbound2 branch

2013-04-29 Thread Justin Lebar
> Is there sanity to this proposal or am I still crazy?

If we had a lot more project branches, wouldn't that increase the load
on infra dramatically, because we'd have less coalescing?

This is of course a solvable problem, but the fact that the problem
exists suggests to me that your proposal should be considered
separately from RyanVM's tightly-scoped one.

On Tue, Apr 30, 2013 at 2:46 AM, Gregory Szorc  wrote:
> On 4/26/2013 12:17 PM, Ryan VanderMeulen wrote:
>> Specific goals:
>> -Offer an alternative branch for developers to push to during extended
>> inbound closures
>> -Avoid patch pile-up after inbound re-opens from a long closure
>>
>> Specific non-goals:
>> -Reducing infrastructure load
>> -Changing pushing strategies from the widely-accepted status quo (i.e.
>> multi-headed approach)
>> -Creating multiple integration branches that allow for simultaneous
>> pushing (i.e. inbound-b2g, inbound-gfx, etc)
>>
>> My proposal:
>> -Create an inbound2 branch identically configured to mozilla-inbound.
>> -Under normal circumstances (i.e. m-i open), inbound2 will be CLOSED.
>> -In the event of a long tree closure, the last green changeset from
>> m-i will be merged to inbound2 and inbound2 will be opened for checkins.
>> ---It will be a judgment call for sheriffs as to how long of a closure
>> will suffice for opening inbound2.
>> -When the bustage on m-i is resolved and it is again passing tests,
>> inbound2 will be closed again.
>> -When all pending jobs on inbound2 are completed, it will be merged to
>> m-i.
>> -Except under extraordinary circumstances, all merges to
>> mozilla-central will continue to come from m-i ONLY.
>> -If bustage lands on inbound2, then both trees will be closed until
>> resolved. Tough. We apparently can't always have nice things.
>
> If you consider that every repository is essentially a clone of
> mozilla-central, what we have *now* is effectively equivalent to a
> single repository with multiple heads/branches/bookmarks. However, the
> different heads/branches/bookmarks differ in:
>
> * How much attention sheriffs give them.
> * The automation configuration (coalescing, priority, etc).
> * Policies around landing.
> * How developers use it.
>
> These are all knobs in our control.
>
> When we say "create an inbound2," we're essentially establishing a new
> head/branch/bookmark that behaves much like "inbound1" with a slightly
> different landing policy. If that's what we want to do, sure. I think
> it's better than a single, frequently closed inbound.
>
> Anyway, no matter how much I think about this proposal, I keep coming
> back to the question of "why don't we use project branches more?"
> Instead of everyone and her brother landing on inbound, what if more
> landings were performed on {fx-team, services-central, <other
> twig>, etc}? I /think/ the worst that can happen is merge conflicts and
> bit rot. And, we can abate that through intelligent grouping of related
> commits in the same repository, frequent merges, and maybe even better
> communication (perhaps even automatically with tools that alert
> developers to potential conflicts - wouldn't it be cool if you updated a
> patch and Mercurial was like "o hai - Ehsan recently pushed a Try push
> that conflicts with your change: you two should talk.").
>
> As a counter-proposal, I propose that we start shifting landings to
> project branches/twigs. We should aim for a small and well-defined set
> of repositories (say 3 to 5) sharing similar automation configuration
> and sheriff love. By keeping the number small, it's easy to figure out
> where something should land and it's not too much of an extra burden on
> sheriffs. We can still keep inbound, but it should only be reserved for
> major, cross-repository landings with multi-module impact (e.g. build
> system changes), merges from the main landing repositories (unless we
> merge straight to central), and possibly as a backup in case one of the
> landing repositories is closed.
>
> We can test this today with very little effort: we figure out how to
> bucket commits, re-purpose existing repositories/twigs, and see what
> happens. If it works: great - we've just validated that distributed
> version control works for Firefox development (as opposed to the
> CVS/Subversion workflow we're currently using with inbound). If not, we
> can try variations and/or the inbound2 idea.
>
> Is there sanity to this proposal or am I still crazy?
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal for an inbound2 branch

2013-04-29 Thread Gregory Szorc
On 4/26/2013 12:17 PM, Ryan VanderMeulen wrote:
> Specific goals:
> -Offer an alternative branch for developers to push to during extended
> inbound closures
> -Avoid patch pile-up after inbound re-opens from a long closure
>
> Specific non-goals:
> -Reducing infrastructure load
> -Changing pushing strategies from the widely-accepted status quo (i.e.
> multi-headed approach)
> -Creating multiple integration branches that allow for simultaneous
> pushing (i.e. inbound-b2g, inbound-gfx, etc)
>
> My proposal:
> -Create an inbound2 branch identically configured to mozilla-inbound.
> -Under normal circumstances (i.e. m-i open), inbound2 will be CLOSED.
> -In the event of a long tree closure, the last green changeset from
> m-i will be merged to inbound2 and inbound2 will be opened for checkins.
> ---It will be a judgment call for sheriffs as to how long of a closure
> will suffice for opening inbound2.
> -When the bustage on m-i is resolved and it is again passing tests,
> inbound2 will be closed again.
> -When all pending jobs on inbound2 are completed, it will be merged to
> m-i.
> -Except under extraordinary circumstances, all merges to
> mozilla-central will continue to come from m-i ONLY.
> -If bustage lands on inbound2, then both trees will be closed until
> resolved. Tough. We apparently can't always have nice things.

If you consider that every repository is essentially a clone of
mozilla-central, what we have *now* is effectively equivalent to a
single repository with multiple heads/branches/bookmarks. However, the
different heads/branches/bookmarks differ in:

* How much attention sheriffs give them.
* The automation configuration (coalescing, priority, etc).
* Policies around landing.
* How developers use it.

These are all knobs in our control.

When we say "create an inbound2," we're essentially establishing a new
head/branch/bookmark that behaves much like "inbound1" with a slightly
different landing policy. If that's what we want to do, sure. I think
it's better than a single, frequently closed inbound.

Anyway, no matter how much I think about this proposal, I keep coming
back to the question of "why don't we use project branches more?"
Instead of everyone and her brother landing on inbound, what if more
landings were performed on {fx-team, services-central, <other
twig>, etc}? I /think/ the worst that can happen is merge conflicts and
bit rot. And, we can abate that through intelligent grouping of related
commits in the same repository, frequent merges, and maybe even better
communication (perhaps even automatically with tools that alert
developers to potential conflicts - wouldn't it be cool if you updated a
patch and Mercurial was like "o hai - Ehsan recently pushed a Try push
that conflicts with your change: you two should talk.").

As a counter-proposal, I propose that we start shifting landings to
project branches/twigs. We should aim for a small and well-defined set
of repositories (say 3 to 5) sharing similar automation configuration
and sheriff love. By keeping the number small, it's easy to figure out
where something should land and it's not too much of an extra burden on
sheriffs. We can still keep inbound, but it should only be reserved for
major, cross-repository landings with multi-module impact (e.g. build
system changes), merges from the main landing repositories (unless we
merge straight to central), and possibly as a backup in case one of the
landing repositories is closed.

We can test this today with very little effort: we figure out how to
bucket commits, re-purpose existing repositories/twigs, and see what
happens. If it works: great - we've just validated that distributed
version control works for Firefox development (as opposed to the
CVS/Subversion workflow we're currently using with inbound). If not, we
can try variations and/or the inbound2 idea.

Is there sanity to this proposal or am I still crazy?


Firefox/Gecko Development Meeting: Tue, Apr 30, 11am PT

2013-04-29 Thread Lawrence Mandel
The Firefox/Gecko development meeting is held weekly to discuss development 
team progress and issues related to the development of Firefox branded products 
and the Gecko platform.

Actions from last week are here:
https://wiki.mozilla.org/Platform/2013-04-30#Actions

Meeting Details:
* Agenda:  https://wiki.mozilla.org/Platform/2013-04-30
* Engineering Vidyo Room
* https://v.mozilla.com/flex.html?roomdirect.html&key=T2v8Pi8WuTRc
* Vidyo Phone# 650-903-0800 x92 Conf#98411 (US/INTL)
* 1-800-707-2533 (pin 369) Conf# 98411 (US)
* join irc.mozilla.org #planning for back channel

Lawrence


Re: Proposal for an inbound2 branch

2013-04-29 Thread Ehsan Akhgari
Multiple inbounds are not a great idea for smarter usage of our infra 
capacity or for load balancing, but this proposal doesn't attempt to 
address the former and avoids the latter by making the availability of 
the two inbounds mutually exclusive, so it looks good.


I also doubt that it's going to have much negative impact on the infra 
load, because of the mutual exclusion.  Hopefully we'll monitor the infra 
load after switching to this model to keep an eye on how we're doing there.


Cheers,
Ehsan

On 2013-04-26 3:17 PM, Ryan VanderMeulen wrote:

As has been discussed at length in the various infrastructure meetings,
one common point of frustration for developers is frequent tree closures
due to bustage on inbound. While there are other issues and ideas for
how to improve the inbound bustage situation, one problem I'm seeing is
that multiple different issues with different solutions are being lumped
together into one discussion, which makes it hard to gain traction on
getting anything done. For that reason, I would like to specifically
separate out the specific issue of inbound closures negatively affecting
developer productivity and offer a more fleshed-out solution that can be
implemented now independent of any other ideas on the table.

Specific goals:
-Offer an alternative branch for developers to push to during extended
inbound closures
-Avoid patch pile-up after inbound re-opens from a long closure

Specific non-goals:
-Reducing infrastructure load
-Changing pushing strategies from the widely-accepted status quo (i.e.
multi-headed approach)
-Creating multiple integration branches that allow for simultaneous
pushing (i.e. inbound-b2g, inbound-gfx, etc)

My proposal:
-Create an inbound2 branch identically configured to mozilla-inbound.
-Under normal circumstances (i.e. m-i open), inbound2 will be CLOSED.
-In the event of a long tree closure, the last green changeset from m-i
will be merged to inbound2 and inbound2 will be opened for checkins.
---It will be a judgment call for sheriffs as to how long of a closure
will suffice for opening inbound2.
-When the bustage on m-i is resolved and it is again passing tests,
inbound2 will be closed again.
-When all pending jobs on inbound2 are completed, it will be merged to m-i.
-Except under extraordinary circumstances, all merges to mozilla-central
will continue to come from m-i ONLY.
-If bustage lands on inbound2, then both trees will be closed until
resolved. Tough. We apparently can't always have nice things.

As stated above, I believe that this will solve one of the biggest
painpoints of long tree closures without adding tons of extra complexity
to what we're already doing and what developers are used to. The effect
on infrastructure load should be close to neutral since at any given
time, patches will only be getting checked into one branch. This
proposal also has the advantage of being easy to implement since it's
simply a clone of an existing repo, with a little extra sheriff
overhead. It also helps to mitigate the pile-up of pushes we tend to see
after a long closure, which increases the likelihood of another long
closure in the event of any bustage due to a high level of coalescing.

To be clear, this proposal is NOT intended to solve all of the various
problems that have been raised with respect to infrastructure load, good
coding practices, bustage minimization, good Try usage, etc. This is
only looking to reduce the impact such issues have on developer workflow
and make sheriffing easier after the tree reopens.

Feedback?




Re: Storage in Gecko

2013-04-29 Thread Ehsan Akhgari

On 2013-04-29 1:51 PM, Taras Glek wrote:

* How to robustly write/update small datasets?

#3 above is it for small datasets. The correct way to do this is to
write blobs of JSON to disk. End of discussion.


For an API that is meant to be used by add-on authors, I'm afraid the 
situation is not as easy as this.  For example, for a "simple" key/value 
store which should be used for small datasets, one cannot enforce the 
implicit requirement of this solution (that the data fits in a single 
block on disk, for example) at the API boundary without creating a 
crappy API which would "fail" some of the time if the value to be 
written violates those assumptions.  In practice it's not very easy for 
the consumer of the API to guarantee the size of the data written to 
disk if the data is coming from the user, the network, etc.



Writes of data <= ~64K should just be implemented as atomic whole-file
read/write operations. Those are almost always single blocks on disk.

Writing a whole file at once eliminates risk of data corruption.
Incremental updates are what makes sqlite do the WAL/fsync/etc dance
that causes much of the slowness.


Is that true even if the file is written to more than one physical block 
on the disk, across all of the filesystems that Firefox can run on?



As you can see from above examples, manual IO is not scary


Only if you trust the consumer of the API to know the trade-offs of what 
they're doing.  That is not the right assumption for a generic key/value 
store API.



* What about fsync-less writes?
Many log-type performance-sensitive data-storage operations are ok with
lossy appends. By lossy I mean "data will be lost if there is a power
outage within a few seconds/minutes of write", consistency is still
important. For this one should create a directory and write out log
entries as checksummed individual files...but one should really use
compression (and get checksums for free).
https://bugzilla.mozilla.org/show_bug.cgi?id=846410 is about
facilitating such an API.

Use-cases here: telemetry saved-sessions, FHR session-statistics.


This is an interesting use case indeed, but I don't think that it falls 
under the umbrella of the API being discussed here.



* What about large datasets?
These should be decided on a case-by-case basis. Universal solutions
will always perform poorly in some dimension.

* What about indexeddb?
IDB is overkill for simple storage needs. It is a restrictive wrapper
over an SQLite schema. Perhaps some large dataset (eg an addressbook) is
a good fit for it. IDB supports filehandles to do raw IO, but that still
requires sqlite to bootstrap, doesn't support compression, etc.
IDB also makes sense as a transitional API for web due to the need to
move away from DOM Local Storage...


Indexed DB is not a wrapper around SQLite.  The fact that our current 
implementation uses SQLite is an implementation detail which might 
change.  (And it's not true on the web across different browser engines.)


I'm sure that if somebody can provide testcases on bad IndexedDB 
performance scenarios we can work on fixing them, and that would benefit 
the web, and Firefox OS as well.



* Why isn't there a convenience API for all of the above recommendations?
Because speculatively landing APIs that anticipate future consumers is
risky, results in over-engineering and unpleasant surprises...So give us
use-cases and we (i.e. Yoric) will make them efficient.


The use case being discussed here is a simple key/value data store, 
hopefully with asynchronous operations, and safety guarantees against 
dataloss.  I do not see the current discussion as speculative at all.


Cheers,
Ehsan



Re: Fallibility of NS_DispatchTo[Current|Main]Thread

2013-04-29 Thread Robert O'Callahan
On Tue, Apr 30, 2013 at 5:32 AM, Kyle Huey  wrote:

> Is it feasible to make these functions infallible?  What work would need to
> be done?
>

Off the top of my head, I think it probably is feasible. IIRC XPCOM event
dispatch can fail for two reasons: OOM, and when the thread has been shut
down. We can die on OOM. DispatchToCurrentThread obviously shouldn't happen
when that thread has shut down. DispatchToMainThread shouldn't happen after
the main thread has shut down either ... I suppose it could due to a bug,
but we could just die.

Rob
-- 
"If you love those who love you, what credit is that to you? Even
sinners love those who love them. And if you do good to those who are
good to you, what credit is that to you? Even sinners do that."


RE: Increasing our Aurora and Nightly user populations?

2013-04-29 Thread Karen Rudnitski
Maybe another route to trial this would be to use our product
announcements feature, since it can be much more targeted. I would not be
wholly comfortable at this point using the snippet real estate (if I read
the original request correctly), especially for our GA audience.

I also think we should consider the goal here - which I think is to
expand the volume of each of these groups, not necessarily to move
current users to other products (although I do understand the
rationale prompting this suggestion, generally around lowering the
friction of trying less stable software).

If we are agreed on the above, perhaps it is also worth a discussion with
our marketing friendlies and contributor engagement folks, with an
objective to target and capture new users for these channels.

Karen

-Original Message-
From: dev-planning-bounces+krudnitski=mozilla@lists.mozilla.org
[mailto:dev-planning-bounces+krudnitski=mozilla@lists.mozilla.org] On
Behalf Of Johnathan Nightingale
Sent: 29 April 2013 15:32
To: Chris Peterson
Cc: mozilla.dev.planning group; dev-platform@lists.mozilla.org
Subject: Re: Increasing our Aurora and Nightly user populations?

On Apr 29, 2013, at 3:07 PM, Chris Peterson wrote:

> (Sorry if this is not the right forum for this discussion.)

dev-planning feels right-er here - I suggest follow ups take it thattaway.


J

PS - original post, for dev-planning context:

> To increase our testing populations, I'd like to suggest that we add a
periodic "channel upsell" message to the about:home page (of Aurora and
Beta) and the about box (of Aurora, Beta, and possibly Release).
> 
> Beta and Aurora users have already opted in to a testing channel, so we
know they are receptive to beta-quality software. If they have set Beta or
Aurora as their default browser and/or have been using it consistently for
a few weeks, we could try to upsell Beta users to Aurora and Aurora users
to Nightly.
> 
> We did something similar with Firefox for Android. After a certain
number of browser launches, we display a one-time message suggesting they
write a Google Play store review if they like Firefox or contact Firefox
Support if they have problems.
> 
> Similarly, users who check their browser's about box are clearly
interested in version numbers or checking for the latest updates. These
users might be receptive to a channel upsell message to get an even bigger
version number. :)


---
Johnathan Nightingale
VP Firefox Engineering
@johnath

___
dev-planning mailing list
dev-plann...@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-planning


Re: Storage in Gecko

2013-04-29 Thread Gregory Szorc

Great post, Taras!

Per IRC conversations, we'd like to move subsequent discussion of 
actions into a meeting so we can more quickly arrive at a resolution.


Please meet in Gregory Szorc's Vidyo Room at 1400 PDT Tuesday, April 30. 
That's 2200 UTC. Apologies to the European and east coast crowds. If 
you'll miss it because it's too late, let me know and I'll consider 
moving it.


https://v.mozilla.com/flex.html?roomdirect.html&key=yJWrGKmbSi6S

On 4/29/13 10:51 AM, Taras Glek wrote:

* How to robustly write/update small datasets?

#3 above is it for small datasets. The correct way to do this is to
write blobs of JSON to disk. End of discussion.

Writes of data <= ~64K should just be implemented as atomic whole-file
read/write operations. Those are almost always single blocks on disk.

Writing a whole file at once eliminates risk of data corruption.
Incremental updates are what makes sqlite do the WAL/fsync/etc dance
that causes much of the slowness.

We invested a year worth of engineering effort into a pure-js IO library
to facilitate efficient application-level IO. See OS.File docs, eg
https://developer.mozilla.org/en-US/docs/JavaScript_OS.File/OS.File_for_the_main_thread


As you can see from above examples, manual IO is not scary

If one is into convenience APIs, one can create arbitrary json-storage
abstractions in ~10 lines of code.

* What about writes > 64K?
Compression gives you 5-10x reduction of json.
https://bugzilla.mozilla.org/show_bug.cgi?id=846410
Compression also means that your read-throughput is up to 5x better too.


* What about fsync-less writes?
Many log-type performance-sensitive data-storage operations are ok with
lossy appends. By lossy I mean "data will be lost if there is a power
outage within a few seconds/minutes of write", consistency is still
important. For this one should create a directory and write out log
entries as checksummed individual files...but one should really use
compression (and get checksums for free).
https://bugzilla.mozilla.org/show_bug.cgi?id=846410 is about
facilitating such an API.

Use-cases here: telemetry saved-sessions, FHR session-statistics.

* What about large datasets?
These should be decided on a case-by-case basis. Universal solutions
will always perform poorly in some dimension.

* What about indexeddb?
IDB is overkill for simple storage needs. It is a restrictive wrapper
over an SQLite schema. Perhaps some large dataset (eg an addressbook) is
a good fit for it. IDB supports filehandles to do raw IO, but that still
requires sqlite to bootstrap, doesn't support compression, etc.
IDB also makes sense as a transitional API for web due to the need to
move away from DOM Local Storage...

* Why isn't there a convenience API for all of the above recommendations?
Because speculatively landing APIs that anticipate future consumers is
risky, results in over-engineering and unpleasant surprises...So give us
use-cases and we (i.e. Yoric) will make them efficient.

Taras




Re: DOM Bindings Meeting - Monday @ 12:30 PM PDT

2013-04-29 Thread Boris Zbarsky

On 4/29/13 11:05 AM, Kyle Huey wrote:

Our (ostensibly) weekly DOM bindings meetings continue on Monday April 29th
at 12:30 PM PDT.


Per discussion of the meeting, here's a strawman plan for converting 
Window to WebIDL: https://etherpad.mozilla.org/WebIDL-Window


Please add anything I missed that needs to happen there?

-Boris



MemShrink meeting: Tuesday April 30, 2013 @ 2:00pm PDT

2013-04-29 Thread Jet Villegas
Note new meeting time: Tuesday, 30 April 2013, 14:00:00 PDT
Note new Vidyo room: MTV 3V Very Good Very Mighty

The next MemShrink meeting will be brought to you by  properly 
discarding memory:
https://bugzilla.mozilla.org/show_bug.cgi?id=854799

The wiki page for this meeting is at:

   https://wiki.mozilla.org/Performance/MemShrink

Agenda:
* Prioritize unprioritized MemShrink bugs.
* Discuss how we measure progress.
* Discuss approaches to getting more data.

Meeting details:

* Tue, 30 April 2013, 2:00 PM PDT
* Vidyo: MTV 3V Very Good Very Mighty
* MTV: Very Good Very Mighty, Mountain View office, 3rd floor.
* SF: Independent, San Francisco office, 7th floor.
* Dial-in Info:
   - In office or soft phone: extension 92
   - US/INTL: 650-903-0800 or 650-215-1282 then extension 92
   - Toll-free: 800-707-2533 then password 369
   - Conference num 95309

--Jet


Re: Increasing our Aurora and Nightly user populations?

2013-04-29 Thread Johnathan Nightingale
On Apr 29, 2013, at 3:07 PM, Chris Peterson wrote:

> (Sorry if this is not the right forum for this discussion.)

dev-planning feels right-er here - I suggest follow ups take it thattaway. 

J

PS - original post, for dev-planning context:

> To increase our testing populations, I'd like to suggest that we add a 
> periodic "channel upsell" message to the about:home page (of Aurora and Beta) 
> and the about box (of Aurora, Beta, and possibly Release).
> 
> Beta and Aurora users have already opted in to a testing channel, so we know 
> they are receptive to beta-quality software. If they have set Beta or Aurora 
> as their default browser and/or have been using it consistently for a few 
> weeks, we could try to upsell Beta users to Aurora and Aurora users to 
> Nightly.
> 
> We did something similar with Firefox for Android. After a certain number of 
> browser launches, we display a one-time message suggesting they write a 
> Google Play store review if they like Firefox or contact Firefox Support if 
> they have problems.
> 
> Similarly, users who check their browser's about box are clearly interested 
> in version numbers or checking for the latest updates. These users might be 
> receptive to a channel upsell message to get an even bigger version number. :)


---
Johnathan Nightingale
VP Firefox Engineering
@johnath



Re: Increasing our Aurora and Nightly user populations?

2013-04-29 Thread Alex Keybl
A lot of this has to do with population diversity (as opposed to size) and 3rd 
party plugins/add-ons only targeting Beta/Release.

-Alex

On Apr 29, 2013, at 12:26 PM, Kyle Huey  wrote:

> On Mon, Apr 29, 2013 at 12:17 PM, Taras Glek  wrote:
> 
>> What is the problem with current population size? We are not as small as
>> Android on desktop channels :)
> 
> 
> We routinely see serious bugs first on Beta or Release ...
> 
> - Kyle



Re: Increasing our Aurora and Nightly user populations?

2013-04-29 Thread Kyle Huey
On Mon, Apr 29, 2013 at 12:17 PM, Taras Glek  wrote:

> What is the problem with current population size? We are not as small as
> Android on desktop channels :)


We routinely see serious bugs first on Beta or Release ...

- Kyle


Re: Increasing our Aurora and Nightly user populations?

2013-04-29 Thread Taras Glek
What is the problem with current population size? We are not as small as 
Android on desktop channels :)


Chris Peterson wrote:

(Sorry if this is not the right forum for this discussion.)

To increase our testing populations, I'd like to suggest that we add a
periodic "channel upsell" message to the about:home page (of Aurora and
Beta) and the about box (of Aurora, Beta, and possibly Release).

Beta and Aurora users have already opted in to a testing channel, so we
know they are receptive to beta-quality software. If they have set Beta
or Aurora as their default browser and/or have been using it
consistently for a few weeks, we could try to upsell Beta users to
Aurora and Aurora users to Nightly.

We did something similar with Firefox for Android. After a certain
number of browser launches, we display a one-time message suggesting
they write a Google Play store review if they like Firefox or contact
Firefox Support if they have problems.

Similarly, users who check their browser's about box are clearly
interested in version numbers or checking for the latest updates. These
users might be receptive to a channel upsell message to get an even
bigger version number. :)


chris p.



Increasing our Aurora and Nightly user populations?

2013-04-29 Thread Chris Peterson

(Sorry if this is not the right forum for this discussion.)

To increase our testing populations, I'd like to suggest that we add a 
periodic "channel upsell" message to the about:home page (of Aurora and 
Beta) and the about box (of Aurora, Beta, and possibly Release).


Beta and Aurora users have already opted in to a testing channel, so we 
know they are receptive to beta-quality software. If they have set Beta 
or Aurora as their default browser and/or have been using it 
consistently for a few weeks, we could try to upsell Beta users to 
Aurora and Aurora users to Nightly.


We did something similar with Firefox for Android. After a certain 
number of browser launches, we display a one-time message suggesting 
they write a Google Play store review if they like Firefox or contact 
Firefox Support if they have problems.


Similarly, users who check their browser's about box are clearly 
interested in version numbers or checking for the latest updates. These 
users might be receptive to a channel upsell message to get an even 
bigger version number. :)



chris p.


Re: Storage in Gecko

2013-04-29 Thread Joshua Cranmer 🐧

On 4/26/2013 1:17 PM, Gregory Szorc wrote:

I'd like to start a discussion about the state of storage in Gecko.

Currently when you are writing a feature that needs to store data, you
have roughly 3 choices:

1) Preferences
2) SQLite
3) Manual file I/O


One of the ongoing tasks I dabbled in was replacing the message folder 
cache in Thunderbird with a sane database backend (bug 418551). It's 
currently implemented in mork, but it has a very simple database 
structure, basically a map of folder URLs -> property name -> string or 
integer. It's also a potential hot path in startup, so I actually took 
the time to run it through some tests, using traces of actual execution 
for my profile to benchmark. When I first compared SQLite to mork, I 
never got satisfactory results, so I didn't pursue it; but a LevelDB 
implementation (it was the hotness when I decided to run a test) was 
soundly whooped by mork--by a factor of 8 or so. It's really telling that 
LevelDB at -O3 was beaten even by mork at -O0 (a factor of 2). I didn't try 
IndexedDB because the API to access the cache in Thunderbird is 
inherently synchronous [1] and it's much more pain than it's worth to make


I've come to the conclusion that any database-y solution for a 
comparatively small key-value or key-key-value store (my test is 
basically a 900 x 10 key store, with around 9000 calls to get/set) is a 
performance nightmare. This is the sort of thing that basically needs to 
be prefetched into memory and stay resident there forever. I haven't 
profiled Taras's suggestion of just using a JSON file written atomically 
(the synchronous API notwithstanding, it's probably safe to allow for 
async write attempts); since I still have all the test data and scripts 
around, I might just try that.


[1] From the point of view of consumers, the API boils down to "get this 
value [to immediately display in the UI]" and "set this value". Since 
mork is pretty much a giant hashtable in memory, it has very fast access 
and making everything go async would both greatly increase complexity 
and probably also slow down a lot of code.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist



Re: Storage in Gecko

2013-04-29 Thread Taras Glek



Andreas Gal wrote:

Preferences are as the name implies intended for preferences. There is no sane 
use case for storing data in preferences. I would give any patch I come across 
doing that an automatic sr- for poor taste and general insanity.

SQLite is definitely not cheap, and we should look at more suitable backends 
for our storage needs, but done right, off the main thread, it's definitely the 
saner way to go than (1).

While (2) is a foot-gun, (3) is a guaranteed foot-nuke. While it's easy to use 
SQLite wrong, it's almost guaranteed that you'll get your own atomic storage 
file use wrong, across our N platforms.


3) is not a foot-nuke. Atomic file I/O is how we do prefs, among other things. 
We provide a writeAtomic API in OS.File. So long as your 'I/O 
transactions' are within a single file, writeAtomic is your friend. If 
one needs I/O transactions across files, then one is in trouble indeed, 
but that is not the case for 99% of our code.


A big advantage of (3) is that one does not pay the abstraction penalty of 
heavier SQLite or IndexedDB solutions. The cost of a write + fsync + 
follow-up read is easy to reason about.


One can also layer compression/checksums on top of atomic file I/O 
easily. This is hard with a more complex storage layer (e.g. we can't 
compress our SQLite Places database even though that'd be a nice win).




Chrome is working on replacing sqlite with leveldb for indexeddb and most their 
storage needs. Last time we looked it wasn't ready for prime time. Maybe it is 
now. This might be the best option.


leveldb sounds nice, but the level of complexity there is overkill for 
most of our use cases. Filesystems work well; complex abstractions on top 
of them tend to be flaky.



Taras

ps. sorry for chiming in late. I was away doing outdoorsy vacation stuff 
Thurs-Sun.





Andreas

On Apr 26, 2013, at 11:17 AM, Gregory Szorc  wrote:


I'd like to start a discussion about the state of storage in Gecko.

Currently when you are writing a feature that needs to store data, you
have roughly 3 choices:

1) Preferences
2) SQLite
3) Manual file I/O

Preferences are arguably the easiest. However, they have a number of
setbacks:

a) Poor durability guarantees. See bugs 864537 and 849947 for real-life
issues. tl;dr writes get dropped!
b) Integers are limited to 32 bits (JS dates overflow, because they're
milliseconds since the Unix epoch).
c) I/O is synchronous.
d) The whole method for saving them to disk is kind of weird.
e) The API is awkward. See Preferences.jsm for what I'd consider a
better API.
f) Doesn't scale for non-trivial data sets.
g) Clutters about:config (all preferences aren't config options).
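
Point (b) above is easy to demonstrate: milliseconds since the Unix epoch 
stopped fitting in a signed 32-bit integer less than a month into 1970, so 
any current Date.now() value overflows an integer pref:

```javascript
// The largest value a signed 32-bit integer pref can hold...
const INT32_MAX = 2 ** 31 - 1; // 2147483647

// ...is already far smaller than any modern timestamp.
console.log(Date.now() > INT32_MAX); // true

// Interpreted as milliseconds since the epoch, INT32_MAX lands in
// late January 1970, which is why timestamps end up stuffed into
// string prefs instead.
console.log(new Date(INT32_MAX).toISOString()); // 1970-01-25T20:31:23.647Z
```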

We have SQLite. You want durability: it's your answer. However, it too
has setbacks:

a) It eats I/O operations for breakfast. Multiple threads. Lots of
overhead compared to prefs. (But hard to lose data.)
b) By default it's not configured for optimal performance (you need to
enable the WAL and muck around with other PRAGMAs).
c) Poor schemas can lead to poor performance.
d) It's often overkill.
e) Storage API has many footguns (use Sqlite.jsm to protect yourself).
f) Lots of effort to do right. Auditing code from 3rd-party extensions
using SQLite shows many of them aren't doing it right.

And if one of those pre-built solutions doesn't offer what you need, you
can roll your own with file I/O. But that also has setbacks:

a) You need to roll your own. (How often do I flush? Do I use many small
files or fewer large files? Different considerations for mobile (slow
I/O) vs desktop?)
b) You need to roll your own. (Listing it twice because it's *really*
annoying, especially for casual developers that just want to implement
features - think add-on developers.)
c) Easy to do wrong (excessive flushing/fsyncing, too many I/O
operations, inefficient appends, poor choices for mobile, etc).
d) Wheel reinvention. Atomic operations/transactions. Data marshaling. etc.

I believe there is a massive gap between the
easy-but-not-ready-for-prime-time preferences and
the-massive-hammer-solving-the-problem-you-don't-have-and-introducing-many-new-ones
SQLite. Because this gap is full of unknowns, I'm arguing that
developers tend to avoid it and use one of the extremes instead. And,
the result is features that have poor durability and/or poor
performance. Not good. What's worse is many developers (including
myself) are ignorant of many of these pitfalls. Yes, we have code review
for core features. But code review isn't perfect and add-ons likely
aren't subjected to the same level of scrutiny. The end result is the
same: Firefox isn't as awesome as it could be.

I think there is an opportunity for Gecko to step in and provide a
storage subsystem that is easy to use, somewhere between preferences and
SQLite in terms of durability and performance, and "just works." I don't
think it matters how it is implemented under the hood. If this were to
be built on top of SQLite, I think that would be fine. But, please don't
make consumers worry about things like SQL, schema de

Re: Storage in Gecko

2013-04-29 Thread Boris Zbarsky

On 4/29/13 1:57 PM, Andrew McCreight wrote:

A WebIDL callback interface thing could probably be set up to make calling from 
C++ into JS less awful, if somebody has a need.


Sort of.  IndexedDB keys are done on raw JS values ("any" in the IDL) 
because they want to tell apart Arrays from other objects, want to tell 
apart the number 5 and the string "5", etc.


-Boris


Re: Storage in Gecko

2013-04-29 Thread Andrew McCreight
- Original Message -
> On Apr 27, 9:37 am, Mounir Lamouri  wrote:
> > Why? Wouldn't be the idea of such component to make sure it is
> > usable
> > from C++?
> 
> Perhaps some day, but IndexedDB was always designed with JS in mind.
> To use it you pass special JS dictionaries for options, clone things
> to/from JS objects, etc. Using it from C++ is not a pleasant
> experience and requires lots of JSAPI.

A WebIDL callback interface thing could probably be set up to make calling from 
C++ into JS less awful, if somebody has a need.

Andrew

> 
> We could implement a C++ API that reuses all the transactions and
> threading and such but so far no one has been breaking down our door
> asking for it.
> 
> -bent


Re: Storage in Gecko

2013-04-29 Thread Taras Glek

So there is no general 'good for performance' way of doing I/O.

However, I think most people who need this need to write small bits of 
data, and there is a good way to do that.


Gregory Szorc wrote:

I'd like to start a discussion about the state of storage in Gecko.

Currently when you are writing a feature that needs to store data, you
have roughly 3 choices:

1) Preferences
2) SQLite
3) Manual file I/O


* How to robustly write/update small datasets?

#3 above is it for small datasets. The correct way to do this is to 
write blobs of JSON to disk. End of discussion.


Writes of data <= ~64K should just be implemented as atomic whole-file 
read/write operations. Those are almost always single blocks on disk.


Writing a whole file at once eliminates risk of data corruption. 
Incremental updates are what makes sqlite do the WAL/fsync/etc dance 
that causes much of the slowness.


We invested a year worth of engineering effort into a pure-js IO library 
to facilitate efficient application-level IO. See OS.File docs, eg 
https://developer.mozilla.org/en-US/docs/JavaScript_OS.File/OS.File_for_the_main_thread


As you can see from the above examples, manual I/O is not scary.

If one is into convenience APIs, one can create arbitrary JSON-storage 
abstractions in ~10 lines of code.


* What about writes > 64K?
Compression gives you a 5-10x reduction on JSON. 
https://bugzilla.mozilla.org/show_bug.cgi?id=846410

Compression also means that your read throughput is up to 5x better, too.


* What about fsync-less writes?
Many log-type performance-sensitive data-storage operations are OK with 
lossy appends. By lossy I mean "data will be lost if there is a power 
outage within a few seconds/minutes of the write"; consistency is still 
important. For this one should create a directory and write out log 
entries as checksummed individual files...but one should really use 
compression (and get checksums for free).
https://bugzilla.mozilla.org/show_bug.cgi?id=846410 is about 
facilitating such an API.


Use-cases here: telemetry saved-sessions, FHR session-statistics.

* What about large datasets?
These should be decided on a case-by-case basis. Universal solutions 
will always perform poorly in some dimension.


* What about indexeddb?
IDB is overkill for simple storage needs. It is a restrictive wrapper 
over an SQLite schema. Perhaps some large dataset (eg an addressbook) is 
a good fit for it. IDB supports filehandles to do raw IO, but that still 
requires sqlite to bootstrap, doesn't support compression, etc.
IDB also makes sense as a transitional API for web due to the need to 
move away from DOM Local Storage...


* Why isn't there a convenience API for all of the above recommendations?
Because speculatively landing APIs that anticipate future consumers is 
risky and results in over-engineering and unpleasant surprises... So give us 
use cases and we (i.e. Yoric) will make them efficient.


Taras


Fallibility of NS_DispatchTo[Current|Main]Thread

2013-04-29 Thread Kyle Huey
Is it feasible to make these functions infallible?  What work would need to
be done?

- Kyle


Re: Storage in Gecko

2013-04-29 Thread bent
On Apr 27, 9:37 am, Mounir Lamouri  wrote:
> Why? Wouldn't be the idea of such component to make sure it is usable
> from C++?

Perhaps some day, but IndexedDB was always designed with JS in mind.
To use it you pass special JS dictionaries for options, clone things
to/from JS objects, etc. Using it from C++ is not a pleasant
experience and requires lots of JSAPI.

We could implement a C++ API that reuses all the transactions and
threading and such but so far no one has been breaking down our door
asking for it.

-bent


DOM Bindings Meeting - Monday @ 12:30 PM PDT

2013-04-29 Thread Kyle Huey
Our (ostensibly) weekly DOM bindings meetings continue on Monday April 29th
at 12:30 PM PDT.

Meeting details:

* Monday, April 29, 2013, 12:30 PM PDT (3:30 PM EDT/9:30 PM CEST)
* Dial-in Info:
 - Vidyo room: Boris Zbarsky
 - In office or soft phone: extension 92
 - US/INTL: 650-903-0800 or 650-215-1282 then extension 92
 - Toll-free: 800-707-2533 then password 369
 - Conference number 9235


WebAPI Meeting: Tuesday 30 April @ 10 AM Pacific [1]

2013-04-29 Thread Andrew Overholt

Meeting Details:

* Agenda: https://wiki.mozilla.org/WebAPI/2013-04-30
* WebAPI Vidyo room
* Amoeba conf. room, San Francisco office (7A)
* Spadina conf. room, Toronto office
* Allo Allo conf. room, London office

* Vidyo Phone # +1-650-903-0800 x92 Conference #98413 (US/INTL)
* US Vidyo Phone # 1-800-707-2533 (PIN 369) Conference #98413 (US)

* Join irc.mozilla.org #webapi for back channel

Notes will be taken on etherpad:

https://etherpad.mozilla.org/webapi-meetingnotes

All are welcome!

Andrew

[1]
http://www.timeanddate.com/worldclock/fixedtime.html?msg=WebAPI+meeting&iso=20130430T10&p1=224&am=30 




Re: Proposal for an inbound2 branch

2013-04-29 Thread Benjamin Smedberg

On 4/29/2013 9:32 AM, Boris Zbarsky wrote:

On 4/26/13 6:26 PM, Nicholas Nethercote wrote:

If I have a patch ready to land when inbound closes, what would be the
sequence of steps that I need to do to land it on inbound2?


The following steps assume that the default and default-push for your 
repo are both appropriate versions of inbound and that "i2" and 
"i2-push" are the appropriate versions of inbound2 (you can create 
such aliases in your .hgrc).  They also assume that since inbound just 
closed there is bustage on inbound that is not going to be present on 
inbound2 and that you have already pulled this bustage into your tree.


1)  Make sure your patch-to-land is tracked by mq
2)  hg qpop -a
3)  hg strip "roots(outgoing('i2'))"
4)  hg pull -u i2
5)  hg qpush your patches
6)  hg qfin -a && hg push i2-push
The strip really isn't necessary, as long as you're ok with your tree 
having multiple heads for a bit.


1) Make sure your patches are tracked by mq
2) hg qpop -a
3) hg pull inbound2
4) hg up -c tip # tip is what you just pulled from inbound2. The update 
crosses branches, but -c says that's ok

5) hg qpush # your patches
6) hg qfin -a && hg push -r. inbound2 # note -r. is necessary here so 
you're only pushing your inbound2 head, not your inbound head


I use a setup like this to push to the release branches, keeping all of 
aurora/beta/release as multiple heads of one tree.


--BDS



Re: multiline pref value strings in user.js

2013-04-29 Thread Benjamin Smedberg

On 4/27/2013 6:26 PM, al...@yahoo.com wrote:

... appear to be possible without the \ escape.  In fact the \ can not be
present as it does not escape the newline but becomes a part of the string.
Once upon a time, prefs files were actually JS: we created functions 
"pref" and "user_pref" and then evaluated the file within the JS engine.


This was slow, and so in 2003 we changed it to use a mostly-compatible 
parser that could consume the existing syntax of most pref.js files (bug 
98533).


The format is now "whatever prefread.cpp accepts".

--BDS



Re: Proposal for an inbound2 branch

2013-04-29 Thread Boris Zbarsky

On 4/26/13 6:26 PM, Nicholas Nethercote wrote:

If I have a patch ready to land when inbound closes, what would be the
sequence of steps that I need to do to land it on inbound2?


The following steps assume that the default and default-push for your 
repo are both appropriate versions of inbound and that "i2" and 
"i2-push" are the appropriate versions of inbound2 (you can create such 
aliases in your .hgrc).  They also assume that since inbound just closed 
there is bustage on inbound that is not going to be present on inbound2 
and that you have already pulled this bustage into your tree.
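
The aliases mentioned above might look something like this in your .hgrc 
(the inbound2 URLs are hypothetical, since the repo doesn't exist yet):

```ini
# Hypothetical [paths] section; substitute the real inbound2 URLs
# once the repo exists.
[paths]
default = https://hg.mozilla.org/integration/mozilla-inbound
default-push = ssh://hg.mozilla.org/integration/mozilla-inbound
i2 = https://hg.mozilla.org/integration/inbound2
i2-push = ssh://hg.mozilla.org/integration/inbound2
```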


1)  Make sure your patch-to-land is tracked by mq
2)  hg qpop -a
3)  hg strip "roots(outgoing('i2'))"
4)  hg pull -u i2
5)  hg qpush your patches
6)  hg qfin -a && hg push i2-push

Step 3 is slightly annoying because you have to quote the parens from 
the shell.


In any case, the upshot of the above steps is that after step 4 your 
tree is effectively a clone of inbound2, and then you just push to it 
normally.


If your patches are not tracked by mq and you don't want them to be thus 
tracked then things are more complicated: as far as I can tell you want 
to pull into the same tree from inbound2, transplant your patch on top 
of the tip of inbound2, strip out the bits of inbound that shouldn't be 
landing, then push.


-Boris