Re: Intent to ship: block audible autoplay media intervention

2018-07-20 Thread Chris Pearce
On Wednesday, July 4, 2018 at 11:38:21 AM UTC+12, Chris Pearce wrote:
> Intent to ship: block audible autoplay media intervention
> 
> SUMMARY:
> 
> We intend to change the behaviour of HTMLMediaElement to block autoplay of 
> audible audio and video in Firefox on desktop and mobile.
> 
> We are not going to block WebAudio at the same time. While we do plan to 
> block audible autoplay of WebAudio content in the near future, we have not 
> finalized our WebAudio blocking logic or intended ship date for blocking 
> WebAudio.
> 
> 
> TIMELINE:
> 
> We intend to run shield studies on the user impact of enabling 
> HTMLMediaElement autoplay blocking. If those go well we hope to ship in 
> Firefox 63 (2018-10-23) or Firefox 64 (2018-12-11). Upon conclusion of our 
> experiments, I’ll follow up here with a confirmed ship date for this feature.
> 
> We hope to block autoplay in WebAudio in a release soon after, hopefully 
> Firefox 64 or 65.
> 
> 
> DETAILS:
> 
> We intend to block autoplay of HTMLMediaElement in tabs which haven't had 
> user interaction. Web authors should assume that a user gesture (for example, 
> a mouse click on a "play" button) is required in order to play audible media.
> 
> HTMLMediaElements with a "muted" attribute or "volume=0" are still allowed to 
> play.
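> 
> For example, muted playback can begin without a gesture and be unmuted once 
> the user interacts (a minimal sketch; the element IDs are placeholders):
> 
> var video = document.getElementById('v');
> video.muted = true;  // muted playback is not blocked
> video.play();
> 
> document.getElementById('unmute').addEventListener('click', () => {
>   video.muted = false;  // unmute in response to a user gesture
> });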
> 
> As with other browsers implementing this feature, we signal that playback has 
> been blocked by rejecting the promise returned by HTMLMediaElement.play(). Web 
> authors should always check whether the promise returned by 
> HTMLMediaElement.play() is rejected, and handle that case accordingly.
> 
> We also plan to allow users to create their own whitelist of sites which they 
> trust to autoplay.
> 
> We are planning to experiment via shield studies with prompting users to 
> approve/block playback on sites that try to autoplay before receiving user 
> interaction.
> 
> 
> ADVICE FOR WEB AUTHORS:
> 
> In general, the advice that applies to other browsers [1][2] with respect to 
> autoplaying media will apply to Firefox as well; you cannot assume that you 
> can just call HTMLMediaElement.play() for audible media and expect it to 
> always play. You should always check whether the play promise is rejected, 
> and handle that case accordingly.
> 
> For example:
> 
> var promise = document.querySelector('video').play();
> 
> if (promise !== undefined) {
>   promise.then(() => {
>     // Auto-play started
>   }).catch(error => {
>     // Auto-play was prevented
>     // Show a UI element to let the user manually start playback
>   });
> }
> 
> (This example is adapted from WebKit’s announcement on blocking autoplay [2])
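> 
> Equivalently, with async/await (a sketch of the same check):
> 
> async function tryToPlay(video) {
>   try {
>     await video.play();
>     // Auto-play started
>   } catch (error) {
>     // Auto-play was prevented; show a UI element to let the user start
>     // playback manually
>   }
> }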
> 
> To test block autoplay in Firefox 63 (currently in the Nightly channel), 
> download the latest Nightly, open about:config in the URL bar, and set the 
> following preferences:
> 
> media.autoplay.enabled=false
> media.autoplay.enabled.user-gestures-needed=true
> media.autoplay.ask-permission=true
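> 
> If you prefer a file-based setup, the same settings can go in a user.js file 
> in your profile directory (a sketch using the preference names above):
> 
> user_pref("media.autoplay.enabled", false);
> user_pref("media.autoplay.enabled.user-gestures-needed", true);
> user_pref("media.autoplay.ask-permission", true);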
> 
> 
> Tracking bug: (block-autoplay) 
> https://bugzilla.mozilla.org/show_bug.cgi?id=1376321
> 
> 
> If you find bugs, please file them via this link and CC or need-info me 
> (cpearce), and mark them blocking bug 1376321:
> https://bugzilla.mozilla.org/enter_bug.cgi?product=Core&component=Audio%2FVideo%3A%20Playback
> 
> 
> I will follow up with a confirmed ship date for block audible autoplay in 
> Firefox once we have one.
> 
> 
> 
> [1] https://developers.google.com/web/updates/2017/09/autoplay-policy-changes
> [2] https://webkit.org/blog/7734/auto-play-policy-changes-for-macos/


Block autoplay is now enabled by default in Nightly (only).

If you find bugs, please file them in "Core > Audio/Video > Playback" or via 
this link https://mzl.la/2JHYjlF and CC or need-info me (cpearce), and mark 
them blocking bug 1376321.

If you have any feedback on whether you find the feature good, bad, or ugly, 
please email/IRC me, or let the team know in the block_autoplay Mozilla Slack 
channel.


Re: C++ standards proposal for an embedding library

2018-07-20 Thread Randell Jesup
Ted wrote:
>Honestly I think at this point growth of the C++ standard library is an
>anti-feature. The committee should figure out how to get modules specified
>(which I understand is a difficult thing, I'm not trying to minimize the
>work there) so that tooling can be built to provide a first-class module
>ecosystem for C++ like Rust and other languages have. The language should
>provide a better means for extensibility and code reuse so that the
>standard library doesn't have to solve everyone's problems.

(and various others wrote things along the same lines, or raised similar
concerns)

I'm strongly in agreement.  This is not going to help people in the long
run, and may well be, as Ted puts it, an anti-feature that causes all
sorts of problems we've chipped away at to rear their heads again (ossified
unsafe impls, lack of any improvements, etc.).

A good module system would be a much more useful and liberating thing to pursue.

-- 
Randell Jesup, Mozilla Corp
remove "news" for personal email


Re: PSA: Major preference service architecture changes inbound

2018-07-20 Thread Jeff Gilbert
I have filed bug 1477436: "Preferences::Get* is not threadsafe/is main
thread only"
https://bugzilla.mozilla.org/show_bug.cgi?id=1477436

On Fri, Jul 20, 2018 at 11:36 AM, Eric Rahm  wrote:
> We *could* special-case prefs with an appropriate data structure that works
> in a thread-safe manner; as far as RWLocks go, we do have one in tree [1].
> This has gone off the rails a bit from Kris' original announcement, which
> I'll reiterate: watch out for prefs-related bustage.
>
> Jeff, would you mind filing a bug for further discussion of off-main-thread
> access as a future improvement?
>
> [1] https://searchfox.org/mozilla-central/source/xpcom/threads/RWLock.h
>
> On Thu, Jul 19, 2018 at 7:25 PM, Kris Maglione 
> wrote:
>>
>> On Thu, Jul 19, 2018 at 07:17:13PM -0700, Jeff Gilbert wrote:
>>>
>>> Using a classic read/write exclusive lock, we would only ever contend
>>> on read+write or write+write, which are /rare/.
>>
>>
>> That might be true if we gave up on the idea of switching to Robin Hood
>> hashing. But if we don't, then every lookup is a potential write, which
>> means every lookup requires a write lock.
>>
>> We also don't really have any good APIs for rwlocks at the moment. Which,
>> admittedly, is a solvable problem.
>>
>>
>>> On Thu, Jul 19, 2018 at 2:19 PM, Kris Maglione 
>>> wrote:

 On Tue, Jul 17, 2018 at 03:49:41PM -0700, Jeff Gilbert wrote:
>
>
> We should totally be able to afford the very low cost of a
> rarely-contended lock. What's going on that causes uncached pref reads
> to show up so hot in profiles? Do we have a list of problematic pref
> keys?



 So, at the moment, we read about 10,000 preferences at startup in debug
 builds. That number is probably slightly lower in non-debug builds, but we
 don't collect stats there. We're working on reducing that number (which is
 why we collect statistics in the first place), but for now, it's still quite
 high.


 As for the cost of locks... On my machine, in a tight loop, the cost of
 entering and exiting a MutexAutoLock is about 37ns. This is pretty close to
 ideal circumstances, on a single core of a very fast CPU, with very fast
 RAM, everything cached, and no contention. If we could extrapolate that to
 normal usage, it would be about a third of a ms of additional overhead for
 startup. I've fought hard enough for 1ms startup time improvements, but
 *shrug*, if it were that simple, it might be acceptable.

 But I have no reason to think the lock would be rarely contended. We read
 preferences *a lot*, and if we allowed access from background threads, I
 have no doubt that we would start reading them a lot from background
 threads in addition to reading them a lot from the main thread.

 And that would mean, in addition to lock contention, cache contention and
 potentially even NUMA issues. Those last two apply to atomic var caches
 too, but at least they generally apply only to the specific var caches
 being accessed off-thread, rather than pref look-ups in general.


 Maybe we could get away with it at first, as long as off-thread usage
 remains low. But long term, I think it would be a performance foot-gun.
 And, paradoxically, the less foot-gunny it is, the less useful it probably
 is, too. If we're only using it off-thread in a few places, and don't have
 to worry about contention, why are we bothering with locking and
 off-thread access in the first place?


> On Tue, Jul 17, 2018 at 8:57 AM, Kris Maglione 
> wrote:
>>
>>
>> On Tue, Jul 17, 2018 at 02:06:48PM +0100, Jonathan Kew wrote:
>>>
>>>
>>>
>>> On 13/07/2018 21:37, Kris Maglione wrote:



 tl;dr: A major change to the architecture of the preference service has just
 landed, so please be on the lookout for regressions.

 We've been working for the last few weeks on rearchitecting the preference
 service to work better in our current and future multi-process
 configurations, and those changes have just landed in bug 1471025.
>>>
>>>
>>>
>>>
>>> Looks like a great step forward!
>>>
>>> While we're thinking about the prefs service, is there any
>>> possibility
>>> we
>>> could enable off-main-thread access to preferences?
>>
>>
>>
>>
>> I think the chances of that are pretty close to 0, but I'll defer to
>> Nick.
>>
>> We definitely can't afford the locking overhead—preference look-ups
>> already
>> show up in profiles without it. And even the current limited exception
>> that
>> we grant Stylo while it has the main thread blocked causes problems
>> (bug
>> 1474789), since it makes it impossible to update statistics for 

Re: PSA: Major preference service architecture changes inbound

2018-07-20 Thread Johann Hofmann
Since I have seen several people point out in this thread that there's
*probably* code that excessively accesses prefs:

You can easily assert the names and number of different prefs that are read
during whatever scenario you'd like to exercise by adding to this test (or
writing your own version of it):
https://searchfox.org/mozilla-central/source/browser/base/content/test/performance/browser_preferences_usage.js

It currently only covers a few basic scenarios, like opening a bunch of tabs
and windows, and I'd be happy to mentor/review additions that cover whatever
code you are maintaining (even if you don't discover anything, it's good to
prevent regressions from creeping in).
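
For illustration, a new task might look roughly like the sketch below. The
getPrefReadCounts() helper is a placeholder for whatever stats hook the test
actually uses, and the threshold is invented; add_task and
BrowserTestUtils.withNewTab are the usual mochitest-browser APIs.

add_task(async function test_my_feature_pref_usage() {
  // Placeholder for however the test collects per-pref read counts
  // (a Map of pref name -> read count).
  let before = getPrefReadCounts();

  await BrowserTestUtils.withNewTab("https://example.com/", async browser => {
    // Exercise the code you maintain here (open a panel, load a page, etc.).
  });

  let after = getPrefReadCounts();
  for (let [name, count] of after) {
    let delta = count - (before.get(name) || 0);
    // Invented threshold; pick whatever is sane for your scenario.
    ok(delta <= 10, `${name} was read ${delta} times during the scenario`);
  }
});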

Cheers,

Johann

On Fri, Jul 20, 2018 at 10:17 PM Kris Maglione 
wrote:

> On Fri, Jul 20, 2018 at 11:53:22AM -0400, Ehsan Akhgari wrote:
> >On Fri, Jul 20, 2018 at 5:00 AM, Jonathan Kew  wrote:
> >> +1 to that. Our need for OMT access to prefs is only likely to grow,
> IMO,
> >> and we should just fix it once, so that any code (regardless of which
> >> thread(s) it may eventually run on) can trivially read prefs.
> >>
> >> Even if that means we can't adopt Robin Hood hashing, I think the
> >> trade-off would be well worthwhile.
> >>
> >
> >While it's true that the need for reading more prefs from workers will
> >continue to grow, given the number of prefs we have I think it's safe to
> >say that the set of prefs that we need to access from threads besides the
> >main thread will be a minority of the entire set of prefs.  So one way to
> >have our cake and eat it too would be to separate out the prefs that are
> >meant to be accessible through a worker thread and expose them through an
> >alternate thread-safe API which may be a bit more costly to call on the
> >main thread, and keep the rest of the main-thread-only prefs on the
> existing
> >non-thread-safe APIs.  This won't be as elegant as having one set of APIs
> >to work with, of course.
>
> This is what atomic var caches are for. They can't currently be
> used
> for string preferences, but that's a problem that could be solved with an
> rwlock. They're also a bit difficult to use for preferences which aren't
> known
> at compile time, but we've generally been trying to move away from using
> the
> preference service for such things.
>
> For the sorts of preferences that are generally needed by Worker threads,
> though, they should mostly just work as-is.


Re: PSA: Major preference service architecture changes inbound

2018-07-20 Thread Kris Maglione

On Fri, Jul 20, 2018 at 11:53:22AM -0400, Ehsan Akhgari wrote:

On Fri, Jul 20, 2018 at 5:00 AM, Jonathan Kew  wrote:

+1 to that. Our need for OMT access to prefs is only likely to grow, IMO,
and we should just fix it once, so that any code (regardless of which
thread(s) it may eventually run on) can trivially read prefs.

Even if that means we can't adopt Robin Hood hashing, I think the
trade-off would be well worthwhile.



While it's true that the need for reading more prefs from workers will
continue to grow, given the number of prefs we have I think it's safe to
say that the set of prefs that we need to access from threads besides the
main thread will be a minority of the entire set of prefs.  So one way to
have our cake and eat it too would be to separate out the prefs that are
meant to be accessible through a worker thread and expose them through an
alternate thread-safe API which may be a bit more costly to call on the
main thread, and keep the rest of the main-thread-only prefs on the existing
non-thread-safe APIs.  This won't be as elegant as having one set of APIs
to work with, of course.


This is what atomic var caches are for. They can't currently be used 
for string preferences, but that's a problem that could be solved with an 
rwlock. They're also a bit difficult to use for preferences which aren't known 
at compile time, but we've generally been trying to move away from using the 
preference service for such things.


For the sorts of preferences that are generally needed by Worker threads, 
though, they should mostly just work as-is.



Re: PSA: Major preference service architecture changes inbound

2018-07-20 Thread Kris Maglione

On Fri, Jul 20, 2018 at 10:00:22AM +0100, Jonathan Kew wrote:
+1 to that. Our need for OMT access to prefs is only likely to grow, 
IMO, and we should just fix it once, so that any code (regardless of 
which thread(s) it may eventually run on) can trivially read prefs.


Even if that means we can't adopt Robin Hood hashing, I think the 
trade-off would be well worthwhile.


This is exactly the kind of performance footgun I'm talking about. The marginal 
cost of making something threadsafe may be low, but those costs pile up. The 
cost of locking in hot code often adds up enough on its own. Throwing away an 
important optimization on top of that adds up even faster. Getting into the 
habit, and then continuing the pattern elsewhere starts adding up exponentially.


Threads are great. Threadsafe code is useful. But data that's owned by a single 
thread is still best when you can manage it, and it should be the default 
option whenever we reasonably can. We already pay the price of being 
overzealous about thread safety in other areas (for instance, the fact that all 
of our strings require atomic refcounting, even though DOM strings are generally 
only used by a single thread). I think the trend needs to be in the opposite 
direction.


Rust and JS make this much easier than C++ does, unfortunately. But we're 
getting better at it in C++, and moving more and more code to Rust.



On Thu, Jul 19, 2018 at 2:19 PM, Kris Maglione  wrote:

On Tue, Jul 17, 2018 at 03:49:41PM -0700, Jeff Gilbert wrote:


We should totally be able to afford the very low cost of a
rarely-contended lock. What's going on that causes uncached pref reads
to show up so hot in profiles? Do we have a list of problematic pref
keys?



So, at the moment, we read about 10,000 preferences at startup in debug
builds. That number is probably slightly lower in non-debug builds, but we
don't collect stats there. We're working on reducing that number (which is
why we collect statistics in the first place), but for now, it's still quite
high.


As for the cost of locks... On my machine, in a tight loop, the cost of
entering and exiting a MutexAutoLock is about 37ns. This is pretty close to
ideal circumstances, on a single core of a very fast CPU, with very fast
RAM, everything cached, and no contention. If we could extrapolate that to
normal usage, it would be about a third of a ms of additional overhead for
startup. I've fought hard enough for 1ms startup time improvements, but
*shrug*, if it were that simple, it might be acceptable.

But I have no reason to think the lock would be rarely contended. We read
preferences *a lot*, and if we allowed access from background threads, I
have no doubt that we would start reading them a lot from background threads
in addition to reading them a lot from the main thread.

And that would mean, in addition to lock contention, cache contention and
potentially even NUMA issues. Those last two apply to atomic var caches too,
but at least they generally apply only to the specific var caches being
accessed off-thread, rather than pref look-ups in general.


Maybe we could get away with it at first, as long as off-thread usage
remains low. But long term, I think it would be a performance foot-gun. And,
paradoxically, the less foot-gunny it is, the less useful it probably is,
too. If we're only using it off-thread in a few places, and don't have to
worry about contention, why are we bothering with locking and off-thread
access in the first place?



Re: PSA: Major preference service architecture changes inbound

2018-07-20 Thread Eric Rahm
We *could* special-case prefs with an appropriate data structure that works
in a thread-safe manner; as far as RWLocks go, we do have one in tree [1].
This has gone off the rails a bit from Kris' original announcement, which
I'll reiterate: watch out for prefs-related bustage.

Jeff, would you mind filing a bug for further discussion of off-main-thread
access as a future improvement?

[1] https://searchfox.org/mozilla-central/source/xpcom/threads/RWLock.h

On Thu, Jul 19, 2018 at 7:25 PM, Kris Maglione 
wrote:

> On Thu, Jul 19, 2018 at 07:17:13PM -0700, Jeff Gilbert wrote:
>
>> Using a classic read/write exclusive lock, we would only ever contend
>> on read+write or write+write, which are /rare/.
>>
>
> That might be true if we gave up on the idea of switching to Robin Hood
> hashing. But if we don't, then every lookup is a potential write, which
> means every lookup requires a write lock.
>
> We also don't really have any good APIs for rwlocks at the moment. Which,
> admittedly, is a solvable problem.
>
>
> On Thu, Jul 19, 2018 at 2:19 PM, Kris Maglione 
>> wrote:
>>
>>> On Tue, Jul 17, 2018 at 03:49:41PM -0700, Jeff Gilbert wrote:
>>>

 We should totally be able to afford the very low cost of a
 rarely-contended lock. What's going on that causes uncached pref reads
 to show up so hot in profiles? Do we have a list of problematic pref
 keys?

>>>
>>>
>>> So, at the moment, we read about 10,000 preferences at startup in debug
>>> builds. That number is probably slightly lower in non-debug builds, but
>>> we
>>> don't collect stats there. We're working on reducing that number (which
>>> is
>>> why we collect statistics in the first place), but for now, it's still
>>> quite
>>> high.
>>>
>>>
>>> As for the cost of locks... On my machine, in a tight loop, the cost of
>>> entering and exiting a MutexAutoLock is about 37ns. This is pretty close to
>>> ideal circumstances, on a single core of a very fast CPU, with very fast
>>> RAM, everything cached, and no contention. If we could extrapolate that
>>> to
>>> normal usage, it would be about a third of a ms of additional overhead
>>> for
>>> startup. I've fought hard enough for 1ms startup time improvements, but
>>> *shrug*, if it were that simple, it might be acceptable.
>>>
>>> But I have no reason to think the lock would be rarely contended. We read
>>> preferences *a lot*, and if we allowed access from background threads, I
>>> have no doubt that we would start reading them a lot from background
>>> threads
>>> in addition to reading them a lot from the main thread.
>>>
>>> And that would mean, in addition to lock contention, cache contention and
>>> potentially even NUMA issues. Those last two apply to atomic var caches
>>> too,
>>> but at least they generally apply only to the specific var caches being
>>> accessed off-thread, rather than pref look-ups in general.
>>>
>>>
>>> Maybe we could get away with it at first, as long as off-thread usage
>>> remains low. But long term, I think it would be a performance foot-gun.
>>> And,
>>> paradoxically, the less foot-gunny it is, the less useful it probably is,
>>> too. If we're only using it off-thread in a few places, and don't have to
>>> worry about contention, why are we bothering with locking and off-thread
>>> access in the first place?
>>>
>>>
>>> On Tue, Jul 17, 2018 at 8:57 AM, Kris Maglione 
 wrote:

>
> On Tue, Jul 17, 2018 at 02:06:48PM +0100, Jonathan Kew wrote:
>
>>
>>
>> On 13/07/2018 21:37, Kris Maglione wrote:
>>
>>>
>>>
>>> tl;dr: A major change to the architecture of the preference service has just
>>> landed, so please be on the lookout for regressions.
>>>
>>> We've been working for the last few weeks on rearchitecting the
>>> preference service to work better in our current and future
>>> multi-process
>>> configurations, and those changes have just landed in bug 1471025.
>>>
>>
>>
>>
>> Looks like a great step forward!
>>
>> While we're thinking about the prefs service, is there any possibility
>> we
>> could enable off-main-thread access to preferences?
>>
>
>
>
> I think the chances of that are pretty close to 0, but I'll defer to
> Nick.
>
> We definitely can't afford the locking overhead—preference look-ups
> already
> show up in profiles without it. And even the current limited exception
> that
> we grant Stylo while it has the main thread blocked causes problems
> (bug
> 1474789), since it makes it impossible to update statistics for those
> reads,
> or switch to Robin Hood hashing (which would make our hash tables much
> smaller and more efficient, but requires read operations to be able to
> move
> entries).
>
> I am aware that in simple cases, this can be achieved via the
>> StaticPrefsList; by defining a VARCACHE_PREF there, I can read its
>> value
>> from 

Re: C++ standards proposal for an embedding library

2018-07-20 Thread Jim Blandy
Reading between the lines, it seems like the committee's aim is to take
something that is widely understood and used, broadly capable, and in the
big picture relatively well-defined (i.e. the Web), and incorporate it into
the C++ standard by reference.

The problem is that the *relationship of web content to surrounding native
app code* is none of those things, and I think you could make a case that
it's been undergoing violent churn for years and years.


On Fri, Jul 20, 2018 at 10:04 AM, Botond Ballo  wrote:

> On Thu, Jul 19, 2018 at 5:35 PM, Mike Hommey  wrote:
> > Other than everything that has already been said in this thread,
> > something bugs me with this proposal: a web view is a very UI thing.
> > And I don't think there's any proposal to add more basic UI elements
> > to the standard library.
>
> Not that I'm aware of.
>
> > So even if a web view is a desirable thing in
> > the long term (and I'm not saying it is!), there are way more things
> > that should come first.
>
> I think the idea behind this proposal is that standardizing a UI
> framework for C++ would be too difficult (seeing as we couldn't even
> agree on a 2D graphics proposal, which is an ingredient in a UI
> framework), so the web view fills the role of the UI framework: your
> UI is built inside the web view. (Not saying that's a good idea or a
> bad idea, just trying to explain the line of thinking.)
>
> Cheers,
> Botond


Re: C++ standards proposal for an embedding library

2018-07-20 Thread Botond Ballo
On Thu, Jul 19, 2018 at 5:35 PM, Mike Hommey  wrote:
> Other than everything that has already been said in this thread,
> something bugs me with this proposal: a web view is a very UI thing.
> And I don't think there's any proposal to add more basic UI elements
> to the standard library.

Not that I'm aware of.

> So even if a web view is a desirable thing in
> the long term (and I'm not saying it is!), there are way more things
> that should come first.

I think the idea behind this proposal is that standardizing a UI
framework for C++ would be too difficult (seeing as we couldn't even
agree on a 2D graphics proposal, which is an ingredient in a UI
framework), so the web view fills the role of the UI framework: your
UI is built inside the web view. (Not saying that's a good idea or a
bad idea, just trying to explain the line of thinking.)

Cheers,
Botond


Re: PSA: Major preference service architecture changes inbound

2018-07-20 Thread Jean-Yves Avenard
Hi

I believe that this change may be the cause of 
https://bugzilla.mozilla.org/show_bug.cgi?id=1477254 


That is, the pref value set in all.js no longer overrides the default value set 
in StaticPrefs. The problem occurs mostly with e10s on; when e10s is disabled, 
I see the problem only about 5% of the time.

JY

> On 13 Jul 2018, at 10:37 pm, Kris Maglione  wrote:
> 
> tl;dr: A major change to the architecture of the preference service has just landed, 
> so please be on the lookout for regressions.
> 
> We've been working for the last few weeks on rearchitecting the preference 
> service to work better in our current and future multi-process 
> configurations, and those changes have just landed in bug 1471025.
> 
> Our preference database tends to be very large, even without any user values. 
> It also needs to be available in every process. Until now, that's meant 
> complete separate copies of the hash table, name strings, and value strings 
> in each process, along with separate initialization in each content process, 
> and a lot of IPC overhead to keep the databases in sync.
> 
> After bug 1471025, the database is split into two sections: a snapshot of the 
> initial state of the database, which is stored in a read-only shared memory 
> region and shared by all processes, and a dynamic hash table of changes on 
> top of that snapshot, of which each process has its own. This approach 
> significantly decreases memory, IPC, and content process initialization 
> overhead. It also decreases the complexity of certain cross-process 
> synchronization logic.
> 
> But it adds complexity in other areas, and represents one of the largest 
> changes to the workings of the preference service since its creation. So 
> please be on the lookout for regressions that look related to preference 
> handling. If you spot any, please file bugs blocking 
> https://bugzil.la/1471025.
> 
> Thanks,
> Kris


Re: PSA: Major preference service architecture changes inbound

2018-07-20 Thread Ehsan Akhgari
On Fri, Jul 20, 2018 at 5:00 AM, Jonathan Kew  wrote:

> On 20/07/2018 03:17, Jeff Gilbert wrote:
>
>> Using a classic read/write exclusive lock, we would only ever contend
>> on read+write or write+write, which are /rare/.
>>
>
> Indeed, that's what I envisage we'd want. The -vast- majority of prefs
> access only involves reading values. We should be able to do that from any
> thread without a second thought about either safety or contention.
>
>
>> It's really, really nice when we can have dead-simple threadsafe APIs,
>> instead of requiring people to jump through hoops or roll their own
>> dispatch code. (fragile) IIRC most new APIs added to the web are
>> supposed to be available in Workers, so the need for reading prefs
>> off-main-thread is only set to grow.
>>
>> I don't see how this can mutate into a foot-gun in ways that aren't
>> already the case today without off-main-thread access.
>>
>> Anyway, I appreciate the work that's been done and is ongoing here. As
>> you burn down the pref accesses in start-up, please consider
>> unblocking this feature request. (Personally I'd just eat the 400us in
>> exchange for this simplifying architectural win)
>>
>
> +1 to that. Our need for OMT access to prefs is only likely to grow, IMO,
> and we should just fix it once, so that any code (regardless of which
> thread(s) it may eventually run on) can trivially read prefs.
>
> Even if that means we can't adopt Robin Hood hashing, I think the
> trade-off would be well worthwhile.
>

While it's true that the need for reading more prefs from workers will
continue to grow, given the number of prefs we have I think it's safe to
say that the set of prefs that we need to access from threads besides the
main thread will be a minority of the entire set of prefs.  So one way to
have our cake and eat it too would be to separate out the prefs that are
meant to be accessible through a worker thread and expose them through an
alternate thread-safe API which may be a bit more costly to call on the
main thread, and keep the rest of the main-thread-only prefs on the existing
non-thread-safe APIs.  This won't be as elegant as having one set of APIs
to work with, of course.

(FWIW, pref accesses sometimes also show up in profiles when code ends up
reading prefs in a loop, e.g. when invoked from web page JS, so the startup
scenario discussed so far is only one case to think about here; there are
many others.)


>
> JK
>
>
>
>> On Thu, Jul 19, 2018 at 2:19 PM, Kris Maglione 
>> wrote:
>>
>>> On Tue, Jul 17, 2018 at 03:49:41PM -0700, Jeff Gilbert wrote:
>>>

 We should totally be able to afford the very low cost of a
 rarely-contended lock. What's going on that causes uncached pref reads
 to show up so hot in profiles? Do we have a list of problematic pref
 keys?

>>>
>>>
>>> So, at the moment, we read about 10,000 preferences at startup in debug
>>> builds. That number is probably slightly lower in non-debug builds, but
>>> we
>>> don't collect stats there. We're working on reducing that number (which
>>> is
>>> why we collect statistics in the first place), but for now, it's still
>>> quite
>>> high.
>>>
>>>
>>> As for the cost of locks... On my machine, in a tight loop, the cost of
>>> entering and exiting a MutexAutoLock is about 37ns. This is pretty close to
>>> ideal circumstances, on a single core of a very fast CPU, with very fast
>>> RAM, everything cached, and no contention. If we could extrapolate that
>>> to
>>> normal usage, it would be about a third of a ms of additional overhead
>>> for
>>> startup. I've fought hard enough for 1ms startup time improvements, but
>>> *shrug*, if it were that simple, it might be acceptable.
>>>
>>> But I have no reason to think the lock would be rarely contended. We read
>>> preferences *a lot*, and if we allowed access from background threads, I
>>> have no doubt that we would start reading them a lot from background
>>> threads
>>> in addition to reading them a lot from the main thread.
>>>
>>> And that would mean, in addition to lock contention, cache contention and
>>> potentially even NUMA issues. Those last two apply to atomic var caches
>>> too,
>>> but at least they generally apply only to the specific var caches being
>>> accessed off-thread, rather than pref look-ups in general.
>>>
>>>
>>> Maybe we could get away with it at first, as long as off-thread usage
>>> remains low. But long term, I think it would be a performance foot-gun.
>>> And,
>>> paradoxically, the less foot-gunny it is, the less useful it probably is,
>>> too. If we're only using it off-thread in a few places, and don't have to
>>> worry about contention, why are we bothering with locking and off-thread
>>> access in the first place?
>>>
>>>
>>> On Tue, Jul 17, 2018 at 8:57 AM, Kris Maglione 
 wrote:

>
> On Tue, Jul 17, 2018 at 02:06:48PM +0100, Jonathan Kew wrote:
>
>>
>>
>> On 13/07/2018 21:37, Kris Maglione wrote:
>>
>>>
>>>
>>> 

Re: PSA: Major preference service architecture changes inbound

2018-07-20 Thread Jonathan Kew

On 20/07/2018 03:17, Jeff Gilbert wrote:

Using a classic read/write exclusive lock, we would only ever contend
on read+write or write+write, which are /rare/.


Indeed, that's what I envisage we'd want. The -vast- majority of prefs 
access only involves reading values. We should be able to do that from 
any thread without a second thought about either safety or contention.




It's really, really nice when we can have dead-simple threadsafe APIs,
instead of requiring people to jump through hoops or roll their own
dispatch code. (fragile) IIRC most new APIs added to the web are
supposed to be available in Workers, so the need for reading prefs
off-main-thread is only set to grow.

I don't see how this can mutate into a foot-gun in ways that aren't
already the case today without off-main-thread access.

Anyway, I appreciate the work that's been done and is ongoing here. As
you burn down the pref accesses in start-up, please consider
unblocking this feature request. (Personally I'd just eat the 400us in
exchange for this simplifying architectural win)


+1 to that. Our need for OMT access to prefs is only likely to grow, 
IMO, and we should just fix it once, so that any code (regardless of 
which thread(s) it may eventually run on) can trivially read prefs.


Even if that means we can't adopt Robin Hood hashing, I think the 
trade-off would be well worthwhile.


JK



On Thu, Jul 19, 2018 at 2:19 PM, Kris Maglione  wrote:

On Tue, Jul 17, 2018 at 03:49:41PM -0700, Jeff Gilbert wrote:


We should totally be able to afford the very low cost of a
rarely-contended lock. What's going on that causes uncached pref reads
to show up so hot in profiles? Do we have a list of problematic pref
keys?



So, at the moment, we read about 10,000 preferences at startup in debug
builds. That number is probably slightly lower in non-debug builds, but we
don't collect stats there. We're working on reducing that number (which is
why we collect statistics in the first place), but for now, it's still quite
high.


As for the cost of locks... On my machine, in a tight loop, the cost of
entering and exiting a MutexAutoLock is about 37ns. This is pretty close to
ideal circumstances, on a single core of a very fast CPU, with very fast
RAM, everything cached, and no contention. If we could extrapolate that to
normal usage, it would be about a third of a ms of additional overhead for
startup. I've fought hard enough for 1ms startup time improvements, but
*shrug*, if it were that simple, it might be acceptable.

But I have no reason to think the lock would be rarely contended. We read
preferences *a lot*, and if we allowed access from background threads, I
have no doubt that we would start reading them a lot from background threads
in addition to reading them a lot from the main thread.

And that would mean, in addition to lock contention, cache contention and
potentially even NUMA issues. Those last two apply to atomic var caches too,
but at least they generally apply only to the specific var caches being
accessed off-thread, rather than pref look-ups in general.


Maybe we could get away with it at first, as long as off-thread usage
remains low. But long term, I think it would be a performance foot-gun. And,
paradoxically, the less foot-gunny it is, the less useful it probably is,
too. If we're only using it off-thread in a few places, and don't have to
worry about contention, why are we bothering with locking and off-thread
access in the first place?



On Tue, Jul 17, 2018 at 8:57 AM, Kris Maglione 
wrote:


On Tue, Jul 17, 2018 at 02:06:48PM +0100, Jonathan Kew wrote:



On 13/07/2018 21:37, Kris Maglione wrote:



tl;dr: A major change to the architecture of the preference service has just
landed, so please be on the lookout for regressions.

We've been working for the last few weeks on rearchitecting the
preference service to work better in our current and future
multi-process
configurations, and those changes have just landed in bug 1471025.




Looks like a great step forward!

While we're thinking about the prefs service, is there any possibility
we
could enable off-main-thread access to preferences?




I think the chances of that are pretty close to 0, but I'll defer to
Nick.

We definitely can't afford the locking overhead—preference look-ups
already
show up in profiles without it. And even the current limited exception
that
we grant Stylo while it has the main thread blocked causes problems (bug
1474789), since it makes it impossible to update statistics for those
reads,
or switch to Robin Hood hashing (which would make our hash tables much
smaller and more efficient, but requires read operations to be able to
move
entries).


I am aware that in simple cases, this can be achieved via the
StaticPrefsList; by defining a VARCACHE_PREF there, I can read its value
from other threads. But this doesn't help in my use case, where I need
another thread to be able to query an extensible set of pref names that
are
not fully known at 

Intent to ship: Clear-Site-Data header

2018-07-20 Thread Andrea Marchesini
I intend to turn the Clear-Site-Data header on by default in Firefox 63. The last
remaining dependency bug is going to land today; we pass all the WPTs.

It has been developed behind the dom.clearSiteData.enabled preference.

Chrome has had this feature enabled since June 2017.

Bug to turn on by default: bug 1470111.

Spec: https://w3c.github.io/webappsec-clear-site-data/
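
For reference, a site opts in by sending the header on a response. An
illustrative example (the directives shown — "cache", "cookies", "storage" —
are the ones defined in the spec; the combination here is arbitrary):

HTTP/1.1 200 OK
Content-Type: text/html
Clear-Site-Data: "cache", "cookies", "storage"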