I have filed bug 1477436: "Preferences::Get* is not threadsafe/is main thread only" https://bugzilla.mozilla.org/show_bug.cgi?id=1477436
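For context, a rough sketch of the kind of change the bug contemplates: the pref table guarded by a reader/writer lock so that Get* could, at least in principle, be called off the main thread. This uses std::shared_mutex as a portable stand-in for the in-tree RWLock (xpcom/threads/RWLock.h); ThreadSafePrefTable and the other names here are hypothetical and don't exist in the tree.

// Sketch only: a pref map guarded by a reader/writer lock, using
// std::shared_mutex as a stand-in for the in-tree RWLock
// (xpcom/threads/RWLock.h). Names and types here are hypothetical.
#include <mutex>
#include <optional>
#include <shared_mutex>
#include <string>
#include <unordered_map>

class ThreadSafePrefTable {
 public:
  // Readers share the lock, so concurrent Get* calls don't serialize...
  std::optional<bool> GetBool(const std::string& aName) const {
    std::shared_lock<std::shared_mutex> lock(mLock);
    auto it = mBools.find(aName);
    if (it == mBools.end()) {
      return std::nullopt;
    }
    return it->second;
  }

  // ...while writers (pref changes) take the lock exclusively.
  void SetBool(const std::string& aName, bool aValue) {
    std::unique_lock<std::shared_mutex> lock(mLock);
    mBools[aName] = aValue;
  }

 private:
  mutable std::shared_mutex mLock;
  std::unordered_map<std::string, bool> mBools;
};

The catch, as Kris explains downthread, is that with Robin Hood hashing even a lookup may relocate entries, so GetBool would have to take the exclusive lock and the reader/writer split would buy little.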
On Fri, Jul 20, 2018 at 11:36 AM, Eric Rahm <er...@mozilla.com> wrote:
> We *could* special-case prefs with an appropriate data structure that
> works in a thread-safe manner; as far as RWLocks go, we do have one in
> tree [1]. This has gone off the rails a bit from Kris' original
> announcement, which I'll reiterate: Watch out for prefs-related bustage.
>
> Jeff, would you mind filing a bug for further discussion of
> off-main-thread access as a future improvement?
>
> [1] https://searchfox.org/mozilla-central/source/xpcom/threads/RWLock.h
>
> On Thu, Jul 19, 2018 at 7:25 PM, Kris Maglione <kmagli...@mozilla.com>
> wrote:
>>
>> On Thu, Jul 19, 2018 at 07:17:13PM -0700, Jeff Gilbert wrote:
>>>
>>> Using a classic read/write exclusive lock, we would only ever contend
>>> on read+write or write+write, which are /rare/.
>>
>> That might be true if we gave up on the idea of switching to Robin Hood
>> hashing. But if we don't, then every lookup is a potential write, which
>> means every lookup requires a write lock.
>>
>> We also don't really have any good APIs for rwlocks at the moment.
>> Which, admittedly, is a solvable problem.
>>
>>> On Thu, Jul 19, 2018 at 2:19 PM, Kris Maglione <kmagli...@mozilla.com>
>>> wrote:
>>>>
>>>> On Tue, Jul 17, 2018 at 03:49:41PM -0700, Jeff Gilbert wrote:
>>>>>
>>>>> We should totally be able to afford the very low cost of a
>>>>> rarely-contended lock. What's going on that causes uncached pref
>>>>> reads to show up so hot in profiles? Do we have a list of problematic
>>>>> pref keys?
>>>>
>>>> So, at the moment, we read about 10,000 preferences at startup in
>>>> debug builds. That number is probably slightly lower in non-debug
>>>> builds, but we don't collect stats there. We're working on reducing
>>>> that number (which is why we collect statistics in the first place),
>>>> but for now, it's still quite high.
>>>>
>>>> As for the cost of locks... On my machine, in a tight loop, the cost
>>>> of entering and exiting a MutexAutoLock is about 37ns. That's under
>>>> pretty close to ideal circumstances: a single core of a very fast CPU,
>>>> very fast RAM, everything cached, and no contention. If we could
>>>> extrapolate that to normal usage, it would be about a third of a
>>>> millisecond of additional overhead for startup. I've fought hard
>>>> enough for 1ms startup time improvements, but *shrug*, if it were that
>>>> simple, it might be acceptable.
>>>>
>>>> But I have no reason to think the lock would be rarely contended. We
>>>> read preferences *a lot*, and if we allowed access from background
>>>> threads, I have no doubt that we would start reading them a lot from
>>>> background threads in addition to reading them a lot from the main
>>>> thread.
>>>>
>>>> And that would mean, in addition to lock contention, cache contention
>>>> and potentially even NUMA issues. Those last two apply to atomic var
>>>> caches too, but at least they generally apply only to the specific var
>>>> caches being accessed off-thread, rather than to pref look-ups in
>>>> general.
>>>>
>>>> Maybe we could get away with it at first, as long as off-thread usage
>>>> remains low. But long term, I think it would be a performance
>>>> foot-gun. And, paradoxically, the less foot-gunny it is, the less
>>>> useful it probably is, too. If we're only using it off-thread in a few
>>>> places, and don't have to worry about contention, why are we bothering
>>>> with locking and off-thread access in the first place?
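To make the arithmetic above concrete, here is a rough, self-contained microbenchmark of the same shape as the measurement Kris describes: lock and unlock an uncontended lock in a tight loop and divide by the iteration count. It uses std::mutex rather than Gecko's MutexAutoLock, so the absolute number will differ from the 37ns figure; the comment at the end simply multiplies a per-lock cost by the roughly 10,000 startup pref reads mentioned above.

// Rough sketch of the kind of measurement described above: the cost of
// entering and exiting an uncontended lock in a tight loop. std::mutex is
// used as a stand-in for MutexAutoLock, so absolute numbers will differ.
#include <chrono>
#include <cstdio>
#include <mutex>

int main() {
  constexpr int kIterations = 10'000'000;
  std::mutex lock;

  auto start = std::chrono::steady_clock::now();
  for (int i = 0; i < kIterations; ++i) {
    std::lock_guard<std::mutex> guard(lock);  // lock + unlock, no contention
  }
  auto end = std::chrono::steady_clock::now();

  double nsPerLock =
      std::chrono::duration<double, std::nano>(end - start).count() /
      kIterations;
  printf("~%.1f ns per lock/unlock\n", nsPerLock);

  // Extrapolation from the thread: ~10,000 pref reads at startup.
  // At ~37 ns per read, that is roughly 10,000 * 37 ns = 370,000 ns,
  // i.e. about a third of a millisecond of added startup time, before
  // accounting for any contention, cache, or NUMA effects.
  printf("Estimated startup overhead: ~%.2f ms\n", 10000 * nsPerLock / 1e6);
  return 0;
}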
>>>>> On Tue, Jul 17, 2018 at 8:57 AM, Kris Maglione <kmagli...@mozilla.com>
>>>>> wrote:
>>>>>>
>>>>>> On Tue, Jul 17, 2018 at 02:06:48PM +0100, Jonathan Kew wrote:
>>>>>>>
>>>>>>> On 13/07/2018 21:37, Kris Maglione wrote:
>>>>>>>>
>>>>>>>> tl;dr: A major change to the architecture of the preference
>>>>>>>> service has just landed, so please be on the lookout for
>>>>>>>> regressions.
>>>>>>>>
>>>>>>>> We've been working for the last few weeks on rearchitecting the
>>>>>>>> preference service to work better in our current and future
>>>>>>>> multi-process configurations, and those changes have just landed
>>>>>>>> in bug 1471025.
>>>>>>>
>>>>>>> Looks like a great step forward!
>>>>>>>
>>>>>>> While we're thinking about the prefs service, is there any
>>>>>>> possibility we could enable off-main-thread access to preferences?
>>>>>>
>>>>>> I think the chances of that are pretty close to 0, but I'll defer
>>>>>> to Nick.
>>>>>>
>>>>>> We definitely can't afford the locking overhead—preference look-ups
>>>>>> already show up in profiles without it. And even the current
>>>>>> limited exception that we grant Stylo while it has the main thread
>>>>>> blocked causes problems (bug 1474789), since it makes it impossible
>>>>>> to update statistics for those reads, or to switch to Robin Hood
>>>>>> hashing (which would make our hash tables much smaller and more
>>>>>> efficient, but requires read operations to be able to move entries).
>>>>>>
>>>>>>> I am aware that in simple cases this can be achieved via the
>>>>>>> StaticPrefsList; by defining a VARCACHE_PREF there, I can read its
>>>>>>> value from other threads. But this doesn't help in my use case,
>>>>>>> where I need another thread to be able to query an extensible set
>>>>>>> of pref names that are not fully known at compile time.
>>>>>>>
>>>>>>> Currently, it looks like in order to do this, I'll have to iterate
>>>>>>> over the relevant prefs branch(es) ahead of time (on the main
>>>>>>> thread) and copy all the entries to some other place that is then
>>>>>>> available to my worker threads. For my use case, at least, the
>>>>>>> other threads only need read access; modifying prefs could still
>>>>>>> be limited to the main thread.
>>>>>>
>>>>>> That's probably your best option, yeah. Although I will say that
>>>>>> those kinds of extensible preference sets aren't great for
>>>>>> performance or memory usage, so switching to some other model might
>>>>>> be better.
>>>>>>
>>>>>>> Possible? Or would the overhead of locking be too crippling?
>>>>>>
>>>>>> The latter, I'm afraid.
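For what it's worth, the workaround Jonathan describes can be sketched roughly like this: gather the prefs of interest on the main thread into an immutable snapshot, then share that snapshot with worker threads, which only ever read it. ReadPrefOnMainThread is a hypothetical placeholder for whatever main-thread-only pref lookup the caller actually performs; none of these names come from the tree.

// Hypothetical sketch of the workaround discussed above: copy the needed
// prefs on the main thread, then share the read-only snapshot with workers.
#include <memory>
#include <string>
#include <unordered_map>
#include <vector>

using PrefSnapshot = std::unordered_map<std::string, std::string>;

// Stand-in for a main-thread-only pref read (e.g. something built on the
// real Preferences API); hypothetical, for illustration only.
std::string ReadPrefOnMainThread(const std::string& aName);

// Must be called on the main thread, before the workers start.
std::shared_ptr<const PrefSnapshot> SnapshotPrefs(
    const std::vector<std::string>& aNames) {
  auto snapshot = std::make_shared<PrefSnapshot>();
  for (const auto& name : aNames) {
    (*snapshot)[name] = ReadPrefOnMainThread(name);
  }
  return snapshot;  // const from here on; safe to read from any thread
}

// Worker threads only read the immutable snapshot, so no lock is needed.
// Pref *changes* stay main-thread-only, as in the current design.

As Kris notes, extensible pref sets like this aren't free in memory terms, and the snapshot naturally goes stale if the prefs change after it is taken, so this only suits read-only consumers.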
>>>>
>>>> --
>>>> Kris Maglione
>>>> Senior Firefox Add-ons Engineer
>>>> Mozilla Corporation
>>>>
>>>> On two occasions I have been asked, "Pray, Mr. Babbage, if you put
>>>> into the machine wrong figures, will the right answers come out?" I
>>>> am not able rightly to apprehend the kind of confusion of ideas that
>>>> could provoke such a question.
>>>> --Charles Babbage
>>
>> --
>> Kris Maglione
>> Senior Firefox Add-ons Engineer
>> Mozilla Corporation
>>
>> You can't trust code that you did not totally create yourself.
>> --Ken Thompson