Yeah, the initial startup hit is the one thing that worries me about deleting so much data at once. What we could do is make eviction async, which would let us cut the cache to any size we want without worrying about hanging the UI. That would, of course, make the patch take a little longer to write, but it would kill two birds with one stone: fixing the sizing issue and making yet another of the cache APIs asynchronous.
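To make that concrete, here's a minimal sketch of the background-eviction idea, assuming a made-up CacheIndex type and plain std::thread; the real cache code would go through Gecko's own thread/runnable machinery and evict actual entries on disk, so treat this purely as an illustration.

#include <atomic>
#include <cstdint>
#include <cstdio>
#include <thread>

// Hypothetical stand-in for the cache's index of entries; the real thing
// tracks actual entries on disk.
struct CacheIndex {
    std::atomic<uint64_t> sizeInBytes{0};

    // Pretend each evicted entry frees about 64 KiB of disk space.
    void EvictOldestEntry() { sizeInBytes -= 64 * 1024; }
};

// Run the (potentially large) eviction on a background thread so shrinking
// the cache at startup never blocks the main/UI thread.
std::thread ShrinkCacheAsync(CacheIndex& index, uint64_t newCapacityBytes)
{
    return std::thread([&index, newCapacityBytes] {
        while (index.sizeInBytes > newCapacityBytes) {
            index.EvictOldestEntry();
        }
    });
}

int main()
{
    CacheIndex index;
    index.sizeInBytes = 1024ULL * 1024 * 1024;  // pretend the cache holds 1 GiB
    std::thread evictor = ShrinkCacheAsync(index, 512ULL * 1024 * 1024);

    // The main thread is free to get on with startup while eviction runs.
    evictor.join();
    std::printf("cache shrunk to %llu bytes\n",
                (unsigned long long)index.sizeInBytes.load());
    return 0;
}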
Bug 709297 definitely looks like a good candidate for this work.

On Thu, Jul 26, 2012 at 4:42 PM, Jason Duell <[email protected]> wrote:
> It's great to have this data--thanks Nick!
>
> I'd measure how long it takes to evict 500MB--I assume it'd happen at
> startup, and possibly be quite noticeable to browser perf. If so it might
> actually be better to just blow away the cache (which is lots of I/O but
> we've gotten to a point where it doesn't block the browser otherwise) and
> instantiate the new limit for the new cache. We could also jump right to
> 350 MB.
>
> Bug 709297 could be the right place for this.
>
> Cheers,
>
> Jason
>
> On 07/26/2012 03:48 PM, Nick Hurley wrote:
>> All, I've posted the results of my cache usage survey to
>> http://todesschaf.org/posts/2012/07/25/cache-usage-results.html
>>
>> The short version of the story is that it appears we can
>> *significantly* reduce the max size of the disk cache without causing
>> problems related to unnecessary churn in the cache. I propose we do
>> the following:
>>
>> (1) Cut the default max size from 1GiB to 512MiB for Firefox 17. Watch
>> telemetry for hit rate to make sure there are no unexpected bad
>> effects. Also, watch telemetry for lock wait time to see if this helps
>> at all with that (in parallel with other lock contention reducing
>> work). Assuming no objections here and a quick r+, a patch for this
>> could be landed tomorrow.
>>
>> (2) When development starts on Firefox 18, cut the default max size
>> further to 350MiB (similar to Chrome's number). Again, watch relevant
>> telemetry for any contraindications.
>>
>> Why the 2-phase approach? To help mitigate the (once per version)
>> effect of slowing down startup by evicting a bunch of entries. It also
>> gives us a longer window to keep an eye on things, and having multiple
>> steps can help us see if the win from dropping to 350MiB is really more
>> significant than the win from dropping to 512MiB (this of course assumes
>> we see a significant win from either of these actions).
>>
>> Anyone have any concerns, or should we go ahead and give this a shot?
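As an aside on Jason's evict-vs-wipe point above, here's a rough sketch of how that decision could be expressed; ChooseShrinkStrategy and the 256 MiB cutoff are invented for illustration and are not part of the actual cache code.

#include <cstdint>

// Illustrative only: the function, enum, and 256 MiB cutoff are invented for
// this sketch and are not part of the real cache code.
enum class ShrinkStrategy { EvictDownToLimit, BlowAwayCache };

ShrinkStrategy ChooseShrinkStrategy(uint64_t currentCacheBytes,
                                    uint64_t newCapacityBytes)
{
    // How much data would have to be deleted to get under the new limit?
    uint64_t excess = currentCacheBytes > newCapacityBytes
                          ? currentCacheBytes - newCapacityBytes
                          : 0;

    // Past some point (e.g. the ~500MB case mentioned above), incremental
    // eviction at startup likely costs more than deleting the whole cache
    // directory and starting fresh under the new limit.
    const uint64_t kWipeThresholdBytes = 256ULL * 1024 * 1024;
    return excess > kWipeThresholdBytes ? ShrinkStrategy::BlowAwayCache
                                        : ShrinkStrategy::EvictDownToLimit;
}

Measuring how long evicting ~500MB actually takes, as suggested above, is what would pin down a sensible cutoff.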
