On 09/30/2011 12:47 PM, Patrick McManus wrote:
On Fri, 2011-09-30 at 12:30 -0700, Geoffrey Brown wrote:
Some more info to save you from reading through the bug:

tp4m loads a sequence of 21 popular pages, then loads the same sequence of 
pages again in the same order, and reports the average of the minimum times for 
each page. In practice, the minimum per-page time is usually the second
access, in part because of cache hits in the Necko memory cache.

From a cache perspective, the access pattern does not reflect real-world use
in that:
  - every page that is loaded is guaranteed to be loaded again
  - every page has the same access frequency
  - pages are loaded in the same order each time

    + a cache miss followed by a load from the localhost network is not
inherently slower than a cache hit and a load from the cache. The tp* tests
run off localhost backends, IIRC.

I thought this was why Nick did necko-net.

Yes, sorry, we'd need both the access pattern described above AND necko-net to get this to work. The key is to have some test running that gets a better hit rate with LRU enabled (and a small cache), and whose network times are greater than disk access times. There are all sorts of refinements that could be made, but even something stupid would be progress as far as regression coverage goes.
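To illustrate why the tp4m pattern is a poor fit for evaluating an LRU cache, here is a minimal simulation sketch (not Mozilla code; the 21-page set, the cache capacity of 8, and the skewed weights are all illustrative assumptions). With a cyclic access pattern and a cache smaller than the working set, LRU evicts every page just before its re-access, so the hit rate is zero; a skewed, more realistic pattern keeps the popular pages resident.

```python
# Compare LRU hit rates for a tp4m-style cyclic access pattern
# vs. a skewed, real-world-like pattern. Illustrative sketch only.
from collections import OrderedDict
import random

def lru_hit_rate(accesses, capacity):
    """Simulate an LRU cache and return the fraction of hits."""
    cache = OrderedDict()
    hits = 0
    for page in accesses:
        if page in cache:
            hits += 1
            cache.move_to_end(page)        # refresh recency on a hit
        else:
            cache[page] = True
            if len(cache) > capacity:
                cache.popitem(last=False)  # evict least recently used
    return hits / len(accesses)

pages = [f"page{i}" for i in range(21)]

# tp4m-style: every page loaded twice, same order, same frequency.
cyclic = pages + pages

# Skewed pattern: a few popular pages dominate, as in real browsing.
random.seed(0)
skewed = random.choices(pages, weights=[1 / (i + 1) for i in range(21)], k=42)

print(lru_hit_rate(cyclic, 8))   # 0.0 -- each page is evicted before its re-access
print(lru_hit_rate(skewed, 8))   # nonzero: popular pages stay cached
```

With a cache large enough to hold all 21 pages, the cyclic pattern instead yields a perfect second pass, which is exactly why tp4m's minimum times mostly measure memory-cache hits rather than eviction behavior.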

Jason
_______________________________________________
dev-tech-network mailing list
[email protected]
https://lists.mozilla.org/listinfo/dev-tech-network
