Patrick McManus wrote:
> Additionally - ability to asynchronously fetch and place items in the
> cache for future use if they weren't currently fresh.

AppCache does this, more or less.

> (it wasn't clear to me exactly why manipulating the dom for link
> prefetch didn't do that - though I was assured it didn't.)

If it doesn't work, then my guess is that link prefetch is only processed 
during HTML parsing, so that prefetch links added by JS would be ignored.
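
To check that, one can try inserting the hint from script and watch the
network log; a minimal sketch in TypeScript, assuming a browser that does
process dynamically inserted prefetch links (the URL is made up):

    // Sketch: adding a prefetch hint from script. Whether the browser
    // acts on it depends on when it scans for <link> elements.
    function prefetch(url: string): void {
      const link = document.createElement("link");
      link.rel = "prefetch";          // hint: fetch into the cache for later
      link.href = url;                // hypothetical resource URL
      document.head.appendChild(link);
    }

    prefetch("https://example.com/next-page.js");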

Taras's blog post makes some good points, but the examples he chose were not 
great as far as best practices are concerned. For example, if whitehouse.gov 
makes the browser revalidate 95 resources every time it is loaded, that is not 
HTTP's fault; that is the developer's fault, and a new API isn't going to help 
that developer. People who build high-performance websites know how to make the 
browser avoid those requests, and many of these techniques have even been 
automated into tools like mod_pagespeed.
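
The standard technique is to serve immutable, versioned resources with a 
far-future freshness lifetime, so the browser never revalidates them until the 
URL itself changes; a minimal sketch of a server doing this (the file name and 
max-age value are just illustrative):

    // Sketch: serve a versioned resource with a one-year lifetime, so
    // revalidation never happens until the URL changes
    // (e.g. app.v42.js -> app.v43.js).
    import * as http from "http";

    http.createServer((req, res) => {
      if (req.url === "/static/app.v42.js") {  // hypothetical versioned URL
        res.writeHead(200, {
          "Content-Type": "application/javascript",
          "Cache-Control": "public, max-age=31536000",  // one year
        });
        res.end("/* bundled script */");
      } else {
        res.writeHead(404);
        res.end();
      }
    }).listen(8080);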

Even when you have to do revalidation, at least in theory revalidating 95 
entries should be really fast if you have (SPDY or (TLS compression and full 
HTTP pipelining)) AND a smart server. The fact that this describes about 0% of 
the web is the main problem, AFAICT. And most of the remaining work on 
correcting that has to happen server-side, not client-side.

If I were a web app developer, in the short term I would try putting as much in 
AppCache as possible, for browsers that don't prompt for AppCache. This should 
work well as long as the browser de-prioritizes eviction of AppCache-cached 
resources, unless/until everybody else starts doing the same. (Once everybody 
does this, such browsers will have to garbage collect your AppCache-cached 
resources just like non-AppCache resources, AFAICT.) Long-term, though, AFAICT 
AppCache doesn't really solve any problems unless it is used as the manifest 
for explicitly-installed web apps.
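
For reference, pre-caching with AppCache amounts to listing the resources in a 
manifest that the page opts into; a minimal sketch (the file names are made 
up):

    <!-- page.html: opt the page into AppCache -->
    <html manifest="site.appcache">

    # site.appcache: resources fetched and cached up front
    CACHE MANIFEST
    # v1
    /static/app.v42.js
    /static/style.css
    NETWORK:
    *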

I wouldn't doubt that some IndexedDB implementations are slow, but I think 
IndexedDB can be made fast if it is slow now. There's no reason that an 
IndexedDB implementation should be slower than a persistent HTTP cache; if 
anything, I would expect it to be easier to make an IndexedDB implementation 
faster than a disk cache than the other way around. What operations are slow 
in Gecko's IndexedDB implementation?
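
One way to start answering that question is to time the basic operations 
directly; a minimal sketch of such a micro-benchmark (database and store names 
are made up):

    // Sketch: time a basic IndexedDB write, as a starting point for
    // finding out which operations are actually slow.
    const openReq = indexedDB.open("cache-bench", 1);
    openReq.onupgradeneeded = () => {
      openReq.result.createObjectStore("entries");
    };
    openReq.onsuccess = () => {
      const db = openReq.result;
      const t0 = Date.now();
      const tx = db.transaction("entries", "readwrite");
      tx.objectStore("entries").put(new ArrayBuffer(64 * 1024), "key1");
      tx.oncomplete = () => {
        console.log("64KB put took " + (Date.now() - t0) + " ms");
      };
    };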

One thing that I think is missing in IndexedDB is a way to indicate which 
entries can be safely garbage collected by the browser. Right now, the browser 
has to throw away either none of a site's data or all of it. This means it 
can't automatically allow a site to use up to 100MB (say) of space and then 
selectively delete some of it as needed. I think this is where some kind of 
cache API like the one Taras suggested may be helpful.
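
Until then, the best an app can do is mark evictability itself and purge by 
hand; a minimal sketch of that pattern (all names here are hypothetical, not a 
proposed API):

    // Sketch: store an "evictable" flag and a last-used time alongside
    // each entry, so the application (not the browser) can free space.
    interface CacheRecord {
      body: ArrayBuffer;
      evictable: boolean;  // safe to drop: can be re-fetched if lost
      lastUsed: number;    // ms timestamp, for LRU-style purging
    }

    // Delete evictable entries older than maxAgeMs. With a browser-level
    // API, the browser could do this on its own under disk pressure.
    function purge(db: IDBDatabase, maxAgeMs: number): void {
      const cutoff = Date.now() - maxAgeMs;
      const store = db.transaction("entries", "readwrite")
                      .objectStore("entries");
      store.openCursor().onsuccess = (event) => {
        const cursor =
          (event.target as IDBRequest).result as IDBCursorWithValue;
        if (!cursor) return;
        const rec = cursor.value as CacheRecord;
        if (rec.evictable && rec.lastUsed < cutoff) {
          cursor.delete();
        }
        cursor.continue();
      };
    }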

FWIW, this weekend I reviewed the use of the nsICache* API by dozens of 
extensions on AMO, and AFAICT most if not all of them were prone to race 
conditions, at least in theory.

Cheers,
Brian