On Aug 5, 2008, at 11:30 PM, Aaron Boodman wrote:

Some quick notes/questions...

- I think the manifest should be some structured, extensible format
such as XML or JSON. The current text-based format is going to quickly
turn into a mess as we add additional fields and rows.

We've implemented the current format already in WebKit (available in nightlies and the Safari 4 Developer Preview).

The format does not seem to have much call for extension and seems easy to understand and use as-is.
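For readers who haven't seen it, a manifest in the text-based format under discussion looks roughly like this (the file names are illustrative; the header line and the FALLBACK:/NETWORK: section labels are from the spec as of this writing):

```
CACHE MANIFEST
# Explicit entries: cached up front when the manifest is processed.
index.html
style.css
app.js

FALLBACK:
# namespace -> fallback resource served when a load in that
# namespace fails
/ /offline.html

NETWORK:
# Online whitelist: always fetched from the network.
/login
```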

- I like the fallback entry feature, but I don't understand why it is
coupled to opportunistic caching. On the Gears team, we frequently get
requests to add a feature where a certain pattern of URLs will try to
go to the network first, and if that fails, will fall through to a

certain cached fallback URL. But nobody has asked for a way to lazily
cache URLs matching a certain pattern. Even if they had, I don't
understand what that has to do with the fallback behavior. Can we
split these apart, and maybe just remove the opportunistic caching
thing entirely?

I think the idea of opportunistic caching (as I understand it) is that the author can be lazy, and not write a manifest at all.



- It seems odd that when you request a resource and the server returns
400 (Bad Request), we fall back. Maybe it should just be up to the
server to return an error message that directs the user to the fallback
URL? I'm not sure about this one; looking for feedback.
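The behavior in question — try the network first, and serve the cached fallback on a network failure or an error response — can be sketched as follows. This is a hypothetical illustration of the algorithm being debated, not the spec's processing model; `fetch` and `fallback_cache` are assumed names.

```python
def load(url, fetch, fallback_cache):
    """Try the network first; on failure, serve the cached fallback.

    fetch(url) is assumed to return (status, body), or raise OSError
    when the network is unreachable. fallback_cache maps URLs to
    previously cached fallback bodies.
    """
    try:
        status, body = fetch(url)
    except OSError:
        # Network unreachable: fall through to the cached fallback.
        return fallback_cache[url]
    if status >= 400:
        # Server error (including 400): also fall back, per the
        # behavior Aaron is questioning above.
        return fallback_cache[url]
    return body
```

Under this sketch, a 400 response is indistinguishable from being offline, which is exactly the oddity raised: the server's own error page never reaches the user.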

- Maybe this is obvious, but it's not specified what happens when the
server returns a redirect for a resource that is being cached. Do we
cache the redirect chain and replay it?

- In practice, I expect the number of URLs in the online whitelist is
going to be unbounded because of querystrings. I think if this is
going to exist, it has to be a pattern.

I agree the online whitelist should allow patterns of some form.
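For example (hypothetical syntax — the current spec matches whitelist entries exactly), a prefix pattern would let one entry cover the unbounded space of query-string variants:

```
NETWORK:
# Hypothetical prefix match: whitelist everything under /api/,
# regardless of query string.
/api/
```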

- I know you added the behavior of failing loads when a URL is not in
the manifest based on something I said, but now that I read it, it
feels a bit draconian. I wish that developers could somehow easily
control the space of URLs they expect to be online as well as the ones
they expect to be offline. But maybe we should just remove the whole
thing about failing loads of resources not in the manifest and online
whitelist for v1.

I think it would be hard to add after the fact.

Regards,
Maciej
