Mark Nottingham <[EMAIL PROTECTED]> wrote:

> Can you expand upon "being more precise about exactly what is needed"?

I don't have time to go into very much detail right now, but basically
it's about what people are calling 'stable urls' in another part of this
thread.

The draft calls feeds where it's sensible to maintain a history
'incremental feeds', but defines an incremental feed as a 'subscription
document' plus a series of 'archive documents'.  An archive document is
defined as a feed (url) whose entry set doesn't change.

It was necessary for you to require 'archive documents' with fixed
entry sets because your history reconstruction algorithm's state is
stored as feed urls.  If you rework the algorithm so that its state is
stored in terms of <atom:id>s or <guid>s, then you no longer need
archive documents to have fixed entry sets (and they probably shouldn't
be called archive documents in that case).
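
To make that contrast concrete, here's the difference in client state
(just a rough Python sketch of my own; none of these names come from
the draft):

    # State the draft's algorithm effectively keeps: which archive
    # documents (identified by url) have already been fetched.
    archives_seen = set()    # e.g. {"http://example.org/archive/2006-10"}

    # State an id-based algorithm would keep instead: the entry ids
    # already processed (<atom:id> in Atom, <guid> in RSS).
    entry_ids_seen = set()   # e.g. {"tag:example.org,2006:entry-123"}

The first only works if an archive url always identifies the same entry
set; the second doesn't care how entries are grouped into documents.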

I need to write down a concrete algorithm that remembers <atom:id>s
instead of feed urls (there's a rough sketch after the list below), but
I believe it is perfectly possible (without any particular extra burden
on the client) if you have these two requirements (which need more
formal wording):

- new entries can only be inserted at the start of the 'logical feed'
- old entries can only be deleted at the end of the 'logical feed'
(you can weaken these to make no requirement of the order within a
single feed document's xml as long as the 'real' order is preserved on
the server)
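
Roughly what I have in mind is this (Python, using feedparser just to
keep it short; the error handling and the exact link relation name --
I've written "prev" -- are placeholders of mine, not things the draft
defines):

    import feedparser

    def catch_up(subscription_url, seen_ids):
        # Walk from the newest page backwards, collecting entries whose
        # ids we haven't seen, and stop at the first id we already know.
        url, new_entries = subscription_url, []
        while url:
            page = feedparser.parse(url)
            fresh = [e for e in page.entries if e.id not in seen_ids]
            new_entries.extend(fresh)
            if len(fresh) < len(page.entries):
                break    # this page reaches back into known territory
            url = None
            for link in page.feed.get("links", []):
                if link.get("rel") in ("prev", "previous"):
                    url = link.get("href")
                    break
        seen_ids.update(e.id for e in new_entries)
        return new_entries

Because new entries only ever appear at the start and old entries only
ever disappear from the end, the first known id we meet guarantees that
everything older was already seen (or has since been deleted), so
stopping there can't lose anything.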

The simplest way to satisfy those two requirements is to have a
conventional feed where the entries are ordered by publication date and
none are ever removed.

For a feed that satisfies those two requirements, even if its pages
are all 'sliding windows', it isn't possible to miss any entries as
long as you traverse the next/prev links in the right direction.
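
(In terms of the sketch above, a sliding window is harmless: you just
keep following the prev links, collecting unseen ids, and the first
already-known id tells you you've caught up, wherever the window
boundaries happen to fall.)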

Regards,

Peter
