Henry Story wrote:
> The best solution is just to add a link type to the atom syntax: a link
> to the previous feed document that points to the next bunch of entries.
> I.e. do what web sites do. If you can't find your answer on the first
> page, go look at the next page.
> How do you know when to stop? If the pages are ordered chronologically,
> the client will know to stop when he has come to a page with entries
> with update times before the date he last looked.
This is *not* simpler than taking a push feed using Atom over XMPP. For a push feed, all you do is:

1. Open a socket.
2. Send a "login" XML stanza.
3. Process the stanzas as they arrive.

For your solution, you need to:

1. Poll the feed to get a pointer to the "first link". (Each poll will cost you a TCP/IP connection.)
2. If you got a new "first link", then go to step 5.
3. Wait some period of time (the polling interval).
4. Go to step 1.
5. Open a new TCP/IP socket to get the next link.
6. Form and send an HTTP request for the next entry.
7. Catch the response from the server.
8. Parse the response to determine whether its time stamp is something you've already seen.
9. If you haven't seen the current entry before, then go to step 5.
10. Go to step 1 to start over.

(Note: I've eliminated and compressed a few steps to avoid more typing... An actual implementation would be more complex than I describe above.)

Your solution is more complex and generates much more network traffic (i.e. because of polling the feed, repeatedly opening new TCP/IP connections with all the traditional "slow start" overhead, and requesting each "next link"). Additionally, you end up with increased latency, since the age of any entry you discover will be, on average, half your polling interval, plus some latency introduced by link following. (Yes, you could rely on persistent connections and thus remove the overhead of creating so many TCP/IP connections; however, at that point, you might as well have a continuous push socket open...)

The push solution conserves network bandwidth, delivers data with much less latency, and is simpler to implement.

Polling sucks! (That was a pun...)

bob wyman
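To make the complexity concrete, here is a minimal sketch of one cycle of the link-following polling procedure (steps 5 through 9). It uses a hypothetical in-memory dict (`PAGES`) as a stand-in for HTTP fetches; the names `PAGES`, `fetch`, and `poll_new_entries` are illustrative, not from any real feed library. A real client would pay a fresh TCP/IP connection for each `fetch`.

```python
# Hypothetical paged feed: each page holds (entry_id, timestamp) pairs,
# newest first, plus a link to the next (older) page, as in Henry's scheme.
PAGES = {
    "page1": {"entries": [("e3", 300), ("e2", 200)], "next": "page2"},
    "page2": {"entries": [("e1", 100)], "next": None},
}

def fetch(url):
    """Stand-in for an HTTP GET of one feed page (steps 5-7)."""
    return PAGES[url]

def poll_new_entries(first_link, last_seen_ts):
    """Follow 'next' links, stopping once a timestamp predates what we have
    already seen (steps 8-9). Returns the new entries and the newest
    timestamp, which becomes last_seen_ts for the next polling cycle."""
    new, url = [], first_link
    while url is not None:
        page = fetch(url)
        for entry_id, ts in page["entries"]:
            if ts <= last_seen_ts:  # step 8: already seen -> stop walking
                return new, max([t for _, t in new], default=last_seen_ts)
            new.append((entry_id, ts))
        url = page["next"]  # step 5 again: yet another connection
    return new, max([t for _, t in new], default=last_seen_ts)

# One polling cycle: a client that last saw timestamp 150 discovers e3 and e2.
entries, newest = poll_new_entries("page1", 150)
```

Note that this is only the inner walk; the outer loop (steps 1 through 4: poll, compare, sleep, repeat) wraps around it, and every cycle that finds nothing new is pure wasted traffic. The push case has no analogue of any of this: the client just reads stanzas off an open socket as they arrive.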