Why not just slap memcached in the middle?  Would help with scalability
   as well, plus you could keep cached results keyed by query params in
   there if needed.  Just a thought...
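
Roughly what I mean, sketched in Python (pymemcache + requests; the Neo
endpoint path, host and key scheme below are just placeholders):

    import hashlib, json
    import requests
    from pymemcache.client.base import Client

    mc = Client(("localhost", 11211))      # memcached sitting "in the middle"
    NEO = "http://localhost:7474/db/data"  # Neo REST server (placeholder URL)

    def traverse(node_id, params, ttl=60):
        # Key the cached result by the query params themselves.
        digest = hashlib.sha1(
            json.dumps(params, sort_keys=True).encode()).hexdigest()
        key = "trav:%s:%s" % (node_id, digest)
        cached = mc.get(key)
        if cached is not None:
            return json.loads(cached)
        # Cache miss: hit the Neo HTTP API and keep the raw JSON for ttl seconds.
        resp = requests.post("%s/node/%s/traverse/node" % (NEO, node_id),
                             json=params)
        resp.raise_for_status()
        mc.set(key, resp.content, expire=ttl)
        return resp.json()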



   -------- Original Message --------
   Subject: Re: [Neo] Traversers in the REST API
   From: Alastair James <al.ja...@gmail.com>
   Date: Fri, April 09, 2010 8:32 am
   To: Neo user discussions <user@lists.neo4j.org>
   > Since in many cases the results of a query will need to be reformed
   > into their associated domain objects
   Unlikely to be the case over the HTTP API. It's unlikely people will
   create domain objects in (e.g.) PHP; they will just use the data
   directly.
   > Pagination is kinda tricky if the data changes between subsequent
   > requests for "pages". Since pagination is generally used for UIs, a
   > common approach is to place the entire dataset (or a cursor, depending
   > on where the data is coming from) in a session object. Regardless of
   > where it is kept, if you want to deal with data changes, you either
   > have to a) invalidate the "cached" dataset if data changes or b) keep
   > a copy of the whole dataset around in its "as queried" state so that
   > subsequent paging requests are consistent. Either case involves
   > keeping a fairly big duplicate data structure on the server or middle
   > tier and violates one of the objectives of REST-ful APIs, which is
   > that of statelessness. For that reason, I personally think the
   > REST-ful API shouldn't deal with paging. It should probably be done at
   > some intermediate level as needed by applications. We can certainly
   > build a separate API that we can all leverage if needed, but I don't
   > think it should be in the core REST-ful layer.
   >
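
   (For concreteness, option (b) boils down to something like this Python
   sketch; the in-memory session dict and the run_query stand-in below are
   hypothetical:)

       SESSIONS = {}  # session_id -> snapshot: the big duplicate structure

       def run_query(query):
           # Stand-in for the real traversal/query; returns the full result set.
           return [{"pos": i, "q": query} for i in range(1000)]

       def get_page(session_id, query, page, page_size=10):
           # Snapshot the full "as queried" result once per session so that
           # later page requests stay consistent even if the data changes.
           snapshot = SESSIONS.get(session_id)
           if snapshot is None:
               snapshot = run_query(query)
               SESSIONS[session_id] = snapshot
           return snapshot[page * page_size:(page + 1) * page_size]
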
   Well, I think for my use cases (websites), it's likely that users don't
   flick between pages that often. For example, on many sites, users will
   view page 1 and select an item, and very few move on to page 2. It's a
   very different usage pattern compared to a traditional desktop UI, so
   there is absolutely no need to hold the sorted set on the server in a
   cursor-type way.
   A typical use case for me would be 1000+ matching rows, with 90%+ of
   page views for the first 10, 5% for the next 10, etc. You can clearly
   see that sending the entire result set of 1000+ rows over HTTP/JSON is
   inefficient.
   Of course, caching between the web server and the Neo HTTP API can
   help, but not in all cases, and it seems silly to rely on this.
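
   To put numbers on it: without paging in the API, the web tier ends up
   doing something like this (Python sketch; the URL is a placeholder),
   pulling the full 1000+ rows over the wire just to render 10 of them:

       import requests

       def first_page(node_id, traversal, page_size=10):
           # No offset/limit in the API, so the whole result set comes back...
           resp = requests.post(
               "http://localhost:7474/db/data/node/%s/traverse/node" % node_id,
               json=traversal)
           resp.raise_for_status()
           rows = resp.json()
           # ...and everything past the first page is simply thrown away.
           return rows[:page_size]
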
   Al
   --
   Dr Alastair James
   CTO James Publishing Ltd.
   [1]http://www.linkedin.com/pub/3/914/163
   [2]www.worldreviewer.com
   WINNER Travolution Awards Best Travel Information Website 2009
   WINNER IRHAS Awards, Los Angeles, Best Travel Website 2008
   WINNER Travolution Awards Best New Online Travel Company 2008
   WINNER Travel Weekly Magellan Award 2008
   WINNER Yahoo! Finds of the Year 2007
   "Noli nothis permittere te terere!"
   _______________________________________________
   Neo mailing list
   User@lists.neo4j.org
   [3]https://lists.neo4j.org/mailman/listinfo/user

References

   1. http://www.linkedin.com/pub/3/914/163
   2. http://www.worldreviewer.com/
   3. https://lists.neo4j.org/mailman/listinfo/user
