On Thu, Aug 13, 2009 at 4:26 PM, ajacks504 <adam.p.jack...@gmail.com> wrote:

>
> Thanks for the responses, guys.  I think I'm getting the hang of things
> here...
>
> >> 2- I don't know how to grab the decimated data in a way that won't be
> >> computationally expensive
>
> >You could add a property that you set randomly to a number between 0
> >and 9. Fetching every record where that property equals (for example)
> >0 will get you 1/10 the records.
>
> That might work well; maybe instead of random I can age them 0-9 as I
> post them, then select only records with age = 4?
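
If you go the random route, a minimal sketch might look like this (the
Sample model and property names are just placeholders for whatever you're
actually storing):

import random
from google.appengine.ext import db

class Sample(db.Model):
    value = db.FloatProperty()
    timestamp = db.DateTimeProperty(auto_now_add=True)
    # Assigned at write time; filtering on any one value keeps ~1/10 of the rows.
    bucket = db.IntegerProperty()

def log_sample(value):
    Sample(value=value, bucket=random.randint(0, 9)).put()

def fetch_decimated(limit=500):
    # Any single bucket value gives a roughly even 10% sample.
    # The equality filter plus timestamp ordering needs a composite
    # index on (bucket, timestamp) in index.yaml.
    return (Sample.all()
            .filter('bucket =', 0)
            .order('timestamp')
            .fetch(limit))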


Maintaining a sequential, distributed counter is tricky. If you need exact
numbering, you'd be better off kicking off a regular task that numbers
existing entities that don't yet have numbers.
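
A rough sketch of what that numbering task could look like, assuming a
Sample model and a small singleton counter entity (all names here are
placeholders); a real version would also want to guard against two copies
of the task running at once:

from google.appengine.ext import db

class Counter(db.Model):
    next_seq = db.IntegerProperty(default=0)

class Sample(db.Model):
    value = db.FloatProperty()
    timestamp = db.DateTimeProperty(auto_now_add=True)
    seq = db.IntegerProperty()   # filled in later by the numbering task

def number_new_samples(batch_size=100):
    # Run this from cron or the task queue, not on the request path.
    counter = Counter.get_or_insert('sample_counter')
    # Entities that haven't been numbered yet, oldest first.
    # The equality filter plus timestamp ordering needs a composite index.
    unnumbered = (Sample.all()
                  .filter('seq =', None)
                  .order('timestamp')
                  .fetch(batch_size))
    for sample in unnumbered:
        sample.seq = counter.next_seq
        counter.next_seq += 1
    db.put(unnumbered + [counter])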


> what I really want is something that grabs every nth record
> efficiently, so that I can adjust the timebase on the display chart
> and get only "enough" evenly decimated samples from the DB to display
> based on the timebase.


Perhaps the best way to do this would be a set of boolean values indicating
if this entry was logged in the first second of the minute, the first minute
of the hour, and so forth. Then you can fetch the first entity of any
interval that you've catered for.
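
Something like this, for instance (again, model and property names are
placeholders; it assumes you log at least one sample per second, so every
minute actually gets an entry flagged):

import datetime
from google.appengine.ext import db

class Sample(db.Model):
    value = db.FloatProperty()
    timestamp = db.DateTimeProperty()
    # Set at write time, from the timestamp alone.
    first_of_minute = db.BooleanProperty(default=False)
    first_of_hour = db.BooleanProperty(default=False)
    first_of_day = db.BooleanProperty(default=False)

def log_sample(value):
    now = datetime.datetime.utcnow()
    Sample(value=value,
           timestamp=now,
           first_of_minute=(now.second == 0),
           first_of_hour=(now.minute == 0 and now.second == 0),
           first_of_day=(now.hour == 0 and now.minute == 0
                         and now.second == 0)).put()

def fetch_for_timebase(timebase, start, end, limit=1000):
    # One point per minute/hour/day, depending on the chart's timebase.
    # Each flag/timestamp combination needs its own composite index.
    flag = {'minute': 'first_of_minute',
            'hour': 'first_of_hour',
            'day': 'first_of_day'}[timebase]
    return (Sample.all()
            .filter(flag + ' =', True)
            .filter('timestamp >=', start)
            .filter('timestamp <', end)
            .order('timestamp')
            .fetch(limit))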


>
>
> small skips for today only, larger for days, larger for months...
> just like you guys do for the google.finance stock displays...
>
> thanks again for the replies...
>
> On Aug 12, 7:18 pm, sboire <sebastien.boirelavi...@gmail.com> wrote:
> > True indeed, it will time out if you try to do them all in the same
> > request. The trick is to return the last entry's info from the batch of
> > 1000 records as part of the request response and then query another
> > time for the next 1000. Still, as you mention, there is a huge
> > bandwidth and CPU hit in doing so, so in practice that may not be a
> > good solution for counting, more for iterating over a whole database.
> > Using the keys_only option in your query may significantly increase
> > the performance of the query, allowing you to do several "1000" chunks
> > in one request; better, but still inefficient for counting unless the
> > dataset stays within a few thousand entities.
> >
> > Sebastien
> >
> > On Aug 12, 2:12 pm, Neal Walters <nwalt...@sprynet.com> wrote:
> >
> > > Sboire,
> > >   There is an "offset" parameter on fetch, so yes, you can get 1000
> > > records at a time in a loop.
> > > I believe, however, this is discouraged because it will eat up your
> > > CPU quota, and potentially you could hit other limits and quotas.
> > > Imagine if you had 5 million records: reading 1000 at a time would
> > > take 5000 calls.  Even on a MySQL database with PHP, for example, you
> > > would probably hit the various CPU limits per task reading so many
> > > records in one round trip from the client to the server.
> >
> > > Neal Walters
> >
>
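
For what it's worth, the keys_only batching Sebastien describes looks
roughly like this (just a sketch; as he says, it's workable for iterating
over everything, but still too slow and expensive for counting a large
dataset):

from google.appengine.ext import db

def count_all(model_class, batch_size=1000):
    # Walk the keys in batches, using the last key seen as the
    # starting point for the next query.
    count = 0
    last_key = None
    while True:
        q = model_class.all(keys_only=True).order('__key__')
        if last_key is not None:
            q.filter('__key__ >', last_key)
        keys = q.fetch(batch_size)
        if not keys:
            break
        count += len(keys)
        last_key = keys[-1]
    return count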


-- 
Nick Johnson, Developer Programs Engineer, App Engine
