This is expected behavior and the nature of the GAE datastore.
The beauty of it is that the time will be consistent and directly
proportional to the number of items retrieved, not dependent on the
number of records stored.
--
Alex
On Jul 15, 4:14 am, Anthony Mills mrmi...@gmail.com wrote:
On Jul 14, 6:48 am,
Hi Alexander,
I have code which updates a table with 7 fields, one of them being
an image, on a certain condition, which would take place once in two
days, and then retrieves around 25 records from that table and
displays them in an HTML page which uses a Django script. All together,
I have
On Jul 16, 12:45 pm, Alexander Trakhimenok
alexander.trakhime...@gmail.com wrote:
This is expected behavior and the nature of the GAE datastore.
The beauty of it is that the time will be consistent and directly
proportional to the number of items retrieved, not dependent on the
number of records stored.
I
More testing reveals that requesting by key costs 12 api_cpu_ms per
result returned, plain and simple. No bonus for batching whatsoever.
So, to sum up my testing so far:
Getting (using knowledge of keys) seems to take 12 ms * results.
Querying seems to take 17.5 ms + 13.2 ms * results.
Inserting
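The two measurements above amount to a simple linear cost model. A minimal sketch, using only the empirical constants reported in this thread (they are measurements from one app, not official GAE figures):

```python
# Empirical constants from the tests above (api_cpu_ms, as measured in
# this thread -- not official numbers).
GET_MS_PER_RESULT = 12.0      # batched get by key: ~12 ms per entity
QUERY_FIXED_MS = 17.5         # per-query overhead, independent of size
QUERY_MS_PER_RESULT = 13.2    # per-entity cost of a query

def get_cost_ms(n):
    """Estimated api_cpu_ms for fetching n entities by key."""
    return GET_MS_PER_RESULT * n

def query_cost_ms(n):
    """Estimated api_cpu_ms for a query returning n entities."""
    return QUERY_FIXED_MS + QUERY_MS_PER_RESULT * n
```

Under this model, getting by key only beats querying by the fixed overhead plus ~1.2 ms per result, so batching keys mainly pays off when you already know them.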
Do you use Django or other framework(s)?
The first expensive call indicates that the CPU is probably spent on
module loading - it's well known and widely discussed on the forum.
--
Alex
On Jul 12, 1:39 am, Anthony Mills mrmi...@gmail.com wrote:
Nick, thanks for responding. I'll add a lot more detail and
I don't use any frameworks, no. My server side code is mainly meant to
be a thin layer over the database. My Python version was taking a lot
of CPU, so I rewrote some of my server side code in Java, and that's
taking a lot of CPU too. Oh well.
I just use the low-level Java libraries (Query,
On Jul 14, 6:48 am, Anthony Mills mrmi...@gmail.com wrote:
I'll do some testing later tonight about what happens when
REQUEST_SIZE is 1, 5, 10, 20, etc., and report back.
OK, here is a selection of the results of my tests. rs controls the
number of results requested.
/service?a=gt=mrs=1 200
On Sun, Jul 12, 2009 at 1:39 AM, Anthony Mills mrmi...@gmail.com wrote:
Nick, thanks for responding. I'll add a lot more detail and test data
here.
I'm doing three fetches: one to get the user data, one to get his
starred items, one to get his own items. I've broken it down to
examine each
50.
On Jul 13, 3:30 am, Nick Johnson (Google) nick.john...@google.com
wrote:
On Sun, Jul 12, 2009 at 1:39 AM, Anthony Mills mrmi...@gmail.com wrote:
Nick, thanks for responding. I'll add a lot more detail and test data
here.
I'm doing three fetches: one to get the user data, one to get
I agree CPU time accounting is quite strange,
even if code is not involved.
I was accounted over 2h of CPU time for a 35-min upload:
cpu_sec/sec was close to 4.0 and ms/sec was 400,
and the data to upload was a CSV file of less than 3 MBytes.
Yes, 3 MB in 35 min, and it needs 4 CPUs.
Nick, thanks for responding. I'll add a lot more detail and test data
here.
I'm doing three fetches: one to get the user data, one to get his
starred items, one to get his own items. I've broken it down to
examine each one individually. The user fetch is fast:
userEntity =
Hi Alex,
It's impossible to give a useful comment without first seeing the code
that you're using. Speaking from my own experience, I have a personal
site that serves reasonably datastore-intensive pages, and typically
total CPU milliseconds doesn't exceed about 600 - that's to do several
get
Yeah, this is absolutely killing me. Killing me. I'm doing two
queries, for instance, where there are 50 records coming back each,
and JSON'ning them up to be sent back to the client. This shouldn't be
too bad, right? And the query they're for is an = on a list property
and an order on an ID,
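The "shouldn't be too bad" intuition can be checked for the JSON step at least. A minimal sketch with plain dicts standing in for the two 50-record result sets (field names here are hypothetical):

```python
import json
import time

# Hypothetical stand-ins for the two 50-record result sets; real code
# would get these from the two datastore queries described above.
records = [{"id": i, "title": "item %d" % i, "stars": i % 5}
           for i in range(100)]

start = time.time()
payload = json.dumps(records)
elapsed_ms = (time.time() - start) * 1000.0

# Serializing ~100 small records is typically a small fraction of a
# millisecond, so the JSON'ning is unlikely to explain hundreds of
# CPU milliseconds -- the cost is on the datastore side.
```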
I am wondering whether you have proved that it is the datastore using
the CPU and not your app?
You received this message because you are subscribed to the Google Groups
Google App Engine group.
On Fri, Jul 10, 2009 at 2:54 AM, Anthony Mills mrmi...@gmail.com wrote:
Yeah, this is absolutely killing me. Killing me. I'm doing two
queries, for instance, where there are 50 records coming back each,
and JSON'ning them up to be sent back to the client. This shouldn't be
too bad, right? And
Hi Nick,
I'll deploy some tests to further investigate this issue and I'll
provide the code and results.
Meanwhile, I'm wondering if you could share the relevant pieces of
code in your app that typically stay
below 600 ms. I must confess that I haven't seen many such cases in my
app so far.
Here's some sample code (not exactly the one I've previously mentioned
as I haven't got the time to prepare it):
[code]
query = ContentEvent.gql('ORDER BY created_at DESC')
event_list = query.fetch(max_results, offset)
# now that we've fetched the initial list we need to navigate all the
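One caveat with the fetch above: an offset still walks past the skipped entities, so deep pages get progressively more expensive. A common alternative is to filter on the sort property (here, `created_at`) instead of offsetting. Sketched below over a plain sorted list as a hypothetical stand-in for ContentEvent results; in real code the filter would be a `WHERE created_at < :1` clause:

```python
def next_page(events, page_size, before=None):
    """Return up to page_size events strictly older than 'before'
    (before=None means start from the newest)."""
    if before is not None:
        events = [e for e in events if e["created_at"] < before]
    return events[:page_size]

# Stand-in data: ten events, newest first, mirroring ORDER BY created_at DESC.
events = sorted(
    [{"id": i, "created_at": i} for i in range(10)],
    key=lambda e: e["created_at"],
    reverse=True,
)

page1 = next_page(events, 3)                                   # ids 9, 8, 7
page2 = next_page(events, 3, before=page1[-1]["created_at"])   # ids 6, 5, 4
```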