Hi Diomedes,

Pity you haven't gotten any reply yet, as the question is very valid and
interesting.

I'm not a Google representative, just a newbie GAE developer, and will
share my assumptions.

As I understand it, GAE is more suitable for apps where writes are
triggered by user actions.

And any kind of statistics/logging/etc. system is not the best candidate
to implement on GAE.

This is all about BigTable design limitations. My understanding is that
it was designed to handle far fewer writes than reads.

Like one user writes a post/tweet and hundreds read it.

In your case, even with all optimizations, you have exactly the
opposite: lots of writes and just a few reads.

So I think you could have big problems using GAE effectively and
benefiting from all its features.
--
Alexander Trakhimenok

On Feb 17, 5:01 am, diomedes <alakaz...@gmail.com> wrote:
> Ok,
>
> (a bit bummed that nobody bothered to reply to my first post to the
> group)
> I spent the last few days reading related posts and transcripts of
> chats with Marzia, and went through the related tickets, and still I
> have no answer.
>
> Let me try once more to explain my question:
> It is not about quotas.
> Nor is it about performance per se.
> My question is about cost.
> My app implements a beacon service - my client websites that will be
> using the beacon have 5-10M pageviews per month, and these pageviews
> will result in actual beacon hits to my app.
> These requests do not hit the Datastore - I buffer them in memcache,
> doing minimal processing to keep the per-beacon-hit cost low, and
> "process them" in batches every few seconds.
>
> Still, in spite of all the buffering, if the cheapest write (1-attr
> table, no indexes) gets charged 250-500msec, this makes the app design
> for such a high-throughput service non-obvious.
>
> I understand that I can decrease the number of my writes by using a
> pickled blob attr that contains lots of records inside - that's what I
> am about to do.
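>
> Roughly what I have in mind (the model and field names are made up for
> illustration):
>
>     import pickle
>     from google.appengine.ext import db
>
>     class HitBucket(db.Model):
>         # Many records packed into one blob, so a whole batch of
>         # records costs a single datastore write.
>         data = db.BlobProperty()
>
>     def append_records(bucket_name, new_records):
>         bucket = HitBucket.get_by_key_name(bucket_name)
>         if bucket is None:
>             bucket = HitBucket(key_name=bucket_name,
>                                data=db.Blob(pickle.dumps([])))
>         records = pickle.loads(bucket.data)
>         records.extend(new_records)
>         bucket.data = db.Blob(pickle.dumps(records))
>         bucket.put()  # one write, however many records are inside
>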
> I just wanted a confirmation from any gurus out there, or from the
> google team that my understanding about the cost of the "cheapest
> write" is correct.
>
> I tried a batch db.put() of 100 single-cell objects - it still took 23
> seconds of datastore CPU, i.e. my cheapest write = 230msec - a bit
> less than before, but not by much.
> I made all 100 objects children of the same parent and performed the
> same test - again the same datastore CPU utilization.
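>
> For reference, the test looked roughly like this (the model name is
> made up):
>
>     from google.appengine.ext import db
>
>     class Cell(db.Model):
>         value = db.TextProperty()  # a single, unindexed attribute
>
>     # 100 entities in one batch call -- one RPC, but datastore CPU is
>     # still charged per entity written:
>     db.put([Cell(value='x') for i in range(100)])
>
>     # Same test again, with all 100 entities in one entity group:
>     parent_key = db.Key.from_path('Cell', 'parent')
>     db.put([Cell(parent=parent_key, value='x') for i in range(100)])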
>
> Is there another method to update/insert multiple records that costs
> less? I.e., is there any cheaper write? (I am very flexible :-) )
> Is my understanding correct that the planned 10-12 cents per CPU-hr will
> be applied to the datastore CPU usage as well?
>
> Thanks a lot,
>
> diomedes
>
> On Feb 14, 3:18 pm, diomedes <alakaz...@gmail.com> wrote:
>
> > Hi all,
>
> > I have a question with regard to the pricing and how it relates to
> > the relatively high API CPU charge associated with each write (400-500msec).
>
> > I started working on my first GAE app about a month ago - and
> > overall I am both excited and very satisfied.  My initial plan was to
> > run it on Amazon's EC2 - but eventually I took the plunge, started
> > learning Python, moved to GAE and (almost) never looked back :-)
>
> > My app, a Cacti-like webservice that monitors the performance (think
> > response time) of a website using Google Analytics-like beacons, is
> > rather resource-demanding.  On top of that, GAE best practices imply
> > that any expensive reports/aggregates etc. should be precalculated and
> > stored instead of dynamically produced on demand.  All that results in
> > many writes, and given that the simplest write (single key-val pair, no
> > indexes) gets "charged" approx 500msec of API CPU time (see the related
> > thread by Matija:
> > http://groups.google.com/group/google-appengine/browse_thread/thread/...),
> > a normal DB design that would have been meaningful in terms of cost on
> > EC2 becomes impossible on GAE.
>
> > Because I am a Google-aholic I decided to change the app design to
> > minimize writes - I fetch a bunch of pickled data as a blob, update it
> > in memory and write it back as a blob (just like people did before DBs
> > came along :-) )
> > Before I commit to that design I wanted to get confirmation that my
> > understanding is correct:
> >     - Google is going to charge 10-12 cents per CPU-hour, and that will
> > include all the CPU used by the APIs etc. (http://googleappengine.blogspot.com/2008/05/announcing-open-signups-expected.html)
> >     - This means that if your site does 10M pageviews a month and does
> > a couple of writes per pageview at 500msec per write, that's 10M CPU
> > secs/mo just from the writes, i.e. 10M sec / 3600 * $0.10/hr = ~$280/mo
> > just from the writes (quick check below).
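> >
> > A quick sanity check of that arithmetic, using the same numbers as above:
> >
> >     pageviews = 10e6          # per month
> >     writes_per_view = 2
> >     cpu_sec_per_write = 0.5
> >     price_per_cpu_hour = 0.10
> >
> >     cpu_seconds = pageviews * writes_per_view * cpu_sec_per_write  # 10M CPU-sec
> >     cost = cpu_seconds / 3600 * price_per_cpu_hour                 # ~$278/mo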
>
> > Is this correct?
>
> > For the record, I find Google's planned pricing extremely attractive
> > compared to Amazon's, primarily because Amazon charges 10c for a
> > CPU-hr of the whole machine while Google (will) charge 10c per CPU-hr
> > *actually used* by your requests.  This makes a huge difference -- a
> > server running at 50+% capacity (that's rather aggressive, but with an
> > Amazon/RightScale combination you can be aggressive) will still use
> > less than 20% of its CPU during that time.
> > However, when comparing the cost of writes between Google and the
> > corresponding setup of a [high-CPU EC2 server + elastic storage] combo
> > (able to provide quite a bit more than 20-50 "simple" writes per sec),
> > Amazon is much cheaper than Google.
>
> > Ok, that's all I had to say,
> > Sorry for the rather long post,
> > Looking forward to hearing comments.
>
> > Ah and thank you very very much for lifting the high cpu quota!!
>
> > Diomedes
>
>