I suspect it is for simplicity.  Billing for CPU usage *and* RAM usage
would lead to a lot of confusion; the instance-hour concept
encompasses both.  It also does away with the confusion caused by CPU
times not matching latency, due to different CPU speeds and the CPU
used by API calls.  Now you've got one number to think about: latency.
It makes optimizing clearer: do everything as fast as possible.
The sad side effect is that we now have to care about the scheduler,
how many instances we've got / need, etc.  It also makes slow URL
fetches far less attractive.
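To make the URL-fetch point concrete, here's a rough back-of-the-envelope sketch. All the rates and timings below are made up for illustration; they are not actual App Engine prices, and real billing has free quotas, instance classes, and minimum granularities this ignores:

```python
# Hypothetical comparison of CPU-hour vs instance-hour billing.
# Rates and request timings are invented, for illustration only.

def cpu_hour_cost(cpu_seconds, rate_per_cpu_hour=0.10):
    """Old-style model: pay only for CPU actually burned."""
    return cpu_seconds / 3600 * rate_per_cpu_hour

def instance_hour_cost(wall_seconds, rate_per_instance_hour=0.08):
    """Instance-hour model: pay for the wall-clock time an instance
    is occupied, including time spent idle waiting on a URL fetch."""
    return wall_seconds / 3600 * rate_per_instance_hour

# A request that does 50 ms of real work but blocks 2 s on a slow
# external fetch: under CPU billing only the 50 ms costs anything,
# while under instance billing the whole 2.05 s ties up an instance.
cpu = cpu_hour_cost(0.05)
wall = instance_hour_cost(2.05)
print(cpu, wall)
```

With these made-up rates the instance-hour cost of that request is more than 30x the CPU-hour cost, which is why latency (and slow URL fetches in particular) suddenly dominates the optimization picture.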

Robert

On Sun, May 22, 2011 at 14:43, Anders <blabl...@gmail.com> wrote:
> Ok, if it's the RAM that is the bottleneck, then paying per instance makes
> sense, if each instance is given a limited amount of RAM. If this (RAM being
> the bottleneck) will remain the case also in the near future then that kind
> of price model would work. But then why not base the price on the amount of
> RAM used? I find the model of paying per instance unclear.
>
> CPU time is also not entirely clear since, as you said, the CPU can often be
> idle. It would be misleading to have the same price for idle CPU time as for
> 100% CPU usage since when the CPU is idle for one instance, other instances
> can use that CPU time.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to google-appengine@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.