Hi Pierre,

On Tue, Nov 15, 2011 at 3:19 AM, Pol <p...@everpix.net> wrote:
> Hi Brian,
>
>> > So on December 1st, the 50% discount for frontend instances is gone.
>> > The idea is to compensate by switching to Python 2.7 with
>> > multithreading, but at this point it looks like a lose-lose situation:
>> > it runs more requests at the same time, but they take longer. We're
>> > mid-November already; do you guys think you'll have all of this
>> > working perfectly within 2 weeks?
>>
>> No, the issues with concurrent requests won't be fixed by the end of 
>> November.
>>
>> But note that concurrent requests will *not* improve utilization for
>> CPU-bound requests. Running multiple threads on the same CPU just
>> slows each thread down proportionally.
>
> That doesn't make sense: apps do a mix of CPU stuff and RPC stuff (and
> possibly URL requests). What's the point of concurrent requests if it
> slows down the CPU stuff while letting you parallelize your RPC calls?

This pattern (a mix of CPU use and RPC calls) will benefit from
concurrent requests. I was writing about what I understood to be your
login example.

Presumably it does a single datastore read to access user information
(taking 40ms or so) and then spends 1 second doing cryptography.
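
To make the distinction concrete, here's a rough, self-contained sketch
(plain Python 2.7 on a single CPU, nothing App Engine specific; the loop
size and function names are just stand-ins for your crypto work and a
datastore RPC):

import threading
import time

def cpu_bound():
    # Stand-in for ~1 second of cryptography: pure-Python work holds the GIL.
    total = 0
    for i in xrange(5 * 1000 * 1000):
        total += i * i
    return total

def rpc_wait():
    # Stand-in for a ~40ms datastore RPC: the GIL is released while blocked.
    time.sleep(0.04)

def timed(target, n):
    # Run n copies of target on threads and return the wall-clock time.
    threads = [threading.Thread(target=target) for _ in xrange(n)]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.time() - start

print 'cpu x1: %.2fs' % timed(cpu_bound, 1)
print 'cpu x2: %.2fs' % timed(cpu_bound, 2)  # roughly 2x the single-thread time
print 'rpc x1: %.2fs' % timed(rpc_wait, 1)
print 'rpc x2: %.2fs' % timed(rpc_wait, 2)   # roughly the same: the waits overlap

So a handler that mostly waits on RPCs genuinely benefits from concurrent
requests, while the second of crypto in the login handler serializes behind
the GIL no matter how many requests share the instance.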

> The end result will be the same number of instances, as requests end up
> taking longer. Isn't the scheduler supposed to watch all this and
> ensure the CPU on each physical machine is not saturated?
>
> Only apps that do long-poll URL requests and barely use the CPU would
> benefit from concurrent requests then.
>
> We were told: don't worry so much about hours-based pricing, just wait
> for the 2.7 runtime, it'll have concurrent requests, it'll compensate.
> Clearly that doesn't work as promised if just turning
> threadsafe ON makes a 2-second request turn into a 30-60 second
> one: the scheduler is not doing the right thing.

Yes, these large latency increases are a bug:
http://code.google.com/p/googleappengine/issues/detail?id=6323
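
For anyone else following the thread: "turning threadsafe ON" here means the
standard Python 2.7 app.yaml setting, roughly like this (the app name and
handler path are placeholders, not Pierre's actual config):

application: yourapp
version: 1
runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /.*
  script: main.app

With threadsafe: true, handlers must point at a WSGI application object
(here main.app) rather than a CGI script, and the scheduler may then send
multiple requests to the same instance at once, which is exactly the path
where the latency blow-up tracked in that issue shows up.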

> It seems what you need is per-WSGIApplication-instance control of
> the concurrency setting instead of a global one, so you can turn it on
> only where it makes sense.
>
> Finally, no matter what, concurrent or not, there's still a problem as
> the 2.7 runtime appears slower than 2.5 in this simple empirical test. I'm
> starting to suspect you are using the 2.7 transition as an opportunity
> to run more virtual instances per physical machine.

That's not the case. The Python 2.7 runtime is slower than the Python
2.5 runtime in some cases and faster in others. We aren't publicizing
the reasons why at this point.

Cheers,
Brian

> - Pierre

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to google-appengine@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.
