Hey Jeff,
It will be closer to 45, though right now it is less than that and fixed;
this is due to a staged rollout of the feature. The concurrent request
limit for Backends is dynamic, and App Engine will continue to push requests
to an instance as long as CPU usage is below the instance class's threshold.
It's a small, fixed number at the moment. We're looking into making it
user-tweakable.
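A toy sketch of the admission idea described above (not App Engine internals; the 0.8 CPU threshold and the cap of 45 are hypothetical illustrations):

```python
# Toy model: an instance accepts another concurrent request only while
# measured CPU stays under the instance class's threshold, up to a cap.
def can_accept(current_concurrency, cpu_usage, cpu_threshold=0.8, hard_cap=45):
    # Hypothetical numbers: 0.8 as the CPU threshold, 45 as the eventual cap
    # mentioned earlier in the thread.
    return cpu_usage < cpu_threshold and current_concurrency < hard_cap

print(can_accept(10, 0.5))  # True: CPU headroom, below the cap
print(can_accept(10, 0.9))  # False: CPU above threshold
print(can_accept(45, 0.5))  # False: at the concurrency cap
```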
Ikai Lan
Developer Programs Engineer, Google App Engine
Blog: http://googleappengine.blogspot.com
Twitter: http://twitter.com/app_engine
Reddit: http://www.reddit.com/r/appengine
On Tue, May 17, 2011 at 12:59
I've been thinking more and more about this, and I'm a little concerned:
What is the realistic level of concurrency for a frontend instance?
Does this mean a concurrency of ~45 or a concurrency of ~6?
>- average response time: 962 ms
>- QPS: 45.333
>- Average Latency: 148.1 ms
Here
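The two readings can be checked against Little's Law (concurrency = throughput × time in system); a quick sketch using the figures quoted above:

```python
# Little's Law: L = lambda * W (concurrency = arrival rate x time in system).
qps = 45.333             # throughput from the test above
response_time_s = 0.962  # average response time, in seconds
latency_s = 0.1481       # average latency, in seconds

# If "time in system" means the response time, concurrency is ~44 (the ~45 reading):
print(round(qps * response_time_s, 1))  # 43.6
# If it means the reported latency, concurrency is ~7 (the ~6 reading):
print(round(qps * latency_s, 1))        # 6.7
```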
[I just realized my initial response didn't go to all]
This is really really great information. One more question - what is
the difference between average latency and average response time?
Is response time the actual time to client, but latency the actual
wallclock time spent executing? In oth
Nice analysis.
Robert
On Thu, May 12, 2011 at 18:41, Mike Lawrence wrote:
> reran the Jmeter volume tests to get QPS data...
> 1.5.0 GAE SDK. 1000 user threads, each doing 100 web page requests
> that issue a couple of backend datastore reads with sessions enabled:
>
> - without m
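For reference, the QPS figure from a run like the one quoted above is just completed requests divided by wall-clock duration; a sketch (the 2206 s duration is hypothetical, chosen only to illustrate the arithmetic):

```python
# QPS = total completed requests / wall-clock duration of the test run.
total_requests = 1000 * 100   # 1000 user threads x 100 requests each (from the test above)
wall_clock_seconds = 2206     # hypothetical duration for illustration

print(round(total_requests / wall_clock_seconds, 2))  # 45.33
```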
This is really neat, thanks for doing this.
Did you track the peak average QPS per instance during this test? I'm
very curious to know what kind of throughput we can expect for a
(threaded) java instance.
Thanks,
Jeff
On Wed, May 11, 2011 at 8:02 PM, Mike Lawrence wrote:
> here are the jmeter