It's a small, fixed number at the moment. We're looking into making it
user-tweakable.
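
For a rough sense of what that works out to in practice, here's a
back-of-the-envelope sketch using the multithreaded numbers Mike posted
below. It assumes the scheduler spread the load evenly across all 8
instances, which the test doesn't directly confirm:

    // Little's Law: requests in flight = throughput x time each request spends in the system.
    // The figures come from the multithreaded JMeter run quoted below.
    public class ConcurrencyEstimate {
        public static void main(String[] args) {
            double qps = 45.333;           // measured queries per second
            double avgResponseSec = 0.962; // measured average response time, in seconds
            int instances = 8;             // nodes observed during the run

            double inFlight = qps * avgResponseSec;    // ~43.6 requests in flight overall
            double perInstance = inFlight / instances; // ~5.5 concurrent requests per instance
            System.out.printf("total ~%.1f, per instance ~%.1f%n", inFlight, perInstance);
        }
    }

Read that way, the answer to Jeff's "~45 or ~6" question below is closer
to 6 concurrent requests per instance; the ~45 QPS comes from 8 instances
each serving a handful of requests at a time.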

Ikai Lan
Developer Programs Engineer, Google App Engine
Blog: http://googleappengine.blogspot.com
Twitter: http://twitter.com/app_engine
Reddit: http://www.reddit.com/r/appengine



On Tue, May 17, 2011 at 12:59 PM, Jeff Schnitzer <j...@infohazard.org> wrote:

> I've been thinking more and more about this, and I'm a little concerned:
>
> What is the realistic level of concurrency for a <threadsafe> frontend
> instance?
>
> Does this mean a concurrency of ~45 or a concurrency of ~6?
>
> >    - average response time: 962 ms
> >    - QPS: 45.333
> >    - Average Latency: 148.1 ms
>
> Here's my concern:  My app is a Facebook app.  RPCs to Facebook take
> time - quite a bit more than I would prefer, but there's nothing I can
> do about it.  As a consequence, my app spends a fair amount of time
> waiting - some operations take more than a second to complete
> (thankfully, GWT helps shield the user from this) - so I'm concerned
> this will spike my instance count.
>
> Based on past experience with more traditional web architectures, I
> can easily have a couple hundred threads in a JBoss instance blocked
> waiting for IO.  The Node.js/Twisted/EventMachine people probably
> laugh at this level of concurrency, but it's enough to keep costs
> within reason.  I'm a bit concerned that Appengine's level of
> concurrency might be "6"... which means that when I reach scale, I'll
> need to pay for 33X more instances than I would in an alternative
> cloud offering.  That could hurt - a lot.
>
> The docs say "Instances marked <threadsafe> can serve a small number
> of requests in parallel."  I don't like the sound of "a small number
> of requests" :-(
>
> Jeff
>
> On Tue, May 17, 2011 at 12:41 PM, Jeff Schnitzer <j...@infohazard.org>
> wrote:
> > [I just realized my initial response didn't go to all]
> >
> > This is really really great information.  One more question - what is
> > the difference between average latency and average response time?
> >
> > Is response time the actual time to client, but latency the actual
> > wallclock time spent executing?  In other words, does this mean that
> > the difference is the amount of time a request spends in a pending
> > queue?
> >
> > I'm trying to get a handle on the "best-case" scenario assuming
> > optimal behavior of the scheduler.  This is straightforward with
> > single-threaded python (given 200ms wall-clock requests, each instance
> > can at best handle 5 req/sec) but of course much harder to compute
> > with multi-threaded java.
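> >
> > (Spelling that arithmetic out: one request every 0.2 s is 1/0.2 = 5
> > req/sec, so an instance that could serve N requests at once would top
> > out around N/0.2 s = 5*N req/sec, ignoring CPU contention.)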
> >
> > A question for Google:  What factors go into deciding how many threads
> > an instance can process at once?  Is it a constant # or does it depend
> > on how cpu-bound the application is?  I make a lot of requests to
> > Facebook and it's not at all unlikely for my app to sit around for
> > hundreds of milliseconds at a time.
> >
> > Jeff
> >
> > On Thu, May 12, 2011 at 3:41 PM, Mike Lawrence <m...@systemsplanet.com>
> > wrote:
> >> reran the JMeter volume tests to get QPS data...
> >>    GAE SDK 1.5.0: 1000 user threads, each doing 100 web page requests
> >>    that issue a couple of backend datastore reads with sessions enabled:
> >>
> >> - without multithreading
> >>    - nodes: 40
> >>    - average response time: 1998 ms
> >>    - QPS: 6.745
> >>    - Average Latency: 103.9 ms
> >>    - Average Memory: 61.2 MBytes
> >>
> >> - with multithreading
> >>    - nodes: 8
> >>    - average response time: 962 ms
> >>    - QPS: 45.333
> >>    - Average Latency: 148.1 ms
> >>    - Average Memory: 71.3 MBytes
> >>
> >> Bottom Line: For my application (Java/Stripes/Slim3), multi-threading is
> >> twice as fast; under the new pricing model, it is 5 times cheaper
> >> (8 nodes instead of 40 nodes).
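> >>
> >> For reference, the concurrent-request behavior being compared here is the
> >> one opted into via the threadsafe flag in appengine-web.xml; a minimal
> >> sketch ("your-app-id" is a placeholder):
> >>
> >>     <?xml version="1.0" encoding="utf-8"?>
> >>     <appengine-web-app xmlns="http://appengine.google.com/ns/1.0">
> >>       <application>your-app-id</application>
> >>       <version>1</version>
> >>       <!-- allow this version to serve requests in parallel -->
> >>       <threadsafe>true</threadsafe>
> >>     </appengine-web-app>
> >>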
> >> Sincerely,  Mike Lawrence
> >>
> >> On Thu, May 12, 2011 at 3:46 AM, Jeff Schnitzer <j...@infohazard.org>
> >> wrote:
> >>>
> >>> This is really neat, thanks for doing this.
> >>>
> >>> Did you track the peak average QPS per instance during this test?  I'm
> >>> very curious to know what kind of throughput we can expect for a
> >>> (threaded) java instance.
> >>>
> >>> Thanks,
> >>> Jeff
> >>>
> >>> On Wed, May 11, 2011 at 8:02 PM, Mike Lawrence <m...@systemsplanet.com>
> >>> wrote:
> >>> > here are the JMeter screenshots
> >>> >
> >>>
> >>
> >>
> >
>
