I did my own testing in Python, and I definitely couldn't get 30
simultaneous requests to work.

This was done on a test app that does not have billing enabled, so I'm not
sure whether that affects the dynamic request limit.

Either way, for this test the effective limit was 16 simultaneous requests.
As soon as I modified my code to try 17, it started throwing a few dynamic
request limit errors, though the App Engine Logs page just records each one
as another request (with a 500 status code and a time of about 10 seconds).

Here is the code for the handler. After receiving a GET, it sleeps for 3
seconds and then responds with a short greeting:

from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app
import time

class meTest(webapp.RequestHandler):
    def get(self):
        # Simulate a slow request, then echo the caller's id back.
        time.sleep(3)
        meid = self.request.get('id')
        self.response.out.write('hi! foo%s' % meid)


application = webapp.WSGIApplication([('/test/meTest', meTest)],
                                     debug=True)

def main():
    run_wsgi_app(application)

if __name__ == "__main__":
    main()
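
For completeness, the handler above is mapped in a bare-bones app.yaml roughly
like this (the script filename here is just a placeholder for whatever the
module is actually called):

application: datastoretester
version: 1
runtime: python
api_version: 1

handlers:
- url: /test/.*
  script: metest.py  # placeholder; use the actual module filename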

And here is the Python code I am running on my local machine to test. It
fires off 17 threads; each thread issues a GET to the handler and prints the
response, three times in a row:

import httplib
import threading

class meThread(threading.Thread):
    def run(self):
        # Issue three sequential GETs, waiting for each response before the next.
        for i in range(3):
            conn = httplib.HTTPConnection('datastoretester.appspot.com')
            conn.request("GET", "/test/meTest?id=%s" % self.getName())
            response = conn.getresponse()
            print response.read()
            conn.close()

# Fire off 17 threads, named 1 through 17.
for i in range(17):
    meThread(name=str(i + 1)).start()

As soon as I scale the range back to 16, there are no more dynamic request
limit errors.

Special note: I am not sure if there is some sort of built-in rate limiting
for requests from the same IP address, but when I send 17 threads at the
handler, it is not responding to their GETs in 3 seconds; each one takes
about 11 seconds.
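
In case it helps anyone reproduce that, here is a rough variant of the test
client above that times each GET individually (the timedThread class is just
my sketch; same host and handler as before):

import httplib
import threading
import time

class timedThread(threading.Thread):
    def run(self):
        for i in range(3):
            start = time.time()
            conn = httplib.HTTPConnection('datastoretester.appspot.com')
            conn.request("GET", "/test/meTest?id=%s" % self.getName())
            response = conn.getresponse()
            response.read()
            conn.close()
            # Report the status code and wall-clock time for this single GET.
            print "%s: %s in %.1fs" % (self.getName(), response.status,
                                       time.time() - start)

for i in range(17):
    timedThread(name=str(i + 1)).start()

With 17 threads I'd expect most of these to print roughly 11 seconds, with
the one aborted request showing a 500 after about 10.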

If you look in the Logs for a run with 17 threads, you see this:

03-02 08:31PM 00.384 /test/meTest?id=16 200 12855ms 19cpu_ms 0kb gzip(gfe)
03-02 08:31PM 01.266 /test/meTest?id=7 200 11973ms 19cpu_ms 0kb gzip(gfe)
03-02 08:31PM 01.283 /test/meTest?id=15 500 10017ms 0cpu_ms 0kb gzip(gfe)
03-02 08:30PM 58.280 /test/meTest?id=4 200 11952ms 0cpu_ms 0kb gzip(gfe)

(Notice the 500 error in there as well; that is the request that gets the
dynamic request limit message.)

Now, if I run the test with only 4 threads, it looks nice and quick like
this:

03-02 08:35PM 25.602 /test/meTest?id=1 200 3009ms 0cpu_ms 0kb gzip(gfe)
03-02 08:35PM 22.637 /test/meTest?id=2 200 3008ms 0cpu_ms 0kb gzip(gfe)
03-02 08:35PM 22.623 /test/meTest?id=3 200 3009ms 0cpu_ms 0kb gzip(gfe)
03-02 08:35PM 22.603 /test/meTest?id=4 200 3008ms 0cpu_ms 0kb gzip(gfe)

Once I start testing with more than 4 threads, the response times begin to
slow down...

So I'd guess something needs to be clarified: is there some internal limiting
going on per IP address? Does a "long"-running request have a lower
simultaneous request limit?
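
If it would help narrow this down, something like the following sweep (again
just a sketch, same handler and hostname as above) records the average
response time and the count of non-200 responses for each thread count from
1 to 20, which should show exactly where the slowdown and the 500s begin:

import httplib
import threading
import time

results = []
lock = threading.Lock()

def fetch(name):
    # One GET against the sleep-for-3-seconds handler; record status and elapsed time.
    start = time.time()
    conn = httplib.HTTPConnection('datastoretester.appspot.com')
    conn.request("GET", "/test/meTest?id=%s" % name)
    response = conn.getresponse()
    response.read()
    conn.close()
    with lock:
        results.append((response.status, time.time() - start))

for n in range(1, 21):
    del results[:]
    threads = [threading.Thread(target=fetch, args=(str(i + 1),))
               for i in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    errors = len([s for s, dt in results if s != 200])
    avg = sum(dt for s, dt in results) / len(results)
    print "%2d threads: avg %.1fs, %d non-200" % (n, avg, errors)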

On Tue, Mar 2, 2010 at 12:21 PM, Gary Orser <garyor...@gmail.com> wrote:

> Eli,
>
> You have the python request server.
> Here is the java client:
> You'll have to get the libraries yourself.
>
> Cheers, Gary
>
> import java.util.ArrayList;
> import java.util.concurrent.Callable;
> import java.util.concurrent.ExecutionException;
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
> import java.util.concurrent.Future;
>
> import org.apache.commons.io.IOUtils;
> import org.apache.http.HttpHost;
> import org.apache.http.HttpResponse;
> import org.apache.http.HttpVersion;
> import org.apache.http.client.HttpClient;
> import org.apache.http.client.methods.HttpGet;
> import org.apache.http.conn.ClientConnectionManager;
> import org.apache.http.conn.params.ConnManagerParams;
> import org.apache.http.conn.params.ConnPerRouteBean;
> import org.apache.http.conn.scheme.PlainSocketFactory;
> import org.apache.http.conn.scheme.Scheme;
> import org.apache.http.conn.scheme.SchemeRegistry;
> import org.apache.http.conn.ssl.SSLSocketFactory;
> import org.apache.http.impl.client.DefaultHttpClient;
> import org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager;
> import org.apache.http.params.BasicHttpParams;
> import org.apache.http.params.HttpParams;
> import org.apache.http.params.HttpProtocolParams;
>
> import org.apache.commons.logging.Log;
> import org.apache.commons.logging.LogFactory;
>
> public class Main
> {
>     private Log log = LogFactory.getLog(Main.class);
>
>     // ADJUST: number of threads to make requests on
>     public static int NUM_PARALLEL_SECTION_REQUESTS = 20;
>
>     public static HttpParams httpParams = new BasicHttpParams();
>     {
>         httpParams.setBooleanParameter("http.protocol.expect-continue", false);
>         // ADJUST: if this is included, will use 8888 as a proxy port.
>         // Charles Proxy defaults to this port.
>         //httpParams.setParameter("http.route.default-proxy", new HttpHost("localhost", 8888));
>     }
>
>     protected class GetSection implements Callable<String>
>     {
>         protected int index;
>         protected HttpClient client;
>         protected String URL;
>
>         public GetSection(int index, HttpClient client, String URL)
>         {
>             this.index = index;
>             this.client = client;
>             this.URL = URL;
>         }
>
>         public String call() throws Exception
>         {
>             HttpGet getSection = new HttpGet(URL);
>             HttpResponse respSection = client.execute(getSection);
>             String foo = IOUtils.toString(respSection.getEntity().getContent(), "UTF-8");
>             return foo;
>         }
>     }
>
>     public static void main(String[] args) throws Exception
>     {
>         new Main().maint(args);
>     }
>
>     public void maint(String[] args) throws Exception
>     {
>         SchemeRegistry schemeRegistry = new SchemeRegistry();
>         schemeRegistry.register(new Scheme("http", PlainSocketFactory.getSocketFactory(), 80));
>         schemeRegistry.register(new Scheme("https", SSLSocketFactory.getSocketFactory(), 443));
>         HttpParams params = new BasicHttpParams();
>         ConnManagerParams.setMaxTotalConnections(params, NUM_PARALLEL_SECTION_REQUESTS);
>         ConnManagerParams.setMaxConnectionsPerRoute(params, new ConnPerRouteBean(NUM_PARALLEL_SECTION_REQUESTS));
>         HttpProtocolParams.setVersion(params, HttpVersion.HTTP_1_1);
>         ClientConnectionManager cm = new ThreadSafeClientConnManager(params, schemeRegistry);
>         HttpClient client = new DefaultHttpClient(cm, httpParams);
>
>         ExecutorService es = Executors.newFixedThreadPool(NUM_PARALLEL_SECTION_REQUESTS);
>         // ADJUST: total number of requests to make.
>         int numSections = 100;
>         ArrayList<Future<String>> futures = new ArrayList<Future<String>>(numSections);
>         log.info("queuing requests");
>         for (int i = 0; i < numSections; i++)
>         {
>             // ADJUST: set a real hostname here
>             futures.add(es.submit(new GetSection(i, client, "http://yourappid.appspot.com/sit/" + Integer.toString(i))));
>             // ADJUST: stagger initial requests with this sleep
>             //Thread.sleep(200);
>         }
>
>         es.shutdown();
>
>         log.info("waiting for thread pool to finish");
>         while (!es.isTerminated())
>             Thread.sleep(500);
>
>         log.info("all requests queued");
>
>         try
>         {
>             for (Future<String> future : futures)
>                 future.get();
>             log.info("got all futures");
>         }
>         catch (ExecutionException e)
>         {
>             // TODO: not really sure what to do if cause is Throwable but not Exception
>             if (e.getCause() instanceof Exception)
>                 throw (Exception) e.getCause();
>         }
>     }
> }
>
> On Mar 2, 9:50 am, Eli Jones <eli.jo...@gmail.com> wrote:
> > What I'm suggesting is.. You need to create a simple test setup that
> > recreates this dynamic request limit error.. (It definitely should not
> > take 8mb of code).
> >
> > I will see if I can create a handler like the one you posted, deploy
> > it, and then run 30 seperate processes that keep getting from that
> > handler.. (I can write this up in less than 10kb or python code)...
> >
> > My guess is this will work.  Without seeing sample code.. I can't tell
> > where you may be going wrong (or where GAE may be breaking)
> >
> > On 3/2/10, Gary Orser <garyor...@gmail.com> wrote:
> >
> >
> >
> > > Actually, 4 threads was before we optimized server side, and set up
> > > the test environment.
> >
> > > I have a tarball, about 8mb, with the test environment. (django and
> > > libraries, grrr)
> > > What is the best way to post this?  I don't see any file attachments
> > > on groups.
> >
> > > Cheers, Gary
> >
> > > On Mar 2, 8:23 am, Eli Jones <eli.jo...@gmail.com> wrote:
> > >> Are these threads you're using (at this point, it really seems like
> you
> > >> should post some simplified code to illustrate the issue at hand)
> waiting
> > >> for their response before trying to get again?
> >
> > >> Posting some code to help recreate this issue will lead to a much
> faster
> > >> resolution.. as it stands.. I just know that someone on the internet
> has
> > >> "10
> > >> threads" that are hitting a dynamic request limit.
> >
> > >> I also know that in the initial e-mail, when the request took longer
> to
> > >> return.. these "threads" were hitting a lower dynamic request limit
> (only
> > >> 4
> > >> could run).  This suggest that there is an important detail to how
> your
> > >> "threads" are doing their work.. and we would need that to provide
> useful
> > >> help.
> >
> > >> Thanks for info.
> >
> > >> On Tue, Mar 2, 2010 at 10:01 AM, Gary Orser <garyor...@gmail.com>
> wrote:
> >
> > >> > But that's the point.  I can not reach 30 active requests.
> > >> > I can only reach 10 active requests without error.
> >
> > >> > Any ideas on how I can debug this?
> >
> > >> > Cheers, Gary.
> >
> > >> > On Mar 2, 7:05 am, "Nick Johnson (Google)" <nick.john...@google.com
> >
> > >> > wrote:
> > >> > > Hi,
> >
> > >> > > On Tue, Mar 2, 2010 at 1:54 PM, Wooble <geoffsp...@gmail.com>
> wrote:
> > >> > > > The 500 requests per second number relies on the
> > >> > > > probably-unreasonable
> > >> > > > assumption that each request can complete in ~75ms.
>  Deliberately
> > >> > > > making your requests take a whole 3 seconds each is, obviously,
> not
> > >> > > > going to work.  You can only have 10 instances active at a time
> by
> > >> > > > default; if the pages you're serving actually take 3 seconds to
> > >> > > > complete you'll need to optimize things a whole lot or be stuck
> with
> > >> > > > a
> > >> > > > 3.33 request/sec maximum.
> >
> > >> > > Actually, the default limit is 30 active requests.
> >
> > >> > > -Nick Johnson
> >
> > >> > > > On Mar 1, 11:33 pm, Gary Orser <garyor...@gmail.com> wrote:
> > >> > > > > Hi Nick,
> >
> > >> > > > > Hmm, I was running tests on a billing enabled appspot today.
> 100
> > >> > > > > requests/test.
> >
> > >> > > > > 10 threads getting a URL with a 3 second sleep (to emulate
> > >> > > > > computation) on appspot, was the most I could get without
> getting
> > >> > > > > 500
> > >> > > > > errors.
> > >> > > > > If I raised the thread pool beyond 10, I started getting
> errors??
> >
> > >> > > > > That doesn't reconcile very well with this statement from the
> > >> > > > > appengine website.
> > >> > > > > "Requests
> > >> > > > >     The total number of requests to the app. The per-minute
> quotas
> > >> > for
> > >> > > > > application with billing enabled allow for up to 500 requests
> per
> > >> > > > > second--more than one billion requests per month. If your
> > >> > > > > application
> > >> > > > > requires even higher quotas than the "billing-enabled" values
> > >> > > > > listed
> > >> > > > > below, you can request an increase in these limits here.
> > >> > > > > "
> >
> > >> > > > > Is there some billing setting that affects this?
> >
> > >> > > > > Cheers, Gary
> >
> > >> > > > > PS.  dead simple request handler.
> >
> > >> > > > > import time
> > >> > > > > from django import http
> > >> > > > > def sit(req):
> > >> > > > >     time.sleep(3)
> > >> > > > >     return http.HttpResponse('foo')
> >
> > >> > > > > errors are:
> >
> > >> > > > > 03-01 04:15PM 48.177 /sit/91 500 10019ms 0cpu_ms 0kb gzip(gfe)
> > >> > > > > 153.90.236.210 - - [01/Mar/2010:16:15:58 -0800] "GET /sit/91
> > >> > HTTP/1.1"
> > >> > > > > 500 0 - "gzip(gfe)" ".appspot.com"
> > >> > > > > W 03-01 04:15PM 58.197
> > >> > > > > Request was aborted after waiting too long to attempt to
> service
> > >> > > > > your
> > >> > > > > request. Most likely, this indicates that you have reached
> your
> > >> > > > > simultaneous dynamic request limit. This is almost always due
> to
> > >> > > > > excessively high latency in your app. Please see
> > >> > > > > http://code.google.com/appengine/docs/quotas.html for more details.
> >
> > >> > > > > On Mar 1, 2:36 pm, Michael Wesner <mike.wes...@webfilings.com
> >
> > >> > wrote:
> >
> > >> > > > > > Correction/addition to my last email.
> >
> > >> > > > > > It turns out that our requests for this EC2 pull thing are
> > >> > > > > > actually
> > >> > > > much faster now.  Gary and our other devs have reworked it.  I
> need
> > >> > updated
> > >> > > > numbers, but they don't take 10s, probably more like 2s.  We
> still
> > >> > > > have
> > >> > some
> > >> > > > heavy ~5s services though, so the same issue exists with the
> > >> > > > simul-req
> > >> > > > stuff, just to less extent.  We don't actually hit this limit
> much
> > >> > > > now
> > >> > with
> > >> > > > the current beta that is in production, but it is low traffic at
> the
> > >> > moment.
> > >> > > >  We are just getting ready to ramp up heavily.
> >
> > >> > > > > > I asked Nick what we should do, well just today after my
> last
> > >> > email, I
> > >> > > > have made contact with a Developer Advocate and whatnot, which
> is
> > >> > fantastic.
> > >> > > >  It looks like we,  as a business, will be able to have better
> > >> > > > contact
> > >> > with
> > >> > > > the GAE team. We would very much like to continue working with
> you
> > >> > > > to
> > >> > figure
> > >> > > > out what actions we can take and what provisioning we can do to
> make
> > >> > our
> > >> > > > product successful and scale it as we grow in the near future.
>  Gary
> > >> > Orser
> > >> > > > will be replying to this thread soon with more findings from
> both
> > >> > > > our
> > >> > real
> > >> > > > app code and a little test app we are using and which he will
> share
> > >> > with
> > >> > > > you.
> >
> > >> > > > > > We plan on having a presence at Google I/O this year as we
> did
> > >> > > > > > at
> > >> > > > PyCon.  Hopefully we can even get setup in the demonstration
> area at
> > >> > I/O.
> >
> > >> > > > > > Thanks Nick for your help.  Could we possibly setup a quick
> > >> > > > > > skype
> > >> > conf
> > >> > > > call at some point?
> >
> > >> > > > > > -Mike Wesner
> >
> > >> > > > > > On Mar 1, 2010, at 1:13 PM, Michael Wesner wrote:
> >
> > >> > > > > > > Nick,
> >
> > >> > > > > > > If we (I work with Gary) require fairly heavy requests
> which
> > >> > > > > > > run
> > >> > for
> > >> > > > multiple seconds then it is not possible to get anywhere near
> 400
> > >> > > > QPS.
> > >> >   The
> > >> > > > math used on the docs page only applies to 75ms requests.
> >
> > >> > > > > > > (1000 ms/second / 75 ms/request) * 30 = 400
> requests/second
> >
> > >> > > > > > > so lets say each request takes 10 seconds (and ours,
> pulling
> > >> > > > > > > data
> > >> > to
> > >> > > > EC2 for a heavy operation that we can't do on GAE could take
> that
> > >> > > > much
> > >> > since
> > >> > > > we have to process and update some XML before sending it)
> >
> > >> > > > > > > (1000 ms/second / 10000 ms/request) * 30 = 3
> requests/second
> >
> > >> > > > > > > And that does not even take into account all the other
> traffic
> > >> > > > > > > to
> > >> > our
> > >> > > > application, nor the fact that many users could be doing this
> same
> > >> > heavy
> > >> > > > operation at the same time.  Our application will see spikes in
> this
> > >> > type of
> > >> > > > activity also.  The docs also mention that CPU heavy apps incur
> > >> > penalties,
> > >> > > > which is vague and scary.
> >
> > >> > > > > > > Great effort is put into doing things in the most
> efficient
> > >> > > > > > > way
> > >> > > > possible, but not everyones apps can do everything in 75ms. Most
> all
> > >> > > > of
> > >> > our
> > >> > > > service calls are under 250ms. We do have a little overhead from
> our
> > >> > > > framework which we are constantly working on improving.  Our
> > >> > application is
> > >> > > > AMF service/object based which is inherently heavy compared to
> > >> > > > simple
> > >> > web
> > >> > > > requests.  It limits the amount of memcache work we can do also,
> but
> > >> > > > we
> > >> > are
> > >> > > > also working on improving our use of that.
> >
> > >> > > > > > > We easily hit these boundaries during testing so I think
> we
> > >> > really
> > >> > > > need much higher simultaneous dynamic request limits for not
> only
> > >> > > > our
> > >> > > > production instance but our dev/qa instances so we can test and
> load
> > >> > them to
> > >> > > > some degree.  Our QA team could easily bust this limit 20 times
> > >> > > > over.
> >
> > >> > > > > > > So, Nick Johnson... I ask your advice.   We are running a
> > >> > > > company/product on GAE.  We are more than happy to pay for
> > >> > > > quota/service/extra assistance in these matters. What do you
> suggest
> > >> > > > we
> > >> > do?
> >
> > >> > > > > > > I should also mention that I spoke with Brett Slatkin at
> PyCon
> > >> > and he
> > >> > > > is now at least semi-familiar with the scope of product we have
> > >> > developed.
> > >> > > >  I have exchanged contact info with him but have not heard
> anything
> > >> > back
> > >> > > > from him yet.  We would really appreciate contact or even a
> brief
> > >> > meeting at
> > >> > > > some point (in person or otherwise).
> >
> > >> > > > > > > Thanks,
> >
> > >> > > > > > > -Mike Wesner
> >
> > >> > > > > > > On Mar 1, 2010, at 7:40 AM, Nick Johnson (Google) wrote:
> >
> > >> > > > > > >> Hi Gary,
> >
> > >> > > > > > >> Practically speaking, for an app that hasn't been given
> > >> > > > > > >> elevated
> > >> > > > permissions, you should be able to have at least 30 concurrent
> > >> > > > requests
> > >> > -
> > >> > > > equating to around 400 QPS if your app is fairly efficient. What
> > >> > problems
> > >> > > > are you running into that lead you to conclude you're hitting a
> > >> > > > limit
> > >> > at 4
> > >> > > > QPS, and that the problem is at App Engine's end?
> >
> > ...
> >
>
