Hi, I am trying to do capacity planning for a new REST service we are developing. I would like to determine the maximum number of POSTs the server can handle under different loads. For example, for 1 MB of data, what is the server overhead involved (HTTP header parsing, thread assignment from the thread pool, server context switches, etc.)? I would like to determine a rough per-core threshold for the number of parallel requests the server can handle. If there is a limiting factor in the system (say network, memory, or something else), what is it?
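For what it's worth, a minimal sketch of that kind of measurement, outside JMeter, could look like the following. This is an illustration only: the class name, the `/ingest` path, and the request/thread counts are made up, the JDK's built-in `com.sun.net.httpserver.HttpServer` stands in for Jetty + RestLib as a "receive and do nothing" endpoint, and `java.net.http.HttpClient` requires Java 11+.

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PostThroughput {
    public static void main(String[] args) throws Exception {
        // Trivial server that drains the body and answers 200, mimicking the
        // "receives the data and does no processing" endpoint described above.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/ingest", ex -> {
            ex.getRequestBody().readAllBytes();   // drain the POST body
            ex.sendResponseHeaders(200, -1);      // 200 OK, no response body
            ex.close();
        });
        server.setExecutor(Executors.newFixedThreadPool(8));
        server.start();
        int port = server.getAddress().getPort();

        byte[] payload = new byte[1024 * 1024];   // 1 MB POST body
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest req = HttpRequest.newBuilder(
                URI.create("http://127.0.0.1:" + port + "/ingest"))
                .POST(HttpRequest.BodyPublishers.ofByteArray(payload))
                .build();

        int requests = 50, threads = 10;          // arbitrary illustrative load
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        CompletionService<Integer> cs = new ExecutorCompletionService<>(pool);
        long start = System.nanoTime();
        for (int i = 0; i < requests; i++)
            cs.submit(() -> client.send(req, HttpResponse.BodyHandlers.discarding())
                                  .statusCode());
        int ok = 0;
        for (int i = 0; i < requests; i++)
            if (cs.take().get() == 200) ok++;
        double secs = (System.nanoTime() - start) / 1e9;
        System.out.printf("%d/%d OK, %.1f req/s%n", ok, requests, requests / secs);
        pool.shutdown();
        server.stop(0);
    }
}
```

Varying the payload size and thread count and watching where requests-per-second stops scaling would give a rough per-core threshold; profiling at that point should show whether CPU, memory, or the network is the limit.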
Thanks,
Dinesh

On Sat, Oct 19, 2013 at 1:21 AM, Bruno D. Rodrigues <[email protected]> wrote:

> On 18/10/2013, at 20:26, Simone Bordet <[email protected]> wrote:
>
> > Hi,
> >
> > On Fri, Oct 18, 2013 at 9:06 PM, dinesh kumar <[email protected]> wrote:
> >> Hi All,
> >> I am trying to run Jetty 9 on Ubuntu 12.10 (32-bit). The JVM I am using is JDK 1.7.0_40. I have set up a REST service on my server that uses RestLib. The REST service is a POST method that just receives the data, does no processing with it, and responds with a success.
> >>
> >> I want to see what is the maximum load the Jetty 9 server will take with the given resources. I have an Intel i5 box with 8 GB of memory. I have set up JMeter to test this REST service against localhost. I know this is not advisable, but I would like to know this number (just out of curiosity).
> >>
> >> When I run JMeter to test this POST method with 1 MB of payload data in the body, I am getting a throughput of around 20 (for 100 users).
> >>
> >> I measured the bandwidth using iperf to begin with:
> >>
> >> iperf -c 127.0.0.1 -p 8080
> >> ------------------------------------------------------------
> >> Client connecting to 127.0.0.1, TCP port 8080
> >> TCP window size: 167 KByte (default)
> >> [  3] local 127.0.0.1 port 44130 connected with 127.0.0.1 port 8080
> >> [ ID] Interval       Transfer     Bandwidth
> >> [  3]  0.0-10.0 sec   196 MBytes   165 Mbits/sec
> >>
> >> The number 165 Mbits/sec seems ridiculously small to me, but that's one observation.
> >
> > You're not connecting iperf to Jetty, are you?
> >
> > On my 4.5-year-old laptop, iperf on localhost gives me 16.2 Gbits/s.
> >
> > --
> > Simone Bordet
>
> I'd have a look at whatever RestLib is doing.
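Simone's point is that `iperf -c` pointed at Jetty's port 8080 is not measuring network bandwidth at all, since Jetty is not an iperf server (a real iperf baseline needs `iperf -s` on one side and `iperf -c` on the other). If installing iperf is inconvenient, a rough loopback baseline can also be taken in plain Java; this is an illustrative sketch only, with made-up names and a fixed 2-second run:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class LoopbackBandwidth {
    public static void main(String[] args) throws Exception {
        // Server side: accept one connection and discard everything read,
        // playing the role of "iperf -s" on loopback.
        ServerSocket ss = new ServerSocket(0);
        Thread sink = new Thread(() -> {
            try (Socket s = ss.accept(); InputStream in = s.getInputStream()) {
                byte[] buf = new byte[64 * 1024];
                while (in.read(buf) != -1) { /* discard */ }
            } catch (IOException ignored) { }
        });
        sink.start();

        // Client side: write as many bytes as possible for ~2 seconds.
        byte[] chunk = new byte[64 * 1024];
        long total = 0, t0 = System.nanoTime();
        try (Socket s = new Socket("127.0.0.1", ss.getLocalPort());
             OutputStream out = s.getOutputStream()) {
            while (System.nanoTime() - t0 < 2_000_000_000L) {
                out.write(chunk);
                total += chunk.length;
            }
        }
        double secs = (System.nanoTime() - t0) / 1e9;
        System.out.printf("%.1f Gbit/s over loopback%n", total * 8 / secs / 1e9);
        sink.join();
        ss.close();
    }
}
```

On modern hardware this should land in the multi-Gbit/s range Simone describes, orders of magnitude above the 165 Mbits/sec misreading; if it does, the bottleneck in the JMeter test is not the loopback path.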
> My own tests use not a REST POST but a never-ending HTTP chunked POST request (so it passes through the whole Jetty HTTP header and chunking stack, plus my own line/message split, plus a clone of the message for later use), and it measures as much throughput as my own simpler raw NIO or AIO versions with zero-copy ByteBuffers, meaning Jetty is now almost as optimised as a raw socket!
>
> My own values on a MacBook Pro (4x i7) are 3 GB/s (24 Gbit/s) for NIO/AIO with zero processing (just reading bytes into null), 900 MB/s (7.2 Gbit/s) for my code and for Jetty (reading bytes into null, but passing through the HTTP headers and the chunking), down to 600 MB/s (4.8 Gbit/s) for the whole split + clone of the bytes. This is for a single request, consuming about 125% CPU (one and a quarter cores).
>
> Now you mention putting a 1 MB file, which is a completely different kind of test. I've also done this test before, again both with Jetty and with my own code, and what I noticed is that if you start a new connection for each operation, no matter how hard Jetty (or my own code) tries to accept the connection and process the HTTP headers ASAP, the raw performance is far lower than with a continuous PUT stream.
>
> Changing my test case to PUT those small files but using HTTP keep-alive (or even pipelining, which I discovered the ab that ships with MacOS does, not on purpose but due to a bug), the raw performance comes back to the raw-stream values.
>
> It would be nice to know exactly what that test is doing. Opening one connection and putting multiple 8 MB POSTs into it?
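Bruno's keep-alive observation is easy to see from the client side. A sketch, with hypothetical names and a local stand-in server rather than Jetty: the JDK's `HttpURLConnection` reuses connections by default, and sending a `Connection: close` header forces a fresh TCP connection (and handshake) per POST, which is what a naive one-connection-per-request JMeter plan does.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class KeepAliveDemo {
    // POST `n` bodies; with close=true every request opens a new TCP connection,
    // with close=false the JDK reuses one persistent (keep-alive) connection.
    static long run(int port, boolean close, int n, byte[] body) throws Exception {
        long t0 = System.nanoTime();
        for (int i = 0; i < n; i++) {
            HttpURLConnection c = (HttpURLConnection)
                    new URL("http://127.0.0.1:" + port + "/put").openConnection();
            c.setRequestMethod("POST");
            c.setDoOutput(true);
            if (close) c.setRequestProperty("Connection", "close");
            try (OutputStream os = c.getOutputStream()) { os.write(body); }
            c.getInputStream().readAllBytes();   // complete the exchange
        }
        return (System.nanoTime() - t0) / 1_000_000;
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/put", ex -> {
            ex.getRequestBody().readAllBytes();  // drain and discard the body
            ex.sendResponseHeaders(200, -1);
            ex.close();
        });
        server.start();
        int port = server.getAddress().getPort();
        byte[] body = new byte[64 * 1024];       // small payload, as in the per-file test

        long perConn  = run(port, true, 200, body);   // new connection per POST
        long keepAlive = run(port, false, 200, body); // persistent connection
        System.out.println("Connection: close -> " + perConn + " ms");
        System.out.println("keep-alive        -> " + keepAlive + " ms");
        server.stop(0);
    }
}
```

On loopback the gap is smaller than over a real network (no handshake latency to amortise), but the keep-alive run should be the same or faster; over a WAN the per-connection variant falls far behind, matching Bruno's numbers.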
_______________________________________________
jetty-users mailing list
[email protected]
https://dev.eclipse.org/mailman/listinfo/jetty-users
