Hi Bruno,
I fully understand that measuring against localhost is not a good idea. But
I tried the same test with the client and server on separate boxes in the
same data center, and even there I saw a network bottleneck (around 20
MB/s). To confirm that this was not caused by some odd setting on the
machine, I wanted to try the loopback interface, which as far as I know
has no bandwidth cap. Yet even on loopback, measuring with iperf against
the Jetty server, I got 21 MB/s, which I could not understand. CPU and
memory usage on the system were very low, and context switches and
interrupts were also low (monitored using nmon), so I could not work out
the real underlying problem with my setup. Is there some kind of setting
in Jetty that I need to change in the config?
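
For context, the kind of setting I mean is e.g. the thread pool block in
etc/jetty.xml. A sketch with placeholder numbers (not my exact values, and
the exact XML shape varies a bit between Jetty versions):

```xml
<!-- Hypothetical jetty.xml fragment: sizes the server's thread pool. -->
<Configure id="Server" class="org.eclipse.jetty.server.Server">
  <Arg name="threadpool">
    <New class="org.eclipse.jetty.util.thread.QueuedThreadPool">
      <Set name="minThreads">10</Set>
      <Set name="maxThreads">200</Set>
    </New>
  </Arg>
</Configure>
```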

I ran the same tests against a WebLogic server to make sure the network
was not the issue. Using iperf I saw loopback bandwidth of around 250 MB/s
(~2 Gbps), and with the same JMeter settings it gave good throughput. It
is a mystery that I cannot put my finger on.

I would appreciate your input on this issue.

Thanks,
Dinesh


On Sun, Oct 20, 2013 at 4:30 PM, Bruno D. Rodrigues <
[email protected]> wrote:

>
> On 19/10/2013, at 19:20, dinesh kumar <[email protected]> wrote:
>
> > Hi,
> > I am trying to do capacity planning for a new REST service we are
> developing. I would like to determine the maximum number of POSTs the
> server can handle under different loads. For example, for 1 MB of data,
> what is the server overhead involved (HTTP header parsing, thread
> assignment from the thread pool, server context switches, etc.)? I would
> like to determine a rough per-core threshold for the number of parallel
> requests the server can handle. And if there is a limiting factor in the
> system (say network, memory, or something else), what is it?
> >
> > Thanks,
> > Dinesh
>
> The overhead of Jetty, in my experience, tends toward zero compared with
> the cost of what you actually do with that data, and especially with the
> real-life conditions under which that data arrives. In other words, a
> load test with such small payloads, against localhost, that does nothing
> with the data will have so many external variables that it will yield
> bogus conclusions.
>
> The first question you need to ask yourself is "what will I do with
> those 1MB data chunks?". If you're saving them to disk or a DB, then the
> HTTP-side resource usage will tend toward zero.
>
> The second question is "will I get that data from well-behaved sources,
> or badly behaved ones?". I had a project where the "architects" and
> "quality assurance" were pushing me for high performance under
> local-network load and highly parallel JMeter runs of similar request
> types, when in reality the requests would arrive slowly over the mobile
> network, meaning unreliable throughput and high latency.
>
> In other words, your implementation needs to consider whether you're
> getting 1MB chunks from high-performance local clients, which would let
> you use a simple synchronous implementation like the one I sent you,
> where you'd be limited by the number of threads allocated to the Jetty
> pool divided by the time to receive the 1MB.
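> That ceiling is simple arithmetic; a sketch with made-up numbers (the
> pool size and per-upload receive time below are assumptions, not
> measurements):

```java
public class PoolCeiling {
    public static void main(String[] args) {
        int poolThreads = 200;        // hypothetical Jetty thread pool size
        double receiveSeconds = 0.05; // hypothetical time to receive one 1 MB body
        // A synchronous handler pins one thread per in-flight upload,
        // so the ceiling is pool size divided by receive time.
        double maxUploadsPerSecond = poolThreads / receiveSeconds;
        System.out.println(maxUploadsPerSecond); // prints 4000.0
    }
}
```

> Slow clients shrink that number directly: the same pool at 2 seconds per
> upload tops out at 100 uploads/sec.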
> Or, in real life, you may be serving many more clients, each with slower
> throughput (network bottlenecks, client bugs, real-life Murphy's law).
> Jetty has always been well suited to these cases, and now with Servlet
> 3.1 it handles them in a standard way, because you can implement a
> proper async reader that only consumes resources when the clients are
> actually sending data. The bottleneck will then be the OS or the network
> hardware.
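> The Servlet 3.1 pattern I mean looks roughly like this (a sketch, not a
> drop-in implementation — the servlet name, URL pattern, and buffer size
> are illustrative, and it needs a 3.1 container such as Jetty 9.1+):

```java
import java.io.IOException;
import javax.servlet.AsyncContext;
import javax.servlet.ReadListener;
import javax.servlet.ServletInputStream;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/upload", asyncSupported = true)
public class AsyncUploadServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        final AsyncContext ctx = req.startAsync();
        final ServletInputStream in = req.getInputStream();
        in.setReadListener(new ReadListener() {
            private final byte[] buf = new byte[8192];

            @Override
            public void onDataAvailable() throws IOException {
                // Runs only while the client is actually sending bytes;
                // no thread sits parked waiting on a slow connection.
                while (in.isReady() && in.read(buf) != -1) {
                    // consume / process the chunk here
                }
            }

            @Override
            public void onAllDataRead() throws IOException {
                ctx.getResponse().getWriter().println("OK");
                ctx.complete();
            }

            @Override
            public void onError(Throwable t) {
                ctx.complete();
            }
        });
    }
}
```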
>
> Also, if you really want to run such a test, you need to be very careful
> that the bottleneck isn't in the test suite itself, as we saw at the
> beginning of this thread, where your JMeter setup couldn't pass
> 165Mbit/sec (or 200 with the other server), whilst we were easily
> crossing 1Gbit using curl or ab.
>
> With the 8MB POST I noticed that the times are so small that the time to
> process the 8MB is hardly distinguishable from the connection setup and
> teardown; they are all 0+ms. So if your test suite itself takes 0.5ms to
> prepare each connection, your measured throughput will drop considerably.
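> As arithmetic, with invented numbers (both timings below are
> assumptions): if the server needs 0.5ms per POST and the harness adds
> another 0.5ms of connection setup, the tool reports half the real
> capacity:

```java
public class HarnessOverhead {
    public static void main(String[] args) {
        double serverMillis = 0.5;  // hypothetical server time per POST
        double harnessMillis = 0.5; // hypothetical per-connection setup in the tool
        double trueRate = 1000.0 / serverMillis;                       // req/s the server could do
        double measuredRate = 1000.0 / (serverMillis + harnessMillis); // req/s the tool reports
        System.out.println(trueRate + " " + measuredRate); // prints 2000.0 1000.0
    }
}
```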
>
> Good luck with your tests, but please don't waste time testing what is
> irrelevant for the final product.
>
> _______________________________________________
> jetty-users mailing list
> [email protected]
> https://dev.eclipse.org/mailman/listinfo/jetty-users
>
>