On 11/24/2014 04:13 PM, Alan Conway wrote:
On Thu, 2014-11-20 at 14:10 -0500, Michael Goulish wrote:
I recently finished switching over my proton-c programs psend & precv
to the new event-based interface, and my first test of them was a
5 billion message soak test.

The programs survived this test with no memory growth, and no gradual
slowdown.

This test is meant to find the fastest possible speed of the proton-c
code itself. (In future, we could make other similar tests designed
to mimic realistic user scenarios.) In this test, I run both sender
and receiver on one box, with the loopback interface. I have MTU ==
64K, I use a credit scheme of 600 initial credits, and 300 new credits
whenever credit falls below 300. The messages are small: exactly 100
bytes long.
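
(For reference, a rough sketch of how a credit scheme like that can be expressed in the receiver's event loop with the proton-c event API; the constants and the handler name below are illustrative rather than the exact psend/precv code.)

    #include <proton/event.h>
    #include <proton/link.h>

    /* Illustrative thresholds matching the scheme described above. */
    #define INITIAL_CREDIT 600
    #define CREDIT_BATCH   300
    #define LOW_WATER      300

    /* Called from the receiver's event loop; a sketch, not the real precv code. */
    static void maintain_credit(pn_event_t *event) {
      pn_link_t *receiver = pn_event_link(event);

      switch (pn_event_type(event)) {
      case PN_LINK_REMOTE_OPEN:
        /* Grant the initial credit window when the link is attached. */
        pn_link_flow(receiver, INITIAL_CREDIT);
        break;
      case PN_DELIVERY:
        /* Top up whenever outstanding credit drops below the low-water mark. */
        if (pn_link_credit(receiver) < LOW_WATER)
          pn_link_flow(receiver, CREDIT_BATCH);
        break;
      default:
        break;
      }
    }
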

I am using two processors, both Intel Xeon E5420 @ 2.50GHz with 6144
KB cache. (Letting the OS decide which processors to use for my two
processes.)

On that system, with the above credit scheme, the test is sustaining
throughput of 408,500 messages per second. That's over a single link,
between two singly-threaded processes.


That is an excellent result. It sets the context for doing performance
work on proton-based systems (which is nearly everything we do at this
point). At that rate, proton certainly doesn't sound like it's the
bottleneck for any of the stuff I've been looking at, but I'd be
interested in seeing results for a range of larger message sizes.

[...]

The first thing I would suggest is adding command line parameters for
connection info, message size, credit, etc. Simple send/receive
programs like this, when parameterized flexibly, are *extremely* useful
building blocks for a huge range of performance experiments.
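
For example, a minimal getopt-based sketch of that kind of parameterization could look like the following; the option letters and defaults here are just assumptions, not the existing psend/precv interface.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Hypothetical defaults; not taken from the actual psend/precv programs. */
    static const char *address      = "127.0.0.1:5672";
    static size_t      message_size = 100;
    static int         credit       = 600;
    static long        count        = 1000000;

    static void parse_args(int argc, char **argv) {
      int opt;
      while ((opt = getopt(argc, argv, "a:s:c:n:")) != -1) {
        switch (opt) {
        case 'a': address      = optarg;                 break; /* connection address */
        case 's': message_size = strtoul(optarg, 0, 10); break; /* message size in bytes */
        case 'c': credit       = atoi(optarg);           break; /* receiver credit window */
        case 'n': count        = atol(optarg);           break; /* number of messages */
        default:
          fprintf(stderr, "usage: %s [-a address] [-s size] [-c credit] [-n count]\n",
                  argv[0]);
          exit(1);
        }
      }
    }
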

At present the code just 'sends' the same chunk of raw memory allocated at the start, and on the receiver side the data is never actually read by the application. This is certainly useful for isolating the performance of the different layers. However, assessing the impact of sending and receiving real messages is also important.
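
To give a sense of what the "real messages" path adds on the sender side, here is a rough sketch using the proton-c message API; the buffer size, delivery tag and function name are arbitrary and not taken from the existing code.

    #include <proton/message.h>
    #include <proton/codec.h>
    #include <proton/link.h>
    #include <proton/delivery.h>

    /* Sketch: encode a real AMQP message and send it on an already-attached
       sender link. Returns 0 on success, -1 on failure. Not the actual psend code. */
    static int send_real_message(pn_link_t *sender, const char *payload, size_t len) {
      pn_message_t *msg = pn_message();
      pn_data_t *body = pn_message_body(msg);
      pn_data_put_binary(body, pn_bytes(len, payload));

      char buf[1024];                               /* arbitrary encode buffer */
      size_t size = sizeof(buf);
      int err = pn_message_encode(msg, buf, &size); /* AMQP-encode the message */
      pn_message_free(msg);
      if (err) return -1;

      /* One delivery per message; the tag contents are arbitrary here. */
      pn_delivery(sender, pn_dtag("1", 1));
      pn_link_send(sender, buf, size);
      pn_link_advance(sender);
      return 0;
    }
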

The rates you are seeing are indeed very impressive. The next step on this track is to figure out where performance is lost in more realistic and richer scenarios, and ways to reduce that.
