Dear L4Re experts,

We now have a couple of projects in which we are going to be using your OS, so I've
been implementing and testing some of the basic functionality we will need, namely
message passing.
I've been using the Hello World QEMU example as my starting point and have created
a number of processes that communicate via a pair of unidirectional channels built
on IPC and shared memory: one channel for messages coming in, one channel for
messages going out. The sender does an IPC_CALL() once a message has been put into
shared memory. The receiver completes an IPC_RECEIVE(), fetches the message, and
then responds with an IPC_REPLY() to the original IPC_CALL(). It is all
interrupt/event-driven, with no sleeping and no polling.
It works. I've tested it for robustness and it behaves exactly as expected,
with the exception of throughput.
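
Schematically, each round trip amounts to something like the sketch below
(simplified and written against the plain l4sys C bindings rather than my actual
code; the gate capability, message layout, and error handling are placeholders):

    #include <l4/sys/ipc.h>

    /* Sender: the payload already sits in the shared-memory buffer; the IPC
     * call only notifies the receiver and blocks until it replies. */
    static void send_one(l4_cap_idx_t receiver_gate)   /* placeholder cap */
    {
      l4_msgtag_t tag = l4_msgtag(0 /* proto */, 0 /* words */, 0, 0);
      tag = l4_ipc_call(receiver_gate, l4_utcb(), tag, L4_IPC_NEVER);
      if (l4_ipc_error(tag, l4_utcb()))
        { /* handle IPC error */ }
    }

    /* Receiver: wait for the notification, consume the message from shared
     * memory, then reply to unblock the sender and wait for the next one. */
    static void receive_loop(void)
    {
      l4_umword_t label;
      l4_msgtag_t tag = l4_ipc_wait(l4_utcb(), &label, L4_IPC_NEVER);
      for (;;)
        {
          /* ... copy the ~50-byte message out of the shared buffer ... */
          tag = l4_ipc_reply_and_wait(l4_utcb(), l4_msgtag(0, 0, 0, 0),
                                      &label, L4_IPC_NEVER);
          if (l4_ipc_error(tag, l4_utcb()))
            { /* handle IPC error */ }
        }
    }

In the real code these calls are wrapped by the sender/receiver threads, which
hand the messages on via the condition variables mentioned below.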

I seem to be getting only about 4000 messages per second, or roughly 4 messages per
millisecond, which works out to about 250 microseconds per call/reply round trip.
Now, there are a couple of malloc()/free() and condition_wait()/condition_signal()
calls going on as the events and messages get passed through the sender and
receiver threads, but nothing (IMHO) that should slow things down too much.
Messages are very small, around 50 bytes, as I'm really just trying to get a handle
on the basic overhead. So yes, pretty much, I'm beating the context-switching
mechanisms to death...

My questions:
Is this normal(ish) throughput for a single-core x86_64 QEMU system?
Am I getting hit by a time-slicing scheduler issue, with most of my CPU time being
wasted?
How do I switch to a different, non-time-sliced scheduler?
Any thoughts on what I could try to improve throughput?

And lastly...
We are going to be signing up for training soon... do you have a recommendation
for a big, beefy AMD-based Linux laptop?


Thanks!

Richard H. Clark


