Hi, everybody.

I am going to propose several ideas for QEMU participation in GSoC/Outreachy 
over the next few days. This is the first one. Please feel free to give honest 
feedback.

Yours,
Aleksandar



Measure and Analyze Performance of
QEMU User and System Mode Emulation


PLANNED ACTIVITIES

PART I: (user mode)

   a) select around a dozen test programs (resembling components of the SPEC 
benchmark, but they must be open source, and preferably license-compatible 
with QEMU); the test programs should be distributed roughly as follows: 4-5 
FPU CPU-intensive, 4-5 non-FPU CPU-intensive, 1-2 I/O-intensive;
   b) measure execution time and other performance data in user mode across all 
platforms for ToT:
       - try to improve performance if there is an obvious bottleneck (but this 
is unlikely);
       - develop tests that will guard against performance regressions in the 
future.
   c) measure execution time in user mode for selected platforms for all QEMU 
versions from the last 5 years:
       - confirm performance improvements and/or detect performance 
degradations.
   d) summarize all results in a comprehensive form, including graphics/data 
visualization.
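As a rough illustration of the user-mode timing harness, here is a minimal 
Python sketch. The benchmark command shown is a placeholder (an actual run 
would substitute a qemu-<target> invocation plus a test program); reporting 
the minimum and median of several runs is one common way to reduce noise:

```python
import statistics
import subprocess
import sys
import time

def time_command(cmd, runs=5):
    """Run cmd several times; return (min, median) wall-clock seconds.

    Minimum and median are less sensitive to scheduling noise than the
    mean, which matters when comparing QEMU versions for regressions.
    """
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True,
                       stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        samples.append(time.perf_counter() - start)
    return min(samples), statistics.median(samples)

# Demonstrated on a trivial command; the real measurement would use
# something like ["qemu-mips", "./benchmark"] (hypothetical paths).
best, median = time_command([sys.executable, "-c", "pass"], runs=3)
print(f"min {best:.3f}s, median {median:.3f}s")
```

The same harness, run once per QEMU release tag, would produce the 5-year 
history data for activity c).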

PART II: (system mode)

   a) measure execution time and other performance data for boot/shutdown cycle 
for selected machines for ToT:
       - try to improve performance if there is an obvious bottleneck;
       - develop tests that will guard against performance regressions in the 
future.
   b) summarize all results in a comprehensive form.
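For the boot/shutdown measurement, one simple approach is to time the QEMU 
process from launch to exit, assuming the guest image is configured to power 
off after booting. A hedged sketch (machine type, kernel, and image names 
below are placeholders, not a tested invocation):

```python
import subprocess
import time

def time_boot_shutdown(cmd, timeout=300):
    """Time one boot/shutdown cycle: launch the emulator, wait for exit.

    Assumes the guest shuts itself down so the QEMU process terminates
    on its own; the timeout catches hung boots.
    """
    start = time.perf_counter()
    proc = subprocess.Popen(cmd, stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    proc.wait(timeout=timeout)
    return time.perf_counter() - start

# Hypothetical invocation (all file names are placeholders):
# cycle = time_boot_shutdown(["qemu-system-mips", "-M", "malta",
#                             "-kernel", "vmlinux", "-hda", "disk.img",
#                             "-nographic", "-no-reboot"])
```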


DELIVERABLES

1) Each target maintainer will be given a list of the top 25 functions by 
host time spent, for each benchmark described in the previous section. 
Additional information and observations will also be provided where judged 
useful and relevant.

2) Each machine maintainer (for machines with a successful boot/shutdown 
cycle) will be given a list of the top 25 functions by host time spent during 
the boot/shutdown cycle. Additional information and observations will also be 
provided where judged useful and relevant.
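The top-25 lists could, for example, be extracted from `perf report --stdio` 
output. The sketch below is only an illustration: it assumes the conventional 
column layout (overhead%, command, shared object, symbol), and the sample 
lines and symbol names are made up:

```python
def top_functions(perf_report_lines, n=25):
    """Extract the top-n (overhead%, symbol) pairs from perf-report-style
    text output. Comment lines and lines that do not match the expected
    column layout are skipped."""
    result = []
    for line in perf_report_lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        fields = line.split()
        if len(fields) >= 5 and fields[0].endswith("%"):
            overhead = float(fields[0].rstrip("%"))
            symbol = fields[-1]
            result.append((overhead, symbol))
    result.sort(reverse=True)
    return result[:n]

# Fabricated sample lines, for illustration only:
sample = [
    "# Overhead  Command  Shared Object  Symbol",
    "  32.10%  qemu-mips  qemu-mips  [.] helper_lookup_tb_ptr",
    "  11.45%  qemu-mips  qemu-mips  [.] float64_mul",
]
print(top_functions(sample, n=2))
```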

3) The community will be given all devised performance measurement methods in 
the form of easily reproducible step-by-step setup and execution procedures.

(parts 1) and 2) will, of course, be published to everybody; maintainers are 
simply singled out as the main recipients and decision-makers on possible 
next action items)

Deliverables will be distributed over a wide time interval (in other words, 
they will not be presented only at the end of the project, but gradually 
during project execution).


Mentor: Aleksandar Markovic (myself) (but I am perfectly fine with somebody 
else mentoring the project, if interested)

Student: open


That would be all, feel free to ask for additional info and/or clarification.
 
