On Tue, Apr 13, 2010 at 3:14 PM, Anson <yeal_c...@yahoo.com.cn> wrote:

> I agree with you on measuring maximum throughput. But I think a
> dedicated CPU makes sense if we want to compare the benchmark against a
> distributed platform. It's also meaningful if we try to perform a
> stress verification test. Generally, we don't want each test result to
> differ because of other guest machines' influence. I think dedicating a CPU
> to that Linux guest can help eliminate the impact from other guest machines.

That's just too simplistic. The "single server throughput" may be
interesting for dedicated hardware where any resources not spent on
your application are wasted. Those measurements do not provide any
information on how the system would behave with two dozen Linux guests
running a real life workload.

>>We measure resource usage of the virtual machine and divide that by
>>the number of transactions. You do that at different levels of
>>utilization to understand how scalable the application is. You will
>>often find that efficiency gets worse at very high utilization (lock
>>contention, for example) and also often at low utilization (idle load,
>>polling, etc).
>
> I don't understand how you can determine the scalability of the application
> via this approach. Can you get accurate application scalability? Is it
> estimated pro rata?

Scalability is determined by interpolation (not extrapolation, as some
brave souls want us to believe) and understanding the application.
When an application is single threaded, adding virtual CPUs does not
help.
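To make the approach above concrete, here is a minimal sketch of dividing measured resource usage by transaction count at several utilization levels, then interpolating (never extrapolating) inside the measured range. All numbers and names are invented for illustration; they do not come from any real measurement run.

```python
# Hypothetical measurements: (utilization %, CPU-seconds used,
# transactions completed) over a fixed interval. Illustrative only.
samples = [
    (10, 120.0, 9_000),    # low utilization: idle load, polling inflate cost
    (50, 400.0, 38_000),   # mid range: best efficiency
    (90, 850.0, 62_000),   # high utilization: lock contention hurts
]

def cpu_per_txn(cpu_seconds, txns):
    """Resource usage divided by transactions: cost of one transaction."""
    return cpu_seconds / txns

# Per-transaction cost at each measured utilization level.
costs = [(util, cpu_per_txn(cpu, txn)) for util, cpu, txn in samples]

def interpolate(points, x):
    """Linear interpolation strictly inside the measured range."""
    points = sorted(points)
    if not (points[0][0] <= x <= points[-1][0]):
        raise ValueError("only interpolate inside the measured range")
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Estimated per-transaction cost at 70% utilization.
est = interpolate(costs, 70)
```

Note the efficiency curve is typically U-shaped: worst at both the low end and the high end, for the reasons quoted above.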

With proper performance measurements, you separate these two questions:
- how much resources does the workload need for acceptable response
- given resource constraint, how do we ensure the right application goes first

There's no such thing as a free lunch. When you share a resource, you
may sometimes have to wait for it, and less important work may have to
wait longer. That's the price you pay for sharing resources, and you
can afford to pay it because in real life not everyone needs the
resources at the same time. The advantage is that you can often get
more resources than you could afford if they were dedicated to your
application.

So you measure resource usage and do capacity planning to ensure that
you will be able to deliver the service within the SLA (or otherwise
determined acceptable response times). The desire to run a workload
"as fast as possible" does not translate well into business
requirements.
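The capacity-planning step can be sketched as simple arithmetic: take the measured per-transaction cost, multiply by the forecast peak rate, and keep enough headroom that response times stay within the SLA. The numbers and variable names here are assumptions for illustration, not figures from this thread.

```python
# Hypothetical capacity-planning check -- all values are invented.
cpu_sec_per_txn = 0.012       # measured cost of one transaction
forecast_txn_per_sec = 400    # expected peak transaction rate
available_cpus = 8            # CPUs configured for the guest
target_utilization = 0.70     # headroom so response stays within SLA

# CPUs needed to carry the peak load at the target utilization.
required_cpus = cpu_sec_per_txn * forecast_txn_per_sec / target_utilization

# True when the configured capacity covers the forecast with headroom.
within_plan = required_cpus <= available_cpus
```

The target utilization is the knob that encodes the SLA: planning to run flat out ("as fast as possible") leaves no headroom, which is exactly why that desire doesn't translate into a business requirement.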

Rob
-- 
Rob van der Heij
Velocity Software
http://www.velocitysoftware.com/
