Hi Wei,

On 05/31/2013 03:13 PM, Wei ZHOU wrote:
Hi Wido,

Thanks. Good question.

I thought about it at the beginning. In the end I decided to ignore the
difference between read and write, mainly because the network throttling
does not distinguish between sent and received bytes either.

That reasoning seems odd. Networking and disk I/O are completely different.

Disk I/O is much more expensive than network bandwidth in most situations.

Implementing it will be mostly copy-paste work and could be done in a few
days. Because of the feature freeze deadline, I will implement it after
that, if needed.


I think it's a feature we can't miss. But if it goes into the 4.2 window, we have to make sure we don't release with only total IOps and then fix it in 4.3; that would confuse users.

Wido

-Wei




2013/5/31 Wido den Hollander <w...@widodh.nl>

Hi Wei,


On 05/30/2013 06:03 PM, Wei ZHOU wrote:

Hi,
I would like to merge disk_io_throttling branch into master.
If nobody objects, I will merge it into master in 48 hours.
The purpose is :

Virtual machines run on the same storage device (local storage or shared
storage). Because of the rate limitation of the device (such as IOPS), if
one VM performs heavy disk operations, it may affect the disk performance
of other VMs running on the same storage device.
It is necessary to set a maximum rate and limit the disk I/O of VMs.


Looking at the code, I see you make no distinction between Read and Write
IOps.

Qemu and libvirt support setting different rates for Read and Write IOps,
which could benefit a lot of users.
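For reference, libvirt already exposes this split in the per-disk <iotune> element of the domain XML. A fragment like the following (device path and values are illustrative, not from the branch) caps reads and writes separately:

```xml
<disk type='file' device='disk'>
  <source file='/var/lib/libvirt/images/vm-disk.qcow2'/>
  <target dev='vda' bus='virtio'/>
  <iotune>
    <!-- per-direction caps; libvirt rejects combining these
         with total_iops_sec / total_bytes_sec on the same disk -->
    <read_iops_sec>1000</read_iops_sec>
    <write_iops_sec>500</write_iops_sec>
    <read_bytes_sec>52428800</read_bytes_sec>
    <write_bytes_sec>26214400</write_bytes_sec>
  </iotune>
</disk>
```

Alternatively, a single <total_iops_sec> element expresses the global limit the branch currently implements.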

It's also strange: on the polling side you collect both the Read and Write
IOps, but on the throttling side you only use a single global value.

Write IOps are usually much more expensive than Read IOps, so it seems
like a valid use case that an admin would set a lower value for Write
IOps than for Read IOps.

Since this only supports KVM at this point, I think it would be of great
value to at least have the mechanism in place to support both; implementing
it later would be a lot of work.

If a hypervisor doesn't support setting different values for read and
write you can always sum both up and set that as the total limit.
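A minimal sketch of that fallback (a hypothetical helper, not code from the branch; class and method names are illustrative):

```java
// Hypothetical sketch of the fallback described above: when a hypervisor
// only offers a single IOPS knob, collapse the admin's separate read and
// write limits into one total limit.
public class IopsLimits {

    // Summing is conservative: combined read+write traffic can never
    // exceed what the admin allowed for each direction individually.
    public static long toTotalIops(long readIopsSec, long writeIopsSec) {
        return readIopsSec + writeIopsSec;
    }

    public static void main(String[] args) {
        // e.g. reads capped at 1000 IOPS, writes at 500 -> total cap 1500
        System.out.println(toTotalIops(1000, 500)); // prints 1500
    }
}
```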

Can you explain why you implemented it this way?

Wido

  The feature includes:

(1) set the maximum rate of VMs (in disk_offering, and global
configuration)
(2) change the maximum rate of VMs
(3) limit the disk rate (total bps and iops)
JIRA ticket: https://issues.apache.org/jira/browse/CLOUDSTACK-1192
FS (I will update later):
https://cwiki.apache.org/confluence/display/CLOUDSTACK/VM+Disk+IO+Throttling
Merge check list:

* Did you check the branch's RAT execution success?
Yes

* Are there new dependencies introduced?
No

* What automated testing (unit and integration) is included in the new
feature?
Unit tests are added.

* What testing has been done to check for potential regressions?
(1) set the bytes rate and IOPS rate on CloudStack UI.
(2) VM operations, including
deploy, stop, start, reboot, destroy, expunge, migrate, restore
(3) Volume operations, including
Attach, Detach

To review the code, you can try
git diff c30057635d04a2396f84c588127d7ebe42e503a7
f2e5591b710d04cc86815044f5823e73a4a58944

Best regards,
Wei

[1]
https://cwiki.apache.org/confluence/display/CLOUDSTACK/VM+Disk+IO+Throttling
[2] refs/heads/disk_io_throttling
[3]
https://issues.apache.org/jira/browse/CLOUDSTACK-1301
https://issues.apache.org/jira/browse/CLOUDSTACK-2071
(CLOUDSTACK-1301 - VM Disk I/O Throttling)




