For now, this supports KVM only. There is an issue with Xen support, as xenapi
returns the I/O rate (bytes per second) instead of the total I/O.
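
If the xenapi limitation ever needs a workaround, one option would be to
integrate the reported rate over the polling interval to approximate a running
total. A minimal, hypothetical sketch in Java (the class and names below are
illustrative only, not part of the branch):

    // Hypothetical helper: approximate cumulative I/O from a per-second rate
    // (as reported by xenapi) by integrating rate * elapsed time per poll.
    public class IoRateAccumulator {
        private long totalBytes = 0;
        private long lastSampleMillis = -1;

        public synchronized long addSample(double rateBytesPerSecond, long nowMillis) {
            if (lastSampleMillis > 0) {
                double elapsedSeconds = (nowMillis - lastSampleMillis) / 1000.0;
                totalBytes += (long) (rateBytesPerSecond * elapsedSeconds);
            }
            lastSampleMillis = nowMillis;
            return totalBytes;
        }
    }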

-Wei


2013/5/30 Wei ZHOU <ustcweiz...@gmail.com>

> Hi Wido,
>
> I have not tested on network block devices like Ceph, as I have no testing
> environment.
> I will change the source code and point out that this does not support
> network block devices.
> Support for them will be added in the future.
>
>
> -Wei
>
>
> 2013/5/30 Wido den Hollander <w...@widodh.nl>
>
>> On 05/30/2013 05:47 PM, Wei ZHOU wrote:
>>
>>> Hi,
>>>
>>> I would like to merge the disk_io_stat branch into master.
>>> If nobody objects, I will merge it into master in 48 hours.
>>>
>>>
>> First of all, awesome work! I created the ticket but never came around
>> implementing it.
>>
>> I took a quick look at the code and saw this in LibvirtComputingResource:
>>
>> for (DiskDef disk : disks) {
>>   DomainBlockStats blockStats = dm.blockStats(disk.getDiskLabel());
>>   String path = disk.getDiskPath(); // for example, path =
>> /mnt/pool_uuid/disk_path/
>>   String diskPath = null;
>>   if (path != null) {
>>   ..
>> }
>>
>> This will break with disks like RBD block devices running on Ceph. Since
>> these are virtual disks, they don't have a "path" element.
>>
>> Their XML in libvirt:
>>
>>     <disk type='network' device='disk'>
>>       <driver name='qemu' type='raw' cache='writeback'/>
>>       <source protocol='rbd' name='rbd/mydisk:rbd_cache=1'>
>>         <host name='monitor.domain.lan' port='6789'/>
>>       </source>
>>       <auth username='libvirt'>
>>         <secret type='ceph' uuid='<uuid>'/>
>>       </auth>
>>       <target dev='vda' bus='virtio'/>
>>     </disk>
>>
>> Qemu/libvirt counts the IOps regardless, so I'm not sure why you rely
>> completely on the path element for that.
>>
>> Have you tested this with RBD?
>>
>> Wido
>>
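
A minimal sketch of the label-based approach suggested above, assuming the
libvirt Java bindings (org.libvirt.Domain, DomainBlockStats, LibvirtException)
and CloudStack's DiskDef as used in the snippet; it keys the statistics on the
target device label rather than the source path, so network-backed disks such
as RBD would be covered as well. The method name is illustrative only:

    // Sketch only (not the branch's implementation): read per-disk counters
    // via the libvirt target device label (e.g. "vda") instead of the path.
    private void collectDiskStats(Domain dm, List<DiskDef> disks) throws LibvirtException {
        for (DiskDef disk : disks) {
            String label = disk.getDiskLabel(); // present for file and network disks
            if (label == null) {
                continue;
            }
            DomainBlockStats stats = dm.blockStats(label);
            long readBytes  = stats.rd_bytes;   // cumulative bytes read
            long writeBytes = stats.wr_bytes;   // cumulative bytes written
            long readIops   = stats.rd_req;     // cumulative read requests
            long writeIops  = stats.wr_req;     // cumulative write requests
            // accumulate into the VM disk statistics here
        }
    }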
>>> The feature includes
>>>
>>> (1) Add disk I/O polling for instances to CloudStack.
>>>
>>> (2) Add it to the instance vm disk statistics table.
>>>
>>> (3) Add it to the usage database for optional billing in public
>>> clouds.
>>>
>>> JIRA ticket: https://issues.apache.org/jira/browse/CLOUDSTACK-1192
>>> FS (I will update later):
>>> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Disk+IO+statistics+for+instances
>>>
>>> Merge checklist:
>>>
>>> * Did you check the branch's RAT execution success?
>>> Yes
>>>
>>> * Are there new dependencies introduced?
>>> No
>>>
>>> * What automated testing (unit and integration) is included in the new
>>> feature?
>>> Unit tests (UsageManagerTest) are added.
>>>
>>> * What testing has been done to check for potential regressions?
>>> (1) CloudStack UI displays the byte rate and IO rate.
>>>
>>> (2) VM operations, including
>>>
>>> deploy, stop, start, reboot, destroy, expunge, migrate, restore
>>>
>>> (3) Volume operations, including
>>>
>>> Attach, Detach
>>>
>>> * Existing issue
>>>
>>> (1) For XenServer/XCP, xenapi returns bytes per second instead of total
>>> I/O.
>>>
>>>
>>> To review the code, you can try
>>>
>>> git diff 7fb6eaa0ca5f0f58b23ab6af812db6366743717a
>>> c30057635d04a2396f84c588127d7ebe42e503a7
>>>
>>> Best regards,
>>>
>>> Wei
>>>
>>> [1]
>>> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Disk+IO+statistics+for+instances
>>> [2] refs/heads/disk_io_stat
>>> [3] https://issues.apache.org/jira/browse/CLOUDSTACK-1192
>>> (CLOUDSTACK-1192 - Add disk I/O statistics of instances)
>>>
>>>
>>
>
