Hey, 

Sorry I missed this thread a couple of days ago. I am working on a first pass 
at this and hope to have something soon. So far I've mostly focused on getting 
OpenVZ and the HP LeftHand SAN driver working for online extend. I've had 
trouble with libvirt+KVM+LVM, so I'd love some help there if you have ideas 
about how to get them working. For example, in a devstack VM the only way I 
can get the iSCSI target to show the new size (after an lvextend) is to delete 
and recreate the target, something jgriffith said he doesn't want to support 
;-). I also haven't dug into any of those other limits you mentioned 
(nfs_used_ratio, etc.). Feel free to ping me on IRC (pdmars).
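
For reference, the workaround I've been testing looks roughly like this (VG/LV 
names, target ID, and IQN are illustrative, and the tgtadm invocations assume 
the tgt target framework that devstack sets up by default):

```shell
# Grow the backing logical volume (illustrative VG/LV names).
lvextend -L +1G /dev/stack-volumes/volume-0001

# tgtd does not pick up the new LV size on its own, so the only thing
# that has worked for me is tearing the target down and recreating it.
tgtadm --lld iscsi --op delete --mode target --tid 1
tgtadm --lld iscsi --op new --mode target --tid 1 \
    -T iqn.2010-10.org.openstack:volume-0001
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
    -b /dev/stack-volumes/volume-0001

# On the initiator side, rescan the sessions to pick up the new size.
iscsiadm -m node -R
```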

Paul


On Mar 3, 2014, at 8:50 PM, Zhangleiqiang <zhangleiqi...@huawei.com> wrote:

> @john.griffith. Thanks for your information.
>  
> I have read the BP you mentioned ([1]) and have some rough thoughts about it.
>  
> As far as I know, the corresponding online-extend command for libvirt is 
> “blockresize”, and for QEMU the implementation differs among disk formats.
>  
> For a regular qcow2/raw disk file, QEMU itself takes charge of the 
> drain_all_io and truncate_disk actions, but for a raw block device, QEMU 
> will only check whether the *actual* size of the device is larger than the 
> current size.
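>  
> A minimal sketch of the libvirt side (the domain name and disk target are 
> illustrative):
>  
> ```shell
> # Ask libvirt/QEMU to grow the disk attached as vdb to 20 GiB while the
> # guest is running; for qcow2/raw files QEMU resizes the image itself,
> # while for a raw block device it only re-checks the device's actual size.
> virsh blockresize demo-instance vdb 20G
> ```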
>  
> I think the former needs more consideration, because the extend work is done 
> by libvirt: Nova may need to do it first and then notify Cinder. And if we 
> take the allocation limits of the different Cinder backend drivers (such as 
> quota, nfs_used_ratio, nfs_oversub_ratio, etc.) into account, the workflow 
> will be more complicated.
>  
> This scenario is not covered by item 3 of the BP ([1]), as it cannot simply 
> “just work”, nor can Cinder merely be notified by the compute node/libvirt 
> after the volume is extended.
>  
> These regular qcow2/raw disk files are normally stored on file-system-based 
> storage; maybe the Manila project is more appropriate for this scenario?
>  
>  
> Thanks.
>  
>  
> [1]: 
> https://blueprints.launchpad.net/cinder/+spec/inuse-extend-volume-extension
>  
> ----------
> zhangleiqiang
>  
> Best Regards
>  
> From: John Griffith [mailto:john.griff...@solidfire.com] 
> Sent: Tuesday, March 04, 2014 1:05 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: Luohao (brian)
> Subject: Re: [openstack-dev] [Cinder] Do you think we should introduce the 
> online-extend feature to cinder ?
>  
>  
>  
> 
> On Mon, Mar 3, 2014 at 2:01 AM, Zhangleiqiang <zhangleiqi...@huawei.com> 
> wrote:
> Hi, stackers:
> 
>         Libvirt/QEMU have supported online extend for multiple disk formats, 
> including qcow2, sparse, etc., but Cinder currently only supports offline 
> volume extension.
> 
>     Offline extend forces the instance to be shut off or the volume to be 
> detached. I think it would be useful to introduce the online-extend feature 
> to Cinder, especially for the file-system-based drivers, e.g. NFS, 
> GlusterFS, etc.
> 
>     Are there any other suggestions?
> 
>     Thanks.
> 
> 
> ----------
> zhangleiqiang
> 
> Best Regards
> 
> 
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>  
> Hi Zhangleiqiang,
>  
> So yes, there's a rough BP for this here: [1], and some of the folks from the 
> Trove team (pdmars on IRC) have actually started to dive into this.  Last I 
> checked with him there were some sticking points on the Nova side, but we 
> should sync up with Paul; it's been a couple of weeks since I last caught up 
> with him.
>  
> Thanks,
> John
> [1]: 
> https://blueprints.launchpad.net/cinder/+spec/inuse-extend-volume-extension
>  

