Re: [openstack-dev] [Trove] Proposal to add Craig Vyvial to trove-core

2014-05-06 Thread Paul Marshall
+1

On May 6, 2014, at 4:31 AM, Nikhil Manchanda nik...@manchanda.me wrote:

 
 Hello folks:
 
 I'm proposing to add Craig Vyvial (cp16net) to trove-core.
 
 Craig has been working with Trove for a while now. He has been a
 consistently active reviewer, and has provided insightful comments on
 numerous reviews. He has submitted quality code to multiple features in
 Trove, and most recently drove the implementation of configuration
 groups in Icehouse.
 
 https://review.openstack.org/#/q/reviewer:%22Craig+Vyvial%22,n,z
 https://review.openstack.org/#/q/owner:%22Craig+Vyvial%22,n,z
 
 Please respond with +1/-1, or any further comments.
 
 Thanks,
 Nikhil
 


Re: [openstack-dev] [Cinder] Do you think we should introduce the online-extend feature to cinder ?

2014-03-07 Thread Paul Marshall

On Mar 6, 2014, at 9:56 PM, Zhangleiqiang zhangleiqi...@huawei.com wrote:

 get them working. For example, in a devstack VM the only way I can get the
 iSCSI target to show the new size (after an lvextend) is to delete and 
 recreate
 the target, something jgriffith said he doesn't want to support ;-).
 
 I know a method that can achieve it, but it may need the instance to be 
 paused first (during step 2 below), though without detaching/reattaching. The 
 steps are as follows:
 
 1. Extend the LV
 2. Refresh the size info in tgtd:
  a) tgtadm --op show --mode target # get the tid and lun_id properties of 
 the target related to the LV; the size property in the output is still the 
 old size from before lvextend
  b) tgtadm --op delete --mode logicalunit --tid={tid} --lun={lun_id} # 
 delete the LUN mapping in tgtd
  c) tgtadm --op new --mode logicalunit --tid={tid} --lun={lun_id} 
 --backing-store=/dev/cinder-volumes/{lv-name} # re-add the LUN mapping

Sure, this is my current workaround, but it's what I thought we *didn't* want 
to have to do.

  d) tgtadm --op show --mode target # now the size property in the output 
 is the new size
 *PS*:
 a) During this procedure, the corresponding device on the compute node won't 
 disappear. But I am not sure what happens if the instance has I/O on this 
 volume, so the instance may need to be paused during this procedure.

Yeah, but pausing the instance isn't an online extend. As soon as the user 
can't interact with their instance, even briefly, it's an offline extend in my 
view.

 b) Maybe we can modify tgtadm to support an operation that just refreshes 
 the size of the backing store.

Maybe. I'd be interested in any thoughts/patches you have to accomplish this. :)

 
 3. Rescan the LUN info on the compute node: iscsiadm -m node --targetname 
 {target_name} -R

Yeah, right now as part of this work I'm adding two extensions to Nova. One to 
issue this rescan on the compute host and another to get the size of the block 
device so Cinder can poll until the device is actually the new size (not an 
ideal solution, but so far I don't have a better one).
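
For concreteness, the whole sequence above looks roughly like this on my 
devstack LVM setup (just a sketch; the LV name, IQN, and tid/lun values are 
illustrative placeholders, and the real tid/lun come from the tgtadm show 
output):

    # 1. extend the LV
    lvextend -L +1G /dev/cinder-volumes/volume-XYZ
    # 2. refresh the size in tgtd by deleting and re-adding the LUN mapping
    tgtadm --op show --mode target
    tgtadm --op delete --mode logicalunit --tid=1 --lun=1
    tgtadm --op new --mode logicalunit --tid=1 --lun=1 \
        --backing-store=/dev/cinder-volumes/volume-XYZ
    # 3. rescan on the compute host so the initiator sees the new size
    iscsiadm -m node --targetname iqn.2010-10.org.openstack:volume-XYZ -R
    # 4. read back the block device size (this is what the Nova extension polls)
    blockdev --getsize64 /dev/sdX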

 
 I also
 haven't dived into any of those other limits you mentioned (nfs_used_ratio,
 etc.).
 
 So far, we have focused on volumes based on *block devices*. In this 
 scenario, we must first extend the volume and then notify the hypervisor; I 
 think one of the preconditions is to make sure the extend operation will not 
 affect the I/O in the instance.
 
 However, there is another scenario which may be a little different. For 
 *online-extending* virtual disks (qcow2, sparse, etc.) whose backend storage 
 is a file system (ext3, nfs, glusterfs, etc.), the current implementation of 
 QEMU is as follows:
 1. QEMU drains all IO
 2. *QEMU* extends the virtual disk
 3. QEMU resumes IO
 
 The difference is that the *extend* work needs to be done by QEMU rather 
 than by the cinder driver.
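 For reference, QEMU exposes this through its block_resize monitor command 
 (which libvirt's blockresize wraps); the device name below is illustrative:
  (qemu) block_resize drive-virtio-disk0 20G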
 
 Feel free to ping me on IRC (pdmars).
 
 I don't know your time zone; we can continue the discussion on IRC. :)

Good point. :) I'm in the US central time zone.

Paul

 
 --
 zhangleiqiang
 
 Best Regards
 
 
 -Original Message-
 From: Paul Marshall [mailto:paul.marsh...@rackspace.com]
 Sent: Thursday, March 06, 2014 12:56 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Cc: Luohao (brian)
 Subject: Re: [openstack-dev] [Cinder] Do you think we should introduce the
 online-extend feature to cinder ?
 
 Hey,
 
 Sorry I missed this thread a couple of days ago. I am working on a 
 first-pass of
 this and hope to have something soon. So far I've mostly focused on getting
 OpenVZ and the HP LH SAN driver working for online extend. I've had trouble
 with libvirt+kvm+lvm so I'd love some help there if you have ideas about how 
 to
 get them working. For example, in a devstack VM the only way I can get the
 iSCSI target to show the new size (after an lvextend) is to delete and 
 recreate
 the target, something jgriffith said he doesn't want to support ;-). I also
 haven't dived into any of those other limits you mentioned (nfs_used_ratio,
 etc.). Feel free to ping me on IRC (pdmars).
 
 Paul
 
 
 On Mar 3, 2014, at 8:50 PM, Zhangleiqiang zhangleiqi...@huawei.com
 wrote:
 
 @john.griffith. Thanks for your information.
 
 I have read the BP you mentioned ([1]) and have some rough thoughts about
 it.
 
 As far as I know, the corresponding online-extend command for libvirt is 
 blockresize, and for Qemu, the implementation differs among disk formats.
 
 For the regular qcow2/raw disk file, qemu will take charge of the 
 drain_all_io and truncate_disk actions, but for a raw block device, qemu 
 will only check if the *Actual* size of the device is larger than the 
 current size.
 
 I think the former needs more consideration: because the extend work is done 
 by libvirt, Nova may need to do this first and then notify Cinder. But if we 
 take the allocation limits of different cinder backend drivers (such as 
 quota, nfs_used_ratio, nfs_oversub_ratio, etc.) into account, the workflow 
 will be more complicated.

Re: [openstack-dev] [Cinder] Do you think we should introduce the online-extend feature to cinder ?

2014-03-07 Thread Paul Marshall

On Mar 7, 2014, at 7:55 AM, Paul Marshall paul.marsh...@rackspace.com
 wrote:

 
 On Mar 6, 2014, at 9:56 PM, Zhangleiqiang zhangleiqi...@huawei.com wrote:
 
 get them working. For example, in a devstack VM the only way I can get the
 iSCSI target to show the new size (after an lvextend) is to delete and 
 recreate
 the target, something jgriffith said he doesn't want to support ;-).
 
 I know a method that can achieve it, but it may need the instance to be 
 paused first (during step 2 below), though without detaching/reattaching. 
 The steps are as follows:
 
 1. Extend the LV
 2. Refresh the size info in tgtd:
 a) tgtadm --op show --mode target # get the tid and lun_id properties of 
 the target related to the LV; the size property in the output is still the 
 old size from before lvextend
 b) tgtadm --op delete --mode logicalunit --tid={tid} --lun={lun_id} # 
 delete the LUN mapping in tgtd
 c) tgtadm --op new --mode logicalunit --tid={tid} --lun={lun_id} 
 --backing-store=/dev/cinder-volumes/{lv-name} # re-add the LUN mapping
 
 Sure, this is my current workaround, but it's what I thought we *didn't* want 
 to have to do.
 
 d) tgtadm --op show --mode target # now the size property in the output is 
 the new size
 *PS*:
 a) During this procedure, the corresponding device on the compute node won't 
 disappear. But I am not sure what happens if the instance has I/O on this 
 volume, so the instance may need to be paused during this procedure.
 
 Yeah, but pausing the instance isn't an online extend. As soon as the user 
 can't interact with their instance, even briefly, it's an offline extend in 
 my view.
 
 b) Maybe we can modify tgtadm to support an operation that just refreshes 
 the size of the backing store.
 
 Maybe. I'd be interested in any thoughts/patches you have to accomplish this. 
 :)
 
 
 3. Rescan the LUN info on the compute node: iscsiadm -m node --targetname 
 {target_name} -R
 
 Yeah, right now as part of this work I'm adding two extensions to Nova. One 
 to issue this rescan on the compute host and another to get the size of the 
 block device so Cinder can poll until the device is actually the new size 
 (not an ideal solution, but so far I don't have a better one).

Sorry, I should correct myself here: I'm adding one extension with two calls. 
One to issue the rescan on the compute host and one to get the blockdev size so 
Cinder can wait until it's actually the new size.
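
Roughly, those two calls boil down to the following on the compute host (the 
device path is illustrative):

    # call 1: rescan the session so the initiator picks up the new size
    iscsiadm -m node --targetname {target_name} -R
    # call 2: report the current block device size; Cinder polls this value
    blockdev --getsize64 /dev/sdX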

 
 
 I also
 haven't dived into any of those other limits you mentioned (nfs_used_ratio,
 etc.).
 
 So far, we have focused on volumes based on *block devices*. In this 
 scenario, we must first extend the volume and then notify the hypervisor; I 
 think one of the preconditions is to make sure the extend operation will 
 not affect the I/O in the instance.
 
 However, there is another scenario which may be a little different. For 
 *online-extending* virtual disks (qcow2, sparse, etc.) whose backend storage 
 is a file system (ext3, nfs, glusterfs, etc.), the current implementation 
 of QEMU is as follows:
 1. QEMU drains all IO
 2. *QEMU* extends the virtual disk
 3. QEMU resumes IO
 
 The difference is that the *extend* work needs to be done by QEMU rather 
 than by the cinder driver.
 
 Feel free to ping me on IRC (pdmars).
 
 I don't know your time zone; we can continue the discussion on IRC. :)
 
 Good point. :) I'm in the US central time zone.
 
 Paul
 
 
 --
 zhangleiqiang
 
 Best Regards
 
 
 -Original Message-
 From: Paul Marshall [mailto:paul.marsh...@rackspace.com]
 Sent: Thursday, March 06, 2014 12:56 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Cc: Luohao (brian)
 Subject: Re: [openstack-dev] [Cinder] Do you think we should introduce the
 online-extend feature to cinder ?
 
 Hey,
 
 Sorry I missed this thread a couple of days ago. I am working on a 
 first-pass of
 this and hope to have something soon. So far I've mostly focused on getting
 OpenVZ and the HP LH SAN driver working for online extend. I've had trouble
 with libvirt+kvm+lvm so I'd love some help there if you have ideas about 
 how to
 get them working. For example, in a devstack VM the only way I can get the
 iSCSI target to show the new size (after an lvextend) is to delete and 
 recreate
 the target, something jgriffith said he doesn't want to support ;-). I also
 haven't dived into any of those other limits you mentioned (nfs_used_ratio,
 etc.). Feel free to ping me on IRC (pdmars).
 
 Paul
 
 
 On Mar 3, 2014, at 8:50 PM, Zhangleiqiang zhangleiqi...@huawei.com
 wrote:
 
 @john.griffith. Thanks for your information.
 
 I have read the BP you mentioned ([1]) and have some rough thoughts about
 it.
 
 As far as I know, the corresponding online-extend command for libvirt is 
 blockresize, and for Qemu, the implementation differs among disk formats.
 
 For the regular qcow2/raw disk file, qemu will take charge of the 
 drain_all_io and truncate_disk actions, but for a raw block device, qemu 
 will only check if the *Actual* size of the device is larger than the 
 current size.

Re: [openstack-dev] [Cinder] Do you think we should introduce the online-extend feature to cinder ?

2014-03-05 Thread Paul Marshall
Hey, 

Sorry I missed this thread a couple of days ago. I am working on a first-pass 
of this and hope to have something soon. So far I've mostly focused on getting 
OpenVZ and the HP LH SAN driver working for online extend. I've had trouble 
with libvirt+kvm+lvm so I'd love some help there if you have ideas about how to 
get them working. For example, in a devstack VM the only way I can get the 
iSCSI target to show the new size (after an lvextend) is to delete and recreate 
the target, something jgriffith said he doesn't want to support ;-). I also 
haven't dived into any of those other limits you mentioned (nfs_used_ratio, 
etc.). Feel free to ping me on IRC (pdmars).

Paul


On Mar 3, 2014, at 8:50 PM, Zhangleiqiang zhangleiqi...@huawei.com wrote:

 @john.griffith. Thanks for your information.
  
 I have read the BP you mentioned ([1]) and have some rough thoughts about it.
  
 As far as I know, the corresponding online-extend command for libvirt is 
 “blockresize”, and for Qemu, the implementation differs among disk formats.
  
 For the regular qcow2/raw disk file, qemu will take charge of the 
 drain_all_io and truncate_disk actions, but for a raw block device, qemu 
 will only check if the *Actual* size of the device is larger than the 
 current size.
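 As an illustration, the libvirt entry point would be something like this 
 (domain and disk names illustrative):
  virsh blockresize instance-0001 vda 20G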
  
 I think the former needs more consideration: because the extend work is done 
 by libvirt, Nova may need to do this first and then notify Cinder. But if we 
 take the allocation limits of different cinder backend drivers (such as 
 quota, nfs_used_ratio, nfs_oversub_ratio, etc.) into account, the workflow 
 will be more complicated.
  
 This scenario is not covered by item 3 of the BP ([1]), as it cannot simply 
 “just work” or be notified by the compute node/libvirt after the volume is 
 extended.
  
 These regular qcow2/raw disk files are normally stored on file-system-based 
 storage; maybe the Manila project is more appropriate for this scenario?
  
  
 Thanks.
  
  
 [1]: 
 https://blueprints.launchpad.net/cinder/+spec/inuse-extend-volume-extension
  
 --
 zhangleiqiang
  
 Best Regards
  
 From: John Griffith [mailto:john.griff...@solidfire.com] 
 Sent: Tuesday, March 04, 2014 1:05 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Cc: Luohao (brian)
 Subject: Re: [openstack-dev] [Cinder] Do you think we should introduce the 
 online-extend feature to cinder ?
  
  
  
 
 On Mon, Mar 3, 2014 at 2:01 AM, Zhangleiqiang zhangleiqi...@huawei.com 
 wrote:
 Hi, stackers:
 
 Libvirt/qemu have supported online extend for multiple disk formats, 
 including qcow2, sparse, etc., but Cinder currently supports only 
 offline-extending volumes.
 
 Offline-extending a volume forces the instance to be shut off or the volume 
 to be detached. I think it would be useful to introduce the online-extend 
 feature to cinder, especially for the file-system-based drivers, e.g. nfs, 
 glusterfs, etc.
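 
 Today that means something like the following (IDs illustrative):
  nova volume-detach <instance-id> <volume-id>
  cinder extend <volume-id> <new-size-in-gb>
  nova volume-attach <instance-id> <volume-id>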
 
 Are there any other suggestions?
 
 Thanks.
 
 
 --
 zhangleiqiang
 
 Best Regards
 
 
  
 Hi Zhangleiqiang,
  
 So yes, there's a rough BP for this here: [1], and some of the folks from the 
 Trove team (pdmars on IRC) have actually started to dive into this. Last I 
 checked with him there were some sticking points on the Nova side, but we 
 should sync up with Paul; it's been a couple of weeks since I last caught up 
 with him.
  
 Thanks,
 John
 [1]: 
 https://blueprints.launchpad.net/cinder/+spec/inuse-extend-volume-extension
  


Re: [openstack-dev] [trove] Proposal to add Auston McReynolds to trove-core

2013-12-31 Thread Paul Marshall
+1

On Dec 30, 2013, at 1:13 PM, Vipul Sabhaya vip...@gmail.com wrote:

 +1
 
 Sent from my iPhone
 
 On Dec 30, 2013, at 10:50 AM, Craig Vyvial cp16...@gmail.com wrote:
 
 +1
 
 
 On Mon, Dec 30, 2013 at 12:00 PM, Greg Hill greg.h...@rackspace.com wrote:
 +1
 
 On Dec 27, 2013, at 4:48 PM, Michael Basnight mbasni...@gmail.com wrote:
 
 Howdy,
 
 I'm proposing Auston McReynolds (amcrn) to trove-core.
 
 Auston has been working with trove for a while now. He is a great reviewer: 
 incredibly thorough, he has caught more than one critical error in reviews, 
 and he helps connect large features that may overlap (config edits + 
 multi datastores comes to mind). The code he submits is top notch, and we 
 frequently ask for his opinion on architecture / feature / design.
 
 https://review.openstack.org/#/dashboard/8214
 https://review.openstack.org/#/q/owner:8214,n,z
 https://review.openstack.org/#/q/reviewer:8214,n,z
 
 Please respond with +1/-1, or any further comments.