Re: [OpenStack-Infra] What is the final manila merge point in Kilo?
I've been asked this question privately so many times that I'm just making a ML post about it. Please use the thread below: http://lists.openstack.org/pipermail/openstack-dev/2014-November/050049.html -Ben -Original Message- From: Thierry Carrez [mailto:thie...@openstack.org] Sent: Monday, November 10, 2014 7:58 AM To: liuxinguo; openstack-infra@lists.openstack.org Cc: Fanyaohong; Swartzlander, Ben Subject: Re: What is the final manila merge point in Kilo? liuxinguo wrote: We want to add a Manila driver in Kilo. I want to know the latest (final) point at which we should submit our driver, or get it merged, while still guaranteeing that it can land in Kilo. Must we submit our driver, or get it merged, before Kilo-1 on 18/12/2014, or can we do this later, before Kilo-2 on 05/02/2015? Manila being an incubated project, the rules are slightly less strict than for a project that is already integrated. I would ask the PTL directly (Ben, CC-ed) to see how he wants to organize his development cycle. Regards, -- Thierry Carrez (ttx) ___ OpenStack-Infra mailing list OpenStack-Infra@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
[openstack-dev] [Manila] Docs plan for Juno
Now that the project is incubated, we should be moving our docs from the openstack wiki to the openstack-manuals project. Rushil Chugh has volunteered to lead this effort so please coordinate any updates to documentation with him (and me). Our goal is to have the updates to openstack-manuals upstream by Sept 22. It will go faster if we can split up the work and do it in parallel. -Ben ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Manila] Incubation request
I just saw the agenda for tomorrow's TC meeting and we're on it. I plan to be there. https://wiki.openstack.org/wiki/Meetings#Technical_Committee_meeting -Ben From: Swartzlander, Ben [mailto:ben.swartzlan...@netapp.com] Sent: Monday, July 28, 2014 9:53 PM To: openstack-dev@lists.openstack.org Subject: [openstack-dev] [Manila] Incubation request Manila has come a long way since we proposed it for incubation last autumn. Below are the formal requests. https://wiki.openstack.org/wiki/Manila/Incubation_Application https://wiki.openstack.org/wiki/Manila/Program_Application Anyone have anything to add before I forward these to the TC? -Ben Swartzlander
Re: [openstack-dev] [Manila] File-storage for Manila service image
On Tue, 2014-08-05 at 23:13 +0300, Valeriy Ponomaryov wrote: Hello everyone, The image currently used for Manila is hosted on Dropbox: ubuntu_1204_nfs_cifs.qcow2, and Dropbox has a traffic limit, see https://www.dropbox.com/help/4204 Because of excessive traffic, the public links were banned and the image could not be downloaded (error code 509); it is unbanned now, until the next time the limit is exceeded. A traffic limit should not threaten the usability of the project, so we need to find stable file storage with permanent public links and no traffic limit. Does anyone have any suggestions for more suitable file storage to use? -- Kind Regards Valeriy Ponomaryov Let's try creating a github repo and sharing it there. For hopefully obvious reasons, let's NOT put this into the manila repos directly -- let's keep it separate.
Re: [openstack-dev] [Manila] File-storage for Manila service image
On Tue, 2014-08-05 at 23:50 +0300, Valeriy Ponomaryov wrote: GitHub has a file size limit of 100 MB, see https://help.github.com/articles/what-is-my-disk-quota Our current image is about 300 MB. Do you think we could upload the file to Launchpad somehow? I've seen LP hosting various downloadable files. If that fails, maybe the openstack-infra team has a place for blobs. Worst case, we will just host it on S3 and pay for it out of our own pockets. On Tue, Aug 5, 2014 at 11:43 PM, Swartzlander, Ben ben.swartzlan...@netapp.com wrote: On Tue, 2014-08-05 at 23:13 +0300, Valeriy Ponomaryov wrote: Hello everyone, The image currently used for Manila is hosted on Dropbox: ubuntu_1204_nfs_cifs.qcow2, and Dropbox has a traffic limit, see https://www.dropbox.com/help/4204 Because of excessive traffic, the public links were banned and the image could not be downloaded (error code 509); it is unbanned now, until the next time the limit is exceeded. A traffic limit should not threaten the usability of the project, so we need to find stable file storage with permanent public links and no traffic limit. Does anyone have any suggestions for more suitable file storage to use? Let's try creating a github repo and sharing it there. For hopefully obvious reasons, let's NOT put this into the manila repos directly -- let's keep it separate. -- Kind Regards Valeriy Ponomaryov
Re: [openstack-dev] [Manila] Incubation request
On Tue, 2014-07-29 at 13:38 +0200, Thierry Carrez wrote: Swartzlander, Ben wrote: Manila has come a long way since we proposed it for incubation last autumn. Below are the formal requests. https://wiki.openstack.org/wiki/Manila/Incubation_Application https://wiki.openstack.org/wiki/Manila/Program_Application Anyone have anything to add before I forward these to the TC? When ready, propose a governance change a bit like this one: https://github.com/openstack/governance/commit/52d9b4cf2f3ba9d0b757e16dc040a1c174e1d27e Thierry, does the governance change process replace the process of sending an email to the openstack-tc ML? -Ben
[openstack-dev] [Manila] Incubation request
Manila has come a long way since we proposed it for incubation last autumn. Below are the formal requests. https://wiki.openstack.org/wiki/Manila/Incubation_Application https://wiki.openstack.org/wiki/Manila/Program_Application Anyone have anything to add before I forward these to the TC? -Ben Swartzlander
Re: [openstack-dev] [Manila] GenericDriver cinder volume error during manila create
On Mon, 2014-06-16 at 23:06 +0530, Deepak Shetty wrote: I am trying devstack on an F20 setup with the Manila sources. When I run manila create --name cinder_vol_share_using_nfs2 --share-network-id 36ec5a17-cef6-44a8-a518-457a6f36faa0 NFS 2 I see the below error in c-vol, due to which, even though my service VM is started, manila create errors out because the cinder volume is not getting exported as iSCSI:
2014-06-16 16:39:36.151 INFO cinder.volume.flows.manager.create_volume [req-15d0b435-f6ce-41cd-ae4a-3851b07cf774 1a7816e5f0144c539192360cdc9672d5 b65a066f32df4aca80fa9a6d5c795095] Volume 8bfd424d-9877-4c20-a9d1-058c06b9bdda: being created as raw with specification: {'status': u'creating', 'volume_size': 2, 'volume_name': u'volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda'}
2014-06-16 16:39:36.151 DEBUG cinder.openstack.common.processutils [req-15d0b435-f6ce-41cd-ae4a-3851b07cf774 1a7816e5f0144c539192360cdc9672d5 b65a066f32df4aca80fa9a6d5c795095] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf lvcreate -n volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda stack-volumes -L 2g from (pid=4623) execute /opt/stack/cinder/cinder/openstack/common/processutils.py:142
2014-06-16 16:39:36.828 INFO cinder.volume.flows.manager.create_volume [req-15d0b435-f6ce-41cd-ae4a-3851b07cf774 1a7816e5f0144c539192360cdc9672d5 b65a066f32df4aca80fa9a6d5c795095] Volume volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda (8bfd424d-9877-4c20-a9d1-058c06b9bdda): created successfully
2014-06-16 16:39:38.404 WARNING cinder.context [-] Arguments dropped when creating context: {'user': u'd9bb59a6a2394483902b382a991ffea2', 'tenant': u'b65a066f32df4aca80fa9a6d5c795095', 'user_identity': u'd9bb59a6a2394483902b382a991ffea2 b65a066f32df4aca80fa9a6d5c795095 - - -'}
2014-06-16 16:39:38.426 DEBUG cinder.volume.manager [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2 b65a066f32df4aca80fa9a6d5c795095] Volume 8bfd424d-9877-4c20-a9d1-058c06b9bdda: creating export from (pid=4623) initialize_connection /opt/stack/cinder/cinder/volume/manager.py:781
2014-06-16 16:39:38.428 INFO cinder.brick.iscsi.iscsi [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2 b65a066f32df4aca80fa9a6d5c795095] Creating iscsi_target for: volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda
2014-06-16 16:39:38.440 DEBUG cinder.brick.iscsi.iscsi [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2 b65a066f32df4aca80fa9a6d5c795095] Created volume path /opt/stack/data/cinder/volumes/volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda, content: target iqn.2010-10.org.openstack:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda backing-store /dev/stack-volumes/volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda lld iscsi IncomingUser kZQ6rqqT7W6KGQvMZ7Lr k4qcE3G9g5z7mDWh2woe /target from (pid=4623) create_iscsi_target /opt/stack/cinder/cinder/brick/iscsi/iscsi.py:183
2014-06-16 16:39:38.440 DEBUG cinder.openstack.common.processutils [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2 b65a066f32df4aca80fa9a6d5c795095] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf tgt-admin --update iqn.2010-10.org.openstack:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda from (pid=4623) execute /opt/stack/cinder/cinder/openstack/common/processutils.py:142
2014-06-16 16:39:38.981 DEBUG cinder.openstack.common.processutils [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2 b65a066f32df4aca80fa9a6d5c795095] Result was 107 from (pid=4623) execute /opt/stack/cinder/cinder/openstack/common/processutils.py:167
2014-06-16 16:39:38.981 WARNING cinder.brick.iscsi.iscsi [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2 b65a066f32df4aca80fa9a6d5c795095] Failed to create iscsi target for volume id:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda: Unexpected error while running command.
Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf tgt-admin --update iqn.2010-10.org.openstack:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda
Exit code: 107
Stdout: 'Command:\n\ttgtadm -C 0 --lld iscsi --op new --mode target --tid 1 -T iqn.2010-10.org.openstack:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda \nexited with code: 107.\n'
Stderr: 'tgtadm: failed to send request hdr to tgt daemon, Transport endpoint is not connected\ntgtadm: failed to send request hdr to tgt daemon, Transport endpoint is not connected\ntgtadm: failed to send request hdr to tgt daemon, Transport endpoint is not connected\ntgtadm: failed to send request hdr to tgt daemon, Transport endpoint is not connected\n'
2014-06-16 16:39:38.982 ERROR oslo.messaging.rpc.dispatcher [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2 b65a066f32df4aca80fa9a6d5c795095] Exception during message handling: Failed to create iscsi target for volume volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda. 2014-06-16
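For what it's worth, exit code 107 together with "Transport endpoint is not connected" from tgtadm generally means tgtadm could not reach the tgtd daemon at all, i.e. tgtd is not running on the Cinder host (on Fedora, starting it via systemd usually clears this). A minimal sketch of classifying that failure before retrying -- the helper below is hypothetical, not part of Cinder, and is based only on the log output in this thread:

```python
# Hypothetical helper (not part of Cinder): classify a failed tgt-admin/tgtadm
# invocation so a caller can tell "tgtd daemon is down" apart from other
# iSCSI target-creation errors. Based on the stderr seen in this thread.

def classify_tgtadm_failure(exit_code, stderr):
    """Return a short diagnosis string for a failed tgtadm invocation."""
    if ("failed to send request hdr to tgt daemon" in stderr
            or "Transport endpoint is not connected" in stderr):
        # tgtadm could not talk to tgtd at all: the daemon is not running
        # (or its management socket is gone). Retrying without starting
        # tgtd first will keep failing the same way.
        return "tgtd-not-running"
    if exit_code != 0:
        # tgtd answered, but the target operation itself failed.
        return "tgtadm-error"
    return "ok"


stderr = ("tgtadm: failed to send request hdr to tgt daemon, "
          "Transport endpoint is not connected\n")
print(classify_tgtadm_failure(107, stderr))  # -> tgtd-not-running
```

In the failure above, the diagnosis would be "tgtd-not-running", which matches Deepak's symptom: the volume is created by LVM fine, but the export step fails.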
[openstack-dev] [Manila] Welcome Xing Yang to the Manila core team!
The Manila core team welcomes Xing Yang! She has been a very active reviewer and has been consistently involved with the project. Xing, thank you for all your effort and keep up the great work! -Ben Swartzlander
Re: [openstack-dev] [Cinder][Manila]
-Original Message- From: Alun Champion [mailto:p...@achampion.net] Sent: Saturday, April 26, 2014 7:19 PM To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [Cinder][Manila] I'm sure this has been discussed; I just couldn't find any reference to it, so perhaps someone can point me to the discussion/rationale. Is there any reason why there needs to be another service to present a control-plane to storage? Obviously object storage is different, as that is presenting a data-plane API, but from a control-plane perspective I'm confused why there needs to be another service; surely control-planes are pretty similar, and the underlying networking issues for iSCSI would be similar for NFS/CIFS. Trove is looking to be a general-purpose data container (control-plane) service for traditional RDBMS, NoSQL, key-value, etc., so why is the Cinder API not suitable for providing a general-purpose storage container (control-plane) service? Creating separate services will complicate other services, e.g. Trove. Thoughts? There are good arguments on both sides of this question. There is substantial overlap between Cinder and Manila in their API constructs and backends (they both deal with storage, after all). In the long run it's entirely possible that the two projects could be merged. However, there are also some very important differences. In particular, Cinder knows almost nothing about networking, but Manila needs to know a great deal about individual tenant networks in order to deliver NAS storage to tenants. Cinder can rely on hypervisors to do some of the hard work of translating block protocols and managing attaching/detaching, whereas Manila routes around the hypervisor entirely and connects guest VMs with storage directly. The most important reason Manila ended up as a separate project from Cinder was that the Cinder team didn't want the distraction of dealing with some of the very hard technical problems that needed solving for Manila to be successful.
After working on Manila for the past year and struggling with a lot of hard technical decisions, I think it was the right decision to split the projects. If Manila had remained a subproject of Cinder, then either it wouldn't have received nearly the attention it needed, or it would have sucked attention away from a lot of important issues that the Cinder team is dealing with. If there's a future where Manila and Cinder merge back together, then I'm pretty sure it's quite far away. The best thing we can do is strive to make both projects successful and keep asking these hard questions. -Ben Swartzlander (Manila PTL)
Re: [openstack-dev] [nova][cinder] create server from a volume snapshot, 180 retries is sufficient?
Options may be bad, but hardcoded values chosen arbitrarily are worse. Unless someone can justify why the value needs to be 180 and not 179 or 181, it should be configurable. That's my opinion at any rate. -Ben From: Lingxian Kong [mailto:anlin.k...@gmail.com] Sent: Tuesday, April 08, 2014 11:59 AM To: OpenStack Development Mailing List Subject: [openstack-dev] [nova][cinder] create server from a volume snapshot, 180 retries is sufficient? hi there: According to the patch https://review.openstack.org/#/c/80619/, Nova will wait 180s for volume creation; the config option was rejected by Russell and Nikola. The reason I raise this is that we found server creation failing due to the timeout in our deployment, with LVM as the Cinder backend. So I wonder: is 180s really suitable here? Are there any guidelines about when we should add an option? At the very least, we should not avoid an option just because of the existing overwhelming number of them, right? Thoughts? -- --- Lingxian Kong Huawei Technologies Co.,LTD. IT Product Line CloudOS PDU China, Xi'an Mobile: +86-18602962792 Email: konglingx...@huawei.com; anlin.k...@gmail.com
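To make the trade-off concrete, the polling pattern under discussion is only a few lines, and the retry cap is the one number in it that depends entirely on the deployment. The sketch below uses hypothetical names (it is not Nova's actual implementation) to show why a hardcoded cap bites slow backends:

```python
# Sketch of a volume-availability wait loop with a configurable retry cap,
# in the spirit of the patch under discussion. Names are hypothetical and
# do not match Nova's real code.
import time


def wait_for_volume(get_status, max_retries=180, interval=1.0, sleep=time.sleep):
    """Poll get_status() until it returns 'available' or retries run out.

    With the hardcoded defaults (180 retries x 1s) a loaded backend such
    as the LVM node in Lingxian's deployment can exceed the window; making
    max_retries operator-tunable avoids guessing one universal value.
    """
    for _ in range(max_retries):
        status = get_status()
        if status == "available":
            return True
        if status == "error":
            return False  # no point waiting on a failed create
        sleep(interval)
    return False  # retries exhausted: creation may still be in progress


# Simulated backend that needs 200 polls: the 180-retry default gives up
# even though the volume would eventually become available.
polls = iter(["creating"] * 200 + ["available"])
print(wait_for_volume(lambda: next(polls), max_retries=180, sleep=lambda s: None))  # False
```

The demo shows a create that would have succeeded on poll 201 being reported as a timeout failure, which is exactly the deployment-dependent behavior that argues for exposing the cap as a config option.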
Re: [openstack-dev] Modularity of generic driver (network mediated)
Raja, this is one of a few workable approaches that I've thought about. I'm not convinced it's the best approach, but it does look to be less effort so we should examine it carefully. One thing to consider is that if we go down the route of using service VMs for the mediated drivers (such as gluster) then we don't need to be tied to Ganesha-NFS -- we could use nfs-kernel-server instead. Perhaps Ganesha-NFS is still the better choice but I'd like to compare the two in this context. One downside is that service VMs with full virtualization are a relatively heavyweight way to deliver file share services to tenants. If there were approaches that could use container-based virtualization or no virtualization at all, then those would probably be more efficient (although also possibly more work). -Ben -Original Message- From: Ramana Raja [mailto:rr...@redhat.com] Sent: Wednesday, February 05, 2014 11:42 AM To: openstack-dev@lists.openstack.org Cc: vponomar...@mirantis.com; aostape...@mirantis.com; yportn...@mirantis.com; Csaba Henk; Vijay Bellur; Swartzlander, Ben Subject: [Manila] Modularity of generic driver (network mediated) Hi, The first prototype of the multi-tenant capable GlusterFS driver would piggyback on the generic driver, which implements the network plumbing model [1]. We'd have NFS-Ganesha server running on the service VM. The Ganesha server would mediate access to the GlusterFS backend (or any other Ganesha compatible clustered file system backends such as CephFS, GPFS, among others), while the tenant network isolation would be done by the service VM networking [2][3]. To implement this idea, we'd have to reuse much of the generic driver code especially that related to the service VM networking. So we were wondering whether the current generic driver can be made more modular? 
The service VM would not just be used to expose a formatted cinder volume, but could instead be used as an instrument to convert the existing single-tenant drivers (with slight modification) - LVM, GlusterFS - into multi-tenant-ready drivers. Do you see any issues with this idea of the generic driver as a modular multi-tenant driver that implements the network plumbing model? And is the idea feasible? [1] https://wiki.openstack.org/wiki/Manila_Networking [2] https://docs.google.com/document/d/1WBjOq0GiejCcM1XKo7EmRBkOdfe4f5IU_Hw1ImPmDRU/edit [3] https://docs.google.com/a/mirantis.com/drawings/d/1Fw9RPUxUCh42VNk0smQiyCW2HGOGwxeWtdVHBB5J1Rw/edit Thanks, Ram
[openstack-dev] Incubation request for Manila
Please consider our formal request for incubation status of the Manila project: https://wiki.openstack.org/wiki/Manila_Overview thanks! -Ben Swartzlander