Re: [openstack-dev] [Cinder] FFE request for RBD replication

2016-09-13 Thread Walter A. Boring IV

+1

Since this is very isolated to the RBD driver and it's already passing, I'm in favor of granting the exception.


Walt
On 09/09/2016 12:32 PM, Gorka Eguileor wrote:

Hi,

As some of you may know, Jon Bernard (jbernard on IRC) has been working
on the RBD v2.1 replication implementation [1] for a while, and we would
like to request a Feature Freeze Exception for that work, as we believe
it is a good candidate: it is a low-risk change with respect to the
integrity of the existing functionality in the driver:

- It's non-intrusive when it's not enabled (it is turned on via the
   replication_device configuration option).
- It doesn't affect existing deployments (disabled by default).
- Changes are localized to the driver itself (rbd.py) and the driver
   unit tests file (test_rbd.py).

Jon would have liked to make this request himself, but due to the
untimely arrival of his newborn baby this is not possible.

For obvious reasons Jon will not be available for a little while, but
this will not be a problem, as I am well acquainted with the code (and
I'll be able to reach Jon if necessary) and will be taking care of the
final steps of the review process for his patch: replying to comments in
a timely fashion, making changes to the code as required, and answering
pings on IRC about the patch.

Since some people may be interested in testing this functionality during
the review process (or just for fun), I'll be publishing a post with a
detailed explanation of how to deploy and test this feature, as well as
an automated way to deploy two Ceph clusters (configured to mirror one
another) and one devstack node with everything ready to test the
functionality (configuration and keys for the Ceph clusters, the cinder
configuration, the latest upstream patch, and a volume type with the
right configuration).
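
For anyone who wants a rough idea of what enabling this looks like before
that post is out, the snippet below shows the general shape of the
cinder.conf backend section and the volume type; the exact keys accepted
by replication_device are defined by the patch under review, so treat the
names and values here as illustrative assumptions rather than the final
interface:

    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_user = cinder
    rbd_ceph_conf = /etc/ceph/ceph.conf
    # Illustrative: points the driver at the secondary (mirrored) cluster.
    replication_device = backend_id:secondary,conf:/etc/ceph/secondary.conf,user:cinder

    # A volume type that requests replication (volume_backend_name is
    # whatever you named the backend above):
    cinder type-create rbd-replicated
    cinder type-key rbd-replicated set volume_backend_name=ceph
    cinder type-key rbd-replicated set replication_enabled='<is> True'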

Please do not hesitate to ask if there are any questions or concerns
related to this request.

Thank you for taking the time to evaluate this request.

Cheers,
Gorka.

[1]: https://review.openstack.org/333565







Re: [openstack-dev] [tc][cinder] tag:follows-standard-deprecation should be removed

2016-08-12 Thread Walter A. Boring IV



I was leaning towards a separate repo until I started thinking about all
the overhead and complications this would cause. It's another repo for
cores to watch. It would cause everyone extra complication in setting up
their CI, which is already one of the biggest roadblocks. It would make
it a little harder to do things like https://review.openstack.org/297140
and https://review.openstack.org/346470 to be able to generate this:
http://docs.openstack.org/developer/cinder/drivers.html. Plus more infra
setup, more moving parts to break, and just generally more
complications.

All things that can be solved for sure. I just question whether it would
be worth having that overhead. Frankly, there are better things I'd like
to spend my time on.

I think at this point my first preference would actually be to define a
new tag. This addresses both the driver removal issue as well as the
backporting of driver bug fixes. I would like to see third party drivers
recognized and treated as being different, because in reality they are
very different than the rest of the code. Having something like
follows_deprecation_but_has_third_party_drivers_that_dont would make a
clear statement that there is a vendor component to this project that
really has to be treated differently and has different concerns
deployers need to be aware of.

Barring that, I think my next choice would be to remove the tag. That
would really be unfortunate as we do want to make it clear to users that
Cinder will not arbitrarily break APIs or do anything between releases
without warning when it comes to non-third party drivers. But if that is
what we need to do to effectively communicate what to expect from
Cinder, then I'm OK with that.

My last choice (of the ones I'm favorable towards) would be marking a
driver as untested/unstable/abandoned/etc rather than removing it. We
could flag these a certain way and have them spam the logs like crazy
after upgrade to make it very, painfully clear that they are not
being maintained. But as Duncan pointed out, this doesn't have as much
impact for getting vendor attention. It's amazing the level of executive
involvement that can happen after a patch is put up for driver removal
due to non-compliance.

Sean

__
I believe there is a compromise we could implement in Cinder that gives
us a deprecation path for drivers that aren't meeting the Cinder driver
requirements, and allows upgrades to keep working, without outright
removing a driver immediately (a rough sketch in code follows the list):
1. Add a 'supported = True' attribute to every driver.
2. When a driver no longer meets Cinder community requirements, put a
   patch up against the driver setting that flag to False.
3. When c-vol service starts, check the supported flag.  If the flag is
   False, then log an exception, and disable the driver.
4. Allow the admin to put an entry in cinder.conf for the driver in
   question "enable_unsupported_driver = True".  This will allow the
   c-vol service to start the driver and allow it to work.  Log a
   warning on every driver call.
5. This is a positive acknowledgement by the operator that they are
   enabling a potentially broken driver. Use at your own risk.
6. If the vendor doesn't get the CI working in the next release, then
   remove the driver.
7. If the vendor gets the CI working again, then set the supported flag
   back to True and all is good.
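
To make the idea concrete, here is a rough Python sketch of what steps 1,
3, 4 and 5 could look like. The names used here (a SUPPORTED class
attribute and an enable_unsupported_driver option) come straight from the
proposal above and are not an existing Cinder interface:

    # Sketch only -- names are taken from the proposal, not from merged code.
    from oslo_config import cfg
    from oslo_log import log as logging

    LOG = logging.getLogger(__name__)
    CONF = cfg.CONF
    CONF.register_opts([
        cfg.BoolOpt('enable_unsupported_driver',
                    default=False,
                    help='Allow a driver marked unsupported to load anyway.'),
    ])


    class FooDriver(object):
        # Step 1: every driver carries the flag; a compliance patch flips it.
        SUPPORTED = True


    def check_driver_supported(driver):
        """Steps 3-5: run when the c-vol service loads its configured driver."""
        if driver.SUPPORTED:
            return True
        if CONF.enable_unsupported_driver:
            # Positive acknowledgement by the operator -- warn loudly.
            LOG.warning('Driver %s is unsupported; loading it anyway because '
                        'enable_unsupported_driver=True. Use at your own risk.',
                        driver.__class__.__name__)
            return True
        LOG.error('Driver %s no longer meets Cinder community requirements '
                  'and has been disabled.', driver.__class__.__name__)
        return False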


This allows a deprecation period for a driver, and keeps operators who 
upgrade their deployment from losing access to the volumes they have 
on those back-ends.  It gives them time to contact the community 
and/or do some research and find out what happened to the driver.
It also potentially gives the operator time to find a new supported 
backend and start migrating volumes.  I say potentially, because the 
driver may be broken, or it may work just well enough to migrate volumes 
off of it to a new backend.


Having unsupported drivers in tree is terrible for the Cinder community, 
and in the long run terrible for operators.
Instantly removing drivers because CI is unstable is terrible for 
operators in the short term, because as soon as they upgrade OpenStack, 
they lose all access to managing their existing volumes.  Just because 
we leave a driver in tree in this state doesn't mean that the operator 
will be able to migrate if the driver is broken, but they'll have a 
chance depending on the state of the driver in question.  It could be 
horribly broken, but the breakage might be something fixable by someone 
who just knows Python.  If the driver is gone from the tree entirely, then 
that's a lot more to overcome.


I don't think there is a way to make everyone happy all the time, but I 
think this buys operators a small window of opportunity to still manage 
their existing volumes before the driver is removed.  It also still 
allows the Cinder community to deal with unsupported drivers in a way 
that will motivate vendors to keep their CIs running and their drivers 
maintained.

Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-09 Thread Walter A. Boring IV

On 08/09/2016 11:52 AM, Ihar Hrachyshka wrote:

Walter A. Boring IV <walter.bor...@hpe.com> wrote:


On 08/08/2016 02:28 PM, Ihar Hrachyshka wrote:

Duncan Thomas <duncan.tho...@gmail.com> wrote:

On 8 August 2016 at 21:12, Matthew Treinish <mtrein...@kortar.org> 
wrote:
Ignoring all that, this is also contrary to how we perform testing 
in OpenStack.
We don't turn off entire classes of testing we have so we can land 
patches,

that's just a recipe for disaster.

But is it more of a disaster (for the consumers) than zero testing, 
zero review, scattered around the internet 
if-you're-lucky-with-a-good-wind you'll maybe get the right patch 
set? Because that's where we are right now, and vendors, 
distributors and the cinder core team are all saying it's a disaster.


If consumers rely on upstream releases, then they are expected to 
migrate to newer releases after EOL, not switch to a random branch 
on the internet. If they rely on some commercial product, then they 
usually have an extended period of support and certification for 
their drivers, so it’s not a problem for them.


Ihar
This is entirely unrealistic.  Force customers to upgrade?  Good luck 
explaining to a bank that in order to get their cinder driver fix in, 
they have to upgrade their entire OpenStack deployment.  Real world 
customers will simply balk at this all day long.


Real world customers will pay for engineering to support their 
software, either their own or of one of OpenStack vendors. There is no 
free lunch from upstream here.


  Our customers are already paying us to support them, and that's what we 
are doing.  Nobody is asking for a free lunch from upstream.  We are 
simply asking for a way to have a centralized repository that each 
vendor uses to support their drivers.


The problem is how to get customers patches against older drivers, and 
then how to support them after that.  We have no central place to put our 
patches against our driver other than our forked GitHub account for 
older releases.  This is exactly what the rest of the Cinder driver 
vendors are doing, and it is what we are trying to avoid.  The problem gets 
even worse when a customer has a LeftHand array and a SolidFire and/or 
NetApp and/or Pure array.  The customer will have to get fixes from each 
separate repository and monitor each of those for changes in the 
future.  Which fork do they follow?  This is utter chaos from a 
customer perspective as well as a distributor's perspective, and it is 
terrible for OpenStack users/deployers.



Walt



Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-09 Thread Walter A. Boring IV

On 08/08/2016 02:28 PM, Ihar Hrachyshka wrote:

Duncan Thomas  wrote:

On 8 August 2016 at 21:12, Matthew Treinish  
wrote:
Ignoring all that, this is also contrary to how we perform testing in 
OpenStack.
We don't turn off entire classes of testing we have so we can land 
patches,

that's just a recipe for disaster.

But is it more of a disaster (for the consumers) than zero testing, 
zero review, scattered around the internet 
if-you're-lucky-with-a-good-wind you'll maybe get the right patch 
set? Because that's where we are right now, and vendors, distributors 
and the cinder core team are all saying it's a disaster.


If consumers rely on upstream releases, then they are expected to 
migrate to newer releases after EOL, not switch to a random branch on 
the internet. If they rely on some commercial product, then they 
usually have an extended period of support and certification for their 
drivers, so it’s not a problem for them.


Ihar
This is entirely unrealistic.  Force customers to upgrade?  Good luck 
explaining to a bank that in order to get their cinder driver fix in, 
they have to upgrade their entire OpenStack deployment.  Real world 
customers will simply balk at this all day long.


Walt







Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-09 Thread Walter A. Boring IV



I think "currently active stable branches" is key there. These branches
would no longer be "currently active". They would get an EOL tag when it
reaches the end of the support phases. We just wouldn't delete the
branch.

This argument comes up at least once a cycle and there is a reason we don't do
this. When we EOL a branch all of the infrastructure for running any ci against
it goes away. This means devstack support, job definitions, tempest skip checks,
etc. Leaving the branch around advertises that you can still submit patches to
it which you can't anymore. As a community we've very clearly said that we don't
land any code without ensuring it passes tests first, and we do not maintain any
of the infrastructure for doing that after an EOL.


And it's this exact policy that has led us to the mess we are in 
today.  As a vendor that has customers that use OpenStack, we have to 
support very old releases.  Customers in the wild do not like to upgrade 
once they get OpenStack up and running, because it's very difficult, time 
consuming and dangerous to do.  We have customers still running Icehouse 
and they most likely won't upgrade any time soon.  Banks hate 
upgrading software after they have customers running on it.  This is a 
community-wide problem that needs to be addressed.


Because of this problem (not being able to backport bug fixes to our 
drivers), we have been left forking Cinder on our own GitHub to put 
our driver fixes there.  This is a terrible practice for the OpenStack 
community in general, and terrible for customers/users of OpenStack, as 
we have N driver vendors with N different mechanisms for getting 
bug fixes to their customers.  I believe this is a major problem for 
users of OpenStack and it needs to be addressed.
At the Cinder midcycle, we came up with a solution that would satisfy 
Cinder customers, as Sean laid out.  We acknowledge that it's a 
driver maintainer's responsibility to make sure they test any changes 
that go into the stable branches, because there is no infra support for 
running CI against patches to old stable branches.  I think that risk 
is far better than the existing reality of N Cinder forks floating 
around GitHub.  It's just no way to ship software to actual customers.


$0.02,
Walt



Re: [openstack-dev] [Cinder] Nominating Scott D'Angelo to Cinder core

2016-07-05 Thread Walter A. Boring IV
This is great!  I know I'm a bit late replying to this on the ML due to
my vacation, but I wholeheartedly agree!

+1

Walt
On 06/27/2016 10:27 AM, Sean McGinnis wrote:

I would like to nominate Scott D'Angelo to core. Scott has been very
involved in the project for a long time now and is always ready to help
folks out on IRC. His contributions [1] have been very valuable and he
is a thorough reviewer [2].

Please let me know if there are any objections to this within the next
week. If there are none I will switch Scott over by next week, unless
all cores approve prior to then.

Thanks!

Sean McGinnis (smcginnis)

[1] 
https://review.openstack.org/#/q/owner:%22Scott+DAngelo+%253Cscott.dangelo%2540hpe.com%253E%22+status:merged
[2] http://cinderstats-dellstorage.rhcloud.com/cinder-reviewers-90.txt






Re: [openstack-dev] [cinder] No middle-man - when does/will Nova directly connect iSCSI volumes?

2016-06-24 Thread Walter A. Boring IV



Does QEMU support hardware initiators? iSER?

No, this is only for case where you're doing pure software based
iSCSI client connections. If we're relying on local hardware that's
a different story.


We regularly fix issues with iSCSI attaches in the release cycles of
OpenStack,
because it's all done in python using existing linux packages.  How often

This is a great example of the benefit that the in-QEMU client gives us. The
Linux iSCSI client tools have proved very unreliable in use by OpenStack.
This is a reflection of the architectural approach: we have individual
resources needed by distinct VMs, but we're having to manage them as a
host-wide resource, and that's creating unnecessary complexity for us and
having a poor effect on our reliability overall.

I've been doing more and more digging and research into doing this,
and it seems that Canonical removed libiscsi support from QEMU due to
security problems in the 14.04 LTS release cycle.

Trying to fire up a new VM manually with QEMU, attaching an iSCSI disk via
the documented mechanism, ends up with QEMU complaining that it can't
open the disk: 'unknown protocol'.

qemu-system-x86_64 -drive file=iscsi://10.52.1.11/iqn.2000-05.com.3pardata:20810002ac00383d/0 -iscsi initiator-name=iqn.walt-qemu-initiator

qemu-system-x86_64: -drive file=iscsi://10.52.1.11/iqn.2000-05.com.3pardata:20810002ac00383d/0: could not open disk image iscsi://10.52.1.11/iqn.2000-05.com.3pardata:20810002ac00383d/0: Unknown protocol


There was a bug filed against QEMU back in 2014 that was marked as Won't
Fix due to security issues:

https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1271573

That looks like it has since been fixed here:
https://bugs.launchpad.net/ubuntu/+source/libiscsi/+bug/1271653
But that's only Xenial (16.04) support and won't be in the 14.x tree.


I have also confirmed that
nova.virt.libvirt.volume.net.LibvirtNetVolumeDriver fails for iSCSI for
the exact same reason against Nova master.

I modified nova/virt/libvirt/driver.py and changed iscsi to point to
LibvirtNetVolumeDriver, then tried to attach an iSCSI volume.  It failed,
and the libvirtd log showed the unknown protocol error.


The n-cpu.log entry:
2016-06-24 08:09:21.555 8891 DEBUG nova.virt.libvirt.guest
[req-46954106-c728-43ba-b40a-5b91a1639610 admin admin] attach device xml:
[the <disk> element was stripped by the list archive; it referenced the
iSCSI target iqn.2000-05.com.3pardata:20810002ac00383d/0 and the volume
serial a1d0c85e-d6e6-424f-9ca7-76ecd0ce45fb]
attach_device /opt/stack/nova/nova/virt/libvirt/guest.py:251
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver 
[req-46954106-c728-43ba-b40a-5b91a1639610 admin admin] [instance: 
74092b75-dc20-47e5-9127-c63367d05b29] Failed to attach volume at 
mountpoint: /dev/vdb
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver [instance: 
74092b75-dc20-47e5-9127-c63367d05b29] Traceback (most recent call last):
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver [instance: 
74092b75-dc20-47e5-9127-c63367d05b29]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 1160, in attach_volume
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver [instance: 
74092b75-dc20-47e5-9127-c63367d05b29] guest.attach_device(conf, 
persistent=True, live=live)
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver [instance: 
74092b75-dc20-47e5-9127-c63367d05b29]   File 
"/opt/stack/nova/nova/virt/libvirt/guest.py", line 252, in attach_device
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver [instance: 
74092b75-dc20-47e5-9127-c63367d05b29] 
self._domain.attachDeviceFlags(device_xml, flags=flags)
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver [instance: 
74092b75-dc20-47e5-9127-c63367d05b29]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 186, in 
doit
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver [instance: 
74092b75-dc20-47e5-9127-c63367d05b29] result = 
proxy_call(self._autowrap, f, *args, **kwargs)
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver [instance: 
74092b75-dc20-47e5-9127-c63367d05b29]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 144, in 
proxy_call
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver [instance: 
74092b75-dc20-47e5-9127-c63367d05b29] rv = execute(f, *args, **kwargs)
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver [instance: 
74092b75-dc20-47e5-9127-c63367d05b29]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 125, in 
execute
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver [instance: 
74092b75-dc20-47e5-9127-c63367d05b29] six.reraise(c, e, tb)
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver [instance: 
74092b75-dc20-47e5-9127-c63367d05b29]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 83, in 
tworker
2016-06-24 08:09:21.574 8891 ERROR nova.virt.libvirt.driver [instance: 
74092b75-dc20-47e5-9127-c63367d05b29] rv = meth(*args, **kwargs)
2016-06-24 

Re: [openstack-dev] [cinder] No middle-man - when does/will Nova directly connect iSCSI volumes?

2016-06-23 Thread Walter A. Boring IV


volumes connected to QEMU instances eventually become directly connected?


Our long term goal is that 100% of all network storage will be connected
to directly by QEMU. We already have the ability to partially do this with
iSCSI, but it is lacking support for multipath. As & when that gap is
addressed though, we'll stop using the host OS for any iSCSI stuff.

So if you're requiring access to host iSCSI volumes, it'll work in the
short-medium term, but in the medium-long term we're not going to use
that so plan accordingly.


What is the benefit of this largely monolithic approach?  It seems that
moving everything into QEMU is diametrically opposed to the Unix model
itself and is just a re-implementation of what already exists in the
Linux world outside of QEMU.


Does QEMU support hardware initiators? iSER?

We regularly fix issues with iSCSI attaches in the release cycles of
OpenStack, because it's all done in Python using existing Linux packages.
How often are QEMU releases done and upgraded on customer deployments vs.
Python packages (os-brick)?


I don't see a compelling reason for re-implementing the wheel,
and it seems like a major step backwards.




Xiao's unanswered query (below) presents another question. Is this a
site-choice? Could I require my customers to configure their OpenStack
clouds to always route iSCSI connections through the nova-compute host? (I
am not a fan of this approach, but I have to ask.)

In the short term that'll work, but long term we're not intending to
support that once QEMU gains multi-path. There's no timeframe on when
that will happen though.



Regards,
Daniel





Re: [openstack-dev] [cinder] No middle-man - when does/will Nova directly connect iSCSI volumes?

2016-06-16 Thread Walter A. Boring IV

One major disadvantage is lack of multipath support.

Multipath is still done outside of QEMU, and there is no native multipath 
support inside of QEMU from what I can tell.  Another disadvantage is 
that QEMU iSCSI support is all software based.  There are hardware iSCSI 
initiators that are supported by os-brick today.  I think migrating 
attaches into QEMU itself isn't a good idea and will always be behind the 
level of support already provided by the tools that have been around 
forever.  Also, what kind of support does QEMU have for target portal 
discovery?  Can it discover all targets via a single portal, and can you 
pass in multiple portals to do discovery for the same volume?  This is 
also related to multipath support.  Some storage arrays can't do 
discovery on a single portal; they have to have discovery on each interface.


Do you have some actual numbers to prove that host based attaches passed 
into libvirt are slower than QEMU direct attaches?


You can't really compare RBD to iSCSI.  RBD is a completely different 
beast.  The kernel rbd driver hasn't been as stable or as fast as the 
rbd client that QEMU uses.


Walt


On 06/15/2016 04:59 PM, Preston L. Bannister wrote:
QEMU has the ability to directly connect to iSCSI volumes. Running the 
iSCSI connections through the nova-compute host *seems* somewhat 
inefficient.


There is a spec/blueprint and implementation that landed in Kilo:

https://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/qemu-built-in-iscsi-initiator.html
https://blueprints.launchpad.net/nova/+spec/qemu-built-in-iscsi-initiator

From looking at the OpenStack Nova sources ... I am not entirely clear 
on when this behavior is invoked (just for Ceph?), and how it might 
change in future.


Looking for a general sense where this is headed. (If anyone knows...)

If there is some problem with QEMU and directly attached iSCSI 
volumes, that would explain why this is not the default. Or is this 
simple inertia?



I have a concrete concern. I work for a company (EMC) that offers 
backup products, and we now have backup for instances in OpenStack. To 
make this efficient, we need to collect changed-block information from 
instances.


1)  We could put an intercept in the Linux kernel of the nova-compute 
host to track writes at the block layer. This has the merit of working 
for containers, and potentially bare-metal instance deployments. But 
is not guaranteed for instances, if the iSCSI volumes are directly 
attached to QEMU.


2)  We could use the QEMU support for incremental backup (first bit 
landed in QEMU 2.4). This has the merit of working with any storage, 
but only for virtual machines under QEMU.


As our customers are (so far) only asking about virtual machine 
backup. I long ago settled on (2) as most promising.


What I cannot clearly determine is where (1) will fail. Will all iSCSI 
volumes connected to QEMU instances eventually become directly connected?



Xiao's unanswered query (below) presents another question. Is this a 
site-choice? Could I require my customers to configure their OpenStack 
clouds to always route iSCSI connections through the nova-compute 
host? (I am not a fan of this approach, but I have to ask.)


To answer Xiao's question, can a site configure their cloud to 
*always* directly connect iSCSI volumes to QEMU?




On Tue, Feb 16, 2016 at 4:54 AM, Xiao Ma (xima2) wrote:


Hi, All

I want to make QEMU communicate with the iSCSI target using libiscsi
directly, and I followed https://review.openstack.org/#/c/135854/ to add
'volume_drivers = iscsi=nova.virt.libvirt.volume.LibvirtNetVolumeDriver'
in nova.conf and then restarted the nova and cinder services, but the
volume configuration of the VM is still as below:

[the <disk> element was stripped by the list archive; only the volume
serial 076bb429-67fd-4c0c-9ddf-0dc7621a975a survives]



I use centos7 and Liberty version of OpenStack.
Could anybody tell me how can I achieve it?


Thanks.







Re: [openstack-dev] [cinder] [nova] os-brick privsep failures and an upgrade strategy?

2016-06-14 Thread Walter A. Boring IV
I just put up a WIP patch in os-brick that tests to see if oslo.privsep is
configured with the helper_command.  If it's not, then os-brick falls back
to using processutils with the root_helper and run_as_root kwargs passed in.

https://review.openstack.org/#/c/329586

If you can check this out, that would be helpful.  If this is the route we
want to go, then I'll add unit tests, take it out of WIP, and try to get it in.


So, if nova.conf and cinder.conf aren't updated with the privsep_osbrick
section providing the helper_command, then os-brick will fall back to local
processutils calls with the configured root_helper passed in.

This should be backwards compatible (grenade upgrade tests), but we should
encourage admins to add that section to their nova.conf and cinder.conf
files.  The other downside is that if we have to keep this code in place,
then we effectively still have to maintain the rootwrap filters and keep
them up to date.  *sadness*
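
For reference, the fallback boils down to something like the following
(a simplified sketch of the idea, not the literal patch; the helper stub
and the rootwrap command string here are just illustrative):

    # Simplified sketch of the fallback idea -- not the exact os-brick patch.
    from oslo_concurrency import processutils


    def _privsep_helper_configured():
        # In the real patch this information comes from the privsep_osbrick
        # config section (helper_command); stubbed out for the sketch.
        return False


    def execute(*cmd, **kwargs):
        """Run a privileged command, falling back to processutils when the
        privsep helper_command has not been configured."""
        if _privsep_helper_configured():
            raise NotImplementedError('would dispatch via the privsep daemon')
        # Pre-privsep behaviour: run through the configured root helper.
        kwargs.setdefault('run_as_root', True)
        kwargs.setdefault('root_helper',
                          'sudo cinder-rootwrap /etc/cinder/rootwrap.conf')
        return processutils.execute(*cmd, **kwargs)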


Walt

On 06/14/2016 04:49 AM, Sean Dague wrote:

os-brick 1.4 was released over the weekend, and was the first os-brick
release to include privsep. We got a really odd failure rate in the
grenade-multinode jobs (1/3 - 1/2) afterwards, and it was super
non-obvious why. Hemna looks to have figured it out (this is a summary of
what I've seen on IRC to pull it all together).

Remembering the following -
https://github.com/openstack-dev/grenade#theory-of-upgrade and
https://governance.openstack.org/reference/tags/assert_supports-upgrade.html#requirements
- New code must work with N-1 configs. So this is `master` running with
`mitaka` configuration.

privsep requires a sudo rule or rootwrap rule (to get to sudo) to allow
the privsep daemon to be spawned for volume actions.

During gate testing we have a blanket sudoer rule for the stack user
during the run of grenade.sh. It has to do system level modifications
broadly to perform the upgrade. This sudoer rule is deleted at the end
of the grenade.sh run before Tempest tests are run, so that Tempest
tests don't accidentally require root privs on their target environment.

Grenade *also* makes sure that some resources live across the upgrade
boundary. This includes a boot from volume guest, which is torn down
before testing starts. And this is where things get interesting.

This means there is a volume teardown needed before grenade ends. But
there is only one. In single node grenade this happens about 30 seconds
before the end of the script, triggers the privsep daemon start, and then
we're done. And the 50_stack_sh sudoers file is removed. In multinode,
*if* the boot from volume server is on the upgrade node, then the same
thing happens. *However*, if it instead ended up on the subnode, which
is not upgraded, then the volume teardown is on the old node. No
os-brick calls are made on the upgraded node before grenade finishes.
The 50_stack_sh sudoers file is removed, as expected.

And now all volume tests on those nodes fail.

Which is what should happen. The point is that in production no one is
going to put a blanket sudoers rule like that in place. It's just we
needed it for this activity, and the userid on the services being the
same as the shell user (which is not root) let this fallback rule be used.

The crux of the problem is that os-brick 1.4 and privsep can't be used
without a config file change during the upgrade. Which violates our
policy, because it breaks rolling upgrades.

So... we have a few options:

1) make an exception here with release notes, because it's the only way
to move forward.

2) have some way for os-brick to use either mode for a transition period
(depending on whether privsep is configured to work)

3) Something else ?

https://bugs.launchpad.net/os-brick/+bug/1592043 is the bug we've got on
this. We should probably sort out the path forward here on the ML as
there are a bunch of folks in a bunch of different time zones that have
important perspectives here.

-Sean






Re: [openstack-dev] [Cinder] Nominating Michał Dulko to Cinder Core

2016-05-04 Thread Walter A. Boring IV

+1

Walt

Hey everyone,

I would like to nominate Michał Dulko to the Cinder core team. Michał's
contributions with both code reviews [0] and code contributions [1] have
been significant for some time now.

His persistence with versioned objects has been instrumental in getting
support in the Mitaka release for rolling upgrades.

If there are no objections from current cores by next week, I will add
Michał to the core group.

[0] http://cinderstats-dellstorage.rhcloud.com/cinder-reviewers-90.txt
[1]
https://review.openstack.org/#/q/owner:%22Michal+Dulko+%253Cmichal.dulko%2540intel.com%253E%22++status:merged

Thanks!

Sean McGinnis (smcginnis)







Re: [openstack-dev] [Cinder] Status of cinder-list bug delay with 1000's of volumes

2016-03-03 Thread Walter A. Boring IV

Adam,
  As the bug shows, it was fixed in the Juno release.  The Icehouse 
release is no longer supported.  I would recommend upgrading your 
deployment if possible, or looking at the patch to see if it can work 
against your Icehouse codebase.


https://review.openstack.org/#/c/96548/

Walt

On 03/03/2016 03:12 PM, Adam Lawson wrote:

Hey all (hi John),

What's the status of this [1]? We're experiencing this behavior in 
Icehouse - wondering where it was addressed and if so, when. I always 
get confused when I look at the launchpad/review portals.


[1] https://bugs.launchpad.net/cinder/+bug/1317606

Adam Lawson

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072






Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-24 Thread Walter A. Boring IV

On 02/23/2016 06:14 AM, Qiming Teng wrote:



I don't think the proposal removes that opportunity. Contributors
/can/ still go to OpenStack Summits. They just don't /have to/. I
just don't think every contributor needs to be present at every
OpenStack Summit, while I'd like to see most of them present at
every separated contributors-oriented event[tm].

Yes they can, but if contributors go to the design summit, then they
also have to get travel budget to go to the new Summit.  So: design
summits, midcycle meetups, and now the split-off marketing summit.
This is making it overall more expensive for contributors who meet
with customers.


My take on this is that we are saving the cost by isolating developers
(contributors) from users/customers.

And that is exactly the problem for contributors like myself who use the
conference to meet with customers.  If we split the summit off from
developers, then I'll also have to travel to yet another meetup just to
meet with customers.

For contributors who just focus on design and development, the proposed
change is probably fine, but for everyone else this seems to make things
worse and adds additional cost and travel.

Walt



Re: [openstack-dev] [nova][cinder] volumes stuck detaching attaching and force detach

2016-02-22 Thread Walter A. Boring IV

On 02/22/2016 11:24 AM, John Garbutt wrote:

Hi,

Just came up on IRC, when nova-compute gets killed half way through a
volume attach (i.e. no graceful shutdown), things get stuck in a bad
state, like volumes stuck in the attaching state.

This looks like a new addition to this conversation:
http://lists.openstack.org/pipermail/openstack-dev/2015-December/082683.html
And brings us back to this discussion:
https://blueprints.launchpad.net/nova/+spec/add-force-detach-to-nova

What if we move our attention towards automatically recovering from
the above issue? I am wondering if we can look at making our usually
recovery code deal with the above situation:
https://github.com/openstack/nova/blob/834b5a9e3a4f8c6ee2e3387845fc24c79f4bf615/nova/compute/manager.py#L934

Did we get the Cinder APIs in place that enable the force-detach? I
think we did and it was this one?
https://blueprints.launchpad.net/python-cinderclient/+spec/nova-force-detach-needs-cinderclient-api

I think diablo_rojo might be able to help dig for any bugs we have
related to this. I just wanted to get this idea out there before I
head out.

Thanks,
John



The problem is a little more complicated.

In order for cinder backends to be able to do a force detach correctly, 
the Cinder driver needs to have the correct 'connector' dictionary 
passed in to terminate_connection.  That connector dictionary is the 
collection of initiator side information which is gleaned here:

https://github.com/openstack/os-brick/blob/master/os_brick/initiator/connector.py#L99-L144
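
For reference, the connector dictionary gathered there looks roughly like
the following (the values are illustrative, and which keys show up depends
on what protocols the host supports):

    connector = {
        'platform': 'x86_64',
        'os_type': 'linux2',
        'ip': '10.0.0.5',
        'host': 'compute-01',
        'multipath': False,
        # iSCSI initiator name, if open-iscsi is present:
        'initiator': 'iqn.1993-08.org.debian:01:abcdef123456',
        # Fibre Channel WWPNs/WWNNs, if HBAs are present:
        'wwpns': ['50014380186af83c'],
        'wwnns': ['50014380186af83e'],
    }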

The plan was to save that connector information in the Cinder
volume_attachment table.  When a force detach is called, Cinder then has
the existing connector saved if Nova doesn't have it.  The problem is live
migration: when you migrate to the destination n-cpu host, the
connector that Cinder had is now out of date.  There is no API in Cinder
today that allows updating an existing attachment.


So, the plan at the Mitaka summit was to add this new API, but it 
required microversions to land, which we still don't have in Cinder's 
API today.



Walt



Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-22 Thread Walter A. Boring IV

On 02/22/2016 09:45 AM, Thierry Carrez wrote:

Amrith Kumar wrote:

[...]
As a result of this proposal, there will still be four events each 
year, two "OpenStack Summit" events and two "MidCycle" events.


Actually, the OpenStack summit becomes the midcycle event. The new 
separated contributors-oriented event[tm] happens at the beginning of 
the new cycle.



[...]
Given the number of projects, and leaving aside high bandwidth 
internet and remote participation, providing dedicated meeting room 
for the duration of the MidCycle event for each project is a 
considerable undertaking. I believe therefore that the consequence is 
that the MidCycle event will end up being of comparable scale to the 
current Design Summit or larger, and will likely need a similar venue.


It still is an order of magnitude smaller than the "OpenStack Summit". 
Think 600 people instead of 6000. The idea behind co-hosting is to 
facilitate cross-project interactions. You know where to find people, 
and you can easily arrange a meeting between two teams for an hour.



[...]
At the current OpenStack Summit, there is an opportunity for 
contributors, customers and operators to interact, not just in 
technical meetings, but also in a social setting. I think this is 
valuable, even though there seems to be a number of people who 
believe that this is not necessarily the case.


I don't think the proposal removes that opportunity. Contributors 
/can/ still go to OpenStack Summits. They just don't /have to/. I just 
don't think every contributor needs to be present at every OpenStack 
Summit, while I'd like to see most of them present at every separated 
contributors-oriented event[tm].


Yes they can, but if contributors go to the design summit, then they 
also have to get travel budget to go to the new Summit.  So: design 
summits, midcycle meetups, and now the split-off marketing summit.
This is making it overall more expensive for contributors who meet with 
customers.







Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-22 Thread Walter A. Boring IV

On 02/22/2016 07:14 AM, Thierry Carrez wrote:

Hi everyone,

TL;DR: Let's split the events, starting after Barcelona.


Time is ripe for a change. After Tokyo, we at the Foundation have been 
considering options on how to evolve our events to solve those issues. 
This proposal is the result of this work. There is no perfect solution 
here (and this is still work in progress), but we are confident that 
this strawman solution solves a lot more problems than it creates, and 
balances the needs of the various constituents of our community.


The idea would be to split the events. The first event would be for 
upstream technical contributors to OpenStack. It would be held in a 
simpler, scaled-back setting that would let all OpenStack project 
teams meet in separate rooms, but in a co-located event that would 
make it easy to have ad-hoc cross-project discussions. It would happen 
closer to the centers of mass of contributors, in less-expensive 
locations.
I'm trying to follow this here.  If we want all of the projects in the 
same location to hold a design summit, then all of the contributors are 
still going to have to do international travel, which is the primary 
cost for attendees.  I'm not sure how this saves the attendees much at 
all, unless they just stop attending.  Part of the justification for 
myself for the summits is the ability to meet up with customers, to do 
presentations on the work that my team has done over the last release 
cycle, and the contributor meetups and cross-project networking.  If we 
break the summits up, then I may lose the ability to justify my travel 
if I don't get to meet with customers and do presentations to the wider 
audience.



What kind of locations are we talking about here?  Are we looking to stay 
within one continent because it's deemed 'less expensive'?  Will we still 
alternate between the Americas, Europe, and Asia?  I'm not sure there is 
a way to make it less expensive for all the projects, as there are people 
from around the globe working on each project.



Walt







Re: [openstack-dev] [cinder] adding a new /v3 endpoint for api-microversions

2016-02-22 Thread Walter A. Boring IV

On 02/20/2016 02:42 PM, Duncan Thomas wrote:



On 20 Feb 2016 00:21, "Walter A. Boring IV" <walter.bor...@hpe.com> wrote:


> Not that I'm adding much to this conversation that hasn't been said 
already, but I am pro v2 API, purely because of how painful and long 
it's been to get the official OpenStack projects to adopt the v2 API 
from v1.


I think there's a slightly different argument here. We aren't taking 
away the v2 API, probably ever. Clients that are satisfied with it can 
continue to use it, as it is, forever. For clients that aren't trying 
to do anything beyond the current basics will quite possibly be happy 
with that. Consumers have no reason to change over without compelling 
value from the change - that will come from what we implement on top 
of microversions, or not. Unlike the v1 transition, we aren't trying 
to get rid of v2, just stop changing existing semantics of it.




I'm more concerned with OpenStack projects themselves not using the new 
features/fixes we are going to make after microversions land.  If we force 
folks to use a new endpoint, then it's the same problem we had getting 
OpenStack projects to migrate from v1 to v2.  That took years.   :(


Walt







Re: [openstack-dev] [cinder] adding a new /v3 endpoint for api-microversions

2016-02-19 Thread Walter A. Boring IV



But, there are no such clients today. And there is no library that does
this yet. It will be 4 - 6 months (or even more likely 12+) until that's
in the ecosystem. Which is why adding the header validation to existing
v2 API, and backporting to liberty / kilo, will provide really
substantial coverage for the concern the bswartz is bringing forward.

Yeah, I have to agree with that. We can certainly have the protection
out in time.

The only concern there is the admin who set up his Kilo initial release
cloud and doesn't want to touch it for updates. But they likely have
more pressing issues than this any way.


-Sean




Not that I'm adding much to this conversation that hasn't been said 
already, but I am pro v2 API, purely because of how painful and long 
it's been to get the official OpenStack projects to adopt the v2 API 
from v1.  I know we need to be somewhat concerned about other clients 
that call the API, but for me that's way down the list of concerns.
If we go to a v3 API, most likely it's going to be another 3+ years before 
folks can use the new Cinder features that the microversioned changes 
will provide.  This in effect invalidates the microversion capability 
in Cinder's API completely.


/sadness
Walt



Re: [openstack-dev] [Nova][Cinder] Multi-attach, determining when to call os-brick's connector.disconnect_volume

2016-02-16 Thread Walter A. Boring IV

On 02/12/2016 04:35 PM, John Griffith wrote:



On Thu, Feb 11, 2016 at 10:31 AM, Walter A. Boring IV 
<walter.bor...@hpe.com> wrote:


There seems to be a few discussions going on here wrt to
detaches.   One is what to do on the Nova side with calling
os-brick's disconnect_volume, and also when to or not to call
Cinder's terminate_connection and detach.

My original post was simply to discuss a mechanism to try to
figure out the first problem: when should Nova call brick to remove
the local volume, prior to calling Cinder to do something?


Nova needs to know if it's safe to call disconnect_volume or not.
Cinder already tracks each attachment, and it can return the
connection_info for each attachment with a call to
initialize_connection.   If 2 of those connection_info dicts are
the same, it's a shared volume/target.  Don't call
disconnect_volume if there are any more of those left.

On the Cinder side of things, if terminate_connection, detach is
called, the volume manager can find the list of attachments for a
volume, and compare that to the attachments on a host.  The
problem is, Cinder doesn't track the host along with the
instance_uuid in the attachments table.  I plan on allowing that
as an API change after microversions lands, so we know how many
times a volume is attached/used on a particular host.  The driver
can decide what to do with it at terminate_connection, detach
time. This helps account for
the differences in each of the Cinder backends, which we will
never get all aligned to the same model.  Each array/backend
handles attachments different and only the driver knows if it's
safe to remove the target or not, depending on how many
attachments/usages it has
on the host itself.   This is the same thing as a reference
counter, which we don't need, because we have the count in the
attachments table, once we allow setting the host and the
instance_uuid at the same time.

​ Not trying to drag this out or be difficult I promise. But, this 
seems like it is in fact the same problem, and I'm not exactly 
following; if you store the info on the compute side during the attach 
phase, why would you need/want to then create a split brain scenario 
and have Cinder do any sort of tracking on the detach side of things?


Like the earlier posts said, just don't call terminate_connection if 
you don't want to really terminate the connection?  I'm sorry, I'm 
just not following the logic of why Cinder should track this and 
interfere with things?  It's supposed to be providing a service to 
consumers and "do what it's told" even if it's told to do the wrong thing.


The only reason to store the connector information on the Cinder 
attachments side is for the few use cases where there is no way to get 
that connector any more, such as nova evacuate and force detach, where 
Nova has no information about the original attachment because the 
instance is gone.  Cinder backends still need the connector at 
terminate_connection time to find the right exports/targets to remove.


Walt


[openstack-dev] [Nova][Cinder] Multi-attach, determining when to call os-brick's connector.disconnect_volume

2016-02-09 Thread Walter A. Boring IV

Hey folks,
   One of the challenges we have faced with the ability to attach a 
single volume to multiple instances is how to correctly detach that 
volume.  The issue is a bit complex, but I'll try to explain the 
problem, and then describe one approach to solving one part of the 
detach puzzle.


Problem:
  When a volume is attached to multiple instances on the same host, 
there are 2 scenarios to consider.


  1) Some Cinder drivers export a new target for every attachment on a 
compute host.  This means that you will get a new unique volume path on 
a host, which is then handed off to the VM instance.


  2) Other Cinder drivers export a single target for all instances on a 
compute host.  This means that every instance on a single host, will 
reuse the same host volume path.



When a user issues a request to detach a volume, the workflow boils down 
to first calling os-brick's connector.disconnect_volume before calling 
Cinder's terminate_connection and detach. disconnect_volume's job is to 
remove the local volume from the host OS and close any sessions.


There is no problem under scenario 1.  Each disconnect_volume only 
affects the attached volume in question and doesn't affect any other VM 
using that same volume, because they are using a different path that has 
shown up on the host.  It's a different target exported from the Cinder 
backend/array.


The problem comes under scenario 2, where that single volume is shared 
for every instance on the same compute host.  Nova needs to be careful 
and not call disconnect_volume if it's a shared volume, otherwise the 
first disconnect_volume call will nuke every instance's access to that 
volume.



Proposed solution:
  Nova needs to determine if the volume that's being detached is a 
shared or non shared volume.  Here is one way to determine that.


  Every Cinder volume has a list of its attachments.  Each of those 
attachments contains the instance_uuid that the volume is attached 
to.  I presume Nova can find which of the volume attachments are on the 
same host.  Then Nova can call Cinder's initialize_connection for each 
of those attachments to get the target's connection_info dictionary.  
This connection_info dictionary describes how to connect to the target 
on the cinder backend.  If the target is shared, then the 
connection_info dicts for each attachment on that host will be 
identical.  Then Nova would know that it's a shared target, and would 
only call os-brick's disconnect_volume if it's the last attachment on 
that host.  I think at most 2 calls to Cinder's initialize_connection 
would suffice to determine whether the volume is a shared target.  This 
would only need to be done if the volume is multi-attach capable and if 
there is more than 1 attachment on the same host where the detach is 
happening.
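
A rough sketch of that check in Python follows; the helper functions named
here (attachment_is_on_host, connector_for) are hypothetical stand-ins for
whatever Nova would actually use, so treat this as an illustration of the
idea rather than proposed code:

    # Sketch of the proposed shared-target check -- not merged Nova code.

    def attachment_is_on_host(attachment, host):
        # Hypothetical helper: does this attachment live on this compute host?
        return attachment.get('host_name') == host


    def connector_for(attachment):
        # Hypothetical helper: the os-brick connector for this host.
        return {'host': attachment.get('host_name')}


    def volume_target_is_shared_on_host(cinderclient, volume, this_host):
        """Return True if other attachments of this volume on this host share
        the same target, in which case disconnect_volume must be skipped."""
        host_attachments = [a for a in volume.attachments
                            if attachment_is_on_host(a, this_host)]
        if len(host_attachments) <= 1:
            # Last (or only) attachment on this host: safe to disconnect.
            return False
        # Ask Cinder for the connection_info of two attachments on this host.
        # Identical dicts mean the backend exports one shared target.
        infos = [cinderclient.volumes.initialize_connection(volume.id,
                                                            connector_for(a))
                 for a in host_attachments[:2]]
        return infos[0] == infos[1]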


Walt



Re: [openstack-dev] [Nova][Cinder] Multi-attach, determining when to call os-brick's connector.disconnect_volume

2016-02-09 Thread Walter A. Boring IV

On 02/09/2016 02:04 PM, Ildikó Váncsa wrote:

Hi Walt,

Thanks for starting this thread. It is a good summary of the issue and the 
proposal also looks feasible to me.

I have a quick, hopefully not too wild idea based on the earlier discussions we 
had. We were considering earlier to store the target identifier together with 
the other items of the attachment info. The problem with this idea is that when 
we call initialize_connection from Nova, Cinder does not get the relevant 
information, like instance_id, to be able to do this. This means we cannot do 
that using the functionality we have today.

My idea here is to extend the Cinder API so that Nova can send the missing 
information after a successful attach. Nova should have all the information 
including the 'target', which means that it could update the attachment 
information through the new Cinder API.
I think what we need to do is allow the connector to be passed at 
os-attach time.  Then Cinder can save it in the attachment's table entry.


We will also need a new cinder API to allow that attachment to be 
updated during live migration, or the connector for the attachment will 
get stale and incorrect.


Walt


It would mean that when we request for the volume info from Cinder at detach 
time the 'attachments' list would contain all the required information for each 
attachments the volume has. If we don't have the 'target' information because 
of any reason we can still use the approach described below as fallback. This 
approach could even be used in case of live migration I think.

The Cinder API extension would need to be added with a new microversion to 
avoid problems with older Cinder versions talking to new Nova.

The advantage of this direction is that we can reduce the round trips to Cinder 
at detach time. The round trip after a successful attach should not have an 
impact on the normal operation as if that fails the only issue we have is we 
need to use the fall back method to be able to detach properly. This would 
still affect only multiattached volumes, where we have more than one 
attachments on the same host. By having the information stored in Cinder as 
well we can also avoid removing a target when there are still active 
attachments connected to it.

What do you think?

Thanks,
Ildikó



-Original Message-
From: Walter A. Boring IV [mailto:walter.bor...@hpe.com]
Sent: February 09, 2016 20:50
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Nova][Cinder] Multi-attach, determining when to call 
os-brick's connector.disconnect_volume

Hey folks,
 One of the challenges we have faced with the ability to attach a single 
volume to multiple instances, is how to correctly detach that
volume.  The issue is a bit complex, but I'll try and explain the problem, and 
then describe one approach to solving one part of the
detach puzzle.

Problem:
When a volume is attached to multiple instances on the same host.
There are 2 scenarios here.

1) Some Cinder drivers export a new target for every attachment on a 
compute host.  This means that you will get a new unique
volume path on a host, which is then handed off to the VM instance.

2) Other Cinder drivers export a single target for all instances on a 
compute host.  This means that every instance on a single host, will
reuse the same host volume path.


When a user issues a request to detach a volume, the workflow boils down to 
first calling os-brick's connector.disconnect_volume
before calling Cinder's terminate_connection and detach. disconnect_volume's 
job is to remove the local volume from the host OS
and close any sessions.

There is no problem under scenario 1.  Each disconnect_volume only affects the 
attached volume in question and doesn't affect any
other VM using that same volume, because they are using a different path that 
has shown up on the host.  It's a different target
exported from the Cinder backend/array.

The problem comes under scenario 2, where that single volume is shared for 
every instance on the same compute host.  Nova needs
to be careful and not call disconnect_volume if it's a shared volume, otherwise 
the first disconnect_volume call will nuke every
instance's access to that volume.


Proposed solution:
Nova needs to determine if the volume that's being detached is a shared or 
non shared volume.  Here is one way to determine that.

Every Cinder volume has a list of its attachments.  In those attachments 
it contains the instance_uuid that the volume is attached to.
I presume Nova can find which of the volume attachments are on the same host.  
Then Nova can call Cinder's initialize_connection for
each of those attachments to get the target's connection_info dictionary.
This connection_info dictionary describes how to connect to the target on the 
cinder backend.  If the target is shared, then each of the
connection_info dicts for each attachment on that host will be identical.
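
For illustration, a minimal sketch of that check, assuming Nova keeps the
connection_info dict it got back from initialize_connection for each
attachment it has on this host (the helper name is just a placeholder):

    def volume_target_is_shared(connection_infos):
        # connection_infos: the connection_info dicts Cinder returned for
        # every attachment of this volume on this compute host.
        if len(connection_infos) < 2:
            return False
        first = connection_infos[0]
        # Identical dicts mean the backend exported a single target for the
        # whole host (scenario 2), so disconnect_volume must only be called
        # when the last attachment goes away.
        return all(info == first for info in connection_infos[1:])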

Re: [openstack-dev] [Nova][Cinder] Cleanly detaching volumes from failed nodes

2016-02-04 Thread Walter A. Boring IV
My plan was to store the connector object at attach_volume time.   I was 
going to add an additional column to the cinder volume attachment table 
that stores the connector that came from nova.   The problem is live 
migration. After live migration the connector is out of date.  Cinder 
doesn't have an existing API to update an attachment.  That will have to be 
added, so that the connector info can be updated.

We have needed this for force detach for some time now.

It's on my list, but most likely not until N, or at least not until the 
microversions land in Cinder.
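
For reference, what would get stored is roughly the dict os-brick builds on
the compute host; the values below are just examples and the exact fields
vary by transport:

    connector = {
        'host': 'compute-01',
        'ip': '192.168.10.5',
        'initiator': 'iqn.1993-08.org.debian:01:abcdef',  # iSCSI IQN
        'wwpns': ['50014380242b9751'],                     # FC port WWNs
        'wwnns': ['50014380242b9750'],                     # FC node WWNs
        'multipath': False,
        'platform': 'x86_64',
        'os_type': 'linux2',
    }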

Walt



Hi all,
I was wondering if there was any way to cleanly detach volumes from 
failed nodes.  In the case where the node is up nova-compute will call 
Cinder's terminate_connection API with a "connector" that includes 
information about the node - e.g., hostname, IP, iSCSI initiator name, 
FC WWPNs, etc.
If the node has died, this information is no longer available, and so 
the attachment cannot be cleaned up properly.  Is there any way to 
handle this today?  If not, does it make sense to save the connector 
elsewhere (e.g., DB) for cases like these?


Thanks,
Avishay

--
*Avishay Traeger, PhD*
/System Architect/

Mobile:+972 54 447 1475
E-mail: avis...@stratoscale.com 






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Cinder] Nominating Patrick East to Cinder Core

2016-02-01 Thread Walter A. Boring IV
+1 from me.   Patrick has done a great job the last several releases and 
his dedication to making Cinder better has been very visible.



Patrick has been a strong contributor to Cinder over the last few releases, 
both with great code submissions and useful reviews. He also participates 
regularly on IRC helping answer questions and providing valuable feedback.

I would like to add Patrick to the core reviewers for Cinder. Per our 
governance process [1], existing core reviewers please respond with any 
feedback within the next five days. If there are no objections, I will add 
Patrick to the group by February 3rd.

Thanks!

Sean (smcginnis)

[1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [Cinder][DRBD] questions about pep8/flake8 etc.

2015-12-21 Thread Walter A. Boring IV

On 12/21/2015 06:40 AM, Philipp Marek wrote:

Hi everybody,

in the current patch https://review.openstack.org/#/c/259973/1 the test
script needs to use a lot of the constant definitions of the backend driver
it's using (DRBDmanage).

As the DRBDmanage libraries need not be installed on the CI nodes, I'm
providing a minimum of upstream files, accumulated in a separate directory
- they get imported and "fixed" to the expected location, so that the
driver that should be tested runs as if DRBDmanage is installed.


My problem is now that the upstream project doesn't accept all the pep8
conventions like OpenStack does; so the CI run
 
http://logs.openstack.org/73/259973/1/check/gate-cinder-pep8/5032b16/console.html
gives a lot of messages like "E221 multiple spaces before operator" and
similar. (It even crashes during AST parsing ;)


So, I can see these options now:

   * Make pep8 ignore these files - they're only used by one test script,
 and are never used in production anyway.
 + Simple
 + New upstream files can simply be dropped in as needed
 - bad example?
   
   * Reformat the files to conform to pep8

 - some work for every new version that needs to be incorporated
 - can't be compared for equality with upstream any more
 - might result in mismatches later on, ie. production code uses
   different values from test code
I would suggest this option.  We don't want to allow code in Cinder that 
bypasses the pep8 checks.  Since you are trying to get DRBD support into 
Cinder, it falls upon you to make sure the code you are submitting 
follows the same standards as the rest of the project.


Walt



   * Throw upstream files away, and do "manual" fakes
 - A lot of work
 - Work needed for every new needed constant
 - lots of duplicated code
 - might result in mismatches later on, ie. production code uses
   different values from test code
 + whole checkout still "clean" for pep8

   * Require DRBDmanage to be installed
 + uses same values as upstream and production
 - Need to get it upstream into PyPi
 - Meaning delay
 - delay for every new release of DRBDmanage
 - Might not even be compatible with every used distribution/CI
   out there


I would prefer the first option - make pep8 ignore these files.
But I'm only a small player here, what's the opinion of the Cinder cores?
Would that be acceptable?


Regards,

Phil

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder]Do we have project scope for cinder?

2015-11-30 Thread Walter A. Boring IV
As a side note to the DR discussion here, there was a session in Tokyo 
that talked about a new

DR project called Smaug.   You can see their mission statement here:
https://launchpad.net/smaug

https://github.com/openstack/smaug

There is another service in the making called DRagon:
https://www.youtube.com/watch?v=upCzuFnswtw
http://www.slideshare.net/AlonMarx/dragon-and-cinder-v-brownbag-54639869

Yes, that's two DR-like services starting in OpenStack that are related to 
dragons.


Walt



Sean and Michal,

In fact, there is a reason that I ask this question. Recently I have been
wondering whether Cinder should provide Disaster Recovery capabilities for
storage resources, like volumes. I mean we have volume
replication v1, but for DR, especially DR between two independent
OpenStack sites (production and DR site), I feel we still need more
features to support it, for example consistency groups for replication,
etc. I'm not sure if those features belong in Cinder or in some new
project for DR.

BR
WangHao

2015-11-30 3:02 GMT+08:00 Sean McGinnis :

On Sun, Nov 29, 2015 at 11:44:19AM +, Dulko, Michal wrote:

On Sat, 2015-11-28 at 10:56 +0800, hao wang wrote:

Hi guys,

I notice nova have a clarification of project scope:
http://docs.openstack.org/developer/nova/project_scope.html

I want to find cinder's, but failed,  do you know where to find it?

It's important to let developers know what feature should be
introduced into cinder and what shouldn't.

BR
Wang Hao

I believe the Nova team needed to formalize the scope to have an explanation
for all the "this doesn't belong in Nova" comments on feature requests.
Does Cinder suffer from similar problems? From my perspective it's not
critically needed.

I agree. I haven't seen a need for something like that with Cinder. Wang
Hao, is there a reason you feel you need that?


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [cinder][nova]Move encryptors to os-brick

2015-11-20 Thread Walter A. Boring IV

On 11/20/2015 10:19 AM, Daniel P. Berrange wrote:

On Fri, Nov 20, 2015 at 02:45:15PM +0200, Duncan Thomas wrote:

Brick does not have to take over the decisions in order to be a useful
repository for the code. The motivation for this work is to avoid having
the dm setup code copied wholesale into cinder, where it becomes difficult
to keep in sync with the code in nova.

Cinder needs a copy of this code since it is on the data path for certain
operations (create from image, copy to image, backup/restore, migrate).

A core goal of using volume encryption in Nova is to provide protection for
tenant data from a malicious storage service, i.e. if the decryption key
is only ever used by Nova on the compute node, then cinder only ever sees
ciphertext, never plaintext.  Thus if cinder is compromised, then it can
not compromise any data stored in any encrypted volumes.

If cinder is looking to get access to the dm-setup code, this seems to
imply that cinder will be getting access to the plaintext data, which
feels to me like it de-values the volume encryption feature somewhat.

I'm fuzzy on the details of just what code paths cinder needs to be
able to convert from plaintext to ciphertext or vice versa, but in
general I think it is desirable if we can avoid any such operation
in cinder, and keep it so that only Nova compute nodes ever see the
decrypted data.
Being able to limit the number of points where an encrypted volume can 
be used unencrypted is obviously a good goal.
Unfortunately, it's entirely unrealistic to expect Cinder to never have 
that access.
Cinder currently needs access to write data to volumes that are 
encrypted for several operations:


1) copy volume to image
2) copy image to volume
3) backup

Cinder already has the ability to do this for encrypted volumes. What 
Lisa Li's patch is trying to provide
is a single point of shared code for doing encryptors.  os-brick seems 
like a reasonable place to put this
as it could be shared with other services that need to do the same 
thing, including Nova, if desired.


There is also ongoing work to support attaching Cinder volumes to bare 
metal nodes.  The client that does the
attaching to a bare metal node, will be using os-brick connectors to do 
the volume attach/detach.  So, it makes
sense from this perspective as well that the encryptor code lives in 
os-brick.


I'm ok with the idea of moving common code into os-brick.  This was the 
main reason os-brick was created

to begin with.
Walt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Google Hangout recording of volume manger locks

2015-10-07 Thread Walter A. Boring IV

Hello folks,
  I just wanted to post up the YouTube link for the video hangout that 
the Cinder team just had.


We had a good discussion about the local file locks in the volume 
manager and how it affects the interaction
of Nova with Cinder in certain cases.  We are trying to iron out how to 
proceed ahead with removing the
volume manager locks in a way that doesn't break the world.  The hope of 
this is to eventually allow Cinder

to run active/active HA c-vol services.

The Youtube.com link for the recording is here on my personal account:
https://www.youtube.com/watch?v=D_iXpNcWDv8


We discussed several things in the meeting:
* The etherpad that was used as a basis for discussion:
https://etherpad.openstack.org/p/cinder-active-active-vol-service-issues
* What to do with the current volume manager locks and how do we remove 
them?

* How do we move forward with checking 'ING' states for volume actions?
* What is the process for moving forward with the compare/swap patches 
that Gorka has in gerrit.



Action Items:
*  We agreed to take a deeper look into the main compare/swap changes 
that Gorka has in gerrit and see if we can get those to land.

  * https://review.openstack.org/#/c/205834/
  * https://review.openstack.org/#/c/218012/
* Gorka is to update the patches and add the references to the 
specs/blueprints for reference.
* Gorka is going to post up follow up patch sets to test the removal of 
each lock and see if it is sufficient to remove each individual lock.



Follow up items:
* Does it make sense for the community to create an OpenStack Cinder 
YouTube account, where the PTL owns the account, and we run
each of our Google Hangouts through that?  The advantage of this is to 
allow the community to participate openly, as well as record each of
our Cinder hangouts for folks that can't attend the live event.  We 
could use this account for the meetups as well as the conference sessions,
and have them all recorded and saved in one spot.


Cheers,
Walt


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] The Absurdity of the Milestone-1 Deadline for Drivers

2015-09-28 Thread Walter A. Boring IV

On 09/28/2015 10:29 AM, Ben Swartzlander wrote:
I've always thought it was a bit strange to require new drivers to 
merge by milestone 1. I think I understand the motivations of the 
policy. The main motivation was to free up reviewers to review "other 
things" and this policy guarantees that for 75% of the release 
reviewers don't have to review new drivers. The other motivation was 
to prevent vendors from turning up at the last minute with crappy 
drivers that needed a ton of work, by encouraging them to get started 
earlier, or forcing them to wait until the next cycle.


I believe that the deadline actually does more harm than good.
But harm to whom?   It certainly puts the pressure on driver developers 
to make sure they get involved in the Cinder community and are aware of 
when the deadlines are.
I believe it simply shifts the time in which drivers get into tree. My 
$0.02 is that if a new driver developer misses the 
milestone, then they have the rest of the release to work on getting CI 
up and running and ready to go for the next release.   I'm not sure I 
see the harm to the Cinder community or the project.   It's a deadline 
that a driver developer has to be aware of and compensate for.  We've 
had how many drivers land in the last 2 releases using this 
requirement?  I believe it's somewhere around 20+ drivers.




First of all, to those that don't want to spend time on driver 
reviews, there are other solutions to that problem. Some people do 
want to review the drivers, and those who don't can simply ignore them 
and spend time on what they care about. I've heard people who spend 
time on driver reviews say that the milestone-1 deadline doesn't mean 
they spend less time reviewing drivers overall, it just all gets 
crammed into the beginning of each release. It should be obvious that 
setting a deadline doesn't actually affect the amount of reviewer 
effort, it just concentrates that effort.


The argument about crappy code is also a lot weaker now that there are 
CI requirements which force vendors to spend much more time up front 
and clear a much higher quality bar before the driver is even 
considered for merging. Drivers that aren't ready for merge can always 
be deferred to a later release, but it seems weird to defer drivers 
that are high quality just because they're submitted during milestones 
2 or 3.
I disagree here.  CI doesn't prevent you from having a crappy driver.  
Your driver just needs to pass CI tests.  CI ensures that your driver 
works, but doesn't ensure that it
really meets the core reviewers' standards for code.  Do we care?  I 
think we do.  Having drivers talk directly to the db, or FC drivers 
missing the FCZM decorators for auto zoning, etc.




All the the above is just my opinion though, and you shouldn't care 
about my opinions, as I don't do much coding and reviewing in Cinder. 
There is a real reason I'm writing this email...


In Manila we added some major new features during Liberty. All of the 
new features merged in the last week of L-3. It was a nightmare of 
merge conflicts and angry core reviewers, and many contributors worked 
through a holiday weekend to bring the release together. While asking 
myself how we can avoid such a situation in the future, it became 
clear to me that bigger features need to merge earlier -- the earlier 
the better.


When I look at the release timeline, and ask myself when is the best 
time to merge new major features, and when is the best time to merge 
new drivers, it seems obvious that *features* need to happen early and 
drivers should come *later*. New major features require FAR more 
review time than new drivers, and they require testing, and even after 
they merge they cause merge conflicts that everyone else has to deal 
with. Better that that works happens in milestones 1 and 2 than right 
before feature freeze. New drivers can come in right before feature 
freeze as far as I'm concerned. Drivers don't cause merge conflicts, 
and drivers don't need huge amounts of testing (presumably the CI 
system ensure some level of quality).


It also occurs to me that new features which require driver 
implementation (hello replication!) *really* should go in during the 
first milestone so that drivers have time to implement the feature 
during the same release.


So I'm asking the Cinder core team to reconsider the milestone-1 
deadline for drivers, and to change it to a deadline for new major 
features (in milestone-1 or milestone-2), and to allow drivers to 
merge whenever*. This is the same pitch I'll be making to the Manila 
core team. I've been considering this idea for a few weeks now but I 
wanted to wait until after PTL elections to suggest it here.


-Ben Swartzlander


* I don't actually care if/when there is a driver deadline, what I 
care about is that reviewers are free during M-1 to work on 
reviewing/testing of features. The easiest way to achieve that seems 
to be moving the driver deadline.
I'm 

Re: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?

2015-09-24 Thread Walter A. Boring IV

>> ​To be honest this is probably my fault, AZ's were pulled in as part of
>> the nova-volume migration to Cinder and just sort of died.  Quite
>> frankly I wasn't sure "what" to do with them but brought over the
>> concept and the zones that existing in Nova-Volume.  It's been an issue
>> since day 1 of Cinder, and as you note there are little hacks here and
>> there over the years to do different things.
>>
>> I think your question about whether they should be there at all or not
>> is a good one.  We have had some interest from folks lately that want to
>> couple Nova and Cinder AZ's (I'm really not sure of any details or
>> use-cases here).
>>
>> My opinion would be until somebody proposes a clear use case and need
>> that actually works that we consider deprecating it.
>>
>> While we're on the subject (kinda) I've never been a very fond of having
>> Nova create the volume during boot process either; there's a number of
>> things that go wrong here (timeouts almost guaranteed for a "real"
>> image) and some things that are missing last I looked like type
>> selection etc.
>>
>> We do have a proposal to talk about this at the Summit, so maybe we'll
>> have a descent primer before we get there :)
>>
>> Thanks,
>>
>> John
>>
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> Heh, so when I just asked in the cinder channel if we can just
> deprecate nova boot from volume with source=(image|snapshot|blank)
> (which automatically creates the volume and polls for it to be
> available) and then add a microversion that doesn't allow it, I was
> half joking, but I see we're on the same page.  This scenario seems to
> introduce a lot of orchestration work that nova shouldn't necessarily
> be in the business of handling.
I tend to agree with this.   I believe the ability to boot from a volume
with source=image was just a convenience thing and shortcut for users. 
As John stated, we know that we have issues with large images and/or
volumes here with timeouts.  If we want to continue to support this,
then the only way to make sure we don't run into timeout issues is to
look into a callback mechanism from Cinder to Nova, but that seems
awfully heavy-handed, just to continue to support Nova orchestrating
this.   The good thing about the Nova and Cinder clients/APIs is that
anyone can write a quick python script to do the orchestration
themselves, if we want to deprecate this.  I'm all for deprecating this.
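
Just to illustrate, a rough sketch of the kind of script a user could run
themselves, assuming already-authenticated v2 cinderclient and novaclient
handles (no error handling; names are placeholders, not a proposed API):

    import time

    def boot_from_image_via_volume(cinder, nova, name, image_id,
                                   flavor_id, size_gb):
        # Create the volume from the image and poll for it ourselves,
        # instead of relying on Nova's timeout-prone internal wait.
        vol = cinder.volumes.create(size_gb, imageRef=image_id)
        while vol.status not in ('available', 'error'):
            time.sleep(5)
            vol = cinder.volumes.get(vol.id)
        if vol.status == 'error':
            raise RuntimeError('volume creation failed')
        bdm = [{'uuid': vol.id, 'source_type': 'volume',
                'destination_type': 'volume', 'boot_index': 0,
                'delete_on_termination': False}]
        return nova.servers.create(name, image=None, flavor=flavor_id,
                                   block_device_mapping_v2=bdm)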

Walt


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] repairing so many OpenStack components writing configuration files in /usr/etc

2015-09-24 Thread Walter A. Boring IV
Hi Thomas,
  I can't speak to the other packages, but as far as os-brick goes,  the
/usr/local/etc stuff is simply for the embedded
rootwrap filter that os-brick is currently exporting.   You can see it here:
https://github.com/openstack/os-brick/blob/master/etc/os-brick/rootwrap.d/os-brick.filters
https://github.com/openstack/os-brick/blob/master/setup.cfg#L30-L31

The intention was to have devstack pull that file and dump it into
/etc/nova and /etc/cinder
for usage, but this has several problems.   We have plans to talk about
a workable solution in Tokyo, which will most likely embed the
rootwrap filter file into the package itself and it won't go into
/usr/local/etc.  

As of the current release of os-brick, no project is currently using
that rootwrap filter file directly. 
For Liberty, we decided to just manually update the Nova and Cinder
filter files for all of
the entries. 


Walt

On 09/24/2015 07:25 AM, Thomas Goirand wrote:
> Hi,
>
> It's about the 3rd time just this week, that I'm repairing an OpenStack
> component which is trying to write config files in /usr/etc. Could this
> non-sense stop please?
>
> FYI, this time, it's with os-brick... but it happened with so many
> components already:
> - bandit (with an awesome reply from upstream to my launchpad bug,
> basically saying he doesn't care about downstream distros...)
> - neutron
> - neutron-fwaas
> - tempest
> - lots of Neutron drivers (ie: networking-FOO)
> - pycadf
> - and probably more which I forgot.
>
> Yes, I can repair things at the packaging level, but I just hope I wont
> have to do this for each and every OpenStack component, and I suppose
> everyone understands how frustrating it is...
>
> I also wonder where this /usr/etc is coming from. If it was
> /usr/local/etc, I could somehow get it. But here... ?!?
>
> Cheers,
>
> Thomas Goirand (zigo)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> .
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [CINDER] [PTL Candidates] Questions

2015-09-21 Thread Walter A. Boring IV
>
> 1. Do you actually have the time to spend to be PTL
>
> I don't think many people realize the time commitment. Between being
> on top of reviews and having a pretty consistent view of what's going
> on and in process; to meetings, questions on IRC, program management
> type stuff etc.  Do you feel you'll have the ability for PTL to be
> your FULL Time job?  Don't forget you're working with folks in a
> community that spans multiple time zones.
The short answer to this is yes.   Prior to even putting up my candidacy
I spoke with my management and informed them of what would be involved
with being PTL for Cinder, and that meant it was an upstream job.  I've
been working on Cinder for 3 years now and have seen the amount of time
that you and Mike have spent on the project, and it's significant to say
the least.   The wiki has a good guide for PTL candidates here:
https://wiki.openstack.org/wiki/PTL_Guide.   It's a decent start and
more of a "PTL for dummies" guide and is by no means everything a PTL is
and has to do.  Being a PTL means more than just attending meetings,
doing reviews, and communicating.  It means being the lead evangelist 
and ambassador for Cinder.   As PTL of a project, it's also important
not to forget about the future of the community and encourage new
members to contribute code to Cinder core itself, to help make Cinder a
better project.  For example, the recent additions by Kendall Nelson to
work on the cinder.conf.sample file
(https://review.openstack.org/#/c/219700).  The patch itself might have
more follow up work, as noted in the review, but she was very responsive
and was on top of the code to try and get it to land.  Sean, John and
myself all helped with reviews on that patch and worked together as a
team to help Kendall with her efforts.  We need more new contributors
like her.  The more inclusive and encouraging of new members the
community is, the better.   I remember starting out working on Cinder back
in the Grizzly time frame and I also remember John, as the PTL, being
very helpful and encouraging of my efforts to learn how to write a
driver and how to contribute in general.  It was a very welcoming
experience at the time.  That is the type of PTL I'd like to be to help
repay the community.
>
> 2. What are your plans to make the Cinder project as a core component
> better (no... really, what specifically and how does it make Cinder
> better)?
>
> Most candidates are representing a storage vendor naturally.  Everyone
> says "make Cinder better"; But how do you intend to balance vendor
> interest and the interest of the general project?  Where will your
> focus in the M release be?  On your vendor code or on Cinder as a
> whole?  Note; I'm not suggesting that anybody isn't doing the "right"
> thing here, I'm just asking for specifics.
  I believe I detailed some of these in my candidacy letter.   I firmly
believe that there are some Nova and Cinder interactions that need to
get fixed.  This will be a good first step along the way to allowing
active/active c-vol services.   Making Cinder better means not only
guiding the direction of features and fixes, but it also means
encouraging the community of driver developers to get involved and
informed about Cinder core itself.   We need a Cinder driver developer 
how-to guide.  There are some items for driver developers that they need 
to be aware of, and it would be great to be able to point folks to that
place.  For example, Fibre Channel drivers need to use the Fibre Channel
Zone Manager utils decorators during initialize_connection and
terminate_connection time.   Also, during terminate_connection time, a
driver needs to not always return the initiator_target_map.   Where is
that documented?  It's not, and it's only being caught in reviews.  The
trick as always is keeping that guide relevant with updates. 

   I've been pretty fortunate at HP to be able to convince my
management that working on Cinder-specific issues (multi-attach, os-brick,
live migration, and Nova <--> Cinder interactions, to name a few) is a
priority.   My team at HP isn't just responsible for maintaining
3PAR/LeftHand drivers to Cinder.  We are also involved in making Cinder
a more robust, scalable project, so that we can make a better Helion
product for our customers.  Helion is OpenStack and how we work on
Helion is to first and foremost work on OpenStack Cinder and Nova.   So,
from my perspective. HP's interests allow me to work on Cinder core
first and foremost. 

>
> 3. ​Why do you want to be PTL for Cinder?
>
> Seems like a silly question, but really when you start asking that
> question the answers can be surprising and somewhat enlightening. 
> There's different motivators for people, what's yours?  By the way,
> "my employer pays me a big bonus if I win" is a perfectly acceptable
> answer in my opinion, I'd prefer honesty over anything else.  You may
> not get my vote, but you'd get respect.
I've been working on various Open Source projects 

[openstack-dev] [Cinder] PTL Candidacy

2015-09-16 Thread Walter A. Boring IV

Cinder community,

I am announcing my candidacy for Cinder PTL for the Mitaka release.

Cinder is a fundamental piece of the puzzle for the success
of OpenStack.  I've been lucky enough to work on Cinder since the
Grizzly release cycle.  The Cinder community has grown every release,
and we've gotten a lot of great features implemented as well as many
new drivers.  We've instituted a baseline requirement for third party
CI, which is critical to the quality of Cinder.  I believe this goes
a long way to proving to deployers that Cinder is dedicated to building
a quality product.

I believe the single best component of the project, is the diverse
community itself.  We have people from all over the world helping
Cinder grow.  We have companies that compete directly with each other,
in the marketplace for customers, working together to solve complex
problems.

I would like to continue to encourage more driver developers to get
involved in Cinder core features.   This is the future of the
community itself and the lifeblood of Cinder.  We also need to get more
active in Nova to ensure that the interactions are stable.

The following is a list of a few areas of focus that I would
like to encourage the community to consider over the next release.

* Solidify any milestone deadlines early in the release
* Iron out the Nova <--> Cinder interactions
* Get active-active c-vol services working
* Get driver bug fixes into previous releases
* Continue the stabilization of the 3rd party CI system.
* Support any efforts to integrate with Ironic


There is always a long list of cool stuff to work on and issues to fix
in Cinder, and the more participation we have with Cinder core the better.
We have a strong and vibrant community and I look forward to working on
Cinder for many releases ahead.

Thank you for considering me.

Walter A. Boring IV (hemna)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] PTL Non-Candidacy

2015-09-14 Thread Walter A. Boring IV

Thanks for your leadership and service Mike.   You've done a great job!

Walt

Hello all,

I will not be running for Cinder PTL this next cycle. Each cycle I ran
was for a reason [1][2], and the Cinder team should feel proud of our
accomplishments:

* Spearheading the Oslo work to allow *all* OpenStack projects to have
their database being independent of services during upgrades.
* Providing quality to OpenStack operators and distributors with over
60 accepted block storage vendor drivers with reviews and enforced CI
[3].
* Helping other projects with third party CI for their needs.
* Being a welcoming group to new contributors. As a result we grew greatly [4]!
* Providing documentation for our work! We did it for Kilo [5], and I
was very proud to see the team has already started doing this on their
own to prepare for Liberty.

I would like to thank this community for making me feel accepted in
2010. I would like to thank John Griffith for starting the Cinder
project, and empowering me to lead the project through these couple of
cycles.

With the community's continued support I do plan on continuing my
efforts, but focusing cross project instead of just Cinder. The
accomplishments above are just some of the things I would like to help
others with to make OpenStack as a whole better.


[1] - 
http://lists.openstack.org/pipermail/openstack-dev/2014-September/046788.html
[2] - http://lists.openstack.org/pipermail/openstack-dev/2015-April/060530.html
[3] - 
http://superuser.openstack.org/articles/what-you-need-to-know-about-openstack-cinder
[4] - http://thing.ee/cinder/active_contribs.png
[5] - https://wiki.openstack.org/wiki/ReleaseNotes/Kilo#Key_New_Features_7

--
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Using storage drivers outside of openstack/cinder

2015-09-08 Thread Walter A. Boring IV

Hey Tony,
  This has been a long running pain point/problem for some of the 
drivers in Cinder.
As a reviewer, I try and -1 drivers that talk directly to the database 
as I don't think
drivers *should* be doing that.   But, for some drivers, unfortunately, 
in order to
implement the features, they currently need to talk to the DB. :(  One 
of the new
features in Cinder, namely consistency groups, has a bug that basically 
requires
drivers to talk to the DB to fetch additional data.  There are plans to 
remedy this
problem in the M release of Cinder.   For other DB calls in drivers, it's a
case-by-case basis for removing the call, and it's not entirely obvious how
to do that at the
current time.   It's a topic that has come up now and again within the 
community,

and I, for one, would like to see the DB calls removed as well. Feel free to
help contribute!  It's open source after all. :)

Cheers,
Walt

Openstack/Cinder has a wealth of storage drivers to talk to different
storage subsystems, which is great for users of openstack.  However, it
would be even greater if this same functionality could be leveraged
outside of openstack/cinder.  So that other projects don't need to
duplicate the same functionality when trying to talk to hardware.


When looking at cinder and asking around[1] about how one could
potentially do this I find out that is there is quite a bit of coupling
with openstack, like:

* The NFS driver is initialized with knowledge about whether any volumes
exist in the database or not, and if not, can trigger certain behavior
to set permissions, etc.  This means that something other than the
cinder-volume service needs to mimic the right behavior if using this
driver.

* The LVM driver touches the database when creating a backup of a volume
(many drivers do), and when managing a volume (importing an existing
external LV to use as a Cinder volume).

* A few drivers (GPFS, others?) touch the db when managing consistency
groups.

* EMC, Hitachi, and IBM NFS drivers touch the db when creating/deleting
snapshots.


Am I the only one that thinks this would be useful?  What ideas do
people have for making the cinder drivers stand alone, so that everyone
could benefit from this great body of work?

Thanks,
Tony

[1] Special thanks to Eric Harney for the examples of coupling

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Extending attached disks

2015-08-21 Thread Walter A. Boring IV
This isn't as simple as making calls to virsh after an attached volume 
is extended on the cinder backend, especially when multipath is involved.
You need the host system to understand that the volume has changed size 
first, or virsh will really never see it.


For iSCSI/FC volumes you need to issue a rescan on the bus (iSCSI 
session, FC fabric),  and then when multipath is involved, it gets quite 
a bit more complex.


This leads to one of the sticking points with doing this at all: 
when Cinder extends the volume, it needs to tell Nova that it 
has happened, and Nova (or something on the compute node) will have 
to issue the correct commands in sequence for it all to work.


You'll also have to consider multi-attached volumes as well, which adds 
yet another wrinkle.


A good quick source of some of the commands and procedures that are 
needed you can see here:

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Online_Storage_Reconfiguration_Guide/online-logical-units.html


You can see that volumes with multipath require a lot of hand holding 
to be done correctly.  It's non-trivial.  I see this as being very 
error-prone, and any failure
in the multipath process could lead to big problems :(
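
Roughly, the host side for an extended, attached iSCSI volume looks
something like the sketch below.  This is a simplification with no error
handling; the exact commands and sysfs paths vary by distro and transport,
so treat them as illustrative only:

    import glob
    import subprocess

    def rescan_grown_volume(mpath_name=None):
        # 1) Have every iSCSI session re-read LUN capacities.
        subprocess.check_call(['iscsiadm', '-m', 'session', '--rescan'])
        # 2) Poke the SCSI layer so each device node picks up the new size.
        for path in glob.glob('/sys/class/scsi_device/*/device/rescan'):
            with open(path, 'w') as f:
                f.write('1')
        # 3) With multipath involved, the dm map has to be resized as well.
        if mpath_name:
            subprocess.check_call(
                'multipathd -k"resize map %s"' % mpath_name, shell=True)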

Walt

Hi everyone,

Apologises for the duplicate send, looks like my mail client doesn't create 
very clean HTML messages. Here is the message in plain-text. I'll make sure to 
send to the list in plain-text from now on.

In my current pre-production deployment we were looking for a method to live 
extend attached volumes to an instance. This was one of the requirements for 
deployment. I've worked with libvirt hypervisors before so it didn't take long 
to find a workable solution. However I'm not sure how transferable this will be 
across deployment models. Our deployment model is using libvirt for nova and 
ceph for backend storage. This means obviously libvirt is using rdb to connect 
to volumes.

Currently the method I use is:

- Force cinder to run an extend operation.
- Tell Libvirt that the attached disk has been extended.

It would be worth discussing if this can be ported to upstream such that the 
API can handle the leg work, rather than this current manual method.

Detailed instructions.
You will need: volume-id of volume you want to resize, hypervisor_hostname and 
instance_name from instance volume is attached to.

Example: extending volume f9fa66ab-b29a-40f6-b4f4-e9c64a155738 attached to 
instance-0012 on node-6 to 100GB

$ cinder reset-state --state available f9fa66ab-b29a-40f6-b4f4-e9c64a155738
$ cinder extend f9fa66ab-b29a-40f6-b4f4-e9c64a155738 100
$ cinder reset-state --state in-use f9fa66ab-b29a-40f6-b4f4-e9c64a155738

$ssh node-6
node-6$ virsh qemu-monitor-command instance-0012 --hmp info block | grep 
f9fa66ab-b29a-40f6-b4f4-e9c64a155738
drive-virtio-disk1: removable=0 io-status=ok 
file=rbd:volumes-slow/volume-f9fa66ab-b29a-40f6-b4f4-e9c64a155738:id=cinder:key=keyhere==:auth_supported=cephx\\;none:mon_host=10.1.226.64\\:6789\\;10.1.226.65\\:6789\\;10.1.226.66\\:6789
 ro=0 drv=raw encrypted=0 bps=0 bps_rd=0 bps_wr=0 iops=0 iops_rd=0 iops_wr=0

This will get you the disk-id, which in this case is drive-virtio-disk1.

node-6$ virsh qemu-monitor-command instance-0012 --hmp block_resize 
drive-virtio-disk1 100G

Finally, you need to perform a drive rescan on the actual instance and resize 
and extend the file-system. This will be OS specific.

I've tested this a few times and it seems very reliable.

Taylor Bertie
Enterprise Support Infrastructure Engineer

Mobile +64 27 952 3949
Phone +64 4 462 5030
Email taylor.ber...@solnet.co.nz

Solnet Solutions Limited
Level 12, Solnet House
70 The Terrace, Wellington 6011
PO Box 397, Wellington 6140

www.solnet.co.nz

Attention:
This email may contain information intended for the sole use of
the original recipient. Please respect this when sharing or
disclosing this email's contents with any third party. If you
believe you have received this email in error, please delete it
and notify the sender or postmas...@solnetsolutions.co.nz as
soon as possible. The content of this email does not necessarily
reflect the views of Solnet Solutions Ltd.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [cinder] Proposing Gorka Eguileor for core

2015-08-14 Thread Walter A. Boring IV

+1

It gives me great pleasure to nominate Gorka Eguileor for Cinder core.

Gorka's contributions to Cinder core have been much apprecated:

https://review.openstack.org/#/q/owner:%22Gorka+Eguileor%22+project:openstack/cinder,p,0035b6410002dd11

60/90 day review stats:

http://russellbryant.net/openstack-stats/cinder-reviewers-60.txt
http://russellbryant.net/openstack-stats/cinder-reviewers-90.txt

Cinder core, please reply with a +1 for approval. This will be left
open until August 19th. Assuming there are no objections, this will go
forward after voting is closed.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] I have a question about openstack cinder zonemanager driver.

2015-08-14 Thread Walter A. Boring IV
Currently, the FCZM doesn't support this.   Also, from my experience, 
Brocade and Cisco switches don't play well together when managing the 
same fabrics.


Walt


Hi, guys

 I am using Brocade FC switch in my OpenStack environment. I have 
a question about the OpenStack Cinder zone manager driver.


I find that [fc-zone-manager] can only configure one zone driver. If 
I want to use two FC switches from different vendors at the same time.


One is Brocade FC switch, the other one is Cisco FC switch. Is there a 
method or solution configure two FC switch zone driver in one 
cinder.conf ?


I want them both to support Zone Manager.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] Cinder as generic volume manager

2015-07-09 Thread Walter A. Boring IV

I missed this whole thread due to my mail filtering.  Sorry about that.

Anyway, Ivan and I have an open Blueprint here:
https://blueprints.launchpad.net/cinder/+spec/use-cinder-without-nova

That starts the discussion of adding the end to end ability of attaching 
a Cinder

volume to a host using the cinderclient in combination with os-brick.

The idea being, that cinderclient would coordinate the calling of 
existing Cinder's APIs,
to do the attach/detach, along with os-brick, to collect the initiator 
information needed,

as well as the volume discovery after the volume has been exported.

So, for example a user would simply use a new cinderclient shell command 
to initiate

the attachment for an existing cinder volume.
    cinder attach <volume-uuid>

The client would then collect the initiator information, using os-brick, 
and then make the correct Cinder API
calls to ensure the volume is exported.   Then, use os-brick again to 
discover the volume showing up

on the host.

This is basically the same process that Nova does today, but uses 
libvirt volume drivers to discover the volume

after being exported instead of os-brick.

We just haven't had time to write up the cinder-spec and start the work 
on it.
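
As a rough sketch (not the final design), the new shell command would do
something along these lines, given a v2 cinderclient handle and os-brick on
the host; parameter names here are just placeholders:

    from os_brick.initiator import connector

    def attach(cinder, volume_id, root_helper='sudo', my_ip='127.0.0.1'):
        # Gather this host's initiator info (iSCSI IQN, FC WWPNs, ...).
        props = connector.get_connector_properties(root_helper, my_ip,
                                                    False, False)
        cinder.volumes.reserve(volume_id)
        # Ask the backend to export the volume for this initiator.
        conn = cinder.volumes.initialize_connection(volume_id, props)
        # Log in / scan so the exported volume shows up locally.
        brick = connector.InitiatorConnector.factory(
            conn['driver_volume_type'], root_helper)
        device = brick.connect_volume(conn['data'])
        cinder.volumes.attach(volume_id, instance_uuid=None,
                              mountpoint=device['path'],
                              host_name='this-host')
        return device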

Walt

On 07/07/2015 06:25 AM, Jan Safranek wrote:

Hello,

I'd like to (mis-)use Cinder outside of OpenStack, i.e. without Nova.

I can easily create/manage volumes themselves, Cinder API is pretty
friendly here. Now, how can I attach a volume somewhere? Something like
'nova volume-attach server volume', but without Nova and with host
(=anything) instead of server (=virtual machine inside OpenStack).

I guess I am not the first one to ask for such feature, has anyone tried it?

Would it be possible to separate a new library from Nova, which would
just attach the volume to the host where it is runnig and mark the
volume as 'in-use'? How hard would it be?

Jan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cinder as generic volume manager

2015-07-09 Thread Walter A. Boring IV

On 07/09/2015 12:21 PM, Tomoki Sekiyama wrote:

Hi all,

Just FYI, here is a sample script I'm using for testing os-brick which
attaches/detaches the cinder volume to the host using cinderclient and
os-brick:

https://gist.github.com/tsekiyama/ee56cc0a953368a179f9

Running "python attach.py <volume-uuid>" will attach the volume to the host
where it is executed and show the volume path. When you hit the enter key,
the volume is detached.

Note this is skipping reserve or start_detaching APIs so the volume
state is not changed to Attaching or Detaching.

Regards,
Tomoki


Very cool Tomoki.  After chatting with folks in the Cinder IRC channel
it looks like we are going to look at going with something more like what
your script is doing.   We are most likely going to create a separate 
command
line tool that does this same orchestration, using cinder client, a new 
Cinder

API that John Griffith is working on, and os-brick.

Walt



On 7/9/15, 14:44 , Walter A. Boring IV walter.bor...@hp.com wrote:


I missed this whole thread due to my mail filtering.  Sorry about that.

Anyway, Ivan and I have an open Blueprint here:
https://blueprints.launchpad.net/cinder/+spec/use-cinder-without-nova

That starts the discussion of adding the end to end ability of attaching
a Cinder
volume to a host using the cinderclient in combination with os-brick.

The idea being, that cinderclient would coordinate the calling of
existing Cinder's APIs,
to do the attach/detach, along with os-brick, to collect the initiator
information needed,
as well as the volume discovery after the volume has been exported.

So, for example a user would simply use a new cinderclient shell command
to initiate
the attachment for an existing cinder volume.
cinder attach volume uuid

The client would then collect the initiator information, using os-brick,
and then make the correct Cinder API
calls to ensure the volume is exported.   Then, use os-brick again to
discover the volume showing up
on the host.

This is basically the same process that Nova does today, but uses
libvirt volume drivers to discover the volume
after being exported instead of os-brick.

We just haven't had time to write up the cinder-spec and start the work
on it.
Walt

On 07/07/2015 06:25 AM, Jan Safranek wrote:

Hello,

I'd like to (mis-)use Cinder outside of OpenStack, i.e. without Nova.

I can easily create/manage volumes themselves, Cinder API is pretty
friendly here. Now, how can I attach a volume somewhere? Something like
'nova volume-attach server volume', but without Nova and with host
(=anything) instead of server (=virtual machine inside OpenStack).

I guess I am not the first one to ask for such feature, has anyone
tried it?

Would it be possible to separate a new library from Nova, which would
just attach the volume to the host where it is runnig and mark the
volume as 'in-use'? How hard would it be?

Jan


_
_
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [cinder][nova] modeling connection_info with a versioned object in os-brick

2015-06-10 Thread Walter A. Boring IV

On 06/10/2015 08:40 AM, Matt Riedemann wrote:
This is a follow-on to the thread [1] asking about modeling the 
connection_info dict returned from the os-initialize_connection API.


The more I think about modeling that in Nova, the more I think it 
should really be modeled in Cinder with an oslo.versionedobject since 
it is an API contract with the caller (Nova in this case) and any 
changes to the connection_info should require a version change 
(new/renamed/dropped fields).


That got me thinking that if both Cinder and Nova are going to use 
this model, it needs to live in a library, so that would be os-brick 
now, right?


In terms of modeling, I don't think we want an object for each vendor 
specific backend since (1) there are a ton of them so it'd be like 
herding cats and (2) most are probably sharing common attributes.  So 
I was thinking something more along the lines of classes or types of 
backends, like local vs shared storage, fibre channel, etc.


I'm definitely not a storage guy so I don't know the best way to 
delineate all of these, but here is a rough idea so far. [2]  This is 
roughly based on how I see things modeled in the 
nova.virt.libvirt.volume module today, but there isn't a hierarchy there.


os-brick could contain the translation shim for converting the 
serialized connection_info dict into a hydrated ConnectionInfo object 
based on the type (have some kind of factory pattern in os-brick that 
does the translation based on driver_volume_type maybe given some 
mapping).


Then when Nova gets the connection_info back from Cinder 
os-initialize_connection, it can send that into os-brick's translator 
utility and get back the ConnectionInfo object and access the 
attributes from that.


Thoughts?

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2015-June/066450.html
[2] 
https://docs.google.com/drawings/d/1geSKQXz4SqfXllq1Pk5o2YVCycZVf_i6ThY88r9YF4A/edit?usp=sharing




The same can be said about the connector dict that Nova is sending to 
Cinder, which is needed by drivers at initialize_connection time. This is 
also a 'contract' that Cinder drivers rely on to do the export of the volumes.


I'm currently working on the initial patch for Nova to import os-brick 
and use the os-brick initiator connector objects for doing the 
attach/detach calls.


So, until that patch lands, Nova doesn't have access to os-brick. Cinder 
is already using os-brick, so we could add a BP and a patch against 
os-brick to pull in the oslo versioned objects and create the connection 
info (target information) object as well as the connector (initiator 
information) object.
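
To make the idea concrete, a rough sketch of what such an object could look
like in os-brick using oslo.versionedobjects; the field names are
illustrative, not a settled contract:

    from oslo_versionedobjects import base
    from oslo_versionedobjects import fields

    @base.VersionedObjectRegistry.register
    class ConnectionInfo(base.VersionedObject):
        # Bump VERSION whenever a field is added, renamed, or dropped.
        VERSION = '1.0'
        fields = {
            # e.g. 'iscsi', 'fibre_channel', 'rbd', ...
            'driver_volume_type': fields.StringField(),
            # Backend-specific details (target_portal, target_iqn, ...).
            'data': fields.DictOfNullableStringsField(),
        }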



Walt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Some Changes to Cinder Core

2015-05-26 Thread Walter A. Boring IV

+1 for Sean.   He's done a great job doing reviews and getting involved
in core Cinder features.

Walt

On 05/22/2015 04:34 PM, Mike Perez wrote:

This is long overdue, but it gives me great pleasure to nominate Sean
McGinnis for
Cinder core.

Reviews:
https://review.openstack.org/#/q/reviewer:+%22Sean+McGinnis%22,n,z

Contributions:
https://review.openstack.org/#/q/owner:+%22Sean+McGinnis%22,n,z

30/90 day review stats:
http://stackalytics.com/report/contribution/cinder-group/30
http://stackalytics.com/report/contribution/cinder-group/90

As new contributors step up to help in the project, some move onto
other things. I would like to recognize Avishay Traeger for his
contributions, and now
unfortunately departure from the Cinder core team.

Cinder core, please reply with a +1 for approval. This will be left
open until May 29th. Assuming there are no objections, this will go
forward after voting is closed.

--
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] CHAP secret is visible in cinder volume log

2015-04-16 Thread Walter A. Boring IV

Can you please file a defect for this against Cinder and os-brick?
I'll fix it ASAP.


Walt

Hi,

I am wondering why screen-c-vol.log is displaying the CHAP secret.

Logs:

2015-04-16 16:04:23.288 7306 DEBUG oslo_concurrency.processutils 
[req-23c699df-7b21-48d2-ba14-d8ed06642050 
ce8dccba9ccf48fb956060b3e54187a2 4ad219788df049e0b131e17f603d5faa - - 
-] CMD sudo cinder-rootwrap /etc/cinder/rootwrap.conf iscsiadm -m 
node -T iqn.2015-04.acc1.tsm1:acc171fe6fc15fcc4bd4a841594b7876e3df -p 
192.10.44.48:3260 http://192.10.44.48:3260 --op update 
-n*node.session.auth.password -v *** returned:* 0 in 0.088s execute 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:225


Above log hides the secret.

2015-04-16 16:04:23.290 7306 DEBUG cinder.brick.initiator.connector 
[req-23c699df-7b21-48d2-ba14-d8ed06642050 
ce8dccba9ccf48fb956060b3e54187a2 4ad219788df049e0b131e17f603d5faa - - 
-] *iscsiadm ('--op', 'update', '-n', 'node.session.auth.password', 
'-v', u'fakeauthgroupchapsecret')*: stdout= stderr= _run_iscsiadm 
/opt/stack/cinder/cinder/brick/initiator/connector.py:455


However, this one does not hide the secret.

In addition, I find that the CHAP credentials are stored as a plain 
string in the database table (volumes).


I guess these are security risks in the current implementation. Any 
comments ?



Regards,
Yogesh
CloudByte Inc. http://www.cloudbyte.com/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] CHAP secret is visible in cinder volume log

2015-04-16 Thread Walter A. Boring IV

I went ahead and filed a bug, and I have 2 fixes posted up already that
mirror how Nova fixed this issue in the libvirt volume driver for iSCSI.

https://bugs.launchpad.net/os-brick/+bug/1445137
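
For anyone curious, the general idea behind the fixes is simply to scrub secrets before they hit the log; this is a minimal sketch using oslo_utils, not the actual patches, and the sample dict values are made up.

from oslo_utils import strutils

connection_properties = {
    'target_iqn': 'iqn.2015-04.acc1.tsm1:fake',
    'auth_method': 'CHAP',
    'auth_username': 'cinder',
    'auth_password': 'fakeauthgroupchapsecret',
}

# mask_password() rewrites the values of password-like keys (password,
# auth_password, ...) to '***' before the string reaches the debug log.
print(strutils.mask_password(str(connection_properties)))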

Walt

On 04/16/2015 05:54 AM, Yogesh Prasad wrote:

Hi,

I am wondering why screen-c-vol.log is displaying the CHAP secret.

Logs:

2015-04-16 16:04:23.288 7306 DEBUG oslo_concurrency.processutils
[req-23c699df-7b21-48d2-ba14-d8ed06642050
ce8dccba9ccf48fb956060b3e54187a2 4ad219788df049e0b131e17f603d5faa - -
-] CMD sudo cinder-rootwrap /etc/cinder/rootwrap.conf iscsiadm -m
node -T iqn.2015-04.acc1.tsm1:acc171fe6fc15fcc4bd4a841594b7876e3df -p
192.10.44.48:3260 --op update
-n node.session.auth.password -v *** returned: 0 in 0.088s execute
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:225


Above log hides the secret.

2015-04-16 16:04:23.290 7306 DEBUG cinder.brick.initiator.connector
[req-23c699df-7b21-48d2-ba14-d8ed06642050
ce8dccba9ccf48fb956060b3e54187a2 4ad219788df049e0b131e17f603d5faa - -
-] iscsiadm ('--op', 'update', '-n', 'node.session.auth.password',
'-v', u'fakeauthgroupchapsecret'): stdout= stderr= _run_iscsiadm
/opt/stack/cinder/cinder/brick/initiator/connector.py:455


However, this one does not hide the secret.

In addition, I find that the CHAP credentials are stored as a plain 
string in the database table (volumes).


I guess these are security risks in the current implementation. Any 
comments ?



Regards,
Yogesh
CloudByte Inc. http://www.cloudbyte.com/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder]Driver broken

2015-03-25 Thread Walter A. Boring IV
This is a real defect related to the multiattach patch that I worked 
on.   I have posted a fix for your driver.


https://review.openstack.org/#/c/167683/
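
For driver maintainers hit by the same thing, the gist of the fix is just accepting the new optional argument; this is only a hedged sketch (the parameter name follows the multiattach work, but treat the exact signature as an assumption and check the review above for the real one).

from cinder.volume import driver


class ExampleISCSIDriver(driver.ISCSIDriver):
    # Hypothetical driver class, shown only to illustrate the signature change.

    def detach_volume(self, context, volume, attachment=None):
        # Accepting the new optional 'attachment' argument keeps the volume
        # manager's call from failing with a TypeError.
        pass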

Walt

Hi,

Just reported an issue: https://bugs.launchpad.net/cinder/+bug/1436367
Seems to be related to https://review.openstack.org/#/c/85847/ which 
introduced another parameter to be passed to the driver, but our 
driver didn't get updated so detach_volume fails for us.


How can we get this fixed asap?

Thanks,
Eduard



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cinder Third-Party CI: what next? (was Re: [cinder] Request exemption for removal of NetApp FC drivers (no voting CI))

2015-03-23 Thread Walter A. Boring IV

On 03/23/2015 01:50 PM, Mike Perez wrote:

On 12:59 Mon 23 Mar , Stefano Maffulli wrote:

On Mon, 2015-03-23 at 11:43 -0700, Mike Perez wrote:

We've been talking about CI's for a year. We started talking about CI deadlines
in August. If you post a driver for Kilo, it was communicated that you're
required to have a CI by the end of Kilo [1][2][3][4][5][6][7][8]. This
should've been known by your engineers regardless of when you submitted your
driver.

Let's work to fix the CI bits for Liberty and beyond. I have the feeling
that despite your best effort to communicate deadlines, some quite
visible failure has happened.

You've been clear about Cinder's deadlines, I've been trying to add them
also to the weekly newsletter, too.

To the people whose drivers don't have their CI completed in time: what
do you suggest should change so that you won't miss the deadlines in the
future? How should the processes and tool be different so you'll be
successful with your OpenStack-based products?

Just to be clear, here's all the communication attempts made to vendors:

1) Talks during the design summit and the meetup on Friday at the summit.

2) Discussions at the Cinder midcycle meetups in Fort Collins and Austin.

4) Individual emails to driver maintainers. This includes anyone else who has
worked on the driver file according to the git logs.

5) Reminders on the mailing list.

6) Reminders on IRC and Cinder IRC meetings every week.

7) If you submitted a new driver in Kilo, you had the annoying reminder from
reviewers that your driver needs to have a CI by Kilo.

And lastly I have made phone calls to companies that have shown zero responses
to my emails or giving me updates. This is very difficult with larger
companies because you're redirected from one person to another of who their
OpenStack person is.  I've left reminders on given voice mail extensions.

I've talked to folks at the OpenStack foundation to get feedback on my
communication, and was told this was good, and even better than previous
communication to controversial changes.

I expected nevertheless people to be angry with me and blame me regardless of
my attempts to help people be successful and move the community forward.


I completely agree here Mike.   The Cinder cores, PTL, and the rest of the
community have been talking about getting CI as a requirement for quite 
some time now.
It's really not the fault of the Cinder PTL, or core members, that your 
driver got pulled from the Kilo
release, because you had issues getting your CI up and stable in the 
required time frame.
Mike made every possible attempt to let folks know, up front, that the 
deadline was going to happen.


Getting CI in place is critical for the stability of Cinder in general. 
  We have already benefited from
having 3rd Party CI in place.  It wasn't but a few weeks ago that a 
change that was submitted actually
broke the HP drivers.   The CI we had in place discovered it, and 
brought it to the surface.   Without
having that CI in place for our drivers, we would be in a bad spot now. 
  In other words,  it should be a top
priority for vendors to get CI in place, if only for the selfish reason of 
protecting their code!!!


That being said, I look forward to seeing folks submit their drivers 
back in the early L time
frame.   If my driver got pulled for K, it would be my top priority to 
get CI working NOW, and the day L opens up, I'd have my driver patch up 
with CI reporting.

Thanks Mike for all of your efforts on this,
Walt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][cinder][nova][neutron] going forward to oslo-config-generator ...

2015-03-23 Thread Walter A. Boring IV
Maybe we can leverage Cinder's use of ABCs in drivers.py now.   
We could create an OptionsVD class that drivers would add and implement.  The 
config generator could inspect objects looking for OptionsVD and then 
call list_opts() on it.   That way, driver maintainers don't also have 
to patch setup.cfg.   Just require new drivers to add OptionsVD and 
implement list_opts().   We could put a hacking check in for that as well.
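
To illustrate the shape of the idea (the names and details below are mine, purely illustrative, not an actual Cinder interface):

import abc

import six
from oslo_config import cfg


@six.add_metaclass(abc.ABCMeta)
class OptionsVD(object):
    # Hypothetical marker interface: drivers implementing it expose their
    # options so the config generator can find them without touching setup.cfg.

    @abc.abstractmethod
    def list_opts(self):
        """Return (group, [Opt, ...]) pairs for oslo-config-generator."""


class ExampleDriver(OptionsVD):
    OPTS = [cfg.StrOpt('example_san_ip', help='Management IP of the array.')]

    def list_opts(self):
        return [('DEFAULT', self.OPTS)]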


My $0.02,
Walt
We could even further reduce the occurrence of such issues by moving 
the list_opts() function down into each driver and have an entry point 
for oslo.config.opts in setup.cfg for each of the drivers.  As with 
the currently proposed solution, the developer doesn't have to edit a 
top level file for a new configuration option.  This solution adds 
that the developer doesn't have to edit a top level file to add a new 
configuration item list to their driver.  With this approach the 
change would happen in the driver's list_opts() function, rather than 
in cinder/opts.py.  The only time that setup.cfg would need to be 
edited is when a new package is added or when a new driver is added.  
This would reduce some of the already minimal burden on the 
developer.  We, however, would need to agree upon some method for 
aggregating together the options lists on a per package (i.e. 
cinder.scheduler, cinder.api) level.  This approach, however, also has 
the advantage of providing a better indication in the sample config 
file of where the options are coming from.  That is an improvement 
over what I have currently proposed.


Does Doug's proposal sound more agreeable to everyone?  It is 
important to note that the fact that some manual intervention is 
required to 'plumb' in the new configuration options was done by 
design.  There is a little more work required to make options 
available to oslo-config-generator but the ability to use different 
namespaces, different sample configs, etc were added with the new 
generator.  These additional capabilities were requested by other 
projects.  So, moving to this design does have the potential for more 
long-term gain.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] May you reconsider about huawei driver?

2015-03-20 Thread Walter A. Boring IV

On 03/19/2015 07:13 PM, liuxinguo wrote:


Hi Mike,

I have seen the patch at https://review.openstack.org/#/c/165990/ 
saying that huawei driver will be removed because “the maintainer does 
not have a CI reporting to ensure their driver integration is successful”.




Looking at this patch, there is no CI reporting from the Huawei Volume 
CI check.

Your CI needs to be up and stable, running on all patches.

But in fact we have had a CI for months and it is really reporting 
to reviews; the most recent posts are:


- https://review.openstack.org/#/c/165796/

Post time:‍2015-3-19 0:14:56

- https://review.openstack.org/#/c/164697/

Post time: 2015-3-18 23:55:37

I don't see any 3rd PARTY CI Reporting here because the patch is in 
merge conflict.



- https://review.openstack.org/164702/

Post time: 2015-3-18 23:55:37


Same


- https://review.openstack.org/#/c/152401/

Post time: 3-18 23:08:45


This patch also has NO Huawei Volume CI check results.


From what I'm seeing, there isn't any consistent evidence proving that 
the Huawei Volume CI checks are stable and running on every Cinder patch.


Walt
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] cinder is broken until someone fixes the forking code

2015-03-11 Thread Walter A. Boring IV
We have this patch in review currently.   I think this one should 'fix' 
it, no?


Please review.

https://review.openstack.org/#/c/163551/
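
For context, the core of the fix Mike describes below boils down to disposing of the inherited connection pool as soon as a worker lands in its fork; a bare-bones sketch follows (module-level SQLAlchemy engine, stand-in URL, not the actual patch).

import os

from sqlalchemy import create_engine

# Stand-in URL for illustration; real deployments point at MySQL etc.
engine = create_engine('sqlite:///example.db')

pid = os.fork()
if pid == 0:
    # First thing in the child: drop the pool inherited from the parent so
    # the child opens its own fresh connections instead of sharing sockets.
    engine.dispose()
    # ... child does its work with fresh connections ...
    os._exit(0)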

Walt

On 03/11/2015 10:47 AM, Mike Bayer wrote:

Hello Cinder -

I’d like to note that for issue
https://bugs.launchpad.net/oslo.db/+bug/1417018, no solution that actually
solves the problem for Cinder is scheduled to be committed anywhere. The
patch I proposed for oslo.db is on hold, and the patch proposed for
oslo.incubator in the service code will not fix this issue for Cinder, it
will only make it fail harder and faster.

I’ve taken myself off as the assignee on this issue, as someone on the
Cinder team should really propose the best fix of all which is to call
engine.dispose() when first entering a new child fork. Related issues are
already being reported, such as
https://bugs.launchpad.net/cinder/+bug/1430859. Right now Cinder is very
unreliable on startup and this should be considered a critical issue.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder]Request to to revisit this patch

2015-03-04 Thread Walter A. Boring IV
Since the Nova side isn't in, and won't land for Kilo, there is no 
reason for Cinder to have it in Kilo, as it will simply not work.


We can revisit this for the L release if you like.

Also, make sure you have 3rd Party CI setup for this driver, or it won't 
be accepted in the L release either.


$0.02
Walt


Hi Mike, Jay, and Walter,

Please revisit this patch https://review.openstack.org/#/c/151959/ and 
don't revert this, thank you very much!


I think it’s apposite to merge the SDSHypervisor driver in cinder 
first, and next to request nova to add a new libvirt volume driver.


Meanwhile the Nova side always asks whether the driver is merged into 
Cinder; please see my comments in the Nova spec 
https://review.openstack.org/#/c/130919/, thank you very much!


Best regards

ZhangNi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] 3rd Party CI failures ignored, caused driver to break

2015-02-26 Thread Walter A. Boring IV

Hey folks,
   Today we found out that a patch[1] that was made against our 
lefthand driver caused the driver to fail.   The 3rd party CI that we 
have setup
to test our driver (hp_lefthand_rest_proxy.py) caught the CI failure and 
reported it correctly[2].  The patch that broke the driver was reviewed and

approved without a mention of 3rd Party CI failures.

This is a prime example of 3rd Party CI working exactly as intended, 
catching a failure, and yet being completely ignored by everyone 
involved in the review process for the patch.

I know that 3rd party CI isn't perfect, and has been rife with false 
failures, which is one of the reasons why they aren't voting today.
But, that being said, if patch submitters aren't even looking at the 
failures for CI when they are touching drivers that they don't maintain, 
and reviewers

aren't looking at the CI failures, then why are we even doing 3rd party CI?

Our team is partially responsible for not seeing the failure as well.  
We should be watching the CI failures closely, but we are doing the best we
can.  There are enough patches for Cinder ongoing at any one time, that 
we simply can't watch every single one of them for failures. We did 
eventually
see that every single patchset in gerrit was now failing against our 
driver, and this is how we caught it.  Yes, it was a bit after the fact, 
but we did notice
it and now have a new patch up that fixes it.   So, in that regard 3rd 
party CI did eventually vet out a problem that our team caught.


How can we prevent this in the future?
1) Make 3rd party CI voting.  I don't believe this is ready yet.
2) Authors and reviewers need to look at 3rd party CI failures when a 
patch touches a driver.  If a failure is seen, contact the CI maintainer 
and work with them to
see if the failure is related to the patch, if it's not obvious. In this 
case, the failure was obvious.  The import changed, and now the package 
can't find the module.
3) CI maintainers watch every single patchset and report -1's on 
reviews?  (ouch)

4) ?



Here is the patch that broke the lefthand driver[1]
Here is the reported failure in the c-vol log for the patch by our 3rd 
party CI system[2]

Here is my new patch that fixes the lefthand driver again.[3]

[1] https://review.openstack.org/#/c/145780/
[2] 
http://15.126.198.151/80/145780/15/check/lefthand-iscsi-driver-master-client-pip-dsvm/3927e3d/logs/screen-c-vol.txt.gz?level=ERROR

[3] https://review.openstack.org/#/c/159586/

$0.02
Walt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Question about the plan of L

2015-02-10 Thread Walter A. Boring IV
Yes, assume NEW drivers have to land before the L-1 milestone.  This 
also includes getting a CI system up and running.


Walt


Hi,

In Kilo the Cinder driver is requested to be merged before K-1. I want 
to ask: in L, will the driver be requested to be merged before L-1?


Thanks and regards,

Liu



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][nova] Cinder Brick pypi library?

2015-02-03 Thread Walter A. Boring IV

Hey folks,
   I wanted to get some feedback from the Nova folks on using Cinder's 
Brick library.  As some of you
may or may not know, Cinder has an internal module called Brick. It's 
used for discovering and removing
volumes attached to a host.  Most of the code in the Brick module in 
cinder originated from the Nova libvirt
volume drivers that do the same thing (discover attached volumes and 
then later remove them).
Cinder uses the brick library for copy volume to image, as well as copy 
image to volume operations
where the Cinder node needs to attach volumes to itself to do the work.  
The Brick code inside of Cinder has been

used since the Havana release.

  Our plan in Cinder for the Kilo release is to extract the Brick 
module into its own separate library 
that is maintained by the Cinder team as a subproject of Cinder and 
released as a pypi lib.   Then for the L release, refactor
Nova's libvirt volume drivers to use the Brick library.   This will 
enable us to eliminate the duplicate
code between Nova's libvirt volume drivers and Cinder's internal brick 
module.   Both projects can benefit

from a shared library.

So the question I have is, does Nova have an interest in using the code 
in a pypi brick library?  If not, then it doesn't
make any sense for the Cinder team to extract its brick module into a 
shared (pypi) library.


The first release of brick will only contain the volume discovery and 
removal code.  This is contained in the

initiator directory of cinder/brick/

You can view the current brick code in Cinder here:
https://github.com/openstack/cinder/tree/master/cinder/brick
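
To give a feel for the API being split out, here is a rough usage sketch in the style of the connector factory that later shipped as os-brick; the values are placeholders, and treat the exact in-tree interface at any given release as an assumption.

from os_brick.initiator import connector

root_helper = 'sudo cinder-rootwrap /etc/cinder/rootwrap.conf'
conn = connector.InitiatorConnector.factory('ISCSI', root_helper,
                                            use_multipath=False)

# connection_properties normally comes back from the driver's
# initialize_connection() call; these values are made up.
connection_properties = {
    'target_portal': '192.0.2.10:3260',
    'target_iqn': 'iqn.2010-10.org.openstack:volume-fake',
    'target_lun': 1,
}

device_info = conn.connect_volume(connection_properties)      # discover the LUN
print(device_info.get('path'))
conn.disconnect_volume(connection_properties, device_info)    # remove it again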

Thanks for the feedback,
Walt



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Changes to Cinder Core

2015-01-22 Thread Walter A. Boring IV

Sorry I didn't see this earlier.

I'd welcome Ivan to the team!

+1


Walt

On Wed, Jan 21, 2015 at 10:14 AM, Mike Perez thin...@gmail.com wrote:

It gives me great pleasure to nominate Ivan Kolodyazhny (e0ne) for
Cinder core. Ivan's reviews have been valuable in decisions, and his
contributions to Cinder core code have been greatly appreciated.

Reviews:
https://review.openstack.org/#/q/reviewer:%22Ivan+Kolodyazhny+%253Ce0ne%2540e0ne.info%253E%22,n,z

Contributions:
https://review.openstack.org/#/q/owner:%22Ivan+Kolodyazhny%22+project:+openstack/cinder,n,z

30/90 day review stats:
http://stackalytics.com/report/contribution/cinder-group/30
http://stackalytics.com/report/contribution/cinder-group/90

As new contributors step up to help in the project, some move onto
other things. I would like to recognize Josh Durgin for his early
contributions to Nova volume, early involvement with Cinder, and now
unfortunately departure from the Cinder core team.

Cinder core, please reply with a +1 for approval. This will be left
open until Jan 26th. Assuming there are no objections, this will go
forward after voting is closed.

And apologies for missing the [cinder] subject prefix.

--
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Not seeking another term as PTL

2014-09-23 Thread Walter A. Boring IV

John,
  Thanks for your term as PTL since Cinder got its start.  Without your 
encouragement when Kurt and I started working on Cinder back during 
Grizzly, we wouldn't have been successful, and might not even be 
working on the project to this day.   I can't say enough about how 
helpful you have been to our team.   I have really enjoyed working with 
you the last few years and hope to continue doing so!


Walt

Hey Everyone,

I've been kinda mixed on this one, but I think it's a good time for me 
to not run for Cinder PTL.  I've been filling the role since we 
started the idea back at the Folsom Summit, and it's been an absolute 
pleasure and honor for me.


I don't plan on going anywhere and will still be involved as I am 
today, but hopefully I'll also now have a good opportunity to 
contribute elsewhere in OpenStack.  We have a couple of good 
candidates running for Cinder PTL as well as a strong team backing the 
project so I think it's a good time to let somebody else take the 
official PTL role for a bit.


Thanks,
John


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Nova][Oslo] Moving Brick out of Cinder

2014-09-17 Thread Walter A. Boring IV
Thanks for the effort Ivan.   Your interest in brick is also helping us 
push forward with the idea of the agent that we've had in mind for quite 
some time.


For those interested, I have created an etherpad that discusses some of 
the requirements and design decisions/discussion on the cinder/storage 
agent

here:
https://etherpad.openstack.org/p/cinder-storage-agent


Walt



Thanks a lot for a comments!

As discussed in IRC (#openstack-cinder), moving Brick to Oslo or 
Stackforge isn't the best solution.


We're moving on to making a Cinder Agent (or Cinder Storage Agent) [1] 
based on the Brick code instead of making Brick a separate Python 
library used in Cinder and Nova.


I'll deprecate my oslo.storage GitHub repo and rename it so as not to 
confuse anybody in the future.


[1] https://etherpad.openstack.org/p/cinder-storage-agent

Regards,
Ivan Kolodyazhny,
Web Developer,
http://blog.e0ne.info/,
http://notacash.com/,
http://kharkivpy.org.ua/

On Wed, Sep 17, 2014 at 8:16 PM, Davanum Srinivas dava...@gmail.com wrote:


+1 to Doug's comments.

On Wed, Sep 17, 2014 at 1:02 PM, Doug Hellmann d...@doughellmann.com wrote:

 On Sep 16, 2014, at 6:02 PM, Flavio Percoco fla...@redhat.com wrote:

 On 09/16/2014 11:55 PM, Ben Nemec wrote:
 Based on my reading of the wiki page about this it sounds like
it should
 be a sub-project of the Storage program. While it is targeted
for use
 by multiple projects, it's pretty specific to interacting with
Cinder,
 right?  If so, it seems like Oslo wouldn't be a good fit. 
We'd just end

 up adding all of cinder-core to the project anyway. :-)

 +1 I think the same arguments and conclusions we had on
glance-store
 make sense here. I'd probably go with having it under the Block
Storage
 program.

 I agree. I’m sure we could find some Oslo contributors to give
you advice about APIs if you like, but I don’t think the library
needs to be part of Oslo to be reusable.

 Doug


 Flavio


 -Ben

 On 09/16/2014 12:49 PM, Ivan Kolodyazhny wrote:
 Hi Stackers!

 I'm working on moving Brick out of Cinder for K release.

 There're a lot of open questions for now:

   - Should we move it to oslo or somewhere on stackforge?
   - Better architecture of it to fit all Cinder and Nova
requirements
   - etc.

 Before starting discussion, I've created some
proof-of-concept to try it. I
 moved Brick to some lib named oslo.storage for testing only.
It's only one
 of the possible solution to start work on it.

 All sources are aviable on GitHub [1], [2].

 [1] - I'm not sure that this place and name is good for it,
it's just a PoC.

 [1] https://github.com/e0ne/oslo.storage
 [2] https://github.com/e0ne/cinder/tree/brick - some tests
still failed.

 Regards,
 Ivan Kolodyazhny

 On Mon, Sep 8, 2014 at 4:35 PM, Ivan Kolodyazhny
e...@e0ne.info wrote:

 Hi All!

 I would to start moving Cinder Brick [1] to oslo as was
described on
 Cinder mid-cycle meetup [2]. Unfortunately I missed meetup
so I want be
 sure that nobody started it and we are on the same page.

 According to the Juno 3 release, there was not enough time
to discuss [3]
 on the latest Cinder weekly meeting and I would like to get
some feedback
 from the all OpenStack community, so I propose to start this
discussion on
 mailing list for all projects.

 I anybody didn't started it and it is useful at least for
both Nova and
 Cinder I would to start this work according oslo guidelines
[4] and
 creating needed blueprints to make it finished until Kilo 1
is over.



 [1] https://wiki.openstack.org/wiki/CinderBrick
 [2] https://etherpad.openstack.org/p/cinder-meetup-summer-2014
 [3]


http://lists.openstack.org/pipermail/openstack-dev/2014-September/044608.html
 [4] https://wiki.openstack.org/wiki/Oslo/CreatingANewLibrary

 Regards,
 Ivan Kolodyazhny.




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 @flaper87
 Flavio Percoco

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [Cinder][Nova][Oslo] Moving Brick out of Cinder

2014-09-16 Thread Walter A. Boring IV
Originally I wrote the connector side of brick to be the LUN discovery 
shared code between Cinder and Nova.
I tried to make a patch in Havana that would do this, but it 
didn't make it in.


The upside to brick not making it in Nova is that it has given us some 
time to rethink things a bit.  What I would actually
like to see happen now is to create a new cinder/storage agent instead 
of just a brick library.   The agent would run on every cinder node,
nova node and potentially ironic nodes to do LUN discovery. Duncan and I 
are looking into this for the Kilo release.


The other portion of brick that exists in Cinder today is some of the LVM 
code.   This makes a lot of sense to have in an agent as well for
each nova compute node to manage ephemeral storage used for boot disks 
for nova vms.  This would help remove some of the remaining

storage code from nova itself that does this.

Duncan and I will be at the Paris summit.   We would both welcome any 
interest to work on this concept of a new storage agent (which would

contain the existing brick code).


My $0.02,
Walt


On 09/16/2014 11:55 PM, Ben Nemec wrote:

Based on my reading of the wiki page about this it sounds like it should
be a sub-project of the Storage program.  While it is targeted for use
by multiple projects, it's pretty specific to interacting with Cinder,
right?  If so, it seems like Oslo wouldn't be a good fit.  We'd just end
up adding all of cinder-core to the project anyway. :-)

+1 I think the same arguments and conclusions we had on glance-store
make sense here. I'd probably go with having it under the Block Storage
program.

Flavio


-Ben

On 09/16/2014 12:49 PM, Ivan Kolodyazhny wrote:

Hi Stackers!

I'm working on moving Brick out of Cinder for K release.

There're a lot of open questions for now:

- Should we move it to oslo or somewhere on stackforge?
- Better architecture of it to fit all Cinder and Nova requirements
- etc.

Before starting discussion, I've created some proof-of-concept to try it. I
moved Brick to some lib named oslo.storage for testing only. It's only one
of the possible solution to start work on it.

All sources are aviable on GitHub [1], [2].

[1] - I'm not sure that this place and name is good for it, it's just a PoC.

[1] https://github.com/e0ne/oslo.storage
[2] https://github.com/e0ne/cinder/tree/brick - some tests still failed.

Regards,
Ivan Kolodyazhny

On Mon, Sep 8, 2014 at 4:35 PM, Ivan Kolodyazhny e...@e0ne.info wrote:


Hi All!

I would to start moving Cinder Brick [1] to oslo as was described on
Cinder mid-cycle meetup [2]. Unfortunately I missed meetup so I want be
sure that nobody started it and we are on the same page.

According to the Juno 3 release, there was not enough time to discuss [3]
on the latest Cinder weekly meeting and I would like to get some feedback
from the all OpenStack community, so I propose to start this discussion on
mailing list for all projects.

I anybody didn't started it and it is useful at least for both Nova and
Cinder I would to start this work according oslo guidelines [4] and
creating needed blueprints to make it finished until Kilo 1 is over.



[1] https://wiki.openstack.org/wiki/CinderBrick
[2] https://etherpad.openstack.org/p/cinder-meetup-summer-2014
[3]
http://lists.openstack.org/pipermail/openstack-dev/2014-September/044608.html
[4] https://wiki.openstack.org/wiki/Oslo/CreatingANewLibrary

Regards,
Ivan Kolodyazhny.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] proposal of definitions/processes for cinder-spec

2014-04-24 Thread Walter A. Boring IV

On 04/23/2014 05:09 PM, Jay S. Bryant wrote:

All,

I have gotten questions from our driver developers asking for details
regarding the move to using cinder-specs for proposing Blueprints.  I
brought this topic up in today's Cinder Weekly Meeting, but the meeting
was lightly attended so we decided to move the discussion here.

I am going to put this note in the form of 'question' and proposed
answer based on the brief discussion we had today.  Note that the
answers here are based on the assumption that we want to keep Cinder's
use of 'specs' as close to Nova's as possible.  I used the following
mailing list thread as a starting point for some of these answers:
http://lists.openstack.org/pipermail/openstack-dev/2014-April/032796.html

Q: When is a spec approved?
A:  When it receives a +2 from the PTL and at least one other Core
reviewer.

Q: How long are specs valid for?
A: For the duration of the release cycle.  Any specs that are not
approved during that period of type will need to be resubmitted for the
subsequent release.

Q: What will the spec template look like?
A: This is one of the points I would like to discuss.  The Nova template
currently looks like this:
https://github.com/openstack/nova-specs/blob/master/specs/template.rst
Do we want to follow the same template?  In the interest of staying in
sync with Nova's implementation I would say yes, but does this meet our
needs?  Are there other/different fields we want to consider to help for
instances where the Blueprint is for a new driver or change to a driver?
I think we might need, for instance, a 'Drivers Impacted' field.

I think for starters, we should use the same template until we find
it doesn't fit our needs.   I just filed my first nova-spec bp
and rather liked the template, and think it would be nice to have this 
for Cinder's cinder-spec.




Q: Will driver developers have to use the same template for functions in
their drivers?
A: Also a point I would like to discuss.  Developers had asked if a more
limited template would be used for changes going into the developer's
driver.  At first I thought maybe a different template for Blueprints
against a driver might be appropriate, but after looking more closely at
Nova's template perhaps that is not necessary.  I would lean towards
keeping one template, but maybe not requiring all fields depending on
what our final template ends up looking like.

For now I vote for using the same template.


Q: Where do specs for python-cinderclient go?
A: Looks like Nova has added a python-novaclient directory.  I don't
think we would need a separate python-cinderclient-specs repository but
don't have a strong opinion on this point.

I am sure this is not an exhaustive list of questions/answers at this
point in time but I wanted to start the discussion so we could help move
this process forward.  I look forward to your feedback.

-Jay Bryant
jsbry...@electronicjungle.net
Freenode:  jungleboyj




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] [Cinder] Cinder driver verification

2014-02-13 Thread Walter A. Boring IV

On 02/13/2014 02:51 AM, Thierry Carrez wrote:

John Griffith wrote:

So we've talked about this a bit and had a number of ideas regarding
how to test and show compatibility for third-party drivers in Cinder.
This has been an eye opening experience (the number of folks that have
NEVER run tempest before, as well as the problems uncovered now that
they're trying it).

I'm even more convinced now that having vendors run these tests is a
good thing and should be required.  That being said there's a ton of
push back from my proposal to require that results from a successful
run of the tempest tests to accompany any new drivers submitted to
Cinder.

Could you describe the nature of the pushback ? Is it that the tests are
too deep and reject valid drivers ? Is it that it's deemed unfair to
block new drivers while the existing ones aren't better ? Is it that
it's difficult for them to run those tests and get a report ? Or is it
because they care more about having their name covered in mainline and
not so much about having the code working properly ?


The consensus from the Cinder community for now is that we'll
log a bug for each driver after I3, stating that it hasn't passed
certification tests.  We'll then have a public record showing
drivers/vendors that haven't demonstrated functional compatibility,
and in order to close those bugs they'll be required to run the tests
and submit the results to the bug in Launchpad.

So, this seems to be the approach we're taking for Icehouse at least,
it's far from ideal IMO, however I think it's still progress and it's
definitely exposed some issues with how drivers are currently
submitted to Cinder so those are positive things that we can learn
from and improve upon in future releases.

To add some controversy and keep the original intent of having only
known tested and working drivers in the Cinder release, I am going to
propose that any driver that has not submitted successful functional
testing by RC1 that that driver be removed.  I'd at least like to see
driver maintainers try... if the test fails a test or two that's
something that can be discussed, but it seems that until now most
drivers just flat out are not even being tested.

I think there are multiple stages here.

Stage 0: noone knows if drivers work
Stage 1: we know the (potentially sad) state of the drivers that are in
the release
Stage 2: only drivers that pass tests are added, drivers that don't pass
tests have a gap analysis and a plan to fix it
Stage 3: drivers that fail tests are removed before release
Stage 4: 3rd-party testing rigs must run tests on every change in order
to stay in tree

At the very minimum you should be at stage 1 for the Icehouse release,
so I agree with your last paragraph. I'd recommend that you start the
Juno cycle at stage 2 (for new drivers), and try to reach stage 3 for
the end of the Juno release.

I have to agree with Thierry here.  I think if we can get drivers to 
pass the tests in the Juno timeframe, then it's fine to remove them 
during Juno.
I think the idea of having drivers run their code through tempest and work
towards passing all of those tests is a great thing for Cinder and 
OpenStack in general.


What I would do different for the Icehouse release is this:

If a driver doesn't pass the certification test by IceHouse RC1, then we 
have a bug filed
against the driver.   I would also put a warning message in the log for 
that driver that it
doesn't pass the certification test.  I would not remove it from the 
codebase.


Also:
   if a driver hasn't even run the certification test by RC1, then we 
mark the driver as
uncertified and deprecated in the code and throw an error at driver init 
time.
We can have an option in cinder.conf that says 
ignore_uncertified_drivers=False.
If an admin wants to ignore the error, they set the flag to True, and we 
let the driver init at next startup.

The admin then takes full responsibility for running uncertified code.
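
A rough sketch of what such a guard could look like (illustrative only; the CERTIFIED attribute and the helper function are hypothetical, and a real patch would raise a proper cinder.exception type):

from oslo_config import cfg

CONF = cfg.CONF
CONF.register_opts([
    cfg.BoolOpt('ignore_uncertified_drivers', default=False,
                help='Allow volume drivers that have not passed the '
                     'certification tests to initialize anyway.'),
])


def check_certification(driver):
    # Hypothetical init-time guard: refuse to start an uncertified driver
    # unless the admin explicitly opted in via cinder.conf.
    if getattr(driver, 'CERTIFIED', False):
        return
    if CONF.ignore_uncertified_drivers:
        return
    raise RuntimeError('Driver %s is uncertified; set '
                       'ignore_uncertified_drivers=True to run it anyway.'
                       % type(driver).__name__)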

  I think removing the drivers outright is premature for Icehouse, 
since the certification process is a new thing.
For Juno, we remove any drivers that are still marked as uncertified and 
haven't run the tests.


I think the purpose of the tests is to get vendors to actually run their 
code through tempest and
prove to the community that they are willing to show that they are 
fixing their code.  At the end of the day,

it better serves the community and Cinder if we have many working drivers.

My $0.02,
Walt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] [Cinder] Cinder driver verification

2014-02-13 Thread Walter A. Boring IV

On 02/13/2014 09:51 AM, Avishay Traeger wrote:

Walter A. Boring IV walter.bor...@hp.com wrote on 02/13/2014 06:59:38
PM:

What I would do different for the Icehouse release is this:

If a driver doesn't pass the certification test by IceHouse RC1, then we
have a bug filed
against the driver.   I would also put a warning message in the log for
that driver that it
doesn't pass the certification test.  I would not remove it from the
codebase.

Also:
 if a driver hasn't even run the certification test by RC1, then we
mark the driver as
uncertified and deprecated in the code and throw an error at driver init
time.
We can have a option in cinder.conf that says
ignore_uncertified_drivers=False.
If an admin wants to ignore the error, they set the flag to True, and we
let the driver init at next startup.
The admin then takes full responsibility for running uncertified code.

I think removing the drivers outright is premature for Icehouse,
since the certification process is a new thing.
For Juno, we remove any drivers that are still marked as uncertified and
haven't run the tests.

I think the purpose of the tests is to get vendors to actually run their
code through tempest and
prove to the community that they are willing to show that they are
fixing their code.  At the end of the day,
it better serves the community and Cinder if we have many working

drivers.

My $0.02,
Walt


I like this.  Make that $0.04 now :)


I wrote a bit of code so we had something to discuss if anyone thinks 
it's a good enough

compromise.
https://review.openstack.org/#/c/73464/

Walt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] weekly meeting

2013-12-17 Thread Walter A. Boring IV

4 or 5 UTC works better for me.   I can't attend the current meeting
time, due to taking my kids to school in the morning at 1620 UTC.

Walt

Hi All,

Prompted by a recent suggestion from Tom Fifield, I thought I'd gauge
some interest in either changing the weekly Cinder meeting time, or
proposing a second meeting to accomodate folks in other time-zones.

A large number of folks are already in time-zones that are not
friendly to our current meeting time.  I'm wondering if there is
enough of an interest to move the meeting time from 16:00 UTC on
Wednesdays, to 04:00 or 05:00 UTC?  Depending on the interest I'd be
willing to look at either moving the meeting for a trial period or
holding a second meeting to make sure folks in other TZ's had a chance
to be heard.

Let me know your thoughts, if there are folks out there that feel
unable to attend due to TZ conflicts and we can see what we might be
able to do.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] TaskFlow 0.1 integration

2013-11-19 Thread Walter A. Boring IV

Awesome guys,
  Thanks for picking this up.   I'm looking forward to the reviews :)

Walt

On 19.11.2013 10:38, Kekane, Abhishek wrote:

Hi All,

Greetings!!!

Hi there!

And thanks for your interest in cinder and taskflow!


We are in process of implementing the TaskFlow 0.1 in Cinder for copy
volume to image and delete volume.

I have added two blueprints for the same.
1. https://blueprints.launchpad.net/cinder/+spec/copy-volume-to-image-task-flow
2. https://blueprints.launchpad.net/cinder/+spec/delete-volume-task-flow

I would like to know if any other developers/teams are working or
planning to work on any cinder api apart from above two api's.

Your help is appreciated.

Anastasia Karpinska works on updating existing flows to use released
TaskFlow 0.1.1 instead of internal copy:

https://review.openstack.org/53922

It was blocked because taskflow was not in openstack/requirements, but
now we're there, and Anastasia promised to finish the work and submit
updated changeset for review in couple of days.

There are also two changesets that convert cinder APIs to use TaskFlow:
- https://review.openstack.org/53480 for create_backup by Victor
   Rodionov
- https://review.openstack.org/55134 for create_snapshot by Stanislav
   Kudriashev

As far as I know, both Stanislav and Victor suspended their work until
Anastasia's change lands.
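
For anyone new to TaskFlow, here is a minimal illustrative sketch of the task/flow pattern these changesets adopt; the task and flow names are made up, not the real Cinder flows.

from taskflow import engines
from taskflow import task
from taskflow.patterns import linear_flow


class CreateSnapshotEntry(task.Task):
    default_provides = 'snapshot_id'

    def execute(self, volume_id):
        # Would create the DB record; here we just fabricate an id.
        return 'snap-for-%s' % volume_id


class QuiesceAndSnap(task.Task):
    def execute(self, volume_id, snapshot_id):
        print('snapping %s -> %s' % (volume_id, snapshot_id))

    def revert(self, snapshot_id, **kwargs):
        # Runs automatically if a later task in the flow fails.
        print('rolling back %s' % snapshot_id)


flow = linear_flow.Flow('create_snapshot').add(
    CreateSnapshotEntry(),
    QuiesceAndSnap(),
)
engines.run(flow, store={'volume_id': 'vol-1234'})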




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [cinder] Proposal for Ollie Leahy to join cinder-core

2013-07-17 Thread Walter A. Boring IV



___
Mailing list: https://launchpad.net/~openstack
Post to : openst...@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

Just to point out a few things here, first off there is no guideline 
that states a company affiliation should have anything to do with the 
decision on voting somebody as core.  I have ABSOLUTELY NO concern 
about representation of company affiliation what so ever.


Quite frankly I wouldn't mind if there were 20 core members from HP, 
if they're all actively engaged and participating then that's great. 
 I don't think there has been ANY incidence of folks exerting 
inappropriate influence based on their affiliated interest, and if 
there ever was I think it would be easy to identify and address.


As far as "don't need more" goes, I don't agree with that either; if there 
are folks contributing and doing the work then there's no reason not 
to add them.  Cinder IMO does NOT have an excess of reviewers by a 
very very long stretch.


The criteria here should be review consistency and quality as well as 
knowledge of the project, nothing more nothing less.  If there's an 
objection to the individuals participation or contribution that's 
fine, but company affiliation should have no bearing.



+1 to Ollie from me.

+1 to John's points.   If a company is colluding with other core 
members, from the same company, to do bad things within a project, it 
should become pretty obvious at some point and the project's community 
should take action.   If someone is putting in an extra effort to 
provide quality code and reviews on a regular basis, then why wouldn't 
we want that person on the team?  Besides, being a core member really 
just means that you are required to do reviews and help out with the 
community.  You do get some gerrit privileges for reviews, but that's 
about it.   I for one think that we absolutely can use more core members 
to help out with reviews during the milestone deadlines :)


Walt
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev