[Openstack-operators] [openstack-dev] [cinder] [all] The future of Cinder API v1

2015-09-28 Thread Ivan Kolodyazhny
Hi all,

As you may know, we've got two APIs in Cinder: v1 and v2. The v2 API was
introduced in Grizzly, and the v1 API has been deprecated since Juno.

Now that [1] is merged, Cinder API v1 is disabled in the gates by default. We
also have a bug filed [2] to remove the v1 API entirely.


According to the deprecation policy [3], it looks like we are OK to remove it.
But I would like to ask Cinder API users whether anyone still uses API v1.
Should we remove it entirely in the Mitaka release, or just disable it by
default in cinder.conf?
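
If we go with the "disable by default" option, the change on the operator side
would be just a couple of options in cinder.conf. A minimal sketch (option
names from memory, so please double-check them before relying on this):

    [DEFAULT]
    enable_v1_api = false
    enable_v2_api = true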

AFAIR, only Rally does not support API v2 yet, and I'm going to implement that
support ASAP.

[1] https://review.openstack.org/194726
[2] https://bugs.launchpad.net/cinder/+bug/1467589
[3]
http://lists.openstack.org/pipermail/openstack-dev/2015-September/073576.html

Regards,
Ivan Kolodyazhny


[Openstack-operators] [openstack-dev] [operators] [cinder] Does anyone use Cinder XML API?

2015-10-01 Thread Ivan Kolodyazhny
Hi all,

I would like to ask whether anybody uses the Cinder XML API.

AFAIR, the XML-related tests were removed from Tempest almost a year ago [1].
The XML API was removed from Nova last year [2] and is not supported by Heat.
I didn't check the other projects.

Right now, I don't even know whether the Cinder XML API works or not. It is
not tested in the gates, except by some unit tests.
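
If anybody wants to quickly check it against their own cloud, something like
the untested sketch below should be enough. The token, tenant id and endpoint
are placeholders, and the Accept header is how content negotiation worked, as
far as I remember:

    import requests

    # Placeholders -- replace with a real token, tenant id and Cinder endpoint.
    token = "<auth-token>"
    tenant_id = "<tenant-id>"
    url = "http://<cinder-host>:8776/v2/%s/volumes" % tenant_id

    # Explicitly ask for XML instead of the default JSON.
    resp = requests.get(url, headers={"X-Auth-Token": token,
                                      "Accept": "application/xml"})
    # If the XML serializers still work, Content-Type should be application/xml.
    print(resp.status_code, resp.headers.get("Content-Type"))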

I proposed a blueprint [3] to remove it, but I would like to ask first whether
anybody uses this API: we don't want to break any Cinder API users. We need to
decide whether to remove the XML API as Nova did, or to keep it and implement
tests for it.


[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-November/051384.html
[2]
http://lists.openstack.org/pipermail/openstack-dev/2014-December/052443.html
[3] https://blueprints.launchpad.net/cinder/+spec/remove-xml-api

Regards,
Ivan Kolodyazhny


[Openstack-operators] [openstack-dev] [openstack-operators] [cinder] max_concurrent_builds in Cinder

2016-05-23 Thread Ivan Kolodyazhny
Hi developers and operators,

I would like to get your feedback on this idea before I start working on a
spec.

In Nova, there is a max_concurrent_builds option [1] that sets the 'Maximum
number of instance builds to run concurrently' per compute host. There is no
equivalent in Cinder.
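
For reference, on the Nova side it is just a single option in nova.conf
(default value from memory):

    [DEFAULT]
    max_concurrent_builds = 10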

Why do we need it for Cinder? IMO, it could help us address the following
issues (a rough sketch of the idea follows the list):

   - Creation of N volumes at the same time significantly increases resource
   usage by the cinder-volume service. The image caching feature [2] can help
   a bit when we create a volume from an image, but we still have to copy N
   images to the volume backend at the same time.
   - Deletion of N volumes in parallel. Usually this is not a very hard task
   for Cinder, but if you have to delete 100+ volumes at once, you can hit
   various issues with DB connections, CPU and memory usage. In the case of
   LVM, it may also run 'dd' to wipe each volume.
   - It would give us a kind of load balancing in HA mode: if one
   cinder-volume process is busy with its current operations, it will not pick
   up the next message from RabbitMQ, and another cinder-volume service will
   handle it.
   - From the user's perspective, it is better to create/delete N volumes a
   bit more slowly than to fail after X volumes have been created/deleted.
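
To make the idea a bit more concrete, here is a very rough sketch of how such
a limit could behave inside cinder-volume. This is not real Cinder code: the
option name and the place where it hooks in are only assumptions for
illustration.

    import threading
    import time

    # Hypothetical option, analogous to Nova's max_concurrent_builds;
    # in a real implementation it would be read from cinder.conf.
    max_concurrent_builds = 5

    # At most max_concurrent_builds create operations run at once;
    # the rest wait for a free slot instead of overloading the backend.
    _build_semaphore = threading.BoundedSemaphore(max_concurrent_builds)

    def _do_create_volume(volume_id):
        # Placeholder for the real backend work (image copy, 'dd', etc.).
        time.sleep(1)

    def create_volume(volume_id):
        with _build_semaphore:
            _do_create_volume(volume_id)

The real implementation would live in the volume manager and use whatever
concurrency primitives we already get from oslo/eventlet, but the behaviour
would be the same: extra requests queue up instead of hitting the backend and
the DB all at once.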


[1]
https://github.com/openstack/nova/blob/283da2bbb74d0a131a66703516d16698a05817c7/nova/conf/compute.py#L161-L163
[2]
https://specs.openstack.org/openstack/cinder-specs/specs/liberty/image-volume-cache.html

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/