On 6/7/2018 12:56 PM, melanie witt wrote:
Recently, we've received interest in increasing the maximum number of volumes allowed to attach to a single instance > 26. The limit of 26 comes from a historical limitation in libvirt (if I remember correctly) and no longer exists at the libvirt level today. So, we're looking at providing a way to attach more than 26 volumes to a single instance and we want your feedback.

The 26 volumes thing is a libvirt driver restriction.
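
If I'm remembering the history right, the cap comes from the driver handing out single-letter /dev/vdX device names, which gives exactly 26 slots. A trivial sketch of that arithmetic (illustration only, not the actual driver code):

    import string

    # One device name per lowercase letter: 'vda' through 'vdz'
    dev_names = ['vd' + letter for letter in string.ascii_lowercase]
    print(len(dev_names))  # 26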

There was a bug at one point where powervm (or powervc) was capping out at 80 volumes per instance due to restrictions in the build_requests table in the API DB:

https://bugs.launchpad.net/nova/+bug/1621138

They wanted to get to 128, because that's how power rolls.


We'd like to hear from operators and users about their use cases for wanting to be able to attach a large number of volumes to a single instance. If you could share your use cases, it would help us greatly in moving forward with an approach for increasing the maximum.

Some ideas that have been discussed so far include:

A) Selecting a new, higher maximum that still yields reasonable performance on a single compute host (64 or 128, for example). Pros: helps prevent poor performance on a compute host from attaching too many volumes. Cons: doesn't let anyone opt in to a higher maximum if their environment can handle it.

B) Creating a config option to let operators choose how many volumes can be attached to a single instance. Pros: lets operators opt in to a maximum that works in their environment. Cons: it's not discoverable for those calling the API.

I'm not a fan of a non-discoverable config option that indirectly impacts API behavior, e.g. on cloud A I can boot from volume with 64 volumes but not on cloud B.
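
For what it's worth, the config-option route in B would presumably just be an integer option in nova.conf. A rough sketch with oslo.config, using a made-up option name and group (nothing here is agreed upon):

    from oslo_config import cfg

    # Hypothetical option -- the name, group and default are placeholders
    volume_opts = [
        cfg.IntOpt('max_volumes_per_instance',
                   default=26,
                   min=1,
                   help='Maximum number of volumes that can be attached '
                        'to a single instance.'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(volume_opts, group='compute')

But again, the operator can see that value and the API user can't, which is the interop problem.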


C) Creating a configurable API limit for the maximum number of volumes to attach to a single instance that is either a quota or similar to a quota. Pros: lets operators opt in to a maximum that works in their environment. Cons: it's yet another quota?

This seems the most reasonable to me if we're going to do this, but I'm probably in the minority. Yes, more quota limits suck, but this one is (1) discoverable by API users and therefore (2) interoperable.

If we did the quota thing, I'd probably default to unlimited and let the cinder volume quota cap it for the project as it does today. Then admins can tune it as needed.
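
To make that concrete, the check at attach time could look something like this (purely illustrative, hypothetical names; -1 as unlimited follows the usual quota convention):

    class OverQuota(Exception):
        """Raised when attaching another volume would exceed the limit."""

    def check_volume_attach_limit(current_attachments, limit):
        # Hypothetical helper, not nova's actual quota code.
        # limit == -1 means unlimited, per the usual quota convention.
        if limit != -1 and current_attachments + 1 > limit:
            raise OverQuota('instance has %d volumes attached; '
                            'limit is %d' % (current_attachments, limit))

With the default at -1, behavior doesn't change for anyone until an admin tunes the limit down.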

--

Thanks,

Matt
