** Also affects: cinder
   Importance: Undecided
       Status: New

https://bugs.launchpad.net/bugs/1705340
Title:
  Unable to boot large instances due to prlimit setting

Status in Cinder:
  New
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  I recently had the need to migrate some instances from an old Kilo cluster to a new Ocata one. Some of the snapshots were 120GB or more (terrible, I know). Due to the prlimit limit of cpu_time=8, these instances are unable to spawn. Changing nova/virt/images.py line 42 from cpu_time=8 to cpu_time=16 allowed the instances to boot properly.

  The limit was originally implemented at 2 seconds and later raised to 8 seconds as part of:
  https://review.openstack.org/gitweb?p=openstack/nova.git;a=commitdiff;h=068d851561addfefb2b812d91dc2011077cb6e1d

  Here's my qemu-img info process taking more than 8 seconds (the first line is the truncated tail of the preceding "execute" debug entry):

  ...9ddeea47df894145.part execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:355
  2017-07-19 19:47:42.849 7 DEBUG oslo_concurrency.processutils [req-7ed3314d-1c11-4dd8-b612-f8d9c022417f ff236d57a57dd42cb5811c998e30fca1a76233873b9f08330f725fb639c8b025 9776d48734a24c23a4aef51cb78cc269 - - -] CMD "/usr/bin/python2 -m oslo_concurrency.prlimit --as=1073741824 --cpu=16 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/41ebff725eab55d368f97bc79ddeea47df894145.part" returned: 0 in 8.639s execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:385

  Would it be possible to increase the default setting, or better yet make it a configuration variable so we don't have to keep chasing it?
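
  For reference, the limit comes from the ProcessLimits object that nova/virt/images.py passes to oslo.concurrency when it runs qemu-img info. Below is a minimal sketch of what making it configurable could look like; the option name qemu_img_info_cpu_time_limit is hypothetical, not an existing nova.conf setting:

    # Sketch only -- modelled on the prlimit usage in nova/virt/images.py.
    # The config option below is hypothetical, not an existing nova option.
    from oslo_concurrency import processutils
    from oslo_config import cfg
    from oslo_utils import units

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.IntOpt('qemu_img_info_cpu_time_limit',
                   default=8,
                   help='CPU time limit in seconds for qemu-img info calls.'),
    ])

    def qemu_img_info(path):
        # Run qemu-img info under prlimit so a huge or malicious image
        # cannot burn unbounded CPU time or address space on the host.
        limits = processutils.ProcessLimits(
            cpu_time=CONF.qemu_img_info_cpu_time_limit,
            address_space=1 * units.Gi)
        out, _err = processutils.execute(
            'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info', path,
            prlimit=limits)
        return out

  With the default left at 8 this would match today's behaviour, and operators with very large base images could raise it in nova.conf instead of patching the code.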