We ran into an interesting issue today that seems counter-intuitive. We
use Ceph as our image backend when booting from ephemeral volumes and for
cinder volumes. We had a few remaining qcow2 images that would come up
and act as if there were no boot sector on the boot drive.
From what it looks
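If this is the usual qcow2-on-RBD mismatch, the likely cause is that Ceph's RBD backend maps images as-is, so a qcow2 image presents its container format to the guest instead of a partition table, which looks exactly like a missing boot sector. A common fix is converting the image to raw before uploading it to Glance. The image names below are placeholders:

```shell
# Check the actual on-disk format of the suspect image.
qemu-img info myimage.qcow2

# Convert qcow2 to raw, which RBD-backed instances can boot directly.
qemu-img convert -f qcow2 -O raw myimage.qcow2 myimage.raw

# Re-upload to Glance with the correct disk_format.
glance image-create --name "myimage-raw" --disk-format raw \
    --container-format bare --file myimage.raw
```

This is a sketch of the standard workaround, not necessarily what is happening in your environment; `qemu-img info` on the original image should confirm whether the format mismatch is the culprit.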
Cool, thanks, Jon. I've been following the thread on your scheduling issue
on the OpenStack list. I can't see our users hitting that issue, but it's
always good to keep in mind. :)
On Tue, Feb 17, 2015 at 1:17 PM, Jonathan Proulx wrote:
Recently (4 weeks?) moved from Icehouse to Juno. It was pretty smooth
(neutron has been much better behaved, though I know that's not
relevant to you).
One negative difference I noticed but haven't really dug into yet,
since it's not a common pattern here:
If I schedule >20 instances in one API
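For anyone wanting to reproduce this, a single API request can launch a batch of instances via the min/max count flags on `nova boot`. Flavor, image, and instance names here are placeholders:

```shell
# Request 25 instances in one API call; nova appends a numeric
# suffix to the instance name for each member of the batch.
nova boot --flavor m1.small --image ubuntu-14.04 \
    --min-count 25 --max-count 25 sched-test
```

This exercises the scheduler with one request rather than 25 separate ones, which is presumably the pattern that triggers the behavior described above.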
Nice - thanks, Jesse. :)
On Tue, Feb 17, 2015 at 10:35 AM, Jesse Keating wrote:
On 2/17/15 8:46 AM, Joe Topjian wrote:
> The only issue I'm aware of is that live snapshotting is disabled. Has
> anyone re-enabled this and seen issues? What was the procedure to re-enable?

We've re-enabled it. Live snapshots take more system resources, which
meant I had to dial back down my Ral
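For context on the re-enable procedure: in Juno the disable is effectively hard-coded through the minimum-libvirt-version check in nova/virt/libvirt/driver.py, so operators who turned it back on typically patched that check. My understanding is that later releases expose this as a config option instead; a sketch of that setting (verify it exists on your release before relying on it):

```ini
[workarounds]
# Allow live snapshots again. This option is not present in Juno itself,
# where the check lives in nova/virt/libvirt/driver.py.
disable_libvirt_livesnapshot = False
```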
Hello,
I'm beginning to plan for a Juno upgrade and wanted to get some feedback
from anyone else who has gone through the upgrade and has been running Juno
in production.
The environment that will be upgraded is pretty basic: nova-network, no
cells, Keystone v2. We run a RabbitMQ cluster, though,