+1
I encountered the negative effects of the disparity between the cinder CLI and
the OpenStack CLI just an hour before receiving Tim’s reply. The missing features
of the OpenStack client relative to the individual project clients trip me up
multiple times per week on average.
Sometimes I’ve had similar problems that can be fixed by running fsck against
the rbd device on bare metal, out of band, via rbd-nbd. I’ve been thinking it’s
related to trim/discard and some sort of disk geometry mismatch.
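A rough sketch of that out-of-band repair, assuming the instance is stopped first (the pool, image, and nbd device names here are placeholders):

  # map the RBD image to a local block device with rbd-nbd
  rbd-nbd map volumes/volume-1234        # prints the device, e.g. /dev/nbd0
  # check and repair the filesystem out of band
  # (for a partitioned image, point fsck at /dev/nbd0p1 instead)
  fsck -f /dev/nbd0
  # detach again before starting the instance back up
  rbd-nbd unmap /dev/nbd0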
> On Apr 30, 2018, at 1:22 PM, Jonathan Proulx wrote:
As far as that goes, if you have 2 haproxies you might as well use both. Use two
VIPs with DNS round robin between them, and configure keepalived so that each
haproxy node takes one VIP as primary with the other node’s VIP as backup. This
has worked well for me for the past couple of years.
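A minimal keepalived.conf sketch for one node under that scheme, with placeholder VIPs 10.0.0.10 and 10.0.0.11 on eth0 (the second node mirrors it with the MASTER/BACKUP states and priorities swapped, and both VIPs go into DNS as A records for the endpoint name):

  vrrp_instance VIP_A {              # this node owns VIP A by default
      state MASTER
      interface eth0
      virtual_router_id 51
      priority 150
      virtual_ipaddress { 10.0.0.10 }
  }
  vrrp_instance VIP_B {              # and backs up the other node's VIP B
      state BACKUP
      interface eth0
      virtual_router_id 52
      priority 100
      virtual_ipaddress { 10.0.0.11 }
  }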
Ceilometer (now Panko) VM "exists" events, coerced to look like jobs from an HPC
batch system.
> On Mar 13, 2018, at 6:11 PM, Simon Leinen wrote:
>
> Lars Kellogg-Stedman writes:
>> I'm curious what folks out there are using for chargeback/billing in
>>
s.frie...@windriver.com> wrote:
>
> On 11/02/2017 08:48 AM, Mike Lowe wrote:
>> After moving from CentOS 7.3 to 7.4, I’ve had trouble getting live migration
>> to work when a volume is attached. As it turns out when a live migration
>> takes place the libvirt driver rewrit
our problem [1]
>
> [1] https://bugs.launchpad.net/nova/+bug/1715569
>
> Cheers,
> Sergio
>
> On 2 November 2017 at 08:48, Mike Lowe <joml...@iu.edu> wrote:
> After moving from CentO
After moving from CentOS 7.3 to 7.4, I’ve had trouble getting live migration to
work when a volume is attached. As it turns out, when a live migration takes
place the libvirt driver rewrites portions of the XML definition for the
destination hypervisor and gets it wrong. Here is an example.
I did the minor point release update from 10.0.2 to 10.0.4 and found my cinder
volume services would go out to lunch during startup. They would do their
initial heartbeat, then get marked as dead, never sending another heartbeat. The
process was running and there were constant logs about ceph
format: 2
> features: layering, striping
> flags:
> stripe unit: 4096 kB
> stripe count: 1
>
>
> John Petrini
>
>
> On Tue, Aug 1, 2017 at 3:24 PM, Mike Lowe <joml...@iu.edu> wrote:
> There is no
> On Tue, Aug 1, 2017 at 2:53 PM, Mike Lowe <joml...@iu.edu>
Strictly speaking, I don’t think this is the case anymore for Mitaka or later.
Snapshotting in nova does take more space as the image is flattened, but the dumb
download-then-upload back into ceph has been cut out. With careful attention
paid to discard/TRIM I believe you can maintain the thin
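As a hedged sketch of the discard plumbing that makes that work (the image name is a placeholder, and hw_disk_discard is a nova.conf option on the computes rather than an image property):

  # nova.conf on the compute nodes
  [libvirt]
  hw_disk_discard = unmap

  # on the image, so guests get a SCSI bus that can pass discard through
  openstack image set --property hw_scsi_model=virtio-scsi \
      --property hw_disk_bus=scsi centos7-base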
We have run ceph-backed cinder from Liberty through Newton; with the exception
of a libvirt 2.x bug that should now be fixed, cinder really hasn't caused us
any problems.
> On May 31, 2017, at 6:12 PM, Joshua Harlow wrote:
>
> Hi folks,
>
> So I was
The whole cell thing tripped me up earlier this week. From what I understand,
it’s hard-coded in the upgrade scripts to be the same as the nova_api db with
cell0 appended to the db name, but there is a patch in to change this behavior
to match what the install docs say. So it looks like if you
I think it was somewhere around the 2M mark.
> On Mar 30, 2017, at 8:33 AM, Alex Krzos <akr...@redhat.com> wrote:
>
> On Tue, Mar 28, 2017 at 3:55 PM, Mike Lowe <joml...@iu.edu> wrote:
>> I recently got into trouble with a lar
I recently got into trouble with a large backlog. What I found was that at some
point the backlog got too large for gnocchi to function effectively. When using
ceph, the list of metric objects is kept in an omap object, which normally is a
quick and efficient way to store this list. However, at some point
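A hedged way to gauge how big that omap listing has grown (the pool and object names here are placeholders; the real names depend on the gnocchi ceph driver layout in your release):

  rados -p gnocchi listomapkeys measure | wc -l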
new instance.
>
> Saverio
>
> 2017-03-13 15:47 GMT+01:00 Mike Lowe <joml...@iu.edu>:
>> Over the weekend a user reported that his instance was in a stopped state
>> and could not be started, on further examination it appears that the vm had
>> crashed and
How would you account for heterogeneous node types? Flavors by convention put
the hardware generation in the name as a digit.
> On Mar 15, 2017, at 11:42 PM, Kris G. Lindgren wrote:
>
> So how do you bill for someone when you have a 24 core, 256GB
Over the weekend a user reported that his instance was in a stopped state and
could not be started. On further examination it appears that the vm had crashed,
and the strange thing is that the root disk is now gone. Has anybody come
across anything like this before?
And why on earth is it
Do you have this in your haproxy frontend config?
reqadd X-Forwarded-Proto:\ https
And this in your keystone.conf?
secure_proxy_ssl_header=HTTP_X_FORWARDED_PROTO
I think that’s what I had to do to tell haproxy to add a header that keystone
then matched to know when to return https.
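For context, a hedged sketch of where those two lines sit, with placeholder names and addresses (newer haproxy releases use http-request set-header instead of reqadd, and the keystone.conf section may differ by release):

  # haproxy.cfg
  frontend keystone_public
      bind 10.0.0.10:5000 ssl crt /etc/haproxy/keystone.pem
      reqadd X-Forwarded-Proto:\ https
      default_backend keystone_public_servers

  # keystone.conf
  [DEFAULT]
  secure_proxy_ssl_header = HTTP_X_FORWARDED_PROTO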
>> On Dec 20, 2016, at 9:57 AM, Mike Lowe <joml...@iu.edu> wrote:
>>
>> I got a rather nasty surprise upgrading from CentOS 7.2 to 7.3. As far as I
>
I did manage to get the fc24 libvirt 2.5.0 srpm to compile today. This seems
to have fixed my problem.
> On Dec 20, 2016, at 7:27 PM, Erik McCormick <emccorm...@cirrusseven.com>
> wrote:
>
>> On Tue, Dec 20, 2016 at 11:57 AM, Mike Lowe <joml...
I got a rather nasty surprise upgrading from CentOS 7.2 to 7.3. As far as I
can tell the libvirt 2.0.0 that ships with 7.3 doesn’t behave the same way as
the 1.2.17 that ships with 7.2 when using ceph with cephx auth during volume
attachment using virtio-scsi. It looks like it fails to add
I don’t think we’ve figured out how to do this, but I’d certainly like to
attend. We have promised integration with Wrangler’s 10PB of lustre that sits
across the hot aisle from us, so I probably need to figure out how to do this
between now and Barcelona.
> On Jul 12, 2016, at 10:03 AM,
I was having just this problem last week. We updated to 3.6.2 from 3.5.4 on
ubuntu and started seeing crashes due to excessive memory usage. I did this on
each node of my rabbit cluster and haven’t had any problems since:
'rabbitmq-plugins disable rabbitmq_management'. From what I could gather
I use qemu-guest-agent inside of a vm and ensure that the mounts are noatime.
You can then use the guest agent to issue a freeze and get consistent rbd snaps
of the backing devices. Those snaps can be exported, preferably differentially
to skip over the unused portions, off to some other
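A minimal sketch of that flow, assuming the guest agent channel is enabled on the instance and using placeholder domain, pool, volume, and snapshot names:

  # quiesce the guest filesystems via qemu-guest-agent, then snapshot the RBD image
  virsh domfsfreeze instance-000000a1
  rbd snap create volumes/volume-1234@snap-today
  virsh domfsthaw instance-000000a1
  # export only the blocks that changed since the previous snapshot
  rbd export-diff --from-snap snap-yesterday \
      volumes/volume-1234@snap-today volume-1234.diff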