All,
I’d like to thank everyone who came to the Vancouver Summit and our Official
Ops Meeting[1].
Since then we have officially been sanctioned[2] and made amazing progress with
true integration testing of our cookbooks[3][4], including leveraging Rally[5]
for benchmarking.
Turns out after
On 28 May 2015, at 19:10, Tim Bell tim.b...@cern.ch wrote:
Using UDP is a great workaround, but it does not feel like a fix... can't the
daemons realise that the syslog socket is not alive and reconnect? Given it
affects most of the OpenStack projects, a fix inside one of the oslo logging
On Thu, May 28, 2015 at 12:55 PM, Jonathan Proulx j...@jonproulx.com wrote:
On Thu, May 28, 2015 at 3:34 PM, Warren Wang war...@wangspeed.com wrote:
Even though we're using Ceph as a backend, we still use qcow2 images as our
golden images, since we still have a significant (maybe majority)
Hi folks,
Is anyone using SaltStack to deploy OpenStack? I haven't seen much
discussion around this tech, hence my question and maybe a point of
inspiration.
Thanks,
Dani
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
On Thu, May 28, 2015 at 3:21 PM, Fox, Kevin M kevin@pnnl.gov wrote:
I've experienced the opposite problem though. Downloading raw images and
uploading them to the cloud is very slow. Doing it through qcow2 allows them
to be compressed over the slow links. Ideally, the Ceph driver would
Hi,
As I commented on the bug, it sounds similar to:
https://bugs.launchpad.net/ubuntu/+source/python-eventlet/+bug/1452312
https://github.com/eventlet/eventlet/issues/192
2015-05-29 1:56 GMT+09:00 George Shuklin george.shuk...@gmail.com:
Hello.
Today we've discovered a very serious bug in
-Original Message-
From: Christian Schwede [mailto:cschw...@redhat.com]
Sent: 28 May 2015 20:03
To: George Shuklin; openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] 100% CPU and hangs if syslog is restarted
On 28.05.15 18:56, George Shuklin wrote:
Today
Hello,
Yeah, I ran into it last fall:
http://www.gossamer-threads.com/lists/openstack/operators/41876
Good to know that this issue still exists in Juno (we're still on
Icehouse). Thanks for the note. :)
Joe
On Thu, May 28, 2015 at 10:56 AM, George Shuklin george.shuk...@gmail.com
wrote:
Cinder volume is available:
[root@openstack nova(keystone)]# cinder list | grep a9f8f997-bc87-4f39-9f9f-0b41169a0256
| a9f8f997-bc87-4f39-9f9f-0b41169a0256 | available | radius | 32 | None | true | |
But I get Block Device Mapping is
We use ceph for glance and cinder, and we have some nodes that lack sufficient
disk space, so they have libvirt_images_type=rbd
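For context, here is a sketch of the Juno-era flat option names that setting sits among; the pool, user, and secret UUID values below are illustrative placeholders, not taken from the message (later releases move these under the [libvirt] group as images_type etc.):

```ini
# nova.conf on a compute node backing ephemeral disks with RBD
# (Juno-era flat option names, matching libvirt_images_type above;
# values are illustrative)
libvirt_images_type = rbd
libvirt_images_rbd_pool = vms
libvirt_images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 00000000-0000-0000-0000-000000000000
```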
The majority of the public images are raw format, which is pretty awesome
because those huge Windows images (20-80 GB raw) spawn in 14s.
When a user uploads a qcow2
Even though we're using Ceph as a backend, we still use qcow2 images as our
golden images, since we still have a significant (maybe majority) number of
users using true ephemeral disks. It would be nice if glance was clever
enough to convert where appropriate.
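Any "convert where appropriate" logic first has to tell qcow2 from raw; qcow2 files begin with the 4-byte magic QFI\xfb, followed by a big-endian version number. A minimal, illustrative detector (not Glance's actual code):

```python
import struct

QCOW2_MAGIC = b"QFI\xfb"  # first four bytes of every qcow2 file

def image_format(data: bytes) -> str:
    """Guess 'qcow2' vs 'raw' from the leading bytes of an image file."""
    if data[:4] == QCOW2_MAGIC:
        # bytes 4-8 hold the big-endian qcow version (2 or 3)
        version = struct.unpack(">I", data[4:8])[0]
        return f"qcow2 (v{version})"
    # raw images have no header, so anything else is treated as raw here
    return "raw"
```

Real tooling (qemu-img, Glance in Kilo and later) inspects more than the magic, but the header check is the essential first step.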
Warren
On Thu, May 28, 2015
I bet you can work around this by switching to UDP remote syslog. You'd lose
messages, but your processes *should* fire and forget.
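The fire-and-forget behaviour can be seen with Python's stock SysLogHandler: a UDP datagram socket is never connected, so sends return immediately even with no syslog daemon listening. A minimal sketch (the endpoint address is illustrative, and this is not oslo's logging setup):

```python
import logging
import logging.handlers
import socket

# UDP syslog: datagrams are sent unconnected, so emit() returns even if
# nothing is listening -- messages are silently dropped, never blocking.
handler = logging.handlers.SysLogHandler(
    address=("127.0.0.1", 514),      # illustrative remote-syslog endpoint
    socktype=socket.SOCK_DGRAM,
)
log = logging.getLogger("demo")
log.addHandler(handler)
log.warning("this send cannot block, even with no syslog daemon running")
```

By contrast, the default unix-socket stream connection to /dev/log is exactly what goes stale when rsyslog restarts.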
On May 28, 2015, at 9:56 AM, George Shuklin george.shuk...@gmail.com wrote:
Hello.
Today we've discovered a very serious bug in juno:
It would be nice if glance was clever enough to convert where appropriate.
You're right, and it looks like that was added in the Kilo cycle:
https://review.openstack.org/#/c/159129/
-Chris
On Thu, May 28, 2015 at 3:34 PM, Warren Wang war...@wangspeed.com wrote:
Even though we're using Ceph
On 28.05.15 18:56, George Shuklin wrote:
Today we've discovered a very serious bug in juno:
https://bugs.launchpad.net/nova/+bug/1459726
In short: if you're using syslog and restart rsyslog, all API
processes will eventually get stuck at 100% CPU usage without doing
anything.
Is anyone
On Thu, May 28, 2015 at 3:34 PM, Warren Wang war...@wangspeed.com wrote:
Even though we're using Ceph as a backend, we still use qcow2 images as our
golden images, since we still have a significant (maybe majority) number of
users using true ephemeral disks. It would be nice if glance was
David is right: Ceph implements volume snapshotting at the RBD level,
not even the RADOS level, a whole two levels of abstraction above the file
system. It doesn't matter if it's XFS, Btrfs, ext4, or VFAT (if Ceph supported
VFAT): Ceph RBD takes care of it before individual chunks of an RBD
volume are passed
On 5/26/2015 11:55 PM, Abhishek Talwar wrote:
Hi Folks,
I have a question regarding VM migration in a single-node OpenStack Kilo
installation. As all the services run on a single host in a single-node
installation and we have only 1 available compute host, how does the
VM get migrated and where
Hi Team,
I have installed Kilo on Ubuntu 14.04. Everything was working fine, but
suddenly I am getting an error from glance image-list.
Before installing mongodb it was working fine, but now it is not.
root@kilocontroller:/home/oss# glance image-list
None is not of type u'string'
Failed
On 26-05-2015 17:12, Assaf Muller wrote:
I'd install them via the same tool, make sure the outcome is the same Neutron
code on both nodes with the same patches applied, recreate the routers
entirely
and see what happens. If it works, work from there.
Hello,
I confirm that HA is working when
Hi
Now I'm trying to use Fuel 6.1 to deploy OpenStack Juno, using Ceph as the
Cinder, Nova, and Glance backend.
The Fuel documentation suggests using RAW format images with Ceph,
but if I upload a qcow2 image, it seems to work well.
What is the difference between using qcow2 and RAW in Ceph?
--
Shake Chen
I'm also curious about this. Here are some other pieces of information
relevant to the discussion. Maybe someone here can clear this up for me as
well. The documentation for Fuel 6.0 (not sure what they changed for 6.1)
[1] states that when using Ceph one should disable qcow2 so that images are
This isn't remotely related to btrfs. It works fine with XFS. Not sure how
that works in Fuel, never used it.
On Thu, May 28, 2015 at 8:01 AM, Forrest Flagg fostro.fl...@gmail.com
wrote:
I'm also curious about this. Here are some other pieces of information
relevant to the discussion. Maybe
Hi David,
So Ceph will use Copy-on-write even with XFS?
Thanks,
Steve
On Thu, May 28, 2015 at 10:36 AM, David Medberry openst...@medberry.net
wrote:
This isn't remotely related to btrfs. It works fine with XFS. Not sure how
that works in Fuel, never used it.
On Thu, May 28, 2015 at 8:01
and better explained here:
http://ceph.com/docs/master/rbd/qemu-rbd/
On Thu, May 28, 2015 at 6:02 AM, David Medberry openst...@medberry.net
wrote:
The primary difference is the ability for CEPH to make zero byte copies.
When you use qcow2, ceph must actually create a complete copy instead of a
The primary difference is the ability for Ceph to make zero-byte copies.
When you use qcow2, Ceph must actually create a complete copy instead of a
zero-byte copy, as it cannot do its own copy-on-write tricks with a qcow2
image.
So, yes, it will work fine with qcow2 images but it won't be as
Yep. It's at the Ceph level (not the XFS level).
On Thu, May 28, 2015 at 8:40 AM, Stephen Cousins steve.cous...@maine.edu
wrote:
Hi David,
So Ceph will use Copy-on-write even with XFS?
Thanks,
Steve
On Thu, May 28, 2015 at 10:36 AM, David Medberry openst...@medberry.net
wrote:
This
Hello.
Today we've discovered a very serious bug in juno:
https://bugs.launchpad.net/nova/+bug/1459726
In short: if you're using syslog and restart rsyslog, all API
processes will eventually get stuck at 100% CPU usage without doing anything.
Has anyone hit this bug before? It looks like