Everyone should be able to comment on specs, as long as they have a Launchpad
account. It would probably be more productive to post these comments there.
As far as extending the spec to include usage reporting on all resources:
while I don’t disagree with you that it would be good to have those
On 5/26/2015 11:55 PM, Abhishek Talwar wrote:
Hi Folks,
I have a question regarding VM migration in a single-node OpenStack Kilo
installation. Since all the services run on a single host in a single-node
installation and we only have one available compute host, how does the
VM get migrated, and where do
All,
I’d like to thank everyone who came to the Vancouver Summit and our Official
Ops Meeting[1].
Since then we have officially been sanctioned[2] and made amazing progress with
true integration testing of our cookbooks[3][4], including leveraging Rally[5] for
benchmarking.
Turns out after th
> On 28 May 2015, at 19:10, Tim Bell wrote:
>
> Using UDP is a great workaround but it does not feel like a fix... can't the
> daemons realise that the syslog socket is not alive and reconnect? Given it
> affects most of the OpenStack projects, a fix inside one of the oslo logging
> libraries
Hi folks,
Is anyone using SaltStack to deploy OpenStack? I haven't seen much
discussion around this technology, hence my question, and maybe it can be a
point of inspiration.
Thanks,
Dani
We have some workloads that are only met by using ephemeral SSDs, so we
can't move those to RBD ephemeral. As far as the import conversion goes, it
looks like it would still do a blind one time conversion to raw (or
whatever format). I still wouldn't be able to use that function.
I guess ideally,
On Thu, May 28, 2015 at 12:55 PM, Jonathan Proulx wrote:
> On Thu, May 28, 2015 at 3:34 PM, Warren Wang wrote:
>> Even though we're using Ceph as a backend, we still use qcow2 images as our
>> golden images, since we still have a significant (maybe majority) number of
>> users using true ephemera
On Thu, May 28, 2015 at 3:21 PM, Fox, Kevin M wrote:
> I've experienced the opposite problem though. Downloading raw images and
> uploading them to the cloud is very slow. Doing it through qcow2 allows them
> to be compressed over the slow links. Ideally, the Ceph driver would take a
> qcow2 an
On Thu, May 28, 2015 at 3:34 PM, Warren Wang wrote:
> Even though we're using Ceph as a backend, we still use qcow2 images as our
> golden images, since we still have a significant (maybe majority) number of
> users using true ephemeral disks. It would be nice if glance was clever
> enough to conv
> It would be nice if glance was clever enough to convert where appropriate.
You're right, and it looks like that was added in the Kilo cycle:
https://review.openstack.org/#/c/159129/
-Chris
On Thu, May 28, 2015 at 3:34 PM, Warren Wang wrote:
> Even though we're using Ceph as a backend, we st
We use Ceph for glance and cinder, and we have some nodes that lack sufficient
disk space, so they have "libvirt_images_type=rbd".
The majority of the "Public" images are raw format, which is pretty awesome
because those huge Windows images (20-80 GB raw) spawn in 14s.
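For reference, that is the images_type option, which in Juno/Kilo lives under
the [libvirt] section of nova.conf. A minimal sketch (pool and user names here
are just examples, adjust to your deployment):

  [libvirt]
  images_type = rbd
  # example pool/user names below; not from the original post
  images_rbd_pool = vms
  images_rbd_ceph_conf = /etc/ceph/ceph.conf
  rbd_user = cinder
  rbd_secret_uuid = <libvirt secret UUID>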
When a user uploads a qcow
Cinder volume is available:
[root@openstack nova(keystone)]# cinder list | grep a9f8f997-bc87-4f39-9f9f-0b41169a0256
| a9f8f997-bc87-4f39-9f9f-0b41169a0256 | available | radius | 32 | None | true | |
But I get Block Device Mapping is I
Even though we're using Ceph as a backend, we still use qcow2 images as our
golden images, since we still have a significant (maybe majority) number of
users using true ephemeral disks. It would be nice if glance was clever
enough to convert where appropriate.
Warren
On Thu, May 28, 2015
I've experienced the opposite problem though. Downloading raw images and
uploading them to the cloud is very slow. Doing it through qcow2 allows them to
be compressed over the slow links. Ideally, the Ceph driver would take a qcow2
and convert it to raw on glance ingest rather than at boot.
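Doing the conversion by hand before upload is one way around it today;
roughly something like this (a sketch, file and image names are made up):

  # convert the compressed qcow2 to raw locally, then upload the raw result
  qemu-img convert -f qcow2 -O raw myimage.qcow2 myimage.raw
  glance image-create --name myimage --disk-format raw \
      --container-format bare --file myimage.raw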
Tha
David is right: Ceph implements volume snapshotting at the RBD level,
not even the RADOS level: a whole two levels of abstraction above the file system.
It doesn't matter if it's XFS, BtrFS, Ext4, or VFAT (if Ceph supported
VFAT): Ceph RBD takes care of it before individual chunks of an RBD
volume are passed t
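Put differently, the snapshot is just a plain RBD operation regardless of the
backing file system, e.g. (pool and volume names here are illustrative only):

  rbd snap create volumes/volume-0001@snap1
  rbd snap ls volumes/volume-0001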
> Using UDP is a great workaround but it does not feel like a fix...
Additionally, it's not possible to use TLS/SSL with syslog and UDP -- TCP
is required.
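For the record, a TLS forwarding setup in rsyslog therefore has to go over
TCP, roughly along these lines (a sketch; paths and hostname are placeholders):

  # /etc/rsyslog.d/50-forward-tls.conf -- sketch only
  $DefaultNetstreamDriver gtls
  $DefaultNetstreamDriverCAFile /etc/ssl/certs/ca.pem
  $ActionSendStreamDriverMode 1              # enforce TLS
  $ActionSendStreamDriverAuthMode x509/name
  *.* @@logserver.example.com:6514           # @@ means TCP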
> -----Original Message-----
> From: Christian Schwede [mailto:cschw...@redhat.com]
> Sent: 28 May 2015 20:03
> To: George Shuklin; openstack-operators@lists.openstack.org
> Subject: Re: [Openstack-operators] 100% CPU and hangs if syslog is restarted
>
> On 28.05.15 18:56, George Shuklin wrote:
>
On 28.05.15 18:56, George Shuklin wrote:
> Today we've discovered a very serious bug in Juno:
> https://bugs.launchpad.net/nova/+bug/1459726
>
> In short: if you're using syslog and restart rsyslog, all API
> processes will eventually get stuck at 100% CPU usage without doing
> anything.
>
> Is a
Hi,
As I commented on the bug, it sounds similar to:
https://bugs.launchpad.net/ubuntu/+source/python-eventlet/+bug/1452312
https://github.com/eventlet/eventlet/issues/192
2015-05-29 1:56 GMT+09:00 George Shuklin :
> Hello.
>
> Today we've discovered a very serious bug in Juno:
> https://bugs.lau
I bet you can work around this by switching to UDP remote syslog. You'd lose
messages, but your processes *should* "fire and forget"
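In rsyslog terms that is just forwarding with a single @ (UDP) instead of @@
(TCP), e.g. (hostname is a placeholder):

  *.* @logserver.example.com:514    # single @ = UDP, fire and forget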
> On May 28, 2015, at 9:56 AM, George Shuklin wrote:
>
> Hello.
>
> Today we've discover a very serious bug in juno:
> https://bugs.launchpad.net/nova/+bug/1459
Hello,
Yeah, I ran into it last fall:
http://www.gossamer-threads.com/lists/openstack/operators/41876
Good to know that this issue still exists in Juno (we're still on
Icehouse). Thanks for the note. :)
Joe
On Thu, May 28, 2015 at 10:56 AM, George Shuklin
wrote:
> Hello.
>
> Today we've disc
Hello.
Today we've discovered a very serious bug in Juno:
https://bugs.launchpad.net/nova/+bug/1459726
In short: if you're using syslog and restart rsyslog, all API
processes will eventually get stuck at 100% CPU usage without doing anything.
Has anyone hit this bug before? It looks like very
Yep. It's at the Ceph level (not the XFS level).
On Thu, May 28, 2015 at 8:40 AM, Stephen Cousins
wrote:
> Hi David,
>
> So Ceph will use Copy-on-write even with XFS?
>
> Thanks,
>
> Steve
>
> On Thu, May 28, 2015 at 10:36 AM, David Medberry
> wrote:
>
>> This isn't remotely related to btrfs. I
Hi David,
So Ceph will use Copy-on-write even with XFS?
Thanks,
Steve
On Thu, May 28, 2015 at 10:36 AM, David Medberry
wrote:
> This isn't remotely related to btrfs. It works fine with XFS. Not sure how
> that works in Fuel, never used it.
>
> On Thu, May 28, 2015 at 8:01 AM, Forrest Flagg
>
This isn't remotely related to btrfs. It works fine with XFS. Not sure how
that works in Fuel, never used it.
On Thu, May 28, 2015 at 8:01 AM, Forrest Flagg
wrote:
> I'm also curious about this. Here are some other pieces of information
> relevant to the discussion. Maybe someone here can clea
I'm also curious about this. Here are some other pieces of information
relevant to the discussion. Maybe someone here can clear this up for me as
well. The documentation for Fuel 6.0 (not sure what they changed for 6.1)
[1] states that when using Ceph one should disable qcow2 so that images are
The primary difference is the ability for Ceph to make zero-byte copies.
When you use qcow2, Ceph must actually create a complete copy instead of a
zero-byte copy, as it cannot do its own copy-on-write tricks with a qcow2
image.
So, yes, it will work fine with qcow2 images but it won't be as perfor
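That zero-byte copy is the usual RBD snapshot/protect/clone dance on the raw
image, something like the following (pool and image names are illustrative):

  rbd snap create images/<image-uuid>@snap
  rbd snap protect images/<image-uuid>@snap
  rbd clone images/<image-uuid>@snap vms/<instance-uuid>_disk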
and better explained here:
http://ceph.com/docs/master/rbd/qemu-rbd/
On Thu, May 28, 2015 at 6:02 AM, David Medberry
wrote:
> The primary difference is the ability for CEPH to make zero byte copies.
> When you use qcow2, ceph must actually create a complete copy instead of a
> zero byte copy as
Hi
I am now trying to use Fuel 6.1 to deploy OpenStack Juno, with Ceph as the
Cinder, Nova, and Glance backend.
The Fuel documentation suggests using RAW format images when using Ceph,
but when I upload a qcow2 image it seems to work fine.
What is the difference between using qcow2 and RAW with Ceph?
--
Shake Chen
Hi Team,
I have installed Kilo on Ubuntu 14.04. Everything was working fine, but
suddenly I am getting an error from glance image-list.
It was working fine before I installed MongoDB, but now it is not.
root@kilocontroller:/home/oss# glance image-list
None is not of type u'string'
Failed valida