[Openstack-operators] Disable console for an instance

2016-10-12 Thread Blair Bethwaite
Hi all,

Does anyone know whether there is a way to disable the novnc console on a
per instance basis?

Cheers,
Blair
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-operators][ceph][nova] How do you handle Nova on Ceph?

2016-10-12 Thread Clint Byrum
Excerpts from Adam Kijak's message of 2016-10-12 12:23:41 +:
> > 
> > From: Xav Paice 
> > Sent: Monday, October 10, 2016 8:41 PM
> > To: openstack-operators@lists.openstack.org
> > Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova] How do 
> > you handle Nova on Ceph?
> > 
> > On Mon, 2016-10-10 at 13:29 +, Adam Kijak wrote:
> > > Hello,
> > >
> > > We use a Ceph cluster for Nova (Glance and Cinder as well) and over
> > > time,
> > > more and more data is stored there. We can't keep the cluster so big
> > > because of
> > > Ceph's limitations. Sooner or later it needs to be closed for adding
> > > new
> > > instances, images and volumes. Not to mention it's a big failure
> > > domain.
> > 
> > I'm really keen to hear more about those limitations.
> 
> Basically it's all related to the failure domain ("blast radius") and risk 
> management.
> A bigger Ceph cluster means more users.

Are these risks well documented? Since Ceph is specifically designed
_not_ to have the kind of large blast radius that one might see with,
say, a centralized SAN, I'm curious to hear what events trigger
cluster-wide blasts.

> Growing the Ceph cluster temporarily slows it down, so many users will be 
> affected.

One might say that a Ceph cluster that can't be grown without the users
noticing is an over-subscribed Ceph cluster. My understanding is that
one is always advised to provision a certain amount of cluster capacity
for growing and replicating to replaced drives.

> There are bugs in Ceph which can cause data corruption. It's rare, but when 
> it happens 
> it can affect many (maybe all) users of the Ceph cluster.
> 

:(

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-operators][ceph][nova] How do you handle Nova on Ceph?

2016-10-12 Thread Warren Wang
If the fault domain is a concern, you can always split the cloud up into 3
regions, each having a dedicated Ceph cluster. It isn't necessarily going to
mean more hardware, just logical splits. This is kind of assuming that the
network doesn't share the same fault domain, though.

Alternatively, you can split the hardware for the Ceph boxes into multiple
clusters, and use multi backend Cinder to talk to the same set of
hypervisors to use multiple Ceph clusters. We're doing that to migrate from
one Ceph cluster to another. You can even mount a volume from each cluster
into a single instance.
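
Roughly, the cinder.conf side of that looks like the sketch below (backend
names, ceph.conf paths and secret UUIDs are placeholders, not our actual
values):

  [DEFAULT]
  enabled_backends = ceph-old,ceph-new

  [ceph-old]
  volume_backend_name = ceph-old
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  rbd_ceph_conf = /etc/ceph/ceph-old.conf
  rbd_pool = volumes
  rbd_user = cinder
  rbd_secret_uuid = 11111111-1111-1111-1111-111111111111

  [ceph-new]
  volume_backend_name = ceph-new
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  rbd_ceph_conf = /etc/ceph/ceph-new.conf
  rbd_pool = volumes
  rbd_user = cinder
  rbd_secret_uuid = 22222222-2222-2222-2222-222222222222

Each backend then gets its own volume type (cinder type-create, then
cinder type-key <type> set volume_backend_name=...), and every hypervisor
needs a libvirt secret defined for each cluster's cinder key.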

Keep in mind that you don't really want to shrink a Ceph cluster too much.
What's "too big"? You should keep growing it so that the fault domains aren't
too small (3 physical racks minimum); otherwise you guarantee that the entire
cluster stops if you lose the network.

Just my 2 cents,
Warren

On Wed, Oct 12, 2016 at 8:35 AM, Adam Kijak  wrote:

> > ___
> > From: Abel Lopez 
> > Sent: Monday, October 10, 2016 9:57 PM
> > To: Adam Kijak
> > Cc: openstack-operators
> > Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova]
> How do you handle Nova on Ceph?
> >
> > Have you thought about dedicated pools for cinder/nova and a separate
> pool for glance, and any other uses you might have?
> > You need to setup secrets on kvm, but you can have cinder creating
> volumes from glance images quickly in different pools
>
> We already have separate pools for images, volumes and instances.
> Separate pools don't really split the failure domain though.
> Also AFAIK you can't set up multiple pools for instances in nova.conf,
> right?
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Ubuntu package for Octavia

2016-10-12 Thread Xav Paice
I highly recommend looking into Giftwrap for that, until there are UCA
packages.

The things missing from the packages that Giftwrap will produce are init
scripts, config file examples, and the various user and directory setup
stuff.  That's easy enough to put into config management or a separate
package if you want to.
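
If it helps, a minimal systemd unit is usually all the "init script" you
need. This is only a sketch and assumes a virtualenv install under
/opt/octavia; adjust the paths, user and config location to whatever your
packaging or config management lays down:

  # /etc/systemd/system/octavia-api.service (sketch, paths are assumptions)
  [Unit]
  Description=OpenStack Octavia API
  After=network.target

  [Service]
  User=octavia
  Group=octavia
  ExecStart=/opt/octavia/bin/octavia-api --config-file /etc/octavia/octavia.conf
  Restart=on-failure

  [Install]
  WantedBy=multi-user.target

Similar units for octavia-worker, octavia-health-manager and
octavia-housekeeping, plus the config files and user creation, are the bits
you'd carry in config management.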

On 13 October 2016 at 01:25, Lutz Birkhahn  wrote:

> Has anyone seen Ubuntu packages for Octavia yet?
>
> We’re running Ubuntu 16.04 with Newton, but for whatever reason I can not
> find any Octavia package…
>
> So far I’ve only found in
> https://wiki.openstack.org/wiki/Neutron/LBaaS/HowToRun the following:
>
>  Ubuntu Packages Setup: Install octavia with your favorite
> distribution: “pip install octavia”
>
> That was not exactly what we would like to do in our production cloud…
>
> Thanks,
>
> /lutz
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] glance, nova backed by NFS

2016-10-12 Thread Curtis
On Wed, Oct 12, 2016 at 12:34 PM, James Penick  wrote:
> Are you backing both glance and nova-compute with NFS? If you're only
> putting the glance store on NFS you don't need any special changes. It'll
> Just Work.

I've got both glance and nova backed by NFS. Haven't put up cinder
yet, but that will also be NFS backed. I just have very limited
storage on the compute hosts, basically just enough for the operating
system; this is just a small but permanent lab deployment. Good to
hear that Glance will Just Work. :) Thanks!

Thanks,
Curtis.

>
> On Wed, Oct 12, 2016 at 11:18 AM, Curtis  wrote:
>>
>> On Wed, Oct 12, 2016 at 11:58 AM, Kris G. Lindgren
>>  wrote:
>> > We don’t use shared storage at all.  But I do remember what you are
>> > talking about.  The issue is that compute nodes weren’t aware they were on
>> > shared storage, and would nuke the backing image from shared storage, after
>> > all vm’s on *that* compute node had stopped using it. Not after all vm’s 
>> > had
>> > stopped using it.
>> >
>> > https://bugs.launchpad.net/nova/+bug/1620341 - Looks like some code to
>> > address that concern has landed  but only in trunk maybe mitaka.  Any 
>> > stable
>> > releases don’t appear to be shared backing image safe.
>> >
>> > You might be able to get around this by setting the compute image
>> > manager task to not run.  But the issue with that will be one missed 
>> > compute
>> > node, and everyone will have a bad day.
>>
>> Cool, thanks Kris. Exactly what I was talking about. I'm on Mitaka,
>> and I will look into that bugfix. I guess I need to test this lol.
>>
>> Thanks,
>> Curtis.
>>
>> >
>> > ___
>> > Kris Lindgren
>> > Senior Linux Systems Engineer
>> > GoDaddy
>> >
>> > On 10/12/16, 11:21 AM, "Curtis"  wrote:
>> >
>> > Hi All,
>> >
>> > I've never used NFS with OpenStack before. But I am now with a small
>> > lab deployment with a few compute nodes.
>> >
>> > Is there anything special I should do with NFS and glance and nova?
>> > I
>> > remember there was an issue way back when of images being deleted
>> > b/c
>> > certain components weren't aware they are on NFS. I'm guessing that
>> > has changed but just wanted to check if there is anything specific I
>> > should be doing configuration-wise.
>> >
>> > I can't seem to find many examples of NFS usage...so feel free to
>> > point me to any documentation, blog posts, etc. I may have just
>> > missed
>> > it.
>> >
>> > Thanks,
>> > Curtis.
>> >
>> > ___
>> > OpenStack-operators mailing list
>> > OpenStack-operators@lists.openstack.org
>> >
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>> >
>> >
>>
>>
>>
>> --
>> Blog: serverascode.com
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>



-- 
Blog: serverascode.com

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] glance, nova backed by NFS

2016-10-12 Thread James Penick
Are you backing both glance and nova-compute with NFS? If you're only
putting the glance store on NFS you don't need any special changes. It'll
Just Work.

On Wed, Oct 12, 2016 at 11:18 AM, Curtis  wrote:

> On Wed, Oct 12, 2016 at 11:58 AM, Kris G. Lindgren
>  wrote:
> > We don’t use shared storage at all.  But I do remember what you are
> talking about.  The issue is that compute nodes weren’t aware they were on
> shared storage, and would nuke the backing image from shared storage, after
> all vm’s on *that* compute node had stopped using it. Not after all vm’s
> had stopped using it.
> >
> > https://bugs.launchpad.net/nova/+bug/1620341 - Looks like some code to
> address that concern has landed  but only in trunk maybe mitaka.  Any
> stable releases don’t appear to be shared backing image safe.
> >
> > You might be able to get around this by setting the compute image
> manager task to not run.  But the issue with that will be one missed
> compute node, and everyone will have a bad day.
>
> Cool, thanks Kris. Exactly what I was talking about. I'm on Mitaka,
> and I will look into that bugfix. I guess I need to test this lol.
>
> Thanks,
> Curtis.
>
> >
> > ___
> > Kris Lindgren
> > Senior Linux Systems Engineer
> > GoDaddy
> >
> > On 10/12/16, 11:21 AM, "Curtis"  wrote:
> >
> > Hi All,
> >
> > I've never used NFS with OpenStack before. But I am now with a small
> > lab deployment with a few compute nodes.
> >
> > Is there anything special I should do with NFS and glance and nova? I
> > remember there was an issue way back when of images being deleted b/c
> > certain components weren't aware they are on NFS. I'm guessing that
> > has changed but just wanted to check if there is anything specific I
> > should be doing configuration-wise.
> >
> > I can't seem to find many examples of NFS usage...so feel free to
> > point me to any documentation, blog posts, etc. I may have just
> missed
> > it.
> >
> > Thanks,
> > Curtis.
> >
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/
> openstack-operators
> >
> >
>
>
>
> --
> Blog: serverascode.com
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] glance, nova backed by NFS

2016-10-12 Thread Kris G. Lindgren
Tobias does bring up something that we have run into before.

With NFSv3, user mapping is done by UID, so you need to ensure that all of your
servers use the same UIDs for nova/glance.  If you are using packages/automation
that do useradds without pinning the UID, it's *VERY* easy to end up with
mismatched username/UID pairs across multiple boxes.

NFSv4, IIRC, sends the username and the NFS server does the translation of the
name to a UID, so it should not have this issue.  But we have been bitten by
that more than once on NFSv3.
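
One way to avoid that is to pin the UIDs/GIDs yourself before the packages
create the users. A rough sketch (the numeric IDs are just examples, pick
whatever is free and consistent across your fleet):

  # run on every compute/controller node *before* installing the packages
  groupadd --system --gid 162 nova
  useradd --system --uid 162 --gid 162 --home-dir /var/lib/nova \
      --shell /sbin/nologin nova
  groupadd --system --gid 161 glance
  useradd --system --uid 161 --gid 161 --home-dir /var/lib/glance \
      --shell /sbin/nologin glance

Then "id nova; id glance" should return the same numbers on every box before
you mount anything.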


___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

On 10/12/16, 11:59 AM, "Tobias Schön"  wrote:

Hi,

We have an environment with glance and cinder using NFS.
It's important that they have the correct rights: the shares should be
owned by nova on the compute nodes if mounted at /var/lib/nova/instances,
and the same for nova and glance on the controller.

It's important that you set up the glance and nova mounts in fstab.

The cinder one is controlled by the NFS driver.

We are running RHEL OSP 6, OpenStack Juno.

This parameter is used:
nfs_shares_config=/etc/cinder/shares-nfs.conf in the
/etc/cinder/cinder.conf file, and then we have specified the share in
/etc/cinder/shares-nfs.conf.

chmod 0640 /etc/cinder/shares-nfs.conf

setsebool -P virt_use_nfs on
This one is important to make it work with SELinux

How up to date this still is I don't know, to be honest, but it matched the
Red Hat documentation when we deployed it around 1.5 years ago.

//Tobias

-Ursprungligt meddelande-
Från: Curtis [mailto:serverasc...@gmail.com] 
Skickat: den 12 oktober 2016 19:21
Till: openstack-operators@lists.openstack.org
Ämne: [Openstack-operators] glance, nova backed by NFS

Hi All,

I've never used NFS with OpenStack before. But I am now with a small lab 
deployment with a few compute nodes.

Is there anything special I should do with NFS and glance and nova? I 
remember there was an issue way back when of images being deleted b/c certain 
components weren't aware they are on NFS. I'm guessing that has changed but 
just wanted to check if there is anything specific I should be doing 
configuration-wise.

I can't seem to find many examples of NFS usage...so feel free to point me 
to any documentation, blog posts, etc. I may have just missed it.

Thanks,
Curtis.

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] glance, nova backed by NFS

2016-10-12 Thread Tobias Schön
Hi,

We have an environment with glance and cinder using NFS.
It's important that they have the correct rights: the shares should be owned by
nova on the compute nodes if mounted at /var/lib/nova/instances, and the same
for nova and glance on the controller.

It's important that you set up the glance and nova mounts in fstab.

The cinder one is controlled by the NFS driver.

We are running RHEL OSP 6, OpenStack Juno.

This parameter is used:
nfs_shares_config=/etc/cinder/shares-nfs.conf in the /etc/cinder/cinder.conf
file, and then we have specified the share in /etc/cinder/shares-nfs.conf.

chmod 0640 /etc/cinder/shares-nfs.conf

setsebool -P virt_use_nfs on
This one is important to make it work with SELinux
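
Roughly, the pieces fit together like this (the server name and export paths
below are placeholders, not our actual ones):

  # /etc/fstab on the compute nodes (glance store likewise on the controller)
  nfs-server:/export/nova    /var/lib/nova/instances  nfs  defaults  0 0
  nfs-server:/export/glance  /var/lib/glance/images   nfs  defaults  0 0

  # /etc/cinder/cinder.conf
  [DEFAULT]
  volume_driver = cinder.volume.drivers.nfs.NfsDriver
  nfs_shares_config = /etc/cinder/shares-nfs.conf

  # /etc/cinder/shares-nfs.conf (one export per line)
  nfs-server:/export/cinder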

How up to date this still is I don't know, to be honest, but it matched the
Red Hat documentation when we deployed it around 1.5 years ago.

//Tobias

-Ursprungligt meddelande-
Från: Curtis [mailto:serverasc...@gmail.com] 
Skickat: den 12 oktober 2016 19:21
Till: openstack-operators@lists.openstack.org
Ämne: [Openstack-operators] glance, nova backed by NFS

Hi All,

I've never used NFS with OpenStack before. But I am now with a small lab 
deployment with a few compute nodes.

Is there anything special I should do with NFS and glance and nova? I remember 
there was an issue way back when of images being deleted b/c certain components 
weren't aware they are on NFS. I'm guessing that has changed but just wanted to 
check if there is anything specific I should be doing configuration-wise.

I can't seem to find many examples of NFS usage...so feel free to point me to 
any documentation, blog posts, etc. I may have just missed it.

Thanks,
Curtis.

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] glance, nova backed by NFS

2016-10-12 Thread Kris G. Lindgren
We don’t use shared storage at all.  But I do remember what you are talking 
about.  The issue is that compute nodes weren’t aware they were on shared 
storage, and would nuke the backing image from shared storage after all VMs on 
*that* compute node had stopped using it, not after all VMs had stopped using 
it.

https://bugs.launchpad.net/nova/+bug/1620341 - Looks like some code to address 
that concern has landed, but only in trunk (maybe Mitaka).  Stable releases 
don’t appear to be shared-backing-image safe.

You might be able to get around this by setting the compute image manager task 
to not run.  But the issue with that will be one missed compute node, and 
everyone will have a bad day.
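
For reference, the knobs for that are in nova.conf on the compute nodes.
From memory (worth checking against the docs for your release) it is
something like:

  [DEFAULT]
  # never remove base images that appear unused
  remove_unused_base_images = False
  # -1 disables the image cache manager periodic task entirely
  image_cache_manager_interval = -1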

___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

On 10/12/16, 11:21 AM, "Curtis"  wrote:

Hi All,

I've never used NFS with OpenStack before. But I am now with a small
lab deployment with a few compute nodes.

Is there anything special I should do with NFS and glance and nova? I
remember there was an issue way back when of images being deleted b/c
certain components weren't aware they are on NFS. I'm guessing that
has changed but just wanted to check if there is anything specific I
should be doing configuration-wise.

I can't seem to find many examples of NFS usage...so feel free to
point me to any documentation, blog posts, etc. I may have just missed
it.

Thanks,
Curtis.

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] glance, nova backed by NFS

2016-10-12 Thread Curtis
Hi All,

I've never used NFS with OpenStack before. But I am now with a small
lab deployment with a few compute nodes.

Is there anything special I should do with NFS and glance and nova? I
remember there was an issue way back when of images being deleted b/c
certain components weren't aware they are on NFS. I'm guessing that
has changed but just wanted to check if there is anything specific I
should be doing configuration-wise.

I can't seem to find many examples of NFS usage...so feel free to
point me to any documentation, blog posts, etc. I may have just missed
it.

Thanks,
Curtis.

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] OPNFV delivered its new Colorado release

2016-10-12 Thread Jay Pipes

On 10/12/2016 10:17 AM, Ulrich Kleber wrote:

Hi,

I didn’t see an official announcement, so I’d like to point you to the new
release of OPNFV.

https://www.opnfv.org/news-faq/press-release/2016/09/open-source-nfv-project-delivers-third-platform-release-introduces-0

OPNFV is an open source project and one of the most important users of
OpenStack in the Telecom/NFV area. Maybe it is interesting for your work.


Hi Ulrich,

I'm hoping you can explain to me what exactly OPNFV is producing in its 
releases. I've been through a number of the Jira items linked in the 
press release above and simply cannot tell what is actually being 
delivered by OPNFV versus what is just part of an OpenStack 
component or deployment.


A good example of this is the IPV6 project's Jira item here:

https://jira.opnfv.org/browse/IPVSIX-37

Which has the title of "Auto-installation of both underlay IPv6 and 
overlay IPv6". The issue is marked as "Fixed" in Colorado 1.0. However, 
I can't tell what code was produced in OPNFV that delivers the 
auto-installation of both an underlay IPv6 and an overlay IPv6.


In short, I'm confused about what OPNFV is producing and hope to get 
some insights from you.


Best,
-jay

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [nova] Does anyone use the os-diagnostics API?

2016-10-12 Thread Joe Topjian
Hi Matt, Tim,

Thanks for asking. We’ve used the API in the past as a way of getting the
> usage data out of Nova. We had problems running ceilometer at scale and
> this was a way of retrieving the data for our accounting reports. We
> created a special policy configuration to allow authorised users query this
> data without full admin rights.
>

We do this as well.


> From the look of the new spec, it would be fairly straightforward to adapt
> the process to use the new format as all the CPU utilisation data is there.
>

I agree.
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [nova] Does anyone use the os-diagnostics API?

2016-10-12 Thread Tim Bell

> On 12 Oct 2016, at 07:00, Matt Riedemann  wrote:
> 
> The current form of the nova os-diagnostics API is hypervisor-specific, which 
> makes it pretty unusable in any generic way, which is why Tempest doesn't 
> test it.
> 
> Way back when the v3 API was a thing for 2 minutes there was work done to 
> standardize the diagnostics information across virt drivers in nova. The only 
> thing is we haven't exposed that out of the REST API yet, but there is a spec 
> proposing to do that now:
> 
> https://review.openstack.org/#/c/357884/
> 
> This is an admin-only API so we're trying to keep an end user point of view 
> out of discussing it. For example, the disk details don't have any unique 
> identifier. We could add one, but would it be useful to an admin?
> 
> This API is really supposed to be for debug, but the question I have for this 
> list is does anyone actually use the existing os-diagnostics API? And if so, 
> how do you use it, and what information is most useful? If you are using it, 
> please review the spec and provide any input on what's proposed for outputs.
> 

Matt,

Thanks for asking. We’ve used the API in the past as a way of getting the usage 
data out of Nova. We had problems running ceilometer at scale and this was a 
way of retrieving the data for our accounting reports. We created a special 
policy configuration to allow authorised users query this data without full 
admin rights.
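
For reference, the override itself is a one-liner in /etc/nova/policy.json
on the API nodes; roughly (the role name is just an example):

  {
      "os_compute_api:os-server-diagnostics": "rule:admin_api or role:accounting"
  }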

From the look of the new spec, it would be fairly straightforward to adapt the 
process to use the new format as all the CPU utilisation data is there.

Tim

> -- 
> 
> Thanks,
> 
> Matt Riedemann
> 
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] OPNFV delivered its new Colorado release

2016-10-12 Thread Ulrich Kleber
Hi,
I didn't see an official announcement, so I'd like to point you to the new 
release of OPNFV.
https://www.opnfv.org/news-faq/press-release/2016/09/open-source-nfv-project-delivers-third-platform-release-introduces-0
OPNFV is an open source project and one of the most important users of 
OpenStack in the Telecom/NFV area. Maybe it is interesting for your work.
Feel free to contact me or meet during the Barcelona summit at the session of 
the OpenStack Operators Telecom/NFV Functional Team 
(https://www.openstack.org/summit/barcelona-2016/summit-schedule/events/16768/openstack-operators-telecomnfv-functional-team).
Cheers,
Uli


Ulrich KLEBER
Chief Architect Cloud Platform
European Research Center
IT R&D Division
Riesstraße 25
80992 München
Mobile: +49 (0)173 4636144
Mobile (China): +86 13005480404



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [nova] Does anyone use the os-diagnostics API?

2016-10-12 Thread Matt Riedemann
The current form of the nova os-diagnostics API is hypervisor-specific, 
which makes it pretty unusable in any generic way, which is why Tempest 
doesn't test it.


Way back when the v3 API was a thing for 2 minutes there was work done 
to standardize the diagnostics information across virt drivers in nova. 
The only thing is we haven't exposed that out of the REST API yet, but 
there is a spec proposing to do that now:


https://review.openstack.org/#/c/357884/

This is an admin-only API so we're trying to keep an end user point of 
view out of discussing it. For example, the disk details don't have any 
unique identifier. We could add one, but would it be useful to an admin?


This API is really supposed to be for debug, but the question I have for 
this list is does anyone actually use the existing os-diagnostics API? 
And if so, how do you use it, and what information is most useful? If 
you are using it, please review the spec and provide any input on what's 
proposed for outputs.
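
For anyone who wants to poke at it, it's a simple admin call today, e.g.:

  # with python-novaclient
  nova diagnostics <server-id>

  # or the raw REST call
  GET /v2.1/servers/{server_id}/diagnostics

What comes back is whatever the virt driver emits; the libvirt driver, for 
example, returns per-device counters like cpu0_time, vda_read/vda_write and 
vnet0_rx/vnet0_tx, which is exactly the hypervisor-specific output the spec 
is trying to standardize.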


--

Thanks,

Matt Riedemann


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-operators][ceph][nova] How do you handle Nova on Ceph?

2016-10-12 Thread Adam Kijak
> ___
> From: Abel Lopez 
> Sent: Monday, October 10, 2016 9:57 PM
> To: Adam Kijak
> Cc: openstack-operators
> Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova] How do 
> you handle Nova on Ceph?
> 
> Have you thought about dedicated pools for cinder/nova and a separate pool 
> for glance, and any other uses you might have?
> You need to setup secrets on kvm, but you can have cinder creating volumes 
> from glance images quickly in different pools

We already have separate pools for images, volumes and instances. 
Separate pools don't really split the failure domain though.
Also AFAIK you can't set up multiple pools for instances in nova.conf, right?
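
For reference, the libvirt RBD settings in nova.conf only take a single pool;
roughly (values are illustrative):

  [libvirt]
  images_type = rbd
  images_rbd_pool = vms              # one pool per compute node, not a list
  images_rbd_ceph_conf = /etc/ceph/ceph.conf
  rbd_user = cinder
  rbd_secret_uuid = <libvirt secret uuid>

So at best you could vary the pool per compute node (and steer instances with
host aggregates), not per instance.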

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Ubuntu package for Octavia

2016-10-12 Thread Lutz Birkhahn
Has anyone seen Ubuntu packages for Octavia yet?

We’re running Ubuntu 16.04 with Newton, but for whatever reason I can not find 
any Octavia package…

So far I’ve only found in 
https://wiki.openstack.org/wiki/Neutron/LBaaS/HowToRun the following:

 Ubuntu Packages Setup: Install octavia with your favorite distribution: 
“pip install octavia”

That was not exactly what we would like to do in our production cloud…

Thanks,

/lutz

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-operators][ceph][nova] How do you handle Nova on Ceph?

2016-10-12 Thread Adam Kijak
> 
> From: Xav Paice 
> Sent: Monday, October 10, 2016 8:41 PM
> To: openstack-operators@lists.openstack.org
> Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova] How do 
> you handle Nova on Ceph?
> 
> On Mon, 2016-10-10 at 13:29 +, Adam Kijak wrote:
> > Hello,
> >
> > We use a Ceph cluster for Nova (Glance and Cinder as well) and over
> > time,
> > more and more data is stored there. We can't keep the cluster so big
> > because of
> > Ceph's limitations. Sooner or later it needs to be closed for adding
> > new
> > instances, images and volumes. Not to mention it's a big failure
> > domain.
> 
> I'm really keen to hear more about those limitations.

Basically it's all related to the failure domain ("blast radius") and risk 
management.
A bigger Ceph cluster means more users.
Growing the Ceph cluster temporarily slows it down, so many users will be 
affected.
There are bugs in Ceph which can cause data corruption. It's rare, but when it 
happens 
it can affect many (maybe all) users of the Ceph cluster.

> >
> > How do you handle this issue?
> > What is your strategy to divide Ceph clusters between compute nodes?
> > How do you solve VM snapshot placement and migration issues then
> > (snapshots will be left on older Ceph)?
> 
> Having played with Ceph and compute on the same hosts, I'm a big fan of
> separating them and having dedicated Ceph hosts, and dedicated compute
> hosts.  That allows me a lot more flexibility with hardware
> configuration and maintenance, easier troubleshooting for resource
> contention, and also allows scaling at different rates.

Exactly, I consider it the best practice as well.


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators