Re: [Openstack-operators] Sharing resources across OpenStack instances

2015-04-22 Thread Gilles Mocellin

On 22/04/2015 15:32, Adam Young wrote:
It's been my understanding that many people are deploying small 
OpenStack instances as a way to share the hardware owned by their 
particular team, group, or department.  The Keystone instance 
represents ownership, and the identity of the users comes from a 
corporate LDAP server.


Is there much demand for the following scenarios?

1.  A project team crosses organizational boundaries and has to work 
with VMs in two separate OpenStack instances.  They need to set up a 
network that requires talking to two neutron instances.


2.  One group manages a powerful storage array.  Several OpenStack 
instances need to be able to mount volumes from this array. Sometimes, 
those volumes have to be transferred from VMs running in one instance 
to another.


3.  A group is producing nightly builds.  Part of this is an image 
building system that posts to glance.  Ideally, multiple OpenStack 
instances would be able to pull their images from the same glance.


4. Hadoop ( or some other orchestrated task) requires more resources 
than are in any single OpenStack instance, and needs to allocate 
resources across two or more instances for a single job.



I suspect that these kinds of architectures are becoming more common.  
Can some of the operators validate these assumptions? Are there other, 
more common cases where Operations need to span multiple clouds which 
would require integration of one Nova server with multiple Cinder, 
Glance, or Neutron  servers managed in other OpenStack instances?


I'm always a bit disappointed when someone asks me about hybrid cloud 
with OpenStack.
OpenStack is not a Cloud Management Platform that can manage several 
clouds, private and public, like Red Hat CloudForms / ManageIQ.


At the very least, being able to manage two OpenStack instances, one private and 
one from a public cloud, through the OpenStack API would be great.


I found a project by Huawei to cascade OpenStack instances: 
https://wiki.openstack.org/wiki/OpenStack_cascading_solution

I would really like to see that become reality.

Perhaps it could be a solution to your scenarios?



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Sharing resources across OpenStack instances

2015-04-22 Thread Fox, Kevin M
This is a case for a cross-project cloud (institutional?). It costs more to run 
two little clouds than one bigger one, both in terms of manpower and, in cases 
like these, under-utilized resources.

#3 is interesting though. If there is to be an OpenStack app catalog, it would 
be important to be able to pull the needed images from outside the cloud easily.
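
As a rough sketch (assuming Kilo-era option names, which may differ by release),
pointing a cloud's compute nodes at a shared, external Glance is mostly a
nova.conf setting on each compute host:

  [glance]
  # assumption: a shared Glance API endpoint reachable from this cloud;
  # both clouds would also need to share (or federate) the Keystone that
  # issued the user's token, or image requests will fail authentication
  api_servers = https://glance.shared.example.com:9292

The hard part is the identity and image-ownership side, not the endpoint itself.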

Thanks,
Kevin


From: Adam Young
Sent: Wednesday, April 22, 2015 6:32:17 AM
To: openstack-operators@lists.openstack.org
Subject: [Openstack-operators] Sharing resources across OpenStack instances

It's been my understanding that many people are deploying small OpenStack
instances as a way to share the hardware owned by their particular team,
group, or department.  The Keystone instance represents ownership, and
the identity of the users comes from a corporate LDAP server.

Is there much demand for the following scenarios?

1.  A project team crosses organizational boundaries and has to work
with VMs in two separate OpenStack instances.  They need to set up a
network that requires talking to two neutron instances.

2.  One group manages a powerful storage array.  Several OpenStack
instances need to be able to mount volumes from this array. Sometimes,
those volumes have to be transferred from VMs running in one instance to
another.

3.  A group is producing nightly builds.  Part of this is an image
building system that posts to glance.  Ideally, multiple OpenStack
instances would be able to pull their images from the same glance.

4. Hadoop ( or some other orchestrated task) requires more resources
than are in any single OpenStack instance, and needs to allocate
resources across two or more instances for a single job.


I suspect that these kinds of architectures are becoming more common.
Can some of the operators validate these assumptions?  Are there other,
more common cases where operations need to span multiple clouds, which
would require integrating one Nova server with multiple Cinder,
Glance, or Neutron servers managed in other OpenStack instances?

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] StackTach.v3 now in production ...

2015-04-22 Thread Sandy Walsh
(sorry for cross-post, but this is appropriate to both audiences)

Hey y'all!

For those of you that don't know, StackTach is a notification-based debugging, 
monitoring and usage tool for OpenStack.

We're happy to announce that we've recently rolled StackTach.v3 into production 
at one of the Rax datacenters with a plan to roll out to the rest asap. Once we 
do, we'll be bumping all the library versions to 1.0, but we encourage you to 
start playing with the system now. 

The docs and screencasts are at www.stacktach.com.
We live in #stacktach on freenode (for v2 and v3 questions).
All the StackTach code is on stackforge: 
https://github.com/stackforge?query=stacktach

This is a very exciting time for us.  With StackTach.v3 we've:
* solved many of the scaling, redundancy and idempotency problems of v2
* modularized the entire system (use only the parts you want)
* made the system less rigid with respect to Nova and Glance. Now, nearly any 
JSON notification can be handled (even outside of OpenStack)
* created a very flexible REST API with pluggable implementation drivers. So, 
if you don't like our solution but want to keep a compatible API, all the 
pieces are there for you, including cmdline tools and client libraries. 
* included a devstack-like sandbox for you to play in that doesn't require an 
OpenStack installation to generate notifications
* developed a way to run STv3 side-by-side with your existing notification 
consumers for safe trials. We can split notification queues without requiring 
any changes to your OpenStack deployment (try *that* with oslo-messaging ;)

If you haven't looked at your OpenStack deployment from the perspective of 
notifications, you're really missing out. It's the most powerful way to debug 
your installations. And, for usage metering, there is really no better option. 
We feel StackTach.v3 is the best solution out there for all your 
event-processing needs. 
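
If you want to see what the notification stream looks like before committing to
anything, turning notifications on is just configuration; a minimal sketch for a
Kilo-era nova.conf (option names vary by release, and this says nothing about how
StackTach itself routes topics) would be:

  [DEFAULT]
  # emit JSON notifications onto the message bus
  notification_driver = messagingv2
  # topic the consumer (StackTach workers, or anything else) reads from
  notification_topics = notifications
  # also notify on instance state/task transitions
  notify_on_state_change = vm_and_task_state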

Let us know how we can help! We're in a good place to squash bugs quickly. 

Cheers
-Sandy, Dragon and the rest of the StackTach.v3 team/contributors



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] over commit ratios

2015-04-22 Thread Allamaraju, Subbu
In addition to these factors, co-location happens to be another key source of 
noise. By co-location I mean VMs doing the same/similar work running on the 
same hypervisor. This happens in low-capacity situations when the scheduler 
could not enforce anti-affinity.

Subbu

 On Apr 22, 2015, at 5:09 AM, George Shuklin george.shuk...@gmail.com wrote:
 
 Yes, it really depends on the used backing technique. We using SSDs and raw 
 images, so IO is not an issue.
 
 But memory is more important: if you lack IO capability you left with slow 
 guests. If you lack memory you left with dead guests (hello, OOM killer).
 
 BTW: Swap is needed not to swapin/swapout, but to relief memory pressure. 
 With properly configured memory swin/swout  should be less than 2-3.
 
 On 04/22/2015 09:49 AM, Tim Bell wrote:
 I'd also keep an eye on local I/O... we've found this to be the resource 
 which can cause the worst noisy neighbours. Swapping makes this worse.
 
 Tim
 
 -Original Message-
 From: George Shuklin [mailto:george.shuk...@gmail.com]
 Sent: 21 April 2015 23:55
 To: openstack-operators@lists.openstack.org
 Subject: Re: [Openstack-operators] over commit ratios
 
 It's very depend on production type.
 
 If you can control guests and predict their memory consumption, use it as 
 base
 for ratio.
 If you can't (typical for public clouds) - use 1 or smaller with
 reserved_host_memory_mb in nova.conf.
 
 And one more: some swap sapce is really necessary. Add at least twice of
 reserved_host_memory_mb - it really improves performance and prevents
 strange OOMs in the situation of very large host with very small dom0 
 footprint.
 
 On 04/21/2015 10:59 PM, Caius Howcroft wrote:
 Just a general question: what kind of over commit ratios do people
 normally run in production with?
 
 We currently run 2 for cpu and 1 for memory (with some held back for
 OS/ceph)
 
 i.e.:
 default['bcpc']['nova']['ram_allocation_ratio'] = 1.0
 default['bcpc']['nova']['reserved_host_memory_mb'] = 1024 # often
 larger default['bcpc']['nova']['cpu_allocation_ratio'] = 2.0
 
 Caius
 
 
 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
 
 
 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Operator Help to get patch merged

2015-04-22 Thread Edgar Magana
I just read this email and, after looking at what happened on Gerrit, I feel 
totally disappointed by the way the Horizon team (especially the cores) handled 
this patch.

You did everything properly in your commit, and the Horizon dev team should have 
helped you to get your commit merged.

I do not understand why you are not even included as a co-author in the patch 
that was finally merged: https://review.openstack.org/#/c/175702/

Just my two cents!

Edgar

From: Joseph Bajin josephba...@gmail.commailto:josephba...@gmail.com
Date: Saturday, April 18, 2015 at 5:06 PM
To: OpenStack Operators 
openstack-operators@lists.openstack.orgmailto:openstack-operators@lists.openstack.org
Subject: [Openstack-operators] Operator Help to get patch merged

I wanted to see about getting some Operator Help to push through a patch[1].

The patch stops the user from getting a 404 message back when they click the cancel 
button while trying to change their password or user settings. The patch 
resets the page instead.

It's been sitting there for a while, but started to get some -1's and then 
+1's, and then got moved from kilo-rc1 to liberty and back.  Some people think 
that those screens should be somewhere else, others think the text should be 
replaced, but that is not the purpose of the patch. It's just to avoid giving 
the user a negative experience.

So, I'm hoping that I can get some Operator support to get this merged and if 
they want to change the text, change the location, etc, then they can do it 
later down the road.

Thanks

-Joe


[1] https://review.openstack.org/#/c/166569/
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] over commit ratios

2015-04-22 Thread Jesse Keating
A Juno feature may help with this: utilization-based scheduling.
https://blueprints.launchpad.net/nova/+spec/utilization-aware-scheduling

That helps when landing the instance. It doesn't help if utilization
changes /after/ instances have landed, but it could help with a resize action
to relocate the instance.
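
For anyone who wants to experiment, a rough sketch of wiring that up on a
Juno/Kilo-era deployment (option names are an assumption and may differ in your
release) looks something like:

  [DEFAULT]
  # have compute nodes report CPU utilization metrics to the scheduler
  compute_monitors = ComputeDriverCPUMonitor

  [metrics]
  # metrics weigher: a negative weight on cpu.percent prefers less-loaded hosts
  weight_setting = cpu.percent=-1.0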


- jlk

On Wed, Apr 22, 2015 at 8:44 AM, Allamaraju, Subbu su...@subbu.org wrote:

 In addition to these factors, collocation happens to be another key source
 of noise. By collocation I mean VMs doing the same/similar work running on
 the same hypervisor. This happens under low capacity situations when the
 scheduler could not enforce anti-affinity.

 Subbu

  On Apr 22, 2015, at 5:09 AM, George Shuklin george.shuk...@gmail.com
 wrote:
 
  Yes, it really depends on the used backing technique. We using SSDs and
 raw images, so IO is not an issue.
 
  But memory is more important: if you lack IO capability you left with
 slow guests. If you lack memory you left with dead guests (hello, OOM
 killer).
 
  BTW: Swap is needed not to swapin/swapout, but to relief memory
 pressure. With properly configured memory swin/swout  should be less than
 2-3.
 
  On 04/22/2015 09:49 AM, Tim Bell wrote:
  I'd also keep an eye on local I/O... we've found this to be the
 resource which can cause the worst noisy neighbours. Swapping makes this
 worse.
 
  Tim
 
  -Original Message-
  From: George Shuklin [mailto:george.shuk...@gmail.com]
  Sent: 21 April 2015 23:55
  To: openstack-operators@lists.openstack.org
  Subject: Re: [Openstack-operators] over commit ratios
 
  It's very depend on production type.
 
  If you can control guests and predict their memory consumption, use it
 as base
  for ratio.
  If you can't (typical for public clouds) - use 1 or smaller with
  reserved_host_memory_mb in nova.conf.
 
  And one more: some swap sapce is really necessary. Add at least twice
 of
  reserved_host_memory_mb - it really improves performance and prevents
  strange OOMs in the situation of very large host with very small dom0
 footprint.
 
  On 04/21/2015 10:59 PM, Caius Howcroft wrote:
  Just a general question: what kind of over commit ratios do people
  normally run in production with?
 
  We currently run 2 for cpu and 1 for memory (with some held back for
  OS/ceph)
 
  i.e.:
  default['bcpc']['nova']['ram_allocation_ratio'] = 1.0
  default['bcpc']['nova']['reserved_host_memory_mb'] = 1024 # often
  larger default['bcpc']['nova']['cpu_allocation_ratio'] = 2.0
 
  Caius
 
 
  ___
  OpenStack-operators mailing list
  OpenStack-operators@lists.openstack.org
 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
 
 
  ___
  OpenStack-operators mailing list
  OpenStack-operators@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [all][api] 3 API Guidelines up for final review

2015-04-22 Thread Everett Toews
Hi All,

We have 3 API Guidelines that are ready for a final review.

1. Metadata guidelines document
https://review.openstack.org/#/c/141229/

2. Tagging guidelines
https://review.openstack.org/#/c/155620/

3. Guidelines on using date and time format
https://review.openstack.org/#/c/159892/

If the API Working Group hasn’t received any further feedback, we’ll merge them 
on April 29.

Thanks,
Everett


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [Ceilometer] Gnocchi - capturing time-series data

2015-04-22 Thread gordon chung
hi folks,

there's a session coming up at the summit where we can discuss Ceilometer and 
give/get some feedback, but I wanted to highlight some of the work we've been 
doing, specifically relating to storing measurement values. As many of you have 
heard/read, we're building this thing called Gnocchi. To explain what Gnocchi 
is, Julien, a key developer of Gnocchi, wrote up an introduction[1] which I 
hope explains a bit of the concepts of Gnocchi and what it aims to achieve. 
From this, I hope it answers some questions, generates some ideas on how it can 
be used, and raises a few thoughts on how to improve it.

[1] http://julien.danjou.info/blog/2015/openstack-gnocchi-first-release

cheers,
gord
  
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Sharing resources across OpenStack instances

2015-04-22 Thread Marc Heckmann
[top posting on this one]

Hi,

When you write OpenStack instances, I'm assuming that you're referring
to OpenStack deployments, right?

We have different deployments based on geographic regions for
performance concerns, but certainly not by department. Each OpenStack
project is tied to a department/project budget code and re-billed
accordingly based on Ceilometer data. No need to have separate
deployments for that. Central IT owns all the cloud infra.

Across the separate deployments, the only things that we aim to have shared are
Swift and Keystone (that's not the case for us right now).

Glance images need to be identical between deployments but that's easily
achievable through automation both for the operator and the end user.

We make sure that the users understand that these are separate
regions/Clouds akin to AWS regions.

-m

On Wed, 2015-04-22 at 13:50 +, Fox, Kevin M wrote:
 This is a case for a cross project cloud (institutional?). It costs
 more to run two little clouds then one bigger one. Both in terms of
 man power, and in cases like these. under utilized resources.
 
 #3 is interesting though. If there is to be an openstack app catalog,
 it would be inportant to be able to pull the needed images from
 outside the cloud easily.
 
 Thanks,
 Kevin 
  
 
 __
 From: Adam Young
 Sent: Wednesday, April 22, 2015 6:32:17 AM
 To: openstack-operators@lists.openstack.org
 Subject: [Openstack-operators] Sharing resources across OpenStack
 instances
 
 
 Its been my understanding that many people are deploying small
 OpenStack 
 instances as a way to share the Hardware owned by their particular
 team, 
 group, or department.  The Keystone instance represents ownership,
 and 
 the identity of the users comes from a corporate LDAP server.
 
 Is there much demand for the following scenarios?
 
 1.  A project team crosses organizational boundaries and has to work 
 with VMs in two separate OpenStack instances.  They need to set up a 
 network that requires talking to two neutron instances.
 
 2.  One group manages a powerful storage array.  Several OpenStack 
 instances need to be able to mount volumes from this array.
 Sometimes, 
 those volumes have to be transferred from VMs running in one instance
 to 
 another.
 
 3.  A group is producing nightly builds.  Part of this is an image 
 building system that posts to glance.  Ideally, multiple OpenStack 
 instances would be able to pull their images from the same glance.
 
 4. Hadoop ( or some other orchestrated task) requires more resources 
 than are in any single OpenStack instance, and needs to allocate 
 resources across two or more instances for a single job.
 
 
 I suspect that these kinds of architectures are becoming more
 common.  
 Can some of the operators validate these assumptions?  Are there
 other, 
 more common cases where Operations need to span multiple clouds which 
 would require integration of one Nova server with multiple Cinder, 
 Glance, or Neutron  servers managed in other OpenStack instances?
 
 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
 
 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Sharing resources across OpenStack instances

2015-04-22 Thread Mike Smith
At Overstock we have a number of separate OpenStack deployments in different 
facilities that are completely separated from each other.  No shared services 
between them.  Some of the separation is due to the kind of instances they 
contain (“Dev/Test” vs “Prod”, for example), but it is largely due to the 
location diversity and the desire to not require everything to be upgraded at 
once.

We have automation to build and push out new glance images every month (built 
via Oz) to the multiple glance instances.  Our home-made orchestration knows 
how to provision to the multiple clouds.   We are not currently using corporate 
LDAP for these, but our orchestration tool does use AD and that’s where we do 
most of our work from anyway.  All of these are managed by our Website Systems 
team and we do basic show-back to various teams that use these services using 
data from nova statistics (no Ceilometer yet).

Mike Smith
Principal Engineer, Website Systems
Overstock.com



 On Apr 22, 2015, at 6:26 PM, Marc Heckmann marc.heckm...@ubisoft.com wrote:

 [top posting on this one]

 Hi,

 When you write Openstack instances, I'm assuming that you're referring
 to Openstack deployments right?

 We have different deployments based on geographic regions for
 performance concerns but certainly not by department. Each Openstack
 project is tied to a department/project budget code and re-billed
 accordingly based on Ceilometer data. No need to have separate
 deployments for that. Central IT owns all the Cloud infra.

 In the separate deployments the only thing that we aim to have shared is
 Swift and Keystone (it's not the case for us right now).

 Glance images need to be identical between deployments but that's easily
 achievable through automation both for the operator and the end user.

 We make sure that the users understand that these are separate
 regions/Clouds akin to AWS regions.

 -m

 On Wed, 2015-04-22 at 13:50 +, Fox, Kevin M wrote:
 This is a case for a cross project cloud (institutional?). It costs
 more to run two little clouds then one bigger one. Both in terms of
 man power, and in cases like these. under utilized resources.

 #3 is interesting though. If there is to be an openstack app catalog,
 it would be inportant to be able to pull the needed images from
 outside the cloud easily.

 Thanks,
 Kevin


 __
 From: Adam Young
 Sent: Wednesday, April 22, 2015 6:32:17 AM
 To: openstack-operators@lists.openstack.org
 Subject: [Openstack-operators] Sharing resources across OpenStack
 instances


 Its been my understanding that many people are deploying small
 OpenStack
 instances as a way to share the Hardware owned by their particular
 team,
 group, or department.  The Keystone instance represents ownership,
 and
 the identity of the users comes from a corporate LDAP server.

 Is there much demand for the following scenarios?

 1.  A project team crosses organizational boundaries and has to work
 with VMs in two separate OpenStack instances.  They need to set up a
 network that requires talking to two neutron instances.

 2.  One group manages a powerful storage array.  Several OpenStack
 instances need to be able to mount volumes from this array.
 Sometimes,
 those volumes have to be transferred from VMs running in one instance
 to
 another.

 3.  A group is producing nightly builds.  Part of this is an image
 building system that posts to glance.  Ideally, multiple OpenStack
 instances would be able to pull their images from the same glance.

 4. Hadoop ( or some other orchestrated task) requires more resources
 than are in any single OpenStack instance, and needs to allocate
 resources across two or more instances for a single job.


 I suspect that these kinds of architectures are becoming more
 common.
 Can some of the operators validate these assumptions?  Are there
 other,
 more common cases where Operations need to span multiple clouds which
 would require integration of one Nova server with multiple Cinder,
 Glance, or Neutron  servers managed in other OpenStack instances?

 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators





Re: [Openstack-operators] over commit ratios

2015-04-22 Thread Tim Bell
I'd also keep an eye on local I/O... we've found this to be the resource which 
can cause the worst noisy neighbours. Swapping makes this worse.

Tim

 -Original Message-
 From: George Shuklin [mailto:george.shuk...@gmail.com]
 Sent: 21 April 2015 23:55
 To: openstack-operators@lists.openstack.org
 Subject: Re: [Openstack-operators] over commit ratios
 
 It very much depends on the production type.
 
 If you can control guests and predict their memory consumption, use that as a
 base for the ratio.
 If you can't (typical for public clouds), use 1 or smaller with
 reserved_host_memory_mb in nova.conf.
 
 And one more: some swap space is really necessary. Add at least twice
 reserved_host_memory_mb - it really improves performance and prevents
 strange OOMs in the situation of a very large host with a very small dom0
 footprint.
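
Put together, a minimal nova.conf sketch of that advice (example values only,
not a recommendation; tune reserved_host_memory_mb to your hosts):

  [DEFAULT]
  # memory not overcommitted, CPU overcommitted 2:1
  ram_allocation_ratio = 1.0
  cpu_allocation_ratio = 2.0
  # memory held back for the host OS / dom0 (often larger in practice)
  reserved_host_memory_mb = 4096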
 
 On 04/21/2015 10:59 PM, Caius Howcroft wrote:
  Just a general question: what kind of over commit ratios do people
  normally run in production with?
 
  We currently run 2 for cpu and 1 for memory (with some held back for
  OS/ceph)
 
  i.e.:
  default['bcpc']['nova']['ram_allocation_ratio'] = 1.0
  default['bcpc']['nova']['reserved_host_memory_mb'] = 1024 # often
  larger default['bcpc']['nova']['cpu_allocation_ratio'] = 2.0
 
  Caius
 
 
 
 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Migrating to Different Availability Zone and Different Storage

2015-04-22 Thread Eren Türkay
Hello operators,

We have a running installation of OpenStack with block storage. We are
currently having a problem with our storage appliance and we would like to
migrate the instances off this storage appliance. To do that, we are thinking
of creating a new availability zone with a new storage appliance.

In the end, we want to move the instances from our current storage appliance to
another one. The storage appliances are identical and we have enough storage.
Basically, the storage appliance and the interface will be the same, but it
will be a completely new machine.

Is it possible to move instances from one availability zone to the other while
keeping in mind that those availability zones use different storage? If so, is
there any guide to configuring this? Does nova/cinder copy the disk of the
instance from the old availability zone to the new one? Lastly, could it be
done with minimal/zero downtime (live migration)?

Regards,
Eren

-- 
Eren Türkay, System Administrator
https://skyatlas.com/ | +90 850 885 0357

Yildiz Teknik Universitesi Davutpasa Kampusu
Teknopark Bolgesi, D2 Blok No:107
Esenler, Istanbul Pk.34220



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [Nova] Add config option for real deletes instead of soft-deletes

2015-04-22 Thread Sylvain Bauza
Cross-posting to operators@ as I think they are rather interested in the 
$subject :-)


On 21/04/2015 23:42, Artom Lifshitz wrote:

Hello,

I'd like to gauge acceptance of introducing a feature that would give operators
a config option to perform real database deletes instead of soft deletes.

There's definitely a need for *something* that cleans up the database. There
have been a few attempts at a DB purge engine [1][2][3][4][5], and archiving to
shadow tables has been merged [6] (though that currently has some issues [7]).

DB archiving notwithstanding, the general response to operators when they
mention the database becoming too big seems to be "DIY cleanup".

I would like to propose a different approach: add a config option that turns
soft-deletes into real deletes, and start telling operators: if you turn this
on, it's DIY backups.

Would something like that be acceptable and feasible? I'm ready to put in the
work to implement this; however, searching the mailing list indicates that it
would be somewhere between non-trivial and impossible [8]. Before I start, I
would like some confidence that it's closer to the former than the latter :)


My personal bet is that it should indeed be config-driven: either we 
keep the old records or we just get rid of them.


I'm not a fan of any massive deletion system that Nova would manage, unless 
it ran on an explicit trigger, because even if the model is quite correctly 
normalized, it could lock many tables if we don't pay attention 
to that.


Anyway, a spec seems a good place for discussing that, IMHO.

-Sylvain





Cheers!

[1] https://blueprints.launchpad.net/nova/+spec/db-purge-engine
[2] https://blueprints.launchpad.net/nova/+spec/db-purge2
[3] https://blueprints.launchpad.net/nova/+spec/remove-db-archiving
[4] https://blueprints.launchpad.net/nova/+spec/database-purge
[5] https://blueprints.launchpad.net/nova/+spec/db-archiving
[6] https://review.openstack.org/#/c/18493/
[7] https://review.openstack.org/#/c/109201/
[8] 
http://lists.openstack.org/pipermail/openstack-operators/2014-November/005591.html
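
For reference, the DIY cleanup being discussed usually boils down to something
like the following, run against nova's database (illustrative only: it assumes
the standard deleted/deleted_at soft-delete columns, dependent tables such as
instance metadata and faults need the same treatment in foreign-key order, and
it should be rehearsed on a copy of the database first):

  -- purge instance rows soft-deleted more than 90 days ago (MySQL syntax)
  DELETE FROM instances
   WHERE deleted != 0
     AND deleted_at < DATE_SUB(NOW(), INTERVAL 90 DAY);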

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] over commit ratios

2015-04-22 Thread George Shuklin
Yes, it really depends on the backing technique used. We're using SSDs and 
raw images, so IO is not an issue.


But memory is more important: if you lack IO capability you're left with 
slow guests. If you lack memory you're left with dead guests (hello, OOM 
killer).


BTW: swap is needed not for actual swap-in/swap-out, but to relieve memory 
pressure. With properly configured memory, swapin/swapout should be less 
than 2-3.


On 04/22/2015 09:49 AM, Tim Bell wrote:

I'd also keep an eye on local I/O... we've found this to be the resource which 
can cause the worst noisy neighbours. Swapping makes this worse.

Tim


-Original Message-
From: George Shuklin [mailto:george.shuk...@gmail.com]
Sent: 21 April 2015 23:55
To: openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] over commit ratios

It's very depend on production type.

If you can control guests and predict their memory consumption, use it as base
for ratio.
If you can't (typical for public clouds) - use 1 or smaller with
reserved_host_memory_mb in nova.conf.

And one more: some swap sapce is really necessary. Add at least twice of
reserved_host_memory_mb - it really improves performance and prevents
strange OOMs in the situation of very large host with very small dom0 footprint.

On 04/21/2015 10:59 PM, Caius Howcroft wrote:

Just a general question: what kind of over commit ratios do people
normally run in production with?

We currently run 2 for cpu and 1 for memory (with some held back for
OS/ceph)

i.e.:
default['bcpc']['nova']['ram_allocation_ratio'] = 1.0
default['bcpc']['nova']['reserved_host_memory_mb'] = 1024 # often
larger default['bcpc']['nova']['cpu_allocation_ratio'] = 2.0

Caius



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators