Re: [Openstack] Project quotas on multi-region

2013-03-23 Thread Nathanael Burton
On Mar 23, 2013 7:59 PM, "Aguiar, Glaucimar (Brazil R&D-ECL)" <
glaucimar.agu...@hp.com> wrote:
>
> Hi,
>
> In a deployment scenario where one keystone has several regions
registered, how are project quotas managed by, for example, two nova
services in two different regions?
> I am wondering whether it is possible to set a quota on the project for
all regions, or whether it must be done on a region-by-region basis, which
really means a quota for a project in a region.
>
> Thanks in advance,
> Glaucimar Aguiar
>
>
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp

Glaucimar,

Currently quotas are maintained within each nova deployment, so there is no
global view, management, or enforcement of quotas across regions. I would
love to see a discussion of centralizing things from nova like key pairs,
AZs, and quotas in keystone.
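In the meantime, quotas have to be set per region against each nova
endpoint. A rough sketch, assuming the nova CLI of this era; the region
names, credentials, and tenant ID below are placeholders, and it needs a
live deployment to run:

```shell
# Point the client at each region in turn and apply the same limits.
# RegionOne/RegionTwo and all credentials below are placeholders.
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=secret
export OS_AUTH_URL=http://keystone.example.com:5000/v2.0
TENANT_ID=<tenant-id>   # the project whose quota you are setting

for region in RegionOne RegionTwo; do
    # OS_REGION_NAME selects which region's nova endpoint the client talks to
    export OS_REGION_NAME=$region
    nova quota-update --instances 20 --cores 40 --ram 81920 $TENANT_ID
done
```

Nothing keeps the two regions in sync, of course; that gap is exactly what
a keystone-centralized quota service would fill.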

Thanks,

Nate




Re: [Openstack] DHCP release

2013-03-23 Thread Nathanael Burton
On Mar 23, 2013 4:02 AM, "David Hill"  wrote:
>
>
> 
> From: Robert Collins [robe...@robertcollins.net]
> Sent: March 23, 2013 02:21
> To: David Hill
> Cc: Kevin Stevens; openstack@lists.launchpad.net
> Subject: Re: [Openstack] DHCP release
>
> On 23 March 2013 14:53, David Hill  wrote:
> > Hello Kevin,
> >
> > Thanks for replying to my question.   I was asking that question
because if we go there:
http://docs.openstack.org/trunk/openstack-compute/admin/content/configuring-vlan-networking.html
and look at the very bottom of the page, it suggests the following:
> >
> > # release leases immediately on terminate
> > force_dhcp_release=true? (did I miss something?)
> > # one week lease time
> > dhcp_lease_time=604800
> > # two week disassociate timeout
> > fixed_ip_disassociate_timeout=1209600
> >
> > I tried that and if you have some creation/destruction of virtual
machines, let's say 2046 in the same week, you'll end up burning 2046
IPs because they're never disassociated.  At some point, nova-network
complains with "no more fixed IP are available".  Changing
fixed_ip_disassociate_timeout to something smaller solves this issue.
> > Is there any reason why fixed_ip_disassociate_timeout should be bigger
than dhcp_lease_time?
> >
> > Also, I thought that by destroying a virtual machine, it would
release/disassociate the IP from the UUID since it has been destroyed
(DELETED).  I've turned on the debugging and with
fixed_ip_disassociate_timeout set to 600 seconds, it disassociates stale
IPs after they've been deleted for at least 600 seconds.  Is it a bug in
our setup/nova-network, or does nova-network rely on the periodic task
that disassociates stale IPs in order to regain those IPs?
> >
> > Finally, wouldn't it be better to simply disassociate a released IP as
soon as the VM is deleted?  Since we deleted the VM, why keep it in the
database?
>
> When you reuse an IP address you run the risk of other machines that
> have the IP cached (e.g. as a DNS lookup result, or because they were
> configured to use it as a service endpoint) talking to the wrong
> machine. The long timeout is to prevent the sort of confusing,
> hard-to-debug errors that happen when machine A is replaced by machine C
> on A's IP address.
>
> My 2c: just make your pool larger. Grab 10/8 and have 16M IPs to play
with.
>
> -Rob
>
> I'm not the network guy here but, if I use 10/8 and we already have 10/8
in our internal network, this could easily become a problem, am I wrong?
>
> Also, if a VM is deleted, IMHO, it's destroyed with all its networking.
I don't know if this is "old thinking" or anything, but when I destroy a VM
in vSphere, I expect it to disappear leaving no trace.  This is the
cloud, and when I delete something, I expect it to simply be deleted.
>
> My 2c, but I see your point and have nothing against it.
>
>
> Dave
>
>

David,

I believe the biggest reason for the long timeout is historical, based on
bugs in dnsmasq [1].  You can probably just use the default of 600 now if
you're using a new enough version of dnsmasq.

[1] - https://lists.launchpad.net/openstack/msg11696.html
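For anyone wanting to try that, a minimal nova.conf sketch with the shorter
values (illustrative, not a tuned recommendation):

```ini
# nova.conf -- illustrative values for the flags discussed in this thread
# release the DHCP lease as soon as the instance is terminated
force_dhcp_release=true
# 10-minute lease; reasonable with a dnsmasq new enough to handle releases
dhcp_lease_time=600
# reclaim the fixed IP shortly after the instance is deleted
fixed_ip_disassociate_timeout=600
```

Keeping fixed_ip_disassociate_timeout at or slightly above dhcp_lease_time
avoids the pool exhaustion David described while still giving stale leases
time to expire.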

Thanks,

Nate


Re: [Openstack] Making networking decision on new OpenStack install

2013-03-23 Thread Logan McNaughton
The Quantum administration guide:
http://docs.openstack.org/folsom/openstack-network/admin/content/

That is your best bet. It's very thorough and covers both installation and
administration. It also goes over a number of example scenarios.

Nova-network still works well for a single flat network; beyond that,
Quantum is probably the better choice.
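If you do stick with nova-network for a single flat network, the core of a
FlatDHCP setup in nova.conf looks roughly like this (the interface names,
bridge name, and CIDR are placeholders for your site):

```ini
# nova.conf -- minimal FlatDHCP sketch; adjust to your environment
network_manager=nova.network.manager.FlatDHCPManager
# bridge nova creates for instance traffic
flat_network_bridge=br100
# NIC attached to the private flat network
flat_interface=eth1
# NIC used for floating-IP/public traffic
public_interface=eth0
# fixed-IP pool handed out to instances
fixed_range=10.0.0.0/24
```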
On Mar 22, 2013 4:32 PM, "Willard Dennis"  wrote:

> Hi all,
>
> We’ve been playing around with a single-server install of (Folsom)
> Devstack, and have now decided to do the deal and create a multi-node
> OpenStack installation. The installation is going to be for research use
> for a single tenant (the owning research dept) and should not grow that
> much in physical nodes.
>
> In any case, being StackNewbs™ we would like to keep it simple and follow
> a good known recipe. Thankfully there’s now an OpenStack Operations Guide we
> can use to get going 😊 However, I see that it uses the
> older ‘nova-network’ network model, and not the new Quantum framework. We
> do plan to use Folsom as our release; is it viable for a few-year lifetime
> to go with nova-network at this point, or should we really consider using
> Quantum? If Quantum is used, are there any instructions available to use
> that instead of nova-network in the Ops Guide type of installation?
>
> Thanks for your input...
>
> Will
>
>
>
>

