Hi,

To update the thread: the initial problem I mentioned is that when I add a
host to multiple availability zones (AZs) and then do a
“nova boot” without specifying an AZ, I expect the default zone to be picked
up, but it is not.

This is due to the bug [1] that Vish mentioned. I have updated the bug with
the details of the problem.

The validation fails during instance create because of [1].
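
To reproduce it, the sequence is roughly the following (the aggregate, host,
flavor and image names here are only placeholders):

  # Two host aggregates, each exposing its own availability zone
  nova aggregate-create agg1 az1
  nova aggregate-create agg2 az2

  # Add the same compute node to both, so it ends up in two AZs
  # (older clients may need the aggregate ID instead of the name)
  nova aggregate-add-host agg1 compute-1
  nova aggregate-add-host agg2 compute-1

  # Boot without --availability-zone, expecting the default zone to
  # be picked; instead the instance create fails validation due to [1]
  nova boot --flavor m1.small --image cirros test-vm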

Thanks,
Sangeeta

[1] https://bugs.launchpad.net/nova/+bug/1277230
From: Sylvain Bauza <sylvain.ba...@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Wednesday, March 26, 2014 at 1:34 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [nova][scheduler] Availability Zones and Host 
aggregates..

I can't agree more on this. Although the name sounds identical to AWS, Nova AZs 
are *not* for segregating compute nodes, but rather for exposing a certain sort 
of grouping to users.
Please see this pointer for more info if needed:
http://russellbryantnet.wordpress.com/2013/05/21/availability-zones-and-host-aggregates-in-openstack-compute-nova/

Regarding the bug mentioned by Vish [1], I'm its owner. I took it a while 
ago, but things and priorities changed, so I can take a look at it this week 
and hope to deliver a patch by next week.

Thanks,
-Sylvain

[1] https://bugs.launchpad.net/nova/+bug/1277230




2014-03-26 19:00 GMT+01:00 Chris Friesen <chris.frie...@windriver.com>:
On 03/26/2014 11:17 AM, Khanh-Toan Tran wrote:

I don't know why you need a
compute node that belongs to 2 different availability zones. Maybe
I'm wrong, but to me it's logical that availability zones do not
share the same compute nodes. Availability zones have the role
of partitioning your compute nodes into "zones" that are physically
separated (broadly, this would require separation of physical
servers, networking equipment, power sources, etc.). So when a
user deploys 2 VMs in 2 different zones, he knows that these VMs do
not land on the same host, and if one zone fails, the others continue
working, so the client will not lose all of his VMs.

See Vish's email.

Even under the original meaning of availability zones you could realistically 
have multiple orthogonal availability zones based on "room", or "rack", or 
"network", or "dev" vs "production", or even "has_ssds" and a compute node 
could reasonably be part of several different zones because they're logically 
in different namespaces.

Then an end-user could boot an instance specifying "networkA", "dev", and 
"has_ssds", and only hosts that are part of all three zones would match.

Even if they're not used for orthogonal purposes, multiple availability zones 
might make sense.  Currently availability zones are the only way an end-user 
has to specify anything about the compute host he wants to run on.  So it's not 
entirely surprising that people might want to overload them for purposes other 
than physical partitioning of machines.
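
For instance, a user who wants SSD-backed hosts today has little choice but to
ask for a zone that the operator has overloaded as a capability label (the
names here are again illustrative):

  # "has_ssds" is not a physical failure domain, just a label the
  # operator exposed through an aggregate's availability zone
  nova boot --flavor m1.small --image cirros \
      --availability-zone has_ssds db-1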

Chris


