I have performed a test which I hoped would shed some light on this (potential) behaviour; however, it turns out the behaviour does not reproduce.
The idea was to prepare two AZs separating two groups of computes (in my case simply a 3-node devstack), so that the first AZ would hold one compute and the second AZ the other. There is also one host aggregate which contains all the computes. With this setup it might happen that the host aggregate takes precedence over the AZ.

The actors:

1. ctrl (controller node), with nova.conf altered to:
   scheduler_available_filters=nova.scheduler.filters.all_filters
   scheduler_default_filters=RetryFilter,AggregateInstanceExtraSpecsFilter,AvailabilityZoneFilter,RamFilter,CoreFilter,DiskFilter,ComputeFilter,ImagePropertiesFilter
2. cpu1 and cpu2 (compute nodes)
3. availability zone az1, which includes cpu1 and has metadata set to some.hw=true
4. availability zone az2, which includes cpu2
5. host aggregate aggr3, which includes cpu1 and cpu2
6. flavor aztest with the extra spec some.hw=true

The action: create the vms with the aztest flavor; all of them should be spawned on cpu1. Note that the cirrosXXX image has to be available; I used an i386 image to be able to successfully perform live migration on my devstack setup.

$ nova boot --flavor aztest --image cirrosXXX --min-count 4 vm
$ nova list --fields host,name,status
+--------------------------------------+------+------+--------+
| ID                                   | Host | Name | Status |
+--------------------------------------+------+------+--------+
| 1569be1a-1289-4d52-b3d1-c3008f7c865f | cpu1 | vm-4 | ACTIVE |
| 217cb74e-74c6-4e46-abbc-3582d7e5fb4d | cpu1 | vm-3 | ACTIVE |
| 7dc98646-db5a-4433-b000-fd0ae671f3c7 | cpu1 | vm-2 | ACTIVE |
| a6ddd4d8-d05f-45c3-9e6a-4c9fa33da2ea | cpu1 | vm-1 | ACTIVE |
+--------------------------------------+------+------+--------+

Now, try to live-migrate vm-1:

$ nova live-migration --block-migrate vm-1
ERROR (BadRequest): No valid host was found. There are not enough hosts available.
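The interplay of the two filters in this test can be sketched as a small simulation (this is not nova's actual code; the host, aggregate, and metadata names come from the setup above, and the filter logic is a deliberate simplification of AvailabilityZoneFilter and AggregateInstanceExtraSpecsFilter):

```python
# Hypothetical in-memory model of the devstack setup above.
hosts = {
    "cpu1": {"az": "az1", "aggregate_metadata": {"some.hw": "true"}},
    "cpu2": {"az": "az2", "aggregate_metadata": {}},
}

def availability_zone_filter(host, requested_az):
    # Simplified: pass when no AZ was requested for the instance,
    # or when the host's AZ matches the requested one.
    return requested_az is None or hosts[host]["az"] == requested_az

def aggregate_extra_specs_filter(host, extra_specs):
    # Simplified: every flavor extra spec must be matched by the
    # metadata of an aggregate the host belongs to.
    meta = hosts[host]["aggregate_metadata"]
    return all(meta.get(key) == value for key, value in extra_specs.items())

extra_specs = {"some.hw": "true"}  # from the aztest flavor

# Boot: no explicit AZ requested, so only the extra-specs match narrows
# the candidates; only cpu1 carries some.hw=true.
candidates = [
    h for h in hosts
    if availability_zone_filter(h, None)
    and aggregate_extra_specs_filter(h, extra_specs)
]
print(candidates)  # -> ['cpu1']

# Live migration: the source host cpu1 is excluded as a target, so no
# valid host remains -- matching the "No valid host was found" error.
migration_targets = [h for h in candidates if h != "cpu1"]
print(migration_targets)  # -> []
```

Under this simplified model, adding a third compute whose aggregate also carries some.hw=true (as done with cpu3 below) is exactly what makes the migration-target list non-empty again.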
(HTTP 400) (Request-ID: req-2b1cd8d2-2316-40f2-8600-98c748ae565d)

After adding another compute to the cluster, and adding it to az1, live migration works as expected:

$ nova aggregate-add-host aggr1 cpu3
$ nova live-migration --block-migrate vm-1

So I have failed to reproduce the reported behaviour, which might be a result of insufficient data provided, or of a configuration issue on the production setup.

** Changed in: nova
   Status: Confirmed => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1442024

Title:
  AvailabilityZoneFilter does not filter when doing live migration

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Last night our ops team live-migrated (nova live-migration --block-migrate $vm) a group of vms to do hw maintenance. The vms ended up in a different AZ, making them unusable (we have different upstream network connectivity in each AZ).

  It never happened before; I tested, of course. I have set up the AZ filter:

  scheduler_available_filters=nova.scheduler.filters.all_filters
  scheduler_default_filters=RetryFilter,AggregateInstanceExtraSpecsFilter,AvailabilityZoneFilter,RamFilter,CoreFilter,DiskFilter,ComputeFilter,ImagePropertiesFilter

  I'm using icehouse 2014.1.2-0ubuntu1.1~cloud0. I will clean and upload logs right away.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1442024/+subscriptions