Status: New
Owner: ----

New issue 799 by [email protected]: hail does not respect instance exclusion tags
http://code.google.com/p/ganeti/issues/detail?id=799

What software version are you running? Please provide the output of "gnt-
cluster --version", "gnt-cluster version", and "hspace --version".

# gnt-cluster --version
gnt-cluster (ganeti v2.9.5) 2.9.5

# gnt-cluster version
Software version: 2.9.5
Internode protocol: 2090000
Configuration format: 2090000
OS api version: 20
Export interface: 0
VCS version: v2.9.5

What distribution are you using?
# cat /etc/*-release
SUSE Linux Enterprise Server 11 (x86_64)
VERSION = 11
PATCHLEVEL = 3

What steps will reproduce the problem?
1. set htools instance exclusion tags
# gnt-cluster info | grep htools
Tags: htools:iextags:customer, htools:iextags:sapsid

2. add instances with exclusion tag (placement via hail)
# gnt-instance add -t sharedfile ... --tags sapsid:xxx vm1
# gnt-instance add -t sharedfile ... --tags sapsid:xxx vm2
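For context, the intended semantics of the exclusion tags above can be sketched as follows. This is a minimal Python illustration, not the actual Haskell htools code: it assumes that a cluster tag htools:iextags:sapsid declares "sapsid" as an exclusion prefix, and that two instances sharing a tag under a declared prefix (e.g. both carrying sapsid:xxx) should not be placed on the same node.

```python
# Hypothetical sketch of htools-style instance exclusion tags (for
# illustration only; the real logic lives in Ganeti's Haskell htools).

# Exclusion prefixes declared via cluster tags htools:iextags:<prefix>,
# matching the reproduction steps above.
EXCLUSION_PREFIXES = {"customer", "sapsid"}

def exclusion_keys(instance_tags):
    """Return the instance tags that participate in exclusion checks,
    i.e. those whose prefix (up to the first colon) is declared."""
    keys = set()
    for tag in instance_tags:
        prefix, _, _ = tag.partition(":")
        if prefix in EXCLUSION_PREFIXES:
            keys.add(tag)
    return keys

def conflicts(tags_a, tags_b):
    """Two instances conflict if they share any exclusion key; hail
    should avoid placing conflicting instances on the same node."""
    return bool(exclusion_keys(tags_a) & exclusion_keys(tags_b))

# vm1 and vm2 from the report both carry sapsid:xxx:
print(conflicts({"sapsid:xxx"}, {"sapsid:xxx"}))  # True: same node is a conflict
print(conflicts({"sapsid:xxx"}, {"sapsid:yyy"}))  # False: different SAP SIDs
print(conflicts({"other:tag"}, {"other:tag"}))    # False: not a declared prefix
```

Under these semantics, vm1 and vm2 conflict, so a placement of both on gisu818 (as in the logs below) is exactly what the exclusion tags are meant to prevent.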

What is the expected output? What do you see instead?

Expected is that hail places the instances on different nodes, respecting the exclusion tag. Instead, the same node was chosen:

2014-04-15 09:25:01,681: ganeti-masterd pid=22310/Jq16/Job890534/I_CREATE INFO Selected nodes for instance gisu429.gisa-halle.de via iallocator hail: gisu818.gisa-halle.de
2014-04-15 09:30:31,423: ganeti-masterd pid=22310/Jq18/Job890543/I_CREATE INFO Selected nodes for instance gisu430.gisa-halle.de via iallocator hail: gisu817.gisa-halle.de
2014-04-15 09:32:16,040: ganeti-masterd pid=22310/Jq25/Job890544/I_CREATE INFO Selected nodes for instance gisu431.gisa-halle.de via iallocator hail: gisu818.gisa-halle.de

Right after creation, running hbal calculates that some instances should migrate:

2014-04-15 09:39:02,418: ganeti-masterd pid=22310/ClientReq13 INFO New job with id 890555, summary: INSTANCE_MIGRATE(gisu429.gisa-halle.de)

Please provide any additional information below.

It seems to happen often in an "unbalanced" cluster, e.g. right after adding a new node. But I'm sure the capacity of the "old" nodes had not been reached (as hbal afterwards proves).

Thanks, Sascha.
