[Yahoo-eng-team] [Bug 1827094] [NEW] ironic.peer_list config help is wrong
Public bug reported:

This help text currently implies that the host reading the config does not need to be in the peer list, when in fact it does:
https://opendev.org/openstack/nova/src/commit/ce5ef763b58cad09440e0da67733ce578068752a/nova/virt/ironic/driver.py#L142

** Affects: nova
   Importance: Undecided
   Assignee: Jim Rollenhagen (jim-rollenhagen)
   Status: In Progress

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1827094

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1827094/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
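As a sketch of the expectation the corrected help text should convey (host names are hypothetical; the option lives in nova.conf's [ironic] section per the linked driver code):

```ini
[ironic]
partition_key = my-partition
# The host whose nova.conf this is must itself appear in peer_list,
# alongside the other computes sharing the partition.
peer_list = this-host,other-host-1,other-host-2
```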
[Yahoo-eng-team] [Bug 1815763] [NEW] Unbound regex in config options
Public bug reported:

Oslo.config uses re.search() to check config values against the allowed regex. This checks if the regex matches anywhere in the string, rather than checking if the entire string matches the regex.

Nova has three config options that appear as if the entire string should match the given regex:

* DEFAULT.instance_usage_audit_period
* cinder.catalog_info
* serial_console.port_range

However, these are not bounded with ^ and $ to ensure the entire string matches.

** Affects: nova
   Importance: Low
   Assignee: Jim Rollenhagen (jim-rollenhagen)
   Status: In Progress

** Changed in: nova
   Importance: Undecided => Low
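A minimal illustration of the difference (the pattern below is a hypothetical stand-in resembling the audit-period option's intent, not the exact regex from nova):

```python
import re

# Hypothetical pattern: an interval name, optionally followed by "@<offset>".
pattern = r'(hour|day|month|year)(@([0-9]+))?'

# re.search() accepts a value as long as the pattern matches *somewhere*,
# so junk around a valid token still passes validation.
assert re.search(pattern, 'bogus-day-bogus') is not None

# Anchoring with ^ and $ (or using re.fullmatch) rejects it.
anchored = r'^' + pattern + r'$'
assert re.search(anchored, 'bogus-day-bogus') is None
assert re.search(anchored, 'day@2') is not None
assert re.fullmatch(pattern, 'day@2') is not None
```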
[Yahoo-eng-team] [Bug 1801779] [NEW] Policy rule rule:create_port:fixed_ips:subnet_id doesn't allow non-admin to create port on specific subnet
Public bug reported:

Running roughly master branch. According to pip, neutron==13.0.0.0rc2.dev324. I know that isn't super helpful from a dev perspective, but this is a kolla image and I don't have a great way to map this back to a SHA.

Trying to create a port on a specific subnet on a shared network. I have the following policy rules, which seem to imply I should be able to do this:

"create_port:fixed_ips": "rule:context_is_advsvc or rule:admin_or_network_owner",
"create_port:fixed_ips:ip_address": "rule:context_is_advsvc or rule:admin_or_network_owner",
"create_port:fixed_ips:subnet_id": "rule:context_is_advsvc or rule:admin_or_network_owner or rule:shared",

Client logs here: https://gist.github.com/jimrollenhagen/82514bee47ad66e1e878c56d8fd66453

Not much showing up in neutron-server.log, but can provide more info if needed.

** Affects: neutron
   Importance: Undecided
   Status: New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1801779/+subscriptions
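To make the failure mode above concrete, here is a toy stand-in for oslo.policy-style rule evaluation (this is NOT the real library or neutron's enforcement code; the rule predicates and the diagnosis that the 'shared' flag may be missing from the enforcement target are illustrative assumptions):

```python
# Each named rule is a predicate over (target, creds), and
# "rule:a or rule:b" is the boolean OR of the named rules.
rules = {
    'context_is_advsvc': lambda t, c: c.get('is_advsvc', False),
    'admin_or_network_owner': lambda t, c: (
        c.get('is_admin', False)
        or c.get('project_id') == t.get('network:project_id')),
    'shared': lambda t, c: t.get('shared', False),
}

def check(rule_names, target, creds):
    """Evaluate 'rule:a or rule:b or ...' against a target and creds."""
    return any(rules[name](target, creds) for name in rule_names)

creds = {'project_id': 'tenant-a'}           # plain non-admin user
target = {'network:project_id': 'tenant-b'}  # shared network owned elsewhere

# If enforcement never populates 'shared' in the target built for the
# fixed_ips:subnet_id sub-attribute, rule:shared can never pass:
assert check(['context_is_advsvc', 'admin_or_network_owner', 'shared'],
             target, creds) is False

# With the flag present, the same policy line allows the request:
target['shared'] = True
assert check(['context_is_advsvc', 'admin_or_network_owner', 'shared'],
             target, creds) is True
```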
[Yahoo-eng-team] [Bug 1793556] [NEW] ironic: power sync loop makes way too many API calls
Public bug reported:

The ironic driver does not use its local cache of node data for the get_info call, which is used during the instance power sync. This results in N API calls per power sync loop, where N is the number of instances managed by the compute service doing the sync. We should aim to use the cache and reduce this to one API call or fewer.

** Affects: nova
   Importance: Undecided
   Status: New
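The shape of the fix can be sketched as follows (class and method names are hypothetical, not the actual nova.virt.ironic code):

```python
class FakeIronicAPI:
    """Stand-in ironic client that counts API calls."""
    def __init__(self, nodes):
        self._nodes = nodes
        self.calls = 0

    def get_node(self, node_id):   # one call per instance
        self.calls += 1
        return self._nodes[node_id]

    def list_nodes(self):          # one call for everything
        self.calls += 1
        return list(self._nodes.values())

nodes = {str(i): {'uuid': str(i), 'power_state': 'power on'}
         for i in range(50)}

# Uncached power sync: N calls for N instances.
api = FakeIronicAPI(nodes)
for node_id in nodes:
    api.get_node(node_id)
assert api.calls == 50

# Cached power sync: refresh the node cache once, then answer each
# get_info() lookup from the cache.
api = FakeIronicAPI(nodes)
cache = {n['uuid']: n for n in api.list_nodes()}
for node_id in nodes:
    _ = cache[node_id]['power_state']
assert api.calls == 1
```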
[Yahoo-eng-team] [Bug 1775934] [NEW] Cannot run "openstack server list" with instance stuck in scheduling state
Public bug reported:

Seeing this in Ocata at 125dd1f30fdaf50182256c56808a5199856383c7.

Running `openstack server list --project 9c28d07207a54c78848fd7b4f85779d5` results in a 500 error:

RESP BODY: {"computeFault": {"message": "Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.\n", "code": 500}}

Traceback in nova-api: http://paste.openstack.org/show/6YrSmjMSo0lIxyFjbPIz/
Some data on the instance: http://paste.openstack.org/show/6PSa35HvdxZCQnVQ2sQU/

Looks like lazy-loading the flavor is failing because it's looking in the wrong database.

** Affects: nova
   Importance: Undecided
   Status: New
[Yahoo-eng-team] [Bug 1750450] [NEW] ironic: n-cpu fails to recover after losing connection to ironic-api and placement-api
Public bug reported:

The ironic virt driver does some crazy things when the ironic API goes down - it returns [] from get_available_nodes(). When the resource tracker sees this, it immediately attempts to delete all of the compute node records and resource providers for said nodes. If placement is also down at this time, the resource providers will not be properly deleted.

When ironic-api and placement-api return, nova will see nodes, create compute_node records for them, and try to create new resource providers (as they are new compute_node records). This will fail with a name conflict, and the nodes will be unusable.

This is easy to fix by raising an exception in get_available_nodes instead of lying to the resource tracker and returning []. However, this causes nova-compute to fail to start if ironic-api is not available. That may be fine, but it deserves a larger discussion. We've added these hacks over the years for some reason; we should look at the bigger picture and decide how we want to handle these cases.

** Affects: nova
   Importance: Undecided
   Status: New
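The two behaviors described above can be sketched like this (all names here are hypothetical simplifications; the real driver and resource tracker are far more involved):

```python
class VirtDriverNotReady(Exception):
    """Driver can't answer right now; caller should retry, not delete."""

def get_available_nodes(ironic_up, nodes, raise_on_failure):
    if not ironic_up:
        if raise_on_failure:
            raise VirtDriverNotReady('ironic API unreachable')
        return []  # current behavior: looks like "all nodes are gone"
    return nodes

def resource_tracker_sync(driver_nodes, known_nodes):
    """Returns the records the tracker would delete: anything known
    that the driver did not report."""
    return [n for n in known_nodes if n not in driver_nodes]

known = ['node-1', 'node-2']

# Returning [] makes the tracker orphan every compute-node record:
deleted = resource_tracker_sync(
    get_available_nodes(False, known, raise_on_failure=False), known)
assert deleted == ['node-1', 'node-2']

# Raising instead lets the caller skip the sync and keep the records:
try:
    get_available_nodes(False, known, raise_on_failure=True)
except VirtDriverNotReady:
    deleted = []
assert deleted == []
```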
[Yahoo-eng-team] [Bug 1749797] [NEW] placement returns 503 when keystone is down
Public bug reported:

See the logs here:
http://logs.openstack.org/50/544750/8/check/ironic-grenade-dsvm-multinode-multitenant/5713fb8/logs/screen-placement-api.txt.gz#_Feb_15_17_58_22_463228

This is during an upgrade while Keystone is down. Placement returns a 503 because it cannot reach keystone. I'm not sure what the expected behavior should be, but a 503 feels wrong.

** Affects: nova
   Importance: Undecided
   Status: New
[Yahoo-eng-team] [Bug 1674236] Re: CI / promotion: Nova isn't aware of the nodes that were registered with Ironic
I'll have a patch up for this today sometime, btw.

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => High

** Changed in: nova
   Assignee: (unassigned) => Jim Rollenhagen (jim-rollenhagen)

--
https://bugs.launchpad.net/bugs/1674236

Title: CI / promotion: Nova isn't aware of the nodes that were registered with Ironic
Status in OpenStack Compute (nova): Confirmed
Status in tripleo: Triaged

Bug description:

All CI periodic jobs fail with "No valid host" error:
http://logs.openstack.org/periodic/periodic-tripleo-ci-centos-7-ovb-ha/6504587/
http://logs.openstack.org/periodic/periodic-tripleo-ci-centos-7-ovb-nonha/12d034e/

Hosts are not deployed:
http://logs.openstack.org/periodic/periodic-tripleo-ci-centos-7-ovb-nonha/12d034e/logs/postci.txt.gz#_2017-03-19_07_22_10_000

| ID                                   | Name                    | Status | Task State | Power State | Networks |
| 96e8d6bc-0ff4-46ad-a274-7bf554cdaf1a | overcloud-cephstorage-0 | ERROR  | -          | NOSTATE     |          |
| 56266ef5-7483-4052-8698-37efe14bc1c6 | overcloud-novacompute-0 | ERROR  | -          | NOSTATE     |          |

ironic node-list

| UUID                                 | Name                 | Instance UUID | Power State | Provisioning State | Maintenance |
| b285-e40e-4068-abd8-7edeeb255cef     | baremetal-periodic-0 | None          | power off   | available          | False       |
| 102deb76-7f12-49a1-9c3c-53472a1d0f3e | baremetal-periodic-1 | None          | power off   | available          | False       |
| 8afea687-4d29-4eed-97f3-57ba449eed14 | baremetal-periodic-2 | None          | power off   | available          | False       |

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1674236/+subscriptions
[Yahoo-eng-team] [Bug 1606231] Re: [RFE] Support nova virt interface attach/detach
In ironic, this is a duplicate of an RFE to do the same: https://bugs.launchpad.net/ironic/+bug/1582188

** Changed in: ironic
   Status: Confirmed => Invalid

--
https://bugs.launchpad.net/bugs/1606231

Title: [RFE] Support nova virt interface attach/detach
Status in Ironic: Invalid
Status in OpenStack Compute (nova): Invalid

Bug description:

Steps to reproduce:

1. Get the list of attached ports of an instance:

   nova interface-list 42dd8b8b-b2bc-420e-96b6-958e9295b2d4

   | Port State | Port ID                              | Net ID                               | IP addresses                                  | MAC Addr          |
   | ACTIVE     | 512e6c8e-3829-4bbd-8731-c03e5d7f7639 | ccd0fd43-9cc3-4544-b17c-dfacd8fa4d14 | 10.1.0.6,fdea:fd32:11ff:0:f816:3eff:fed1:8a7c | 52:54:00:85:19:89 |

2. Show the ironic port; it has vif_port_id in extra with the id of the neutron port:

   ironic port-show 735fcaf5-145d-4125-8701-365c58c6b796

   | Property              | Value                                                     |
   | address               | 52:54:00:85:19:89                                         |
   | created_at            | 2016-07-20T13:15:23+00:00                                 |
   | extra                 | {u'vif_port_id': u'512e6c8e-3829-4bbd-8731-c03e5d7f7639'} |
   | local_link_connection |                                                           |
   | node_uuid             | 679fa8a9-066e-4166-ac1e-6e77af83e741                      |
   | pxe_enabled           |                                                           |
   | updated_at            | 2016-07-22T13:31:29+00:00                                 |
   | uuid                  | 735fcaf5-145d-4125-8701-365c58c6b796                      |

3. Delete the neutron port:

   neutron port-delete 512e6c8e-3829-4bbd-8731-c03e5d7f7639
   Deleted port: 512e6c8e-3829-4bbd-8731-c03e5d7f7639

4. It is gone from the interface list:

   nova interface-list 42dd8b8b-b2bc-420e-96b6-958e9295b2d4
   (no rows)

5. The ironic port still has vif_port_id with neutron's port id:

   ironic port-show 735fcaf5-145d-4125-8701-365c58c6b796
   (same output as in step 2: extra still contains u'vif_port_id': u'512e6c8e-3829-4bbd-8731-c03e5d7f7639')

This can confuse a user who wants to get the list of unused ports of an ironic node. vif_port_id should be removed after neutron port-delete.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1606231/+subscriptions
[Yahoo-eng-team] [Bug 1239481] Re: nova baremetal requires manual neutron setup for metadata access
While this is an ironic-specific problem, I don't believe the fix here is in ironic. It seems that Neutron and/or ML2 mechanisms need to set a proper route for this in the physical switch, but I'm not sure which layer that would be in. (I assume the agent on the host usually does it.)

** Changed in: ironic
   Status: Confirmed => Invalid

--
https://bugs.launchpad.net/bugs/1239481

Title: nova baremetal requires manual neutron setup for metadata access
Status in Ironic: Invalid
Status in neutron: Expired
Status in OpenStack Compute (nova): Won't Fix
Status in tripleo: Incomplete

Bug description:

A subnet set up with host routes can use a bare metal gateway as long as there is a metadata server on the same network:

neutron subnet-create ... (network, dhcp settings etc) host_routes type=dict list=true destination=169.254.169.254/32,nexthop= --gateway_ip=

But this requires manual configuration - it would be nice if nova could configure this as part of bringing up the network for a given node.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1239481/+subscriptions
[Yahoo-eng-team] [Bug 1613622] Re: nova cellsv2 breaks ironic
This was fixed with https://review.openstack.org/#/c/355659/

** Changed in: nova
   Status: New => Fix Released

** Changed in: ironic
   Status: Confirmed => Invalid

--
https://bugs.launchpad.net/bugs/1613622

Title: nova cellsv2 breaks ironic
Status in Ironic: Invalid
Status in OpenStack Compute (nova): Fix Released

Bug description:

After merging https://review.openstack.org/#/c/322311/ and https://review.openstack.org/#/c/354734/, the ironic-grenade job is failing the "nova-manage cell_v2 simple_cell_setup" call with:

2016-08-16 07:00:15.389 | error: (pymysql.err.IntegrityError) (1062, u"Duplicate entry 'ubuntu-trusty-rax-ord-3505064' for key 'uniq_host_mappings0host'") [SQL: u'INSERT INTO host_mappings (created_at, updated_at, cell_id, host) VALUES (%(created_at)s, %(updated_at)s, %(cell_id)s, %(host)s)'] [parameters: {'created_at': datetime.datetime(2016, 8, 16, 7, 0, 15, 386432), 'cell_id': 2, 'host': u'ubuntu-trusty-rax-ord-3505064', 'updated_at': None}]

The full log may be found here:
http://logs.openstack.org/99/350399/5/check/gate-grenade-dsvm-ironic/4ec5728/logs/grenade.sh.txt.gz#_2016-08-16_07_00_15_389

The patch to devstack where cellsv2 was enabled, https://review.openstack.org/#/c/322311/, failed to pass Ironic jobs:

2016-08-13 02:25:16.316 | 2016-08-13 02:25:16.316 20278 DEBUG oslo_policy._cache_handler [req-d499d0c6-b799-4886-b3b8-2c576ecb3137 - -] Reloading cached file /etc/nova/policy.json read_cached_file /usr/local/lib/python2.7/dist-packages/oslo_policy/_cache_handler.py:38
2016-08-13 02:25:16.318 | 2016-08-13 02:25:16.317 20278 DEBUG oslo_policy.policy [req-d499d0c6-b799-4886-b3b8-2c576ecb3137 - -] Reloaded policy file: /etc/nova/policy.json _load_policy_file /usr/local/lib/python2.7/dist-packages/oslo_policy/policy.py:584
2016-08-13 02:25:16.559 | No hosts found to map to cell, exiting.

The full log may be found here:
http://logs.openstack.org/11/322311/4/check/gate-tempest-dsvm-ironic-ipa-wholedisk-agent_ssh-tinyipa-nv/1347f2a/logs/devstacklog.txt.gz#_2016-08-13_02_25_16_316

Nova experts, please take a look at this issue; it is critical for the ironic team.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1613622/+subscriptions
[Yahoo-eng-team] [Bug 1552466] [NEW] Ironic: deploy not cleaned up if configdrive fails to build
Public bug reported:

The _generate_configdrive call is not protected by a try/except, and thus we do not clean up behind it if it fails somehow. This leaves firewalls and vifs set up, and leaves instance info on the node in ironic's DB.

** Affects: nova
   Importance: Undecided
   Assignee: Jim Rollenhagen (jim-rollenhagen)
   Status: In Progress
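The fix being proposed can be sketched as follows (the function and helper names here are illustrative, not the actual nova.virt.ironic driver API):

```python
class ConfigDriveError(Exception):
    pass

def spawn(node, generate_configdrive, cleanup_deploy):
    """Run configdrive generation, cleaning up deploy state on failure."""
    try:
        configdrive = generate_configdrive(node)
    except Exception as exc:
        # Without this handler, vifs and firewall rules stay configured
        # and the node keeps stale instance info in ironic's DB.
        cleanup_deploy(node)
        raise ConfigDriveError(str(exc))
    return configdrive

events = []

def failing_generate(node):
    raise RuntimeError('boom')

def record_cleanup(node):
    events.append(('cleanup', node))

try:
    spawn('node-1', failing_generate, record_cleanup)
except ConfigDriveError:
    pass

# Cleanup ran exactly once for the failed node.
assert events == [('cleanup', 'node-1')]
```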
[Yahoo-eng-team] [Bug 1350608] Re: Request ID header is lost between nova.virt.ironic and ironic-api service
Going to close this one on the Ironic side in favor of the RFE https://bugs.launchpad.net/ironic/+bug/1505119

** Changed in: ironic
   Status: Confirmed => Invalid

--
https://bugs.launchpad.net/bugs/1350608

Title: Request ID header is lost between nova.virt.ironic and ironic-api service
Status in Ironic: Invalid
Status in OpenStack Compute (nova): Incomplete

Bug description:

Services pass request-id headers around to assist with operator interpretation of log files. This "req-XXX" header is being logged at the nova.virt.ironic layer, but does not seem to be passed to ironic's API service (or is not received / logged there).

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1350608/+subscriptions
[Yahoo-eng-team] [Bug 1311401] Re: nova.virt.ironic tries to remove vif_port_id unnecessarily
Looks like this is fixed already: https://github.com/openstack/nova/commit/d3acac0f5bffca59441d9a4a12c89db1d45ec4cf

** Changed in: nova
   Assignee: Aniruddha Singh Gautam (aniruddha-gautam) => (unassigned)

** Changed in: nova
   Status: Confirmed => Fix Released

--
https://bugs.launchpad.net/bugs/1311401

Title: nova.virt.ironic tries to remove vif_port_id unnecessarily
Status in Ironic: Won't Fix
Status in OpenStack Compute (nova): Fix Released

Bug description:

While spawning an instance, the Ironic nova driver logs the following warning every time:

2014-04-22 17:23:21.967 15379 WARNING wsme.api [-] Client-side error: Couldn't apply patch '[{'path': '/extra/vif_port_id', 'op': 'remove'}]'. Reason: u'vif_port_id'

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1311401/+subscriptions
[Yahoo-eng-team] [Bug 1326639] Re: Ironic nova driver fails to setup initial state correctly
This has been fixed for a while; we only expose resources for a node in the AVAILABLE/NONE provision state with no instance uuid:
https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py#L301-L318

** Changed in: nova
   Status: Confirmed => Fix Released

--
https://bugs.launchpad.net/bugs/1326639

Title: Ironic nova driver fails to setup initial state correctly
Status in Ironic: Invalid
Status in OpenStack Compute (nova): Fix Released

Bug description:

2014-06-05 04:04:54.552 28915 ERROR ironic.nova.virt.ironic.driver [req-66403d15-5f7e-4a59-8d3d-ba9d6e654fb5 None] Failed to request Ironic to provision instance ef3421ef-e7b3-4203-811c-dad052b9badf: RPC do_node_deploy called for cfa5c267-3a7c-4973-bdcf-80a139a947ea, but provision state is already deploy failed. (HTTP 500)

This happened because the node wasn't 'properly' cleaned after the last instance_uuid was removed from it. It seems to me that the ironic nova driver should not make any assumptions - just set its instance_uuid atomically, then reset all the state, and finally proceed to set the state it wants for deployment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1326639/+subscriptions
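The check the comment above refers to can be sketched as a simple predicate (a toy version under assumed semantics; the real code at the linked lines also considers maintenance mode and other details):

```python
AVAILABLE = 'available'

def node_resources_unavailable(provision_state, instance_uuid):
    """A node should expose resources to the scheduler only when it is
    in the AVAILABLE (or legacy None) provision state and has no
    instance assigned."""
    usable = provision_state in (AVAILABLE, None) and instance_uuid is None
    return not usable

# A node stuck in "deploy failed" is never offered to the scheduler,
# which avoids the do_node_deploy-on-dirty-node failure in this bug.
assert node_resources_unavailable('deploy failed', None) is True
assert node_resources_unavailable(AVAILABLE, 'some-instance-uuid') is True
assert node_resources_unavailable(AVAILABLE, None) is False
```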
[Yahoo-eng-team] [Bug 1341347] Re: failed Ironic deploys can have incorrect hypervisor attribute in Nova
I tend to think the instance should always be tagged with a "hypervisor" for a record of where it was built. In the past this could cause problems with the resource tracker, but those are long solved.

There's also the part of this where the logs are likely gone by now, tripleo has changed its architecture up, etc. This is likely to be hard to reproduce, even if we think it is a bug.

Going to close this as WONTFIX; feel free to reopen if you think I'm a terrible person :)

** Changed in: nova
   Status: Confirmed => Won't Fix

--
https://bugs.launchpad.net/bugs/1341347

Title: failed Ironic deploys can have incorrect hypervisor attribute in Nova
Status in Ironic: Invalid
Status in OpenStack Compute (nova): Won't Fix

Bug description:

I just booted 46 nodes at once from a single Ironic conductor/Nova/keystone etc all-in-one cloud. After this, according to Ironic:

- 1 node was in maintenance mode (see bug 1326279)
- 5 have instance_uuid None
- the rest are active

But according to Nova:

- 8 are in ERROR spawning:
  (in nova)   | eb0e1255-4da5-46cb-b8e4-d3e1059e1087 | hw-test-eb0e1255-4da5-46cb-b8e4-d3e1059e1087 | ERROR | spawning | NOSTATE | |
  (in ironic) | ebd0e2c1-7630-4067-94c1-81771c1680b6 | eb0e1255-4da5-46cb-b8e4-d3e1059e1087 | power on | active | False |
  (see bug 1341346)

- 5 are in ERROR NOSTATE:
  (nova) | c389bb7b-1760-4e69-a4ea-0aea07ccd4d8 | hw-test-c389bb7b-1760-4e69-a4ea-0aea07ccd4d8 | ERROR | - | NOSTATE | ctlplane=10.10.16.146 |

  nova show shows us that it has a hypervisor:
  | OS-EXT-SRV-ATTR:hypervisor_hostname | 8bc4357a-6b32-47de-b3ee-cec5b41e72d2 |

  but in ironic there is no instance uuid (nor a deployment dict..):
  | 8bc4357a-6b32-47de-b3ee-cec5b41e72d2 | None | power off | None | False |

This bug is about the Nova instance having a hypervisor attribute that is wrong :)

I have logs for this copied inside the DC, but a) it's a production environment, so only tripleo-cd-admins can look (due to me being concerned about passwords being in the logs) and b) they are 2.6GB in size, so it's not all that feasible to attach them to the bug anyhow :).

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1341347/+subscriptions
[Yahoo-eng-team] [Bug 1485068] Re: Nova does not support baremetal vnic
Closed bug as invalid; this blueprint covers the work:
https://blueprints.launchpad.net/nova/+spec/ironic-networks-support

** Changed in: nova
   Status: In Progress => Invalid

--
https://bugs.launchpad.net/bugs/1485068

Title: Nova does not support baremetal vnic
Status in OpenStack Compute (nova): Invalid

Bug description:

In order to support Ironic/Neutron integration, there is a need to identify Ironic ports. The presently defined vnic types do not help with this. Therefore, a new type is needed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1485068/+subscriptions
[Yahoo-eng-team] [Bug 1502177] [NEW] Existing Ironic instances report negative available RAM for the node after upgrade
Public bug reported: Ironic nodes that have an existing instance will report negative available RAM after upgrading beyond this commit: https://github.com/openstack/nova/commit/b99fb0a51c658301188bbc729d1437a9c8b75d00 The node attached to the instance will not have instance_info[memory_mb], etc., set on the node object in Ironic. This code causes the driver to report memory_mb_used=memory_mb=0 if this info is unset. The resource tracker notices that an instance is on that node and sets memory_mb_used to X (the size of the instance), after which the node reports -X available memory. This can wreak havoc on tools that look at total available memory. These could range from capacity reporting tools to scheduler/cells filters. If more than half of the capacity has instances, the total memory available will be negative, and could cause things to not schedule properly or generate alerts. ** Affects: nova Importance: Undecided Status: New ** Tags: liberty-rc-potential -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1502177 Title: Existing Ironic instances report negative available RAM for the node after upgrade Status in OpenStack Compute (nova): New Bug description: Ironic nodes that have an existing instance will report negative available RAM after upgrading beyond this commit: https://github.com/openstack/nova/commit/b99fb0a51c658301188bbc729d1437a9c8b75d00 The node attached to the instance will not have instance_info[memory_mb], etc., set on the node object in Ironic. This code causes the driver to report memory_mb_used=memory_mb=0 if this info is unset. The resource tracker notices that an instance is on that node and sets memory_mb_used to X (the size of the instance), after which the node reports -X available memory. This can wreak havoc on tools that look at total available memory. 
These could range from capacity reporting tools to scheduler/cells filters. If more than half of the capacity has instances, the total memory available will be negative, and could cause things to not schedule properly or generate alerts. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1502177/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
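The accounting described above is easy to reproduce. A minimal sketch (a hypothetical helper, not the actual Nova resource tracker code), assuming the driver reports zeroed totals while the tracker records the instance's size as used:

```python
# Minimal sketch of the accounting described above; hypothetical helper,
# not the actual Nova resource tracker code.

def free_memory_mb(driver_memory_mb, driver_memory_mb_used, instance_memory_mb):
    """Free RAM as the resource tracker would end up reporting it."""
    total = driver_memory_mb
    # The driver reports memory_mb_used = memory_mb = 0 when
    # instance_info[memory_mb] is unset, but the tracker sees an
    # instance of size X on the node and records X as used.
    used = max(driver_memory_mb_used, instance_memory_mb)
    return total - used

# A 32768 MB instance on a node whose driver reports zeroed totals:
print(free_memory_mb(0, 0, 32768))  # -> -32768
```

With more than half of the nodes in this state, those negative per-node values drag the aggregate total below zero, which is exactly what capacity tools and cells filters then observe.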
[Yahoo-eng-team] [Bug 1499054] Re: devstack VMs are not booting
This revert fixes the problem locally for me: https://review.openstack.org/#/c/226969/ ** Also affects: neutron Importance: Undecided Status: New ** Tags added: liberty-backport-potential -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1499054 Title: devstack VMs are not booting Status in Ironic: Confirmed Status in neutron: New Bug description: In devstack, VMs are failing to boot the deploy ramdisk consistently. It appears ipxe is failing to configure the NIC, which is usually caused by a DHCP timeout, but can also be caused by a bug in the PXE ROM that chainloads to ipxe. See also http://ipxe.org/err/040ee1 Console output:

SeaBIOS (version 1.7.4-20140219_122710-roseapple)
Machine UUID 37679b90-9a59-4a85-8665-df8267e09a3b

iPXE (http://ipxe.org) 00:04.0 CA00 PCI2.10 PnP PMM+3FFC2360+3FF22360 CA00
Booting from ROM...
iPXE (PCI 00:04.0) starting execution...ok
iPXE initialising devices...ok
iPXE 1.0.0+git-2013.c3d1e78-2ubuntu1.1 -- Open Source Network Boot Firmware -- http://ipxe.org
Features: HTTP HTTPS iSCSI DNS TFTP AoE bzImage ELF MBOOT PXE PXEXT Menu

net0: 52:54:00:7c:af:9e using 82540em on PCI00:04.0 (open)
  [Link:up, TX:0 TXE:0 RX:0 RXE:0]
Configuring (net0 52:54:00:7c:af:9e).. Error 0x040ee119 (http://ipxe.org/040ee119)
No more network devices
No bootable device.

To manage notifications about this bug go to: https://bugs.launchpad.net/ironic/+bug/1499054/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1495523] Re: router-interface-add fails with error 500 on PostgreSQL
This doesn't require Ironic changes; it only affects Ironic. I'm going to close this in Ironic as invalid so it doesn't show up in the milestone. (and yes, it's fixed now) ** Changed in: ironic Status: In Progress => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1495523 Title: router-interface-add fails with error 500 on PostgreSQL Status in Ironic: Invalid Status in neutron: Fix Committed Bug description: If PostgreSQL is used as the DB backend, then Neutron fails with error code 500 using CLI "router-interface-add":

2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters Traceback (most recent call last):
2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters     context)
2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 450, in do_execute
2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters     cursor.execute(statement, parameters)
2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters ProgrammingError: column "agents.id" must appear in the GROUP BY clause or be used in an aggregate function
2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters LINE 1: SELECT agents.id AS agents_id, agents.agent_type AS agents_a...
Manila CI Tempest job with PostgreSQL errors:

http://logs.openstack.org/01/218801/9/check/gate-manila-tempest-dsvm-neutron-postgres/76739da/logs/devstacklog.txt.gz#_2015-09-14_11_37_13_009
http://logs.openstack.org/01/218801/9/check/gate-manila-tempest-dsvm-neutron-postgres/76739da/logs/screen-q-svc.txt.gz?level=TRACE#_2015-09-14_11_37_13_976

To manage notifications about this bug go to: https://bugs.launchpad.net/ironic/+bug/1495523/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1301279] Re: Changing node's properties in Ironic after node is deployed will count as available resources in Nova
I don't believe there's anything to do here in Ironic; correct me if I'm wrong. ** Changed in: ironic Status: Confirmed => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1301279 Title: Changing node's properties in Ironic after node is deployed will count as available resources in Nova Status in Ironic: Invalid Status in OpenStack Compute (nova): Fix Released Bug description: If you increase the properties of a node which was already deployed, the difference will go to nova as available resources. For example, a node with properties/memory_mb=512 was deployed, and n-cpu is showing:

2014-04-02 10:37:26.514 AUDIT nova.compute.resource_tracker [-] Free ram (MB): 0
2014-04-02 10:37:26.514 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 0
2014-04-02 10:37:26.514 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 0

Now if we update that to properties/memory_mb=1024, the difference will be shown in nova as available resources:

2014-04-02 10:40:48.266 AUDIT nova.compute.resource_tracker [-] Free ram (MB): 512
2014-04-02 10:40:48.266 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 0
2014-04-02 10:40:48.266 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 0

LOGs: http://paste.openstack.org/show/74806/ To manage notifications about this bug go to: https://bugs.launchpad.net/ironic/+bug/1301279/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1352510] Re: Delete and re-add of same node to compute_nodes table is broken
15:15:57 edleafe | jroll: ok, got to look at that bug, and yeah, it should have closed it, but I forgot to put Closes-bug: in the commit message And I agree it appears to fix it. This was released with Liberty-1 ** Changed in: nova Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1352510 Title: Delete and re-add of same node to compute_nodes table is broken Status in OpenStack Compute (nova): Fix Released Bug description: When a compute node is deleted (or marked deleted) in the DB and another compute node is re-added with the same name, things break. This is because the resource tracker caches the compute node object/dict and uses the 'id' to update the record. When this happens, rt.update_available_resources will raise a ComputeHostNotFound. This ends up short-circuiting the full run of the update_available_resource() periodic task. This mostly applies when using a virt driver where a nova-compute manages more than 1 "hypervisor". To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1352510/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1479124] [NEW] Scheduler doesn't respect tracks_instance_changes in all cases
Public bug reported: This commit introduces instance tracking in the scheduler, with an option to disable it for performance. https://github.com/openstack/nova/commit/82cc056fb7e1b081a733797ed27550343cbaf44c However, _add_instance_info is not guarded by the config option, but causes just as much performance havoc as the initial load. https://github.com/openstack/nova/commit/82cc056fb7e1b081a733797ed27550343cbaf44c#diff-978b9f8734365934eaf8fbb01f11a7d7R554 This should be guarded by the config. ** Affects: nova Importance: Undecided Assignee: Jim Rollenhagen (jim-rollenhagen) Status: In Progress -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1479124 Title: Scheduler doesn't respect tracks_instance_changes in all cases Status in OpenStack Compute (nova): In Progress Bug description: This commit introduces instance tracking in the scheduler, with an option to disable it for performance. https://github.com/openstack/nova/commit/82cc056fb7e1b081a733797ed27550343cbaf44c However, _add_instance_info is not guarded by the config option, but causes just as much performance havoc as the initial load. https://github.com/openstack/nova/commit/82cc056fb7e1b081a733797ed27550343cbaf44c#diff-978b9f8734365934eaf8fbb01f11a7d7R554 This should be guarded by the config. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1479124/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
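A hedged sketch of the suggested guard, using illustrative stand-ins for the nova.scheduler.host_manager internals (only `tracks_instance_changes` and `_add_instance_info` are taken from the bug text):

```python
# Hedged sketch of the suggested fix; class and attribute names are
# illustrative stand-ins, not Nova's actual HostManager.

class HostManagerSketch:
    def __init__(self, tracks_instance_changes):
        self.tracks_instance_changes = tracks_instance_changes
        self.instance_info = {}

    def _add_instance_info(self, host, instance):
        # The bug: this early return was missing, so the per-instance
        # bookkeeping ran even when tracking was disabled.
        if not self.tracks_instance_changes:
            return
        self.instance_info.setdefault(host, []).append(instance)

mgr = HostManagerSketch(tracks_instance_changes=False)
mgr._add_instance_info("host1", {"uuid": "abc"})
print(mgr.instance_info)  # -> {}
```

The point is simply that the same option guarding the expensive initial load should also short-circuit the incremental updates.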
[Yahoo-eng-team] [Bug 1462374] [NEW] Ironic: Unavailable nodes may be scheduled to
Public bug reported: The Ironic driver reports all resources consumed for compute nodes in certain unavailable states (e.g. deploying, cleaning, deleting). However, if there is not an instance associated with the node, the resource tracker will try to correct the driver and expose these resources. This may result in being scheduled to a node that is still cleaning up from a previous instance. ** Affects: nova Importance: Undecided Assignee: Jim Rollenhagen (jim-rollenhagen) Status: In Progress -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1462374 Title: Ironic: Unavailable nodes may be scheduled to Status in OpenStack Compute (Nova): In Progress Bug description: The Ironic driver reports all resources consumed for compute nodes in certain unavailable states (e.g. deploying, cleaning, deleting). However, if there is not an instance associated with the node, the resource tracker will try to correct the driver and expose these resources. This may result in being scheduled to a node that is still cleaning up from a previous instance. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1462374/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
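A toy model of the conflict described above (illustrative names and structures, not the driver's actual code): the driver zeroes out availability for nodes mid-cleanup, but with no instance on the node the resource tracker "corrects" the usage back to free.

```python
# Toy model of the driver/resource-tracker conflict; all names here are
# illustrative, not Nova's actual implementation.

UNAVAILABLE_STATES = {"deploying", "cleaning", "deleting"}

def driver_resources(node):
    if node["provision_state"] in UNAVAILABLE_STATES:
        # Report the node fully consumed so the scheduler avoids it.
        return {"memory_mb": node["memory_mb"],
                "memory_mb_used": node["memory_mb"]}
    return {"memory_mb": node["memory_mb"], "memory_mb_used": 0}

def tracker_resources(node, instance):
    usage = driver_resources(node)
    if instance is None:
        # No instance is associated with the node, so the tracker
        # re-exposes it, even though it is still cleaning up.
        usage["memory_mb_used"] = 0
    return usage

node = {"provision_state": "cleaning", "memory_mb": 32768}
print(tracker_resources(node, None)["memory_mb_used"])  # -> 0
```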
[Yahoo-eng-team] [Bug 1460176] [NEW] Reschedules sometimes do not allocate networks
Public bug reported: https://gist.github.com/jimrollenhagen/b6b45aa43878cdc89d89 Fixed by https://review.openstack.org/#/c/177470/ ** Affects: nova Importance: Undecided Status: Fix Released ** Changed in: nova Status: New => Fix Committed ** Changed in: nova Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1460176 Title: Reschedules sometimes do not allocate networks Status in OpenStack Compute (Nova): Fix Released Bug description: https://gist.github.com/jimrollenhagen/b6b45aa43878cdc89d89 Fixed by https://review.openstack.org/#/c/177470/ To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1460176/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1447249] [NEW] Ironic: injected files not passed through to configdrive
Public bug reported: The ironic driver's code to generate a configdrive does not pass injected_files through to the configdrive builder, resulting in injected files not being in the resulting configdrive. ** Affects: nova Importance: Undecided Assignee: Jim Rollenhagen (jim-rollenhagen) Status: In Progress ** Tags: kilo-rc-potential -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1447249 Title: Ironic: injected files not passed through to configdrive Status in OpenStack Compute (Nova): In Progress Bug description: The ironic driver's code to generate a configdrive does not pass injected_files through to the configdrive builder, resulting in injected files not being in the resulting configdrive. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1447249/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1439868] [NEW] Ironic: CONF.scheduler_use_baremetal_filters doesn't have any effect
Public bug reported: When CONF.scheduler_use_baremetal_filters is set, and IronicHostManager is in use, the default scheduler filters should be as defined by CONF.baremetal_scheduler_default_filters. This is done in IronicHostManager's __init__ method. However, __init__ calls the superclass' __init__ method before setting the default filters, and so the change isn't picked up by the base HostManager. Thus this setting does nothing. ** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1439868 Title: Ironic: CONF.scheduler_use_baremetal_filters doesn't have any effect Status in OpenStack Compute (Nova): New Bug description: When CONF.scheduler_use_baremetal_filters is set, and IronicHostManager is in use, the default scheduler filters should be as defined by CONF.baremetal_scheduler_default_filters. This is done in IronicHostManager's __init__ method. However, __init__ calls the superclass' __init__ method before setting the default filters, and so the change isn't picked up by the base HostManager. Thus this setting does nothing. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1439868/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
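The ordering problem above is a classic Python pitfall and can be shown in miniature (toy classes, not Nova's actual host managers): the subclass sets its filter defaults only after `super().__init__()` has already consumed them.

```python
# Illustration of the __init__ ordering bug with toy classes.

class HostManager:
    default_filters = ["ComputeFilter"]

    def __init__(self):
        # The superclass snapshots the defaults at construction time.
        self.enabled_filters = list(self.default_filters)

class BrokenIronicHostManager(HostManager):
    def __init__(self):
        super().__init__()                          # defaults consumed here...
        self.default_filters = ["ExactRamFilter"]   # ...so this is too late

class FixedIronicHostManager(HostManager):
    def __init__(self):
        self.default_filters = ["ExactRamFilter"]   # override before super()
        super().__init__()

print(BrokenIronicHostManager().enabled_filters)  # -> ['ComputeFilter']
print(FixedIronicHostManager().enabled_filters)   # -> ['ExactRamFilter']
```

Moving the assignment before the superclass call (or overriding the class attribute directly) is the shape of the fix.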
[Yahoo-eng-team] [Bug 1439796] [NEW] Ironic driver fails to start when CONF.ironic.client_log_level is set
Public bug reported: In commit 97d63d8745cd9b3b391ce96b94b4da263b3a053d, logging was changed to use oslo.log. However, the ironic driver previously interacted with the stdlib logging module to set the log level dynamically. oslo.log does not provide the methods that were being used (getLevelName), and so this block of code causes nova-compute to fail to start when this option is set: https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py#L184-187 ** Affects: nova Importance: Undecided Assignee: Jim Rollenhagen (jim-rollenhagen) Status: In Progress -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1439796 Title: Ironic driver fails to start when CONF.ironic.client_log_level is set Status in OpenStack Compute (Nova): In Progress Bug description: In commit 97d63d8745cd9b3b391ce96b94b4da263b3a053d, logging was changed to use oslo.log. However, the ironic driver previously interacted with the stdlib logging module to set the log level dynamically. oslo.log does not provide the methods that were being used (getLevelName), and so this block of code causes nova-compute to fail to start when this option is set: https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py#L184-187 To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1439796/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
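For reference, the stdlib alone can map a config string like "DEBUG" to a numeric level without any oslo.log-specific helper; this is a sketch of one safe way to do it, not Nova's actual fix:

```python
# Stdlib-only mapping from a config string to a numeric log level;
# a sketch, not the fix that landed in Nova.
import logging

def parse_log_level(name, default=logging.INFO):
    level = logging.getLevelName(name.upper())
    # For unknown names logging.getLevelName returns the string
    # "Level <name>", so fall back to the default in that case.
    return level if isinstance(level, int) else default

print(parse_log_level("debug"))  # -> 10
print(parse_log_level("bogus"))  # -> 20
```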
[Yahoo-eng-team] [Bug 1399830] [NEW] Power sync periodic task does $node_count API calls for Ironic driver
Public bug reported: The power sync periodic task calls driver.get_info() for each instance in the database. This is typically fine; however in the Ironic driver, get_info() is an API call. We should bring this down to one API call. ** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1399830 Title: Power sync periodic task does $node_count API calls for Ironic driver Status in OpenStack Compute (Nova): New Bug description: The power sync periodic task calls driver.get_info() for each instance in the database. This is typically fine; however in the Ironic driver, get_info() is an API call. We should bring this down to one API call. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1399830/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
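The batching idea can be sketched with a stand-in client (the real driver talks to Ironic via python-ironicclient; the method names below are hypothetical): list all nodes once, then answer per-instance lookups locally.

```python
# Sketch of batching the power sync; FakeClient and its method names are
# hypothetical stand-ins for the Ironic API client.

class FakeClient:
    def __init__(self, nodes):
        self.nodes = nodes
        self.calls = 0  # count simulated API round trips

    def node_list(self):
        self.calls += 1
        return self.nodes

    def node_get_by_instance(self, uuid):
        self.calls += 1
        return next(n for n in self.nodes if n["instance_uuid"] == uuid)

def naive_power_sync(client, instance_uuids):
    # One API call per instance: O(node_count) round trips.
    return {u: client.node_get_by_instance(u) for u in instance_uuids}

def batched_power_sync(client, instance_uuids):
    # A single list call, then local dictionary lookups.
    by_instance = {n["instance_uuid"]: n for n in client.node_list()}
    return {u: by_instance.get(u) for u in instance_uuids}

nodes = [{"instance_uuid": u, "power_state": "power on"} for u in "abc"]
client = FakeClient(nodes)
batched_power_sync(client, ["a", "b", "c"])
print(client.calls)  # -> 1
```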
[Yahoo-eng-team] [Bug 1385468] [NEW] Cells assumes 1:1 compute-service:compute-node mapping
Public bug reported: Cells capacity calculation seems to assume one compute_node per nova-compute service. It calculates capacity data by service name, overwriting the value for each compute_node. This results in the cell only showing capacity for one compute_host for each nova-compute service in the cell. Observed running Ironic, where there are many compute_hosts in a nova-compute service. ** Affects: nova Importance: Undecided Assignee: Jim Rollenhagen (jim-rollenhagen) Status: In Progress -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1385468 Title: Cells assumes 1:1 compute-service:compute-node mapping Status in OpenStack Compute (Nova): In Progress Bug description: Cells capacity calculation seems to assume one compute_node per nova-compute service. It calculates capacity data by service name, overwriting the value for each compute_node. This results in the cell only showing capacity for one compute_host for each nova-compute service in the cell. Observed running Ironic, where there are many compute_hosts in a nova-compute service. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1385468/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
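The overwrite described above is just a dictionary-keying problem, shown here in miniature (all names illustrative): keying by service name loses every node but the last, while keying by (service, node) keeps them all.

```python
# Toy illustration of the cells aggregation bug; field names and values
# are illustrative, not Nova's cells code.

compute_nodes = [
    {"service": "nova-compute-1", "node": "ironic-node-a", "free_ram_mb": 4096},
    {"service": "nova-compute-1", "node": "ironic-node-b", "free_ram_mb": 8192},
]

broken = {}
for cn in compute_nodes:
    # Second node for the same service overwrites the first.
    broken[cn["service"]] = cn["free_ram_mb"]

fixed = {}
for cn in compute_nodes:
    # Keying by (service, node) keeps every compute node's data.
    fixed[(cn["service"], cn["node"])] = cn["free_ram_mb"]

print(sum(broken.values()))  # -> 8192 (one node's capacity lost)
print(sum(fixed.values()))   # -> 12288
```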
[Yahoo-eng-team] [Bug 1362699] [NEW] amd64 is not a valid arch
Public bug reported: 'amd64' is not in the list of valid architectures, this should be canonicalized to 'x86_64'. ** Affects: nova Importance: Undecided Assignee: Jim Rollenhagen (jim-rollenhagen) Status: In Progress -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1362699 Title: amd64 is not a valid arch Status in OpenStack Compute (Nova): In Progress Bug description: 'amd64' is not in the list of valid architectures, this should be canonicalized to 'x86_64'. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1362699/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
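The canonicalization amounts to a small alias table consulted before validation; this sketch is illustrative (the table below is not Nova's actual architecture module):

```python
# Sketch of architecture canonicalization; the alias table is
# illustrative, not Nova's actual list of architectures.

ALIASES = {
    "amd64": "x86_64",  # the alias this bug report asks for
    "x64": "x86_64",
}

def canonicalize(arch):
    arch = arch.strip().lower()
    return ALIASES.get(arch, arch)

print(canonicalize("amd64"))  # -> x86_64
```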