[Yahoo-eng-team] [Bug 1520538] Re: hyperv migration should report real PercentComplete
@yuntongjin: It seems that this is a feature request. Feature requests for nova are handled with blueprints [1] and specs [2]. I recommend reading [3] if you haven't already. To keep this tracker focused on bugs, i.e. failures/errors/faults, I'm closing this one as "Invalid". The effort to implement the requested feature is then driven only by the blueprint (and spec). If there are any questions left, feel free to contact me (markus_z) in the IRC channel #openstack-nova.

[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Tags added: hyper-v

** Changed in: nova
   Status: New => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1520538

Title:
  hyperv migration should report real PercentComplete

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  In the Hyper-V driver, instance.progress is assigned by
  migrate_disk_and_power_off. The value is calculated from the
  migration steps, which is not accurate. The Msvm_ConcreteJob class
  (https://msdn.microsoft.com/en-us/library/cc136824(v=vs.85).aspx)
  has a "uint16 PercentComplete" property which reports the actual
  percentage complete of the migration. We should use it as
  instance.progress.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1520538/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
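A minimal sketch of what using the job's real PercentComplete could look like, assuming a WMI job object that exposes the Msvm_ConcreteJob properties PercentComplete and JobState. The helper name, the instance dict and FakeJob are illustrative, not nova's actual driver code.

```python
# Sketch: mirror the migration job's real progress onto the instance,
# instead of deriving instance.progress from the migration step count.

JOB_STATE_RUNNING = 4    # CIM_ConcreteJob JobState "Running"
JOB_STATE_COMPLETED = 7  # CIM_ConcreteJob JobState "Completed"

def update_instance_progress(instance, job):
    """Copy the job's real PercentComplete onto the instance."""
    if job.JobState in (JOB_STATE_RUNNING, JOB_STATE_COMPLETED):
        instance['progress'] = int(job.PercentComplete)
    return instance['progress']

class FakeJob(object):
    """Stand-in for an Msvm_ConcreteJob WMI object."""
    JobState = JOB_STATE_RUNNING
    PercentComplete = 42

instance = {'progress': 0}
print(update_instance_progress(instance, FakeJob()))  # 42
```

In the real driver the job object would come from polling WMI while the migration runs, rather than from a fake class.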
[Yahoo-eng-team] [Bug 1517171] Re: os-endpoint-type not really working internalURL required 2nd region only support http access.
@Michael Duncan: We need additional information to do a proper bug triage. Would you please provide the following:

* What was the result you expected? The error indicates an issue with your certificates (and I'm guessing that OpenStack does not set up certificates):

  SSL exception connecting to https://10.30.4.100:5000/v2.0/tokens:
  [Errno bad handshake] [('SSL routines', 'SSL3_GET_SERVER_CERTIFICATE',
  'certificate verify failed')]

* Which version did you use? Depending on your package manager you can use:

  $ dpkg -l | grep nova

  or:

  $ rpm -qa | grep nova

  If you installed it from git you can use:

  $ cd /opt/stack/nova && git log -1

After you have provided this information, please set this bug status back to "New".

** Changed in: nova
   Status: New => Incomplete

** Also affects: python-novaclient
   Importance: Undecided
   Status: New

** Changed in: python-novaclient
   Status: New => Incomplete

** Changed in: nova
   Status: Incomplete => Invalid

https://bugs.launchpad.net/bugs/1517171

Title:
  os-endpoint-type not really working internalURL required 2nd region
  only support http access.
Status in OpenStack Compute (nova):
  Invalid
Status in python-novaclient:
  Incomplete

Bug description:
  FAILED:

  [root@controller-0 ~(keystone_admin)]# nova --debug --os-endpoint-type internalURL list
  DEBUG (session:195) REQ: curl -g -i -X GET http://10.30.3.103:5000/v2.0/ -H "Accept: application/json" -H "User-Agent: python-keystoneclient"
  INFO (connectionpool:203) Starting new HTTP connection (1): 10.30.3.103
  DEBUG (connectionpool:383) "GET /v2.0/ HTTP/1.1" 200 338
  DEBUG (session:224) RESP: [200] content-length: 338 vary: X-Auth-Token connection: keep-alive date: Tue, 17 Nov 2015 18:46:50 GMT content-type: application/json x-openstack-request-id: req-9eee79c9-b916-419a-b6ff-30db684966fb
  RESP BODY: {"version": {"status": "stable", "updated": "2014-04-17T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": [{"href": "https://10.30.4.100:5000/v2.0/", "rel": "self"}, {"href": "http://docs.openstack.org/", "type": "text/html", "rel": "describedby"}]}}
  DEBUG (v2:76) Making authentication request to https://10.30.4.100:5000/v2.0/tokens
  INFO (connectionpool:735) Starting new HTTPS connection (1): 10.30.4.100
  DEBUG (shell:923) SSL exception connecting to https://10.30.4.100:5000/v2.0/tokens: [Errno bad handshake] [('SSL routines', 'SSL3_GET_SERVER_CERTIFICATE', 'certificate verify failed')]
  Traceback (most recent call last):
    File "/usr/lib64/python2.7/site-packages/novaclient/shell.py", line 920, in main
    File "/usr/lib64/python2.7/site-packages/novaclient/shell.py", line 847, in main
    File "/usr/lib64/python2.7/site-packages/novaclient/v2/shell.py", line 1412, in do_list
    File "/usr/lib64/python2.7/site-packages/novaclient/v2/servers.py", line 602, in list
    File "/usr/lib64/python2.7/site-packages/novaclient/base.py", line 64, in _list
    File "/usr/lib64/python2.7/site-packages/keystoneclient/adapter.py", line 170, in get
    File "/usr/lib64/python2.7/site-packages/novaclient/client.py", line 90, in request
    File "/usr/lib64/python2.7/site-packages/keystoneclient/adapter.py", line 206, in request
    File "/usr/lib64/python2.7/site-packages/keystoneclient/adapter.py", line 95, in request
    File "/usr/lib64/python2.7/site-packages/keystoneclient/utils.py", line 318, in inner
    File "/usr/lib64/python2.7/site-packages/keystoneclient/session.py", line 313, in request
    File "/usr/lib64/python2.7/site-packages/keystoneclient/session.py", line 598, in get_auth_headers
    File "/usr/lib64/python2.7/site-packages/keystoneclient/auth/base.py", line 114, in get_headers
    File "/usr/lib64/python2.7/site-packages/keystoneclient/auth/identity/base.py", line 104, in get_token
    File "/usr/lib64/python2.7/site-packages/keystoneclient/auth/identity/base.py", line 144, in get_access
    File "/usr/lib64/python2.7/site-packages/keystoneclient/auth/identity/generic/base.py", line 176, in get_auth_ref
    File "/usr/lib64/python2.7/site-packages/keystoneclient/auth/identity/v2.py", line 78, in get_auth_ref
    File "/usr/lib64/python2.7/site-packages/keystoneclient/session.py", line 497, in post
    File "/usr/lib64/python2.7/site-packages/keystoneclient/utils.py", line 318, in inner
    File "/usr/lib64/python2.7/site-packages/keystoneclient/session.py", line 382, in request
    File "/usr/lib64/python2.7/site-packages/keystoneclient/session.py", line 420, in _send_request
  SSLError: SSL exception connecting to https://10.30.4.100:5000/v2.0/tokens: [Errno bad handshake] [('SSL routines', 'SSL3_GET_SERVER_CERTIFICATE', 'certifi
[Yahoo-eng-team] [Bug 1521099] Re: security group rule update is not validated
It sounds like a bug in the extension itself rather than in the midonet implementation.

** Also affects: neutron
   Importance: Undecided
   Status: New

https://bugs.launchpad.net/bugs/1521099

Title:
  security group rule update is not validated

Status in networking-midonet:
  New
Status in neutron:
  New

Bug description:
  Even though midonet does not support updating of the SG rule
  direction, neutron allows it:
  https://github.com/openstack/neutron/blob/master/neutron/extensions/securitygroup.py#L239
  The plugin should have a validation to prevent this update.
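The missing validation could look roughly like this sketch, which rejects updates to fields the backend cannot change. The field list, exception class, and function name are hypothetical, not neutron's or midonet's actual code.

```python
# Sketch: reject security group rule updates that try to change a
# field the backend treats as immutable (e.g. direction).

SG_RULE_IMMUTABLE_FIELDS = ('direction', 'ethertype', 'protocol')

class InvalidUpdate(ValueError):
    """Raised when an update touches an immutable rule field."""

def validate_sg_rule_update(current, updates):
    """Return the merged rule, or raise if an immutable field changed."""
    for field in SG_RULE_IMMUTABLE_FIELDS:
        if field in updates and updates[field] != current.get(field):
            raise InvalidUpdate('%s cannot be updated' % field)
    return dict(current, **updates)

rule = {'direction': 'ingress', 'port_range_min': 80}
print(validate_sg_rule_update(rule, {'port_range_min': 443}))
```

In a real plugin this check would run in the update handler before the change is persisted, so the API returns an error instead of silently accepting an update the backend cannot honor.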
[Yahoo-eng-team] [Bug 1521117] [NEW] fixed_ip of floatingip is not updated.
Public bug reported:

When the fixed_ip of a port associated with a floating IP is updated, the fixed_ip recorded on the floating IP is not updated.

How to reproduce:

(1) create a port

$ neutron port-show 9b11ce55-a404-4ae4-941d-ea0db555c331
+-----------------------+-----------------------------------------------------------------------------------------------------+
| Field                 | Value                                                                                               |
+-----------------------+-----------------------------------------------------------------------------------------------------+
| admin_state_up        | True                                                                                                |
| allowed_address_pairs |                                                                                                     |
| binding:host_id       |                                                                                                     |
| binding:profile       | {}                                                                                                  |
| binding:vif_details   | {}                                                                                                  |
| binding:vif_type      | unbound                                                                                             |
| binding:vnic_type     | normal                                                                                              |
| device_id             |                                                                                                     |
| device_owner          |                                                                                                     |
| dns_assignment        | {"hostname": "host-10-0-0-11", "ip_address": "10.0.0.11", "fqdn": "host-10-0-0-11.openstacklocal."} |
|                       | {"hostname": "host-fdcb-a2eb-ab3e-0-f816-3eff-fe90-aa8e", "ip_address": "fdcb:a2eb:ab3e:0:f816:3eff:fe90:aa8e", "fqdn": "host-fdcb-a2eb-ab3e-0-f816-3eff-fe90-aa8e.openstacklocal."} |
| dns_name              |                                                                                                     |
| extra_dhcp_opts       |                                                                                                     |
| fixed_ips             | {"subnet_id": "fedeed6a-b2fc-41f8-8382-6a7ebc34ff1a", "ip_address": "10.0.0.11"}                    |
|                       | {"subnet_id": "68cbe554-0b26-4d6c-992d-4d8889cfb49f", "ip_address": "fdcb:a2eb:ab3e:0:f816:3eff:fe90:aa8e"} |
| id                    | 9b11ce55-a404-4ae4-941d-ea0db555c331                                                                |
| mac_address           | fa:16:3e:90:aa:8e                                                                                   |
| name                  | test-port                                                                                           |
| network_id            | e82b8c93-ba6a-4c52-a87f-f9790c7e2087                                                                |
| port_security_enabled | True
[Yahoo-eng-team] [Bug 1521099] Re: security group rule update is not validated
** Changed in: networking-midonet
   Status: New => Invalid

** Changed in: networking-midonet
   Assignee: (unassigned) => YAMAMOTO Takashi (yamamoto)

** Changed in: networking-midonet
   Importance: Medium => Low

https://bugs.launchpad.net/bugs/1521099

Title:
  security group rule update is not validated

Status in networking-midonet:
  Invalid
Status in neutron:
  In Progress

Bug description:
  Even though midonet does not support updating of the SG rule
  direction, neutron allows it:
  https://github.com/openstack/neutron/blob/master/neutron/extensions/securitygroup.py#L239
  The plugin should have a validation to prevent this update.
[Yahoo-eng-team] [Bug 1516489] Re: networking-midonet stable/kilo release request
** Changed in: networking-midonet
   Status: New => Fix Released

** Changed in: networking-midonet
   Milestone: None => 2015.1.1

** Changed in: networking-midonet
   Assignee: (unassigned) => YAMAMOTO Takashi (yamamoto)

https://bugs.launchpad.net/bugs/1516489

Title:
  networking-midonet stable/kilo release request

Status in networking-midonet:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  Please push the 2015.1.1 tag on the tip of stable/kilo, namely commit
  bcdc794fe969041b55e6731e48e38c12cda45f1e.
[Yahoo-eng-team] [Bug 1521122] [NEW] K2K: 'NoneType' object has no attribute 'check_output'
Public bug reported:

While working with keystone-to-keystone (K2K) federation I was getting 500 errors telling me that the signing process was failing when generating ECP assertions.

When I looked into it I found it was because federation/idp.py was being imported before the code had a chance to call environment.use_stdlib(). This meant that the global subprocess variable [1] was being assigned before the environment code had a chance to specify which subprocess module to use, leaving the value as None. Because of the overly generic exception handler around signing [2], this was then reported as a signing failure and a 500 error.

I was using my own wsgi launcher rather than /usr/local/bin/keystone-wsgi-[admin|public], but it wasn't doing anything complex [3].

This should no longer be a problem once we remove eventlet, as we can then remove the environment code for good.

Fixes:

1) You should always look up variables from environment as close to the point of use as possible, as these values are late bound.
2) We should make a more specific error handler for the signing process. It really only has to catch the output of the subprocess.

[1] https://github.com/openstack/keystone/blob/8f30e9a0785234a5a9cf4daaa95207c9adb10835/keystone/federation/idp.py#L40
[2] https://github.com/openstack/keystone/blob/8f30e9a0785234a5a9cf4daaa95207c9adb10835/keystone/federation/idp.py#L440
[3] http://paste.openstack.org/show/480338/

** Affects: keystone
   Importance: Undecided
   Assignee: Jamie Lennox (jamielennox)
   Status: In Progress

https://bugs.launchpad.net/bugs/1521122

Title:
  K2K: 'NoneType' object has no attribute 'check_output'

Status in OpenStack Identity (keystone):
  In Progress
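The late-binding pitfall described in the report can be reproduced in isolation. The environment class below is a stand-in for keystone's environment module, not its real implementation; sign_broken and sign_fixed are illustrative names.

```python
# Sketch: binding a module global at import time captures None, while
# looking the attribute up at call time ("late binding") picks up the
# value set later by use_stdlib().

class environment(object):
    subprocess = None           # not chosen yet at "import time"

    @classmethod
    def use_stdlib(cls):
        import subprocess
        cls.subprocess = subprocess

# Broken pattern: the value is captured before use_stdlib() has run,
# mirroring the module-level "subprocess = environment.subprocess".
captured = environment.subprocess   # None

environment.use_stdlib()

def sign_broken(args):
    # AttributeError: 'NoneType' object has no attribute 'check_output'
    return captured.check_output(args)

def sign_fixed(args):
    # Late bound: read the attribute only when it is actually needed.
    return environment.subprocess.check_output(args)

print(sign_fixed(['echo', 'ok']).strip())
```

This is why the report recommends reading environment values as close to the point of use as possible: the broken variant fails with exactly the 'NoneType' error in the bug title, and the generic exception handler then masked it as a signing failure.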
[Yahoo-eng-team] [Bug 1515720] Re: flavor extra specs: Input parameters need error check
AFAIK the flavor extra specs are free-form string values and there is no defined list of allowed values. That's why it's not possible to check whether the provided input is sane. The 201 code the REST API returns just says that your request (create/update an extra spec) was fulfilled; it doesn't say that it is sane. That's why I think it's fair to close this bug.

** Changed in: nova
   Status: New => Invalid

https://bugs.launchpad.net/bugs/1515720

Title:
  flavor extra specs: Input parameters need error check

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  By mistake I gave a parameter with the wrong extra spec
  "quota:quota:vif_shares_share=10" instead of
  "quota:vif_shares_share=10"; execution of the command was still
  successful. I didn't see the expected Shares value being set on the
  VM NIC. It might be a good idea to validate the parameters before
  returning success.

  Build Version: VIO 2.0 GA + NSX 6.2 GA

  root@blr-vio-dhcp:~# nova flavor-key m1.spirent set quota:quota:vif_shares_share=10
  root@blr-vio-dhcp:~# nova flavor-show m1.spirent
  +----------------------------+----------------------------------------------------------------------------------------------------------------------------------+
  | Property                   | Value                                                                                                                            |
  +----------------------------+----------------------------------------------------------------------------------------------------------------------------------+
  | OS-FLV-DISABLED:disabled   | False                                                                                                                            |
  | OS-FLV-EXT-DATA:ephemeral  | 0                                                                                                                                |
  | disk                       | 5                                                                                                                                |
  | extra_specs                | {"quota:vif_shares_level": "custom", "quota:quota:vif_shares_share": "10", "quota:vif_limit": "5", "quota:vif_reservation": "3"} |
  | id                         | 6                                                                                                                                |
  | name                       | m1.spirent                                                                                                                       |
  | os-flavor-access:is_public | True                                                                                                                             |
  | ram                        | 2048                                                                                                                             |
  | rxtx_factor                | 1.0                                                                                                                              |
  | swap                       |                                                                                                                                  |
  | vcpus                      | 2                                                                                                                                |
  +----------------------------+----------------------------------------------------------------------------------------------------------------------------------+
  root@blr-vio-dhcp:~#
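Even with free-form keys on the server side, a client could at least flag the specific typo from this report, a doubled namespace prefix. This helper is purely illustrative and not part of nova or novaclient.

```python
# Sketch: flag extra spec keys with a repeated namespace segment,
# e.g. "quota:quota:vif_shares_share", before sending the request.

def has_repeated_prefix(key):
    """Return True if two adjacent colon-separated segments repeat."""
    parts = key.split(':')
    return any(a == b for a, b in zip(parts, parts[1:]))

print(has_repeated_prefix('quota:quota:vif_shares_share'))  # True
print(has_repeated_prefix('quota:vif_shares_share'))        # False
```

Such a check would only catch this one class of typo; it does not contradict the triage above, since the server still cannot know which keys a given driver honors.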
[Yahoo-eng-team] [Bug 1521160] [NEW] VMware LBaaSv1 method uses invalid context
Public bug reported:

The VMware LBaaSv1 driver supplies the Neutron plugin with a method which allows detecting whether an Edge appliance is in use by the LBaaS driver. The is_edge_in_use() method uses a context which is invalid: the driver has no self.context property and shouldn't have one. Instead, the context should be passed as a parameter.

** Affects: neutron
   Importance: Undecided
   Assignee: Kobi Samoray (ksamoray)
   Status: In Progress

** Changed in: neutron
   Assignee: (unassigned) => Kobi Samoray (ksamoray)

https://bugs.launchpad.net/bugs/1521160

Title:
  VMware LBaaSv1 method uses invalid context

Status in neutron:
  In Progress
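The suggested fix can be sketched as follows. The class name is borrowed from the description, but the body and the dict-shaped context are hypothetical; the real driver receives a neutron request context object, not a dict.

```python
# Sketch: the method takes the caller's context as a parameter
# instead of reading a nonexistent self.context attribute.

class EdgeLoadbalancerDriver(object):
    def is_edge_in_use(self, context, edge_id):
        # Use the caller-supplied context for any lookups; here the
        # context is faked as a dict for demonstration purposes.
        return edge_id in context.get('edges_in_use', set())

driver = EdgeLoadbalancerDriver()
ctx = {'edges_in_use': {'edge-1'}}
print(driver.is_edge_in_use(ctx, 'edge-1'))  # True
```

Passing the context explicitly keeps the driver stateless with respect to requests, which matters because a single driver instance serves many concurrent API calls.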
[Yahoo-eng-team] [Bug 1521194] [NEW] Qos Aggregated Bandwidth Rate Limiting
Public bug reported:

[Existing problem]
In the current QoS implementation, network QoS bandwidth limits are applied uniformly to all ports in the network. This may be wrong. Consider a case where the user just wants 20 mbps for the entire network: the current implementation ends up allowing 20 * num_of_ports_in_the_network mbps.

[Proposal]
This proposal is about support for aggregated bandwidth rate limiting, where all the ports in the network together attain the specified network bandwidth limit. To start with, the easiest implementation could be dividing the overall bandwidth value by the number of ports in the network. With this there might be cases of over- and under-utilization, which need more thought (maybe we have to monitor all the ports and have a notion of a threshold to decide whether to increase or decrease the bandwidth).

[Benefits]
Better and correct user experience.

[What is the enhancement?]
Applying the correct QoS rule parameter to each port.

[Related information]
None

** Affects: neutron
   Importance: Undecided
   Status: New

** Tags: rfe

https://bugs.launchpad.net/bugs/1521194

Title:
  Qos Aggregated Bandwidth Rate Limiting

Status in neutron:
  New
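The naive starting point described in the proposal, an even split of the aggregate limit across ports, can be sketched as below. The function name is illustrative.

```python
# Sketch: divide the network's aggregate bandwidth limit evenly
# across its ports, so the network-wide total stays at the limit
# instead of limit * num_ports.

def per_port_limit(network_limit_mbps, num_ports):
    """Even share of the aggregate limit for each port."""
    if num_ports <= 0:
        raise ValueError('network has no ports')
    return network_limit_mbps / float(num_ports)

# A 20 mbps network limit shared by 4 ports gives each port 5 mbps,
# keeping the aggregate at 20 mbps.
print(per_port_limit(20, 4))  # 5.0
```

As the proposal notes, an even split over- or under-serves ports with uneven traffic, which is why a monitored, threshold-driven redistribution is floated as the longer-term approach.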
[Yahoo-eng-team] [Bug 1513645] Re: Inconsistent use of admin/non-admin context in neutron network api while preparing network info
@Shraddha Pandhe: This bug report doesn't seem to describe a bug; it's rather a request for discussion. As long as there is no actual observation that this is a failure/fault/error in the behavior of Nova, it doesn't make sense to open a bug report from my point of view. Pointing to master code, which is constantly evolving, doesn't help either. If you have concerns that some use cases could be affected, it's fair to add test cases to tempest to ensure that the use cases in doubt get tested properly.

** Changed in: nova
   Status: New => Invalid

https://bugs.launchpad.net/bugs/1513645

Title:
  Inconsistent use of admin/non-admin context in neutron network api
  while preparing network info

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  In allocate_for_instance, the Neutron network API calls Neutron to
  create the port(s) [1]. Once the port is created, it formats the
  network info in the network info model before returning it to the
  Compute Manager [2]. To form the network info, the API makes several
  calls to Neutron:

  1. List ports for device id [3]
  2. Get associated floating IPs [4]
  3. Get subnets from port [5] & [6]

  Notice that in 3 & 4 the API uses an admin context to talk to
  Neutron, whereas in 6 it doesn't use an admin context. This is
  inconsistent. Is this intentional?

  [1] https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L716
  [2] https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L739-L741
  [3] https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L1671-L1676
  [4] https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L1555-L1566
  [5] https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L1568-L1573
  [6] https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L1755-L1756
[Yahoo-eng-team] [Bug 1519750] Re: stable tarball contains master version
Addressed and fixed by this review [1].

[1] https://review.openstack.org/249986

** Changed in: neutron
   Status: Confirmed => Fix Released

** Changed in: neutron
   Milestone: None => mitaka-1

https://bugs.launchpad.net/bugs/1519750

Title:
  stable tarball contains master version

Status in neutron:
  Fix Released

Bug description:
  Somewhat similar to https://bugs.launchpad.net/aodh/+bug/1515507

  curl -s http://tarballs.openstack.org/neutron/neutron-stable-kilo.tar.gz | tar vzt | head -5
  drwxrwxr-x jenkins/jenkins   0 2015-11-24 22:46 neutron-8.0.0.dev486/
  drwxrwxr-x jenkins/jenkins   0 2015-11-24 22:46 neutron-8.0.0.dev486/releasenotes/
  drwxrwxr-x jenkins/jenkins   0 2015-11-24 22:46 neutron-8.0.0.dev486/releasenotes/source/
  -rw-rw-r-- jenkins/jenkins 146 2015-11-24 22:44 neutron-8.0.0.dev486/releasenotes/source/liberty.rst
  drwxrwxr-x jenkins/jenkins   0 2015-11-24 22:46 neutron-8.0.0.dev486/releasenotes/source/_static/

  Yesterday it still contained the correct neutron-2015.1.3.dev52.
[Yahoo-eng-team] [Bug 1517171] Re: os-endpoint-type not really working internalURL required 2nd region only support http access.
Sorry, after further debugging of the https issue I found that the keystone endpoints were set up incorrectly. Closing the ticket as a false report.

** Changed in: python-novaclient
   Status: Incomplete => Invalid

** Changed in: nova
   Assignee: (unassigned) => Michael Duncan (mduncan-j)

** Changed in: python-novaclient
   Assignee: (unassigned) => Michael Duncan (mduncan-j)

https://bugs.launchpad.net/bugs/1517171

Title:
  os-endpoint-type not really working internalURL required 2nd region
  only support http access.

Status in OpenStack Compute (nova):
  Invalid
Status in python-novaclient:
  Invalid

Bug description:
  FAILED:

  [root@controller-0 ~(keystone_admin)]# nova --debug --os-endpoint-type internalURL list
  DEBUG (session:195) REQ: curl -g -i -X GET http://10.30.3.103:5000/v2.0/ -H "Accept: application/json" -H "User-Agent: python-keystoneclient"
  INFO (connectionpool:203) Starting new HTTP connection (1): 10.30.3.103
  DEBUG (connectionpool:383) "GET /v2.0/ HTTP/1.1" 200 338
  DEBUG (session:224) RESP: [200] content-length: 338 vary: X-Auth-Token connection: keep-alive date: Tue, 17 Nov 2015 18:46:50 GMT content-type: application/json x-openstack-request-id: req-9eee79c9-b916-419a-b6ff-30db684966fb
  RESP BODY: {"version": {"status": "stable", "updated": "2014-04-17T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": [{"href": "https://10.30.4.100:5000/v2.0/", "rel": "self"}, {"href": "http://docs.openstack.org/", "type": "text/html", "rel": "describedby"}]}}
  DEBUG (v2:76) Making authentication request to https://10.30.4.100:5000/v2.0/tokens
  INFO (connectionpool:735) Starting new HTTPS connection (1): 10.30.4.100
  DEBUG (shell:923) SSL exception connecting to https://10.30.4.100:5000/v2.0/tokens: [Errno bad handshake] [('SSL routines', 'SSL3_GET_SERVER_CERTIFICATE', 'certificate verify failed')]
  Traceback (most recent call last):
    File "/usr/lib64/python2.7/site-packages/novaclient/shell.py", line 920, in main
    File "/usr/lib64/python2.7/site-packages/novaclient/shell.py", line 847, in main
    File "/usr/lib64/python2.7/site-packages/novaclient/v2/shell.py", line 1412, in do_list
    File "/usr/lib64/python2.7/site-packages/novaclient/v2/servers.py", line 602, in list
    File "/usr/lib64/python2.7/site-packages/novaclient/base.py", line 64, in _list
    File "/usr/lib64/python2.7/site-packages/keystoneclient/adapter.py", line 170, in get
    File "/usr/lib64/python2.7/site-packages/novaclient/client.py", line 90, in request
    File "/usr/lib64/python2.7/site-packages/keystoneclient/adapter.py", line 206, in request
    File "/usr/lib64/python2.7/site-packages/keystoneclient/adapter.py", line 95, in request
    File "/usr/lib64/python2.7/site-packages/keystoneclient/utils.py", line 318, in inner
    File "/usr/lib64/python2.7/site-packages/keystoneclient/session.py", line 313, in request
    File "/usr/lib64/python2.7/site-packages/keystoneclient/session.py", line 598, in get_auth_headers
    File "/usr/lib64/python2.7/site-packages/keystoneclient/auth/base.py", line 114, in get_headers
    File "/usr/lib64/python2.7/site-packages/keystoneclient/auth/identity/base.py", line 104, in get_token
    File "/usr/lib64/python2.7/site-packages/keystoneclient/auth/identity/base.py", line 144, in get_access
    File "/usr/lib64/python2.7/site-packages/keystoneclient/auth/identity/generic/base.py", line 176, in get_auth_ref
    File "/usr/lib64/python2.7/site-packages/keystoneclient/auth/identity/v2.py", line 78, in get_auth_ref
    File "/usr/lib64/python2.7/site-packages/keystoneclient/session.py", line 497, in post
    File "/usr/lib64/python2.7/site-packages/keystoneclient/utils.py", line 318, in inner
    File "/usr/lib64/python2.7/site-packages/keystoneclient/session.py", line 382, in request
    File "/usr/lib64/python2.7/site-packages/keystoneclient/session.py", line 420, in _send_request
  SSLError: SSL exception connecting to https://10.30.4.100:5000/v2.0/tokens: [Errno bad handshake] [('SSL routines', 'SSL3_GET_SERVER_CERTIFICATE', 'certificate verify failed')]
  ERROR (SSLError): SSL exception connecting to https://10.30.4.100:5000/v2.0/tokens: [Errno bad handshake] [('SSL routines', 'SSL3_GET_SERVER_CERTIFICATE', 'certificate verify failed')]

  Working with bypass-url:

  [root@controller-0 ~(keystone_admin)]# nova --debug --bypass-url http://10.30.3.103:5000/v2.0/ list
  REQ: curl -g -i 'http://10.30.3.103:5000/v2.0/tokens' -X POST -H "Accept: application/json" -H "Content-Type: application/json" -H "User-Agent: python-novaclient" -d '{"auth": {"tenantName": "admin", "passwordCredentials": {"username": "admin", "password": "{SHA1}e19993bc289d21c00f6a33038a848235ae5fc0e3"}}}' R
[Yahoo-eng-team] [Bug 1521236] [NEW] Missing list of stores in glance-api.conf
Public bug reported:

We used to give some indication of which stores could be configured:

[glance_store]
# List of which store classes and store class locations are
# currently known to glance at startup.
# Deprecated group/name - [DEFAULT]/known_stores
# Existing but disabled stores:
#     glance.store.rbd.Store,
#     glance.store.s3.Store,
#     glance.store.swift.Store,
#     glance.store.sheepdog.Store,
#     glance.store.cinder.Store,
#     glance.store.gridfs.Store,
#     glance.store.vmware_datastore.Store,
#stores = glance.store.filesystem.Store,
#         glance.store.http.Store

We don't seem to have a full list of stores in etc/glance-api.conf any more:

[glance_store]
#
# From glance.store
#
# List of stores enabled (list value)
#stores = file,http

** Affects: glance
   Importance: Undecided
   Assignee: Stuart McLaren (stuart-mclaren)
   Status: In Progress

** Changed in: glance
   Assignee: (unassigned) => Stuart McLaren (stuart-mclaren)

https://bugs.launchpad.net/bugs/1521236

Title:
  Missing list of stores in glance-api.conf

Status in Glance:
  In Progress
[Yahoo-eng-team] [Bug 1521238] [NEW] Documentation is missing http store
Public bug reported:

configuring.rst has: "Available options for this option are (``file``, ``swift``, ``s3``, ``rbd``, ``sheepdog``, ``cinder`` or ``vsphere``). In order to select a default store it must also [...]" However, 'http' is also an available option and is missing from this list.

** Affects: glance
   Importance: Undecided
     Assignee: Stuart McLaren (stuart-mclaren)
       Status: Invalid

** Changed in: glance
     Assignee: (unassigned) => Stuart McLaren (stuart-mclaren)

** Changed in: glance
       Status: New => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1521238

Title:
  Documentation is missing http store

Status in Glance:
  Invalid

Bug description:
  configuring.rst has: "Available options for this option are
  (``file``, ``swift``, ``s3``, ``rbd``, ``sheepdog``, ``cinder`` or
  ``vsphere``). In order to select a default store it must also [...]"
  However, 'http' is also an available option and is missing from this
  list.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1521238/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1521236] Re: Missing list of stores in glance-api.conf
** Also affects: glance-store
   Importance: Undecided
       Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1521236

Title:
  Missing list of stores in glance-api.conf

Status in Glance:
  In Progress
Status in glance_store:
  New

Bug description:
  We used to give some indication of which stores could be configured:

  [glance_store]
  # List of which store classes and store class locations are
  # currently known to glance at startup.
  # Deprecated group/name - [DEFAULT]/known_stores
  # Existing but disabled stores:
  # glance.store.rbd.Store,
  # glance.store.s3.Store,
  # glance.store.swift.Store,
  # glance.store.sheepdog.Store,
  # glance.store.cinder.Store,
  # glance.store.gridfs.Store,
  # glance.store.vmware_datastore.Store,
  #stores = glance.store.filesystem.Store,
  # glance.store.http.Store

  We don't seem to have a full list of stores in etc/glance-api.conf
  any more:

  [glance_store]
  #
  # From glance.store
  #
  # List of stores enabled (list value)
  #stores = file,http

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1521236/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1491637] Re: Error when adding a new Firewall Rule
Please don't switch bugs to security without an explanation as to why you've done so. Reviewing the discussion here, it looks like it describes a temporary defect in the master branch during the Liberty development cycle that was fixed prior to release. The OpenStack Vulnerability Management Team does not issue security advisories for problems which don't appear in any official release.

** Information type changed from Public Security to Public

** Also affects: ossa
   Importance: Undecided
       Status: New

** Changed in: ossa
       Status: New => Won't Fix

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1491637

Title:
  Error when adding a new Firewall Rule

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  Reported from the openstack-dev ML.

  When I try to add a new firewall rule, Horizon breaks with a
  "'NoneType' object has no attribute 'id'" error. This was fine about
  10 hours earlier; it seems one of the latest commits broke it.
Traceback in horizon:

2015-09-02 16:15:35.337872 return nodelist.render(context)
2015-09-02 16:15:35.337877 File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 903, in render
2015-09-02 16:15:35.337893 bit = self.render_node(node, context)
2015-09-02 16:15:35.337899 File "/usr/local/lib/python2.7/dist-packages/django/template/debug.py", line 79, in render_node
2015-09-02 16:15:35.337903 return node.render(context)
2015-09-02 16:15:35.337908 File "/usr/local/lib/python2.7/dist-packages/django/template/debug.py", line 89, in render
2015-09-02 16:15:35.337913 output = self.filter_expression.resolve(context)
2015-09-02 16:15:35.337917 File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 647, in resolve
2015-09-02 16:15:35.337922 obj = self.var.resolve(context)
2015-09-02 16:15:35.337927 File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 787, in resolve
2015-09-02 16:15:35.337931 value = self._resolve_lookup(context)
2015-09-02 16:15:35.337936 File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 825, in _resolve_lookup
2015-09-02 16:15:35.337940 current = getattr(current, bit)
2015-09-02 16:15:35.337945 File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/utils/html.py", line 59, in attr_string
2015-09-02 16:15:35.337950 return flatatt(self.get_final_attrs())
2015-09-02 16:15:35.337954 File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/utils/html.py", line 42, in get_final_attrs
2015-09-02 16:15:35.337959 final_attrs['class'] = self.get_final_css()
2015-09-02 16:15:35.337964 File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/utils/html.py", line 47, in get_final_css
2015-09-02 16:15:35.337981 default = " ".join(self.get_default_classes())
2015-09-02 16:15:35.337986 File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tables/base.py", line 792, in get_default_classes
2015-09-02 16:15:35.337991 if not self.url:
2015-09-02 16:15:35.337995 File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tables/base.py", line 756, in url
2015-09-02 16:15:35.338000 url = self.column.get_link_url(self.datum)
2015-09-02 16:15:35.338004 File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tables/base.py", line 431, in get_link_url
2015-09-02 16:15:35.338009 return self.link(datum)
2015-09-02 16:15:35.338014 File "/opt/stack/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/firewalls/tables.py", line 261, in get_policy_link
2015-09-02 16:15:35.338019 kwargs={'policy_id': datum.policy.id})
2015-09-02 16:15:35.338023 AttributeError: 'NoneType' object has no attribute 'id'

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1491637/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
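For illustration, the failure mode in the traceback above and a defensive guard can be sketched in a few lines. The class and function names are hypothetical stand-ins mirroring `get_policy_link` from the traceback; this is not the actual Horizon patch.

```python
# Sketch of the crash: get_policy_link dereferences datum.policy.id,
# but a firewall rule that is not yet attached to a policy has
# policy=None, producing "'NoneType' object has no attribute 'id'".
class Rule:
    """Hypothetical stand-in for the firewall rule object."""
    def __init__(self, policy=None):
        self.policy = policy


def get_policy_link(datum):
    # Guard against rules with no associated policy instead of crashing.
    if datum.policy is None:
        return None
    # Hypothetical URL pattern for illustration only.
    return "/project/firewalls/policies/%s/" % datum.policy.id


print(get_policy_link(Rule()))  # a rule with no policy no longer crashes
```

Returning `None` here lets the table render the cell without a link, which matches how Horizon columns treat a missing `url`.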
[Yahoo-eng-team] [Bug 1320269] Re: Scheduler fails confirm resize of instances when there is enough space to complete resize
I tried resizing instances with the preallocation strategy for the nano and micro flavors, and tried the same for a few instances of the tiny flavor. I could not reproduce the bug. At the same time, I am not able to check with other flavors due to the hardware resource restrictions I have. But I don't think we need an API change here, as the problem is not reproducible with the basic flavors. I think we can close this bug. Please feel free to reopen if required.

** Changed in: nova
       Status: Confirmed => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1320269

Title:
  Scheduler fails confirm resize of instances when there is enough
  space to complete resize

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  I configured preallocation on my setup and launched 10 instances; 8
  of the 10 instances have enough space to resize. I started to resize
  all instances one at a time but did not confirm the resize. When I
  got to the 8th instance, the scheduler would not allow the resize due
  to disk space. When I tried confirming the resize of the instances
  with VERIFY_RESIZE status, they all failed on not enough disk space.
  [root@orange-vdsf instances(keystone_admin)]# nova list
  +--++---++-+---+
  | ID | Name | Status | Task State | Power State | Networks |
  +--++---++-+---+
  | 1a98eefe-ba41-49b7-931a-ebe54796e343 | dafna-1a98eefe-ba41-49b7-931a-ebe54796e343 | ERROR         | - | Running | novanetwork=192.168.32.21 |
  | 1e237fb7-6787-4362-8ae3-e9668cdd3b50 | dafna-1e237fb7-6787-4362-8ae3-e9668cdd3b50 | ACTIVE        | - | Running | novanetwork=192.168.32.13 |
  | 5f046400-7a8f-4a66-a1b2-5ac87f442e06 | dafna-5f046400-7a8f-4a66-a1b2-5ac87f442e06 | ERROR         | - | Running | novanetwork=192.168.32.3  |
  | 5f87fd7e-4f85-4c41-ab12-3bf680160281 | dafna-5f87fd7e-4f85-4c41-ab12-3bf680160281 | ERROR         | - | Running | novanetwork=192.168.32.15 |
  | 608eef43-c9ac-48e0-8482-e9275c123268 | dafna-608eef43-c9ac-48e0-8482-e9275c123268 | ERROR         | - | Running | novanetwork=192.168.32.6  |
  | 7610109e-b631-40a5-9fa0-1bf150560503 | dafna-7610109e-b631-40a5-9fa0-1bf150560503 | ERROR         | - | Running | novanetwork=192.168.32.2  |
  | 7dfc892f-65ef-4e5b-b9bf-35d74a37bb27 | dafna-7dfc892f-65ef-4e5b-b9bf-35d74a37bb27 | ERROR         | - | Running | novanetwork=192.168.32.22 |
  | a4aefac9-feb8-40d7-9228-1df6a02d9e13 | dafna-a4aefac9-feb8-40d7-9228-1df6a02d9e13 | VERIFY_RESIZE | - | Running | novanetwork=192.168.32.18 |
  | ac4fb242-3fa4-4424-9410-4de61d260e0a | dafna-ac4fb242-3fa4-4424-9410-4de61d260e0a | ERROR         | - | Running | novanetwork=192.168.32.12 |
  | d1e28a92-3ba2-402e-a798-ff8ff9a6bd98 | dafna-d1e28a92-3ba2-402e-a798-ff8ff9a6bd98 | ERROR         | - | Running | novanetwork=192.168.32.14 |
  +--++---++-+---+

  [root@orange-vdsf instances(keystone_admin)]# nova list --host orange-vdsf.qa.lab.tlv.redhat.com
  +--++++-+---+

  [root@orange-vdsf instances(keystone_admin)]# egrep d1e28a92-3ba2-402e-a798-ff8ff9a6bd98 /var/log/nova/*
  /var/log/nova/nova-api.log:2014-05-16 17:38:45.092 15231 INFO nova.osapi_compute.wsgi.server [req-fe2167ee-705a-4563-a120-ddbd1abfc968 c9062d562d9f41e4a1fdce36a4f176f6 4ad766166539403189f2caca1ba306aa] 10.35.160.73 "GET /v2/4ad766166539403189f2caca1ba306aa/servers/d1e28a92-3ba2-402e-a798-ff8ff9a6bd98 HTTP/1.1" status: 200 len: 1923 time: 0.3389881
  /var/log/nova/nova-api.log:2014-05-16 17:38:48.288 15231 INFO nova.osapi_compute.wsgi.server [req-3e8587c1-607f-436c-b351-b1c29576b815 c9062d562d9f41e4a1fdce36a4f176f6 4ad766166539403189f2caca1ba306aa] 10.35.160.73 "GET /v2/4ad766166539403189f2caca1ba306aa/servers/d1e28a92-3ba2-402e-a798-ff8ff9a6bd98 HTTP/1.1" status: 200 len: 1923 time: 0.3205199
  /var/log/nova/nova-api.log:2014-05-16 17:38:51.077 15231 INFO nova.osapi_compute.wsgi.server [req-6901a68e-cb70-40a0-bf34-5efbaef492df c9062d562d9f41e4a1fdce36a4f176f6 4ad766166539403189f2caca1ba306aa] 10.35.160.73 "GET /v2/4ad766166539403189f2caca1ba306aa/servers/d1e28a92-3ba2-402e-a798-ff8ff9a6bd98 HTTP/1.1" status: 200 len: 1940 time: 0.4703
[Yahoo-eng-team] [Bug 1521291] [NEW] [RFE] Transition neutron CLI from python-neutronclient to python-openstackclient
Public bug reported: [Existing problem] The OpenStackClient CLI (python-openstackclient project) has limited neutron support (network, security group, security group rules, floating IP address and fixed IP address CRUD). [Proposal] Per OpenStack spec https://review.openstack.org/#/c/243348/, the OpenStackClient CLI should be the recommended CLI for OpenStack projects. Therefore, support provided by the existing neutron CLI should be moved from the python-neutronclient project to the python-openstackclient project. [Benefits] A common CLI for OpenStack projects provides users and operators a consistent CLI experience when working with OpenStack cloud environments. [What is the enhancement?] This enhancement would result in the neutron CLI moving from the python-neutronclient project to the python-openstackclient project. In addition, users of the python-neutronclient extensibility support would need to transition to the python-openstackclient plugin system. The state of the python-neutronclient project serving as a Python library is TBD given that the python-openstacksdk would be used to implement the neutron CLI support in the python-openstackclient project. Eventually, use of the python-neutronclient project would be deprecated and possibly removed. I've recorded additional details in the following etherpad: https://etherpad.openstack.org/p/transition_neutronclient_to_openstackclient [Related information] - OpenStack spec: Deprecate individual CLIs in favour of OSC ( https://review.openstack.org/#/c/243348/ ) - python-openstackclient spec: https://blueprints.launchpad.net/python-openstackclient/+spec/neutron-client - Related etherpad: https://etherpad.openstack.org/p/transition_neutronclient_to_openstackclient ** Affects: neutron Importance: Undecided Status: New ** Tags: rfe -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. 
https://bugs.launchpad.net/bugs/1521291

Title:
  [RFE] Transition neutron CLI from python-neutronclient to
  python-openstackclient

Status in neutron:
  New

Bug description:
  [Existing problem]
  The OpenStackClient CLI (python-openstackclient project) has limited neutron support (network, security group, security group rules, floating IP address and fixed IP address CRUD).

  [Proposal]
  Per OpenStack spec https://review.openstack.org/#/c/243348/, the OpenStackClient CLI should be the recommended CLI for OpenStack projects. Therefore, support provided by the existing neutron CLI should be moved from the python-neutronclient project to the python-openstackclient project.

  [Benefits]
  A common CLI for OpenStack projects provides users and operators a consistent CLI experience when working with OpenStack cloud environments.

  [What is the enhancement?]
  This enhancement would result in the neutron CLI moving from the python-neutronclient project to the python-openstackclient project. In addition, users of the python-neutronclient extensibility support would need to transition to the python-openstackclient plugin system. The state of the python-neutronclient project serving as a Python library is TBD given that the python-openstacksdk would be used to implement the neutron CLI support in the python-openstackclient project. Eventually, use of the python-neutronclient project would be deprecated and possibly removed.
I've recorded additional details in the following etherpad: https://etherpad.openstack.org/p/transition_neutronclient_to_openstackclient [Related information] - OpenStack spec: Deprecate individual CLIs in favour of OSC ( https://review.openstack.org/#/c/243348/ ) - python-openstackclient spec: https://blueprints.launchpad.net/python-openstackclient/+spec/neutron-client - Related etherpad: https://etherpad.openstack.org/p/transition_neutronclient_to_openstackclient To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1521291/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1521314] [NEW] Changing physical interface mapping may result in multiple physical interfaces in bridge
Public bug reported:

Version: 2015.2 (Liberty)
Plugin: ML2 w/ LinuxBridge

While testing various NICs, I found that changing the physical interface mapping in the ML2 configuration file and restarting the agent resulted in the old physical interface remaining in the bridge. This can be observed with the following steps:

Original configuration:

[linux_bridge]
physical_interface_mappings = physnet1:eth2

racker@compute01:~$ brctl show
bridge name     bridge id           STP enabled  interfaces
brqad516357-47  8000.e41d2d5b6213   no           eth2
                                                 tap72e7d2be-24

Modify the bridge mapping:

[linux_bridge]
#physical_interface_mappings = physnet1:eth2
physical_interface_mappings = physnet1:eth1

Restart the agent:

racker@compute01:~$ sudo service neutron-plugin-linuxbridge-agent restart
neutron-plugin-linuxbridge-agent stop/waiting
neutron-plugin-linuxbridge-agent start/running, process 12803

Check the bridge:

racker@compute01:~$ brctl show
bridge name     bridge id           STP enabled  interfaces
brqad516357-47  8000.6805ca37dc39   no           eth1
                                                 eth2
                                                 tap72e7d2be-24

This behavior was observed with flat or vlan networks, and can result in some wonky behavior. Removing the original interface from the bridge(s) by hand or restarting the node is a workaround, but I suspect LinuxBridge users aren't used to modifying the bridges manually as the agent usually handles that.

** Affects: neutron
   Importance: Undecided
       Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1521314

Title:
  Changing physical interface mapping may result in multiple physical
  interfaces in bridge

Status in neutron:
  New

Bug description:
  Version: 2015.2 (Liberty)
  Plugin: ML2 w/ LinuxBridge

  While testing various NICs, I found that changing the physical
  interface mapping in the ML2 configuration file and restarting the
  agent resulted in the old physical interface remaining in the bridge.
  This can be observed with the following steps:

  Original configuration:

  [linux_bridge]
  physical_interface_mappings = physnet1:eth2

  racker@compute01:~$ brctl show
  bridge name     bridge id           STP enabled  interfaces
  brqad516357-47  8000.e41d2d5b6213   no           eth2
                                                   tap72e7d2be-24

  Modify the bridge mapping:

  [linux_bridge]
  #physical_interface_mappings = physnet1:eth2
  physical_interface_mappings = physnet1:eth1

  Restart the agent:

  racker@compute01:~$ sudo service neutron-plugin-linuxbridge-agent restart
  neutron-plugin-linuxbridge-agent stop/waiting
  neutron-plugin-linuxbridge-agent start/running, process 12803

  Check the bridge:

  racker@compute01:~$ brctl show
  bridge name     bridge id           STP enabled  interfaces
  brqad516357-47  8000.6805ca37dc39   no           eth1
                                                   eth2
                                                   tap72e7d2be-24

  This behavior was observed with flat or vlan networks, and can result
  in some wonky behavior. Removing the original interface from the
  bridge(s) by hand or restarting the node is a workaround, but I
  suspect LinuxBridge users aren't used to modifying the bridges
  manually as the agent usually handles that.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1521314/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
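The cleanup the agent skips can be sketched as a diff of the old and new mappings. This is a hypothetical editor-added illustration, not the LinuxBridge agent's code; in practice the agent (or the operator, via `brctl delif`) would then unplug each stale interface from its bridge.

```python
# Illustration of the missing cleanup step: given the agent's previous
# and current physical_interface_mappings (each entry is
# "physnet:interface"), compute which interfaces were mapped before but
# are no longer mapped, and so should be removed from their bridges.
def stale_interfaces(old_mappings, new_mappings):
    """Return interfaces that were mapped previously but are not any more."""
    old = {m.split(":", 1)[1] for m in old_mappings}
    new = {m.split(":", 1)[1] for m in new_mappings}
    return sorted(old - new)


# With the configuration change from the report, eth2 is stale and
# should be unplugged before eth1 is added.
print(stale_interfaces(["physnet1:eth2"], ["physnet1:eth1"]))
```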
[Yahoo-eng-team] [Bug 1518016] Re: [SRU] Nova kilo requires concurrency 1.8.2 or better
This bug was fixed in the package python-oslo.concurrency - 1.8.2-0ubuntu0.15.04.1 --- python-oslo.concurrency (1.8.2-0ubuntu0.15.04.1) vivid; urgency=medium * New upstream point release, resolving incompatibilies in latest Nova stable kilo update with 1.8.0 (LP: #1518016). -- James Page Fri, 20 Nov 2015 09:56:39 + ** Changed in: python-oslo.concurrency (Ubuntu Vivid) Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1518016 Title: [SRU] Nova kilo requires concurrency 1.8.2 or better Status in Ubuntu Cloud Archive: Fix Committed Status in Ubuntu Cloud Archive kilo series: Fix Committed Status in OpenStack Compute (nova): Invalid Status in nova package in Ubuntu: Invalid Status in python-oslo.concurrency package in Ubuntu: Invalid Status in nova source package in Vivid: Fix Released Status in python-oslo.concurrency source package in Vivid: Fix Released Bug description: [Impact] Some operations on instances will fail due to missing functions in oslo-concurrency 1.8.0 that the latest Nova stable release requires. [Test Case] Resize or migrate an instance on the latest stable kilo updates [Regression Potential] Minimal - this is recommended and tested upstream already. [Original Bug Report] OpenStack Nova Kilo release requires 1.8.2 or higher, this is due to the addition of on_execute and on_completion to the execute(..) function. The latest Ubuntu OpenStack Kilo packages currently have code that depend on this new updated release. This results in a crash in some operations like resizes or migrations. 
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] Traceback (most recent call last):
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6459, in _error_out_instance_on_exception
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] yield
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 4054, in resize_instance
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] timeout, retry_interval)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 6353, in migrate_disk_and_power_off
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] shared_storage)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 85, in __exit__
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] six.reraise(self.type_, self.value, self.tb)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 6342, in migrate_disk_and_power_off
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] on_completion=on_completion)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/utils.py", line 329, in copy_image
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] on_execute=on_execute, on_completion=on_completion)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/utils.py", line 55, in execute
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] return utils.execute(*args, **kwargs)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] File "/usr/lib/python2.7/dist-packages/nova/utils.py", line 207, in execute
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] return processutils.execute(*cmd, **kwargs)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] File "/usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py", line 174, in execute
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instan
[Yahoo-eng-team] [Bug 1518016] Re: [SRU] Nova kilo requires concurrency 1.8.2 or better
This bug was fixed in the package nova - 1:2015.1.2-0ubuntu2 --- nova (1:2015.1.2-0ubuntu2) vivid; urgency=medium * d/control: Bump oslo.concurrency to >= 1.8.2 (LP: #1518016). -- Corey Bryant Fri, 20 Nov 2015 07:56:47 -0500 ** Changed in: nova (Ubuntu Vivid) Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1518016 Title: [SRU] Nova kilo requires concurrency 1.8.2 or better Status in Ubuntu Cloud Archive: Fix Committed Status in Ubuntu Cloud Archive kilo series: Fix Committed Status in OpenStack Compute (nova): Invalid Status in nova package in Ubuntu: Invalid Status in python-oslo.concurrency package in Ubuntu: Invalid Status in nova source package in Vivid: Fix Released Status in python-oslo.concurrency source package in Vivid: Fix Released Bug description: [Impact] Some operations on instances will fail due to missing functions in oslo-concurrency 1.8.0 that the latest Nova stable release requires. [Test Case] Resize or migrate an instance on the latest stable kilo updates [Regression Potential] Minimal - this is recommended and tested upstream already. [Original Bug Report] OpenStack Nova Kilo release requires 1.8.2 or higher, this is due to the addition of on_execute and on_completion to the execute(..) function. The latest Ubuntu OpenStack Kilo packages currently have code that depend on this new updated release. This results in a crash in some operations like resizes or migrations. 
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] Traceback (most recent call last):
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6459, in _error_out_instance_on_exception
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] yield
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 4054, in resize_instance
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] timeout, retry_interval)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 6353, in migrate_disk_and_power_off
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] shared_storage)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 85, in __exit__
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] six.reraise(self.type_, self.value, self.tb)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 6342, in migrate_disk_and_power_off
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] on_completion=on_completion)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/utils.py", line 329, in copy_image
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] on_execute=on_execute, on_completion=on_completion)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/utils.py", line 55, in execute
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] return utils.execute(*args, **kwargs)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] File "/usr/lib/python2.7/dist-packages/nova/utils.py", line 207, in execute
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] return processutils.execute(*cmd, **kwargs)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] File "/usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py", line 174, in execute
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] raise UnknownArgumentError(_('Got unknown keyword args: %r') % kwargs)
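The version mismatch behind this traceback can be sketched with simplified stand-in functions. This is an editor-added illustration with hypothetical signatures, not oslo.concurrency itself: the 1.8.0-style `execute()` rejects the `on_execute`/`on_completion` keywords that Nova kilo now passes, while the 1.8.2-style one accepts them and invokes the callbacks.

```python
# Simplified stand-in for oslo.concurrency 1.8.0: no callback support,
# so unexpected keywords raise, as in "Got unknown keyword args".
def execute_180(*cmd, **kwargs):
    if kwargs:
        raise TypeError("Got unknown keyword args: %r" % kwargs)
    return " ".join(cmd)


# Simplified stand-in for 1.8.2: the two callback parameters exist and
# are invoked around the command.
def execute_182(*cmd, on_execute=None, on_completion=None):
    if on_execute is not None:
        on_execute(cmd)
    result = " ".join(cmd)
    if on_completion is not None:
        on_completion(cmd)
    return result


try:
    execute_180("cp", "src", "dst", on_execute=lambda c: None)
except TypeError as exc:
    print("1.8.0-style failure:", exc)

execute_182("cp", "src", "dst", on_execute=lambda c: None)  # accepted
```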
[Yahoo-eng-team] [Bug 1521340] [NEW] BDMNotFound race while building resources since 11/27
Public bug reported:

http://logs.openstack.org/94/242594/2/check/gate-tempest-dsvm-nova-v20-api/085f243/logs/screen-n-cpu.txt.gz?level=TRACE#_2015-11-30_18_03_48_003

2015-11-30 18:03:48.003 ERROR nova.compute.manager [req-b3bc5257-4673-4850-9ba0-e1e136c131b3 tempest-DeleteServersTestJSON-651179004 tempest-DeleteServersTestJSON-684647951] [instance: 75986322-d59d-411c-a60c-776415cdd6d1] Failure prepping block device
2015-11-30 18:03:48.003 4993 ERROR nova.compute.manager [instance: 75986322-d59d-411c-a60c-776415cdd6d1] Traceback (most recent call last):
2015-11-30 18:03:48.003 4993 ERROR nova.compute.manager [instance: 75986322-d59d-411c-a60c-776415cdd6d1] File "/opt/stack/new/nova/nova/compute/manager.py", line 2132, in _build_resources
2015-11-30 18:03:48.003 4993 ERROR nova.compute.manager [instance: 75986322-d59d-411c-a60c-776415cdd6d1] block_device_mapping)
2015-11-30 18:03:48.003 4993 ERROR nova.compute.manager [instance: 75986322-d59d-411c-a60c-776415cdd6d1] File "/opt/stack/new/nova/nova/compute/manager.py", line 1689, in _default_block_device_names
2015-11-30 18:03:48.003 4993 ERROR nova.compute.manager [instance: 75986322-d59d-411c-a60c-776415cdd6d1] root_bdm.save()
2015-11-30 18:03:48.003 4993 ERROR nova.compute.manager [instance: 75986322-d59d-411c-a60c-776415cdd6d1] File "/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 206, in wrapper
2015-11-30 18:03:48.003 4993 ERROR nova.compute.manager [instance: 75986322-d59d-411c-a60c-776415cdd6d1] ctxt, self, fn.__name__, args, kwargs)
2015-11-30 18:03:48.003 4993 ERROR nova.compute.manager [instance: 75986322-d59d-411c-a60c-776415cdd6d1] File "/opt/stack/new/nova/nova/conductor/rpcapi.py", line 246, in object_action
2015-11-30 18:03:48.003 4993 ERROR nova.compute.manager [instance: 75986322-d59d-411c-a60c-776415cdd6d1] objmethod=objmethod, args=args, kwargs=kwargs)
2015-11-30 18:03:48.003 4993 ERROR nova.compute.manager [instance: 75986322-d59d-411c-a60c-776415cdd6d1] File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 158, in call
2015-11-30 18:03:48.003 4993 ERROR nova.compute.manager [instance: 75986322-d59d-411c-a60c-776415cdd6d1] retry=self.retry)
2015-11-30 18:03:48.003 4993 ERROR nova.compute.manager [instance: 75986322-d59d-411c-a60c-776415cdd6d1] File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, in _send
2015-11-30 18:03:48.003 4993 ERROR nova.compute.manager [instance: 75986322-d59d-411c-a60c-776415cdd6d1] timeout=timeout, retry=retry)
2015-11-30 18:03:48.003 4993 ERROR nova.compute.manager [instance: 75986322-d59d-411c-a60c-776415cdd6d1] File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 464, in send
2015-11-30 18:03:48.003 4993 ERROR nova.compute.manager [instance: 75986322-d59d-411c-a60c-776415cdd6d1] retry=retry)
2015-11-30 18:03:48.003 4993 ERROR nova.compute.manager [instance: 75986322-d59d-411c-a60c-776415cdd6d1] File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 455, in _send
2015-11-30 18:03:48.003 4993 ERROR nova.compute.manager [instance: 75986322-d59d-411c-a60c-776415cdd6d1] raise result
2015-11-30 18:03:48.003 4993 ERROR nova.compute.manager [instance: 75986322-d59d-411c-a60c-776415cdd6d1] BDMNotFound_Remote: No Block Device Mapping with id 48.
2015-11-30 18:03:48.003 4993 ERROR nova.compute.manager [instance: 75986322-d59d-411c-a60c-776415cdd6d1] Traceback (most recent call last):
2015-11-30 18:03:48.003 4993 ERROR nova.compute.manager [instance: 75986322-d59d-411c-a60c-776415cdd6d1]
2015-11-30 18:03:48.003 4993 ERROR nova.compute.manager [instance: 75986322-d59d-411c-a60c-776415cdd6d1] File "/opt/stack/new/nova/nova/conductor/manager.py", line 85, in _object_dispatch
2015-11-30 18:03:48.003 4993 ERROR nova.compute.manager [instance: 75986322-d59d-411c-a60c-776415cdd6d1] return getattr(target, method)(*args, **kwargs)
2015-11-30 18:03:48.003 4993 ERROR nova.compute.manager [instance: 75986322-d59d-411c-a60c-776415cdd6d1]
2015-11-30 18:03:48.003 4993 ERROR nova.compute.manager [instance: 75986322-d59d-411c-a60c-776415cdd6d1] File "/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 222, in wrapper
2015-11-30 18:03:48.003 4993 ERROR nova.compute.manager [instance: 75986322-d59d-411c-a60c-776415cdd6d1] return fn(self, *args, **kwargs)
2015-11-30 18:03:48.003 4993 ERROR nova.compute.manager [instance: 75986322-d59d-411c-a60c-776415cdd6d1]
2015-11-30 18:03:48.003 4993 ERROR nova.compute.manager [instance: 75986322-d59d-411c-a60c-776415cdd6d1] File "/opt/stack/new/nova/nova/objects/block_device.py", line 183, in save
2015-11-30 18:03:48.003 4993 ERROR nova.compute.manager [instance: 75986322-d59d-411c-a60c-776415cdd6d1] raise exception.BDMNotFound(id=self.id)
2015-11-30 18:03:48.003 4993 ERROR nova.compute.manager [instanc
[Yahoo-eng-team] [Bug 1498370] Re: [SRU] DHCP agent: interface unplug leads to exception
This bug was fixed in the package neutron - 1:2015.1.2-0ubuntu2

---
neutron (1:2015.1.2-0ubuntu2) vivid; urgency=medium

  * Fix DHCP agent delete non-existent interface (LP: #1498370)
    - d/p/dhcp-protect-against-case-when-device-name-is-none.patch

 -- Edward Hope-Morley  Tue, 10 Nov 2015 00:11:31 +

** Changed in: neutron (Ubuntu Vivid) Status: Fix Committed => Fix Released

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1498370

Title: [SRU] DHCP agent: interface unplug leads to exception

Status in neutron: Fix Released
Status in neutron package in Ubuntu: Fix Released
Status in neutron source package in Trusty: New
Status in neutron source package in Vivid: Fix Released
Status in neutron source package in Wily: Fix Released
Status in neutron source package in Xenial: Fix Released

Bug description:

[Impact]
There are edge cases when the DHCP agent attempts to unplug an interface and the device does not exist. This patch ensures that the agent can tolerate this case.

[Test Case]
* create subnet with dhcp enabled
* set pdb.set_trace() in neutron.agent.linux.dhcp.DeviceManager.destroy()
* manually delete ns- device in tenant namespace
* pdb continue and should not raise any error

[Regression Potential]
None

2015-09-22 01:23:42.612 ERROR neutron.agent.dhcp.agent [-] Unable to disable dhcp for c543db4d-e077-488f-b58c-5805f63f86b6.
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent Traceback (most recent call last):
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/dhcp/agent.py", line 115, in call_driver
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent     getattr(driver, action)(**action_kwargs)
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 221, in disable
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent     self._destroy_namespace_and_port()
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 226, in _destroy_namespace_and_port
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent     self.device_manager.destroy(self.network, self.interface_name)
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 1223, in destroy
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent     self.driver.unplug(device_name, namespace=network.namespace)
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/interface.py", line 358, in unplug
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent     tap_name = self._get_tap_name(device_name, prefix)
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/interface.py", line 299, in _get_tap_name
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent     dev_name = dev_name.replace(prefix or self.DEV_NAME_PREFIX,
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent AttributeError: 'NoneType' object has no attribute 'replace'
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent
2015-09-22 01:23:42.616 INFO neutron.agent.dhcp.agent [-] Synchronizing state complete

The reason is that the device is None.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1498370/+subscriptions

-- Mailing list:
https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
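The guard described in this bug can be sketched as follows. This is a hypothetical standalone reduction of the unplug path, not the actual neutron patch; the function signature and the "ns-" prefix are illustrative:

```python
def unplug(device_name, namespace=None, prefix=None):
    """Hypothetical reduction of the DHCP interface unplug path.

    In the traceback above, _get_tap_name() calls
    device_name.replace(...) while device_name is None.  Checking for
    None before any string operation lets the agent tolerate an
    interface that was already removed.
    """
    if device_name is None:
        # Interface is already gone (or was never created); nothing to do.
        return False
    tap_name = device_name.replace(prefix or "ns-", "tap")
    # The real driver would now remove tap_name from the namespace.
    return True
```

The key design point is simply that the missing-device case becomes a tolerated no-op instead of an unhandled AttributeError during resync.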
[Yahoo-eng-team] [Bug 1521379] [NEW] Filter instances by subnet id/name
Public bug reported: Under compute -> instances, we can filter instances by instance name, image id, flavor id or status. We can add another filter where user can filter instances based on subnet id or name. It is useful when user has multiple subnets in project and he wants to view/list instances running on particular subnet. ** Affects: horizon Importance: Undecided Status: New ** Description changed: Under compute -> instances, we can filter instances by instance name, image id, flavor id or status. We can add another filter where user can filter instances based on subnet id or name. - It is useful when I have multiple subnets in my project and I want to + It is useful when user has multiple subnets in project and he wants to view/list instances running on particular subnet. -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1521379 Title: Filter instances by subnet id/name Status in OpenStack Dashboard (Horizon): New Bug description: Under compute -> instances, we can filter instances by instance name, image id, flavor id or status. We can add another filter where user can filter instances based on subnet id or name. It is useful when user has multiple subnets in project and he wants to view/list instances running on particular subnet. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1521379/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1521398] [NEW] Lint warnings in openstack_dashboard/static/
Public bug reported: In the openstack_dashboard/static/ folder, many lint warnings related to code formatting are reported. ** Affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1521398 Title: Lint warnings in openstack_dashboard/static/ Status in OpenStack Dashboard (Horizon): New Bug description: In the openstack_dashboard/static/ folder, many lint warnings related to code formatting are reported. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1521398/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1521360] [NEW] Add Member action in loadbalancers does not enforce protocol as required field
You have been subscribed to a public bug: In Project->Networks->Load Balancers: 1. Click on Members tab 2. Click on 'Add Member' button **Make sure there are no instances to be added as members.** Protocol Port is not a required field but if you don't enter the port, it fails with message "Unable to add members" which does not indicate the actual error. Horizon log shows this error: 2015-11-30 21:09:39,964 DEBUG 25947 [neutronclient.client]: RESP:400 {'date': 'Mon, 30 Nov 2015 21:09:39 GMT', 'connection': 'keep-alive', 'content-type': 'application/json; charset=UTF-8', 'content-length': '124', 'x-openstack-request-id': 'req-e140a1b7-72dd-46d1-9033-159d1897988a'} {"NeutronError": {"message": "Invalid input for operation: 'None' is not a integer.", "type": "InvalidInput", "detail": ""}} 2015-11-30 21:09:39,964 DEBUG 25947 [neutronclient.v2_0.client]: Error message: {"NeutronError": {"message": "Invalid input for operation: 'None' is not a integer.", "type": "InvalidInput", "detail": ""}} Neutron Log shows the following error: 2015-11-30 21:09:39,961DEBUG [neutron.api.v2.base] 597 Request body: {u'member': {u'protocol_port': None, u'pool_id': u'b4996970-2d8f-4d9f-b553-a55bf1a159d0', u'address': u'10.2.2.5', u'admin_state_up': True}} 2015-11-30 21:09:39,962 INFO [neutron.api.v2.resource] 95 create failed (client error): Invalid input for operation: 'None' is not a integer. Looks like the underlying problem is that protocol_port is removed as a requirement when there are no instances available to be selected as members. However, user can still choose the other way of entering the member ip address and will run into this problem (protocol_port.required will still be set to False as there are no servers). ** Affects: horizon Importance: Undecided Status: New -- Add Member action in loadbalancers does not enforce protocol as required field https://bugs.launchpad.net/bugs/1521360 You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
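The underlying problem described above can be reduced to a small sketch (hypothetical names, not the actual Horizon code): validating protocol_port on the client side keeps a None port from ever reaching neutron, regardless of whether any instances were available to select.

```python
def build_member_body(address, pool_id, protocol_port, require_port=True):
    """Hypothetical reduction of the Add Member request construction.

    The bug is that protocol_port.required was cleared when no servers
    were listed, so a None port was sent to neutron and rejected with a
    400.  Keeping the port mandatory fails fast with a clear message.
    """
    if require_port and protocol_port is None:
        # Fail client-side instead of sending None to neutron.
        raise ValueError("protocol_port is required")
    return {"member": {"address": address,
                       "pool_id": pool_id,
                       "protocol_port": protocol_port,
                       "admin_state_up": True}}
```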
[Yahoo-eng-team] [Bug 1521394] [NEW] Loadbalancer Add member fails silently
You have been subscribed to a public bug: The instance to be added should not have any ports i.e. it is still in scheduling state. In Project->Networks->Load Balancers: 1. Click on Members tab 2. Click on 'Add Member' button 3. Enter member details and pick the instance with no ports You will get a success message that a member has been added but you won't see it in the table. The AddMember workflow calls neutron to retrieve a port list. If the port list is empty the function returns True and fails silently. The function should return False with a failure message if there are no ports so that the error message can be displayed to the user. ** Affects: horizon Importance: Undecided Status: New -- Loadbalancer Add member fails silently https://bugs.launchpad.net/bugs/1521394 You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
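The suggested fix can be sketched as follows; this is a hypothetical reduction of the workflow's handle() logic, not the actual Horizon code:

```python
def handle_add_member(ports, member):
    """Hypothetical reduction of the AddMember workflow handler.

    Returning False with a message when the selected instance has no
    ports lets the workflow surface an error to the user instead of
    silently reporting success while adding nothing.
    """
    if not ports:
        return False, ("Selected instance has no ports yet "
                       "(still being scheduled); member was not added.")
    # The real handler would create the pool member via neutron here.
    return True, "Member added."
```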
[Yahoo-eng-team] [Bug 1521360] Re: Add Member action in loadbalancers does not enforce protocol as required field
** Project changed: django-openstack-auth => horizon -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1521360 Title: Add Member action in loadbalancers does not enforce protocol as required field Status in OpenStack Dashboard (Horizon): New Bug description: In Project->Networks->Load Balancers: 1. Click on Members tab 2. Click on 'Add Member' button **Make sure there are no instances to be added as members.** Protocol Port is not a required field but if you don't enter the port, it fails with message "Unable to add members" which does not indicate the actual error. Horizon log shows this error: 2015-11-30 21:09:39,964 DEBUG 25947 [neutronclient.client]: RESP:400 {'date': 'Mon, 30 Nov 2015 21:09:39 GMT', 'connection': 'keep-alive', 'content-type': 'application/json; charset=UTF-8', 'content-length': '124', 'x-openstack-request-id': 'req-e140a1b7-72dd-46d1-9033-159d1897988a'} {"NeutronError": {"message": "Invalid input for operation: 'None' is not a integer.", "type": "InvalidInput", "detail": ""}} 2015-11-30 21:09:39,964 DEBUG 25947 [neutronclient.v2_0.client]: Error message: {"NeutronError": {"message": "Invalid input for operation: 'None' is not a integer.", "type": "InvalidInput", "detail": ""}} Neutron Log shows the following error: 2015-11-30 21:09:39,961DEBUG [neutron.api.v2.base] 597 Request body: {u'member': {u'protocol_port': None, u'pool_id': u'b4996970-2d8f-4d9f-b553-a55bf1a159d0', u'address': u'10.2.2.5', u'admin_state_up': True}} 2015-11-30 21:09:39,962 INFO [neutron.api.v2.resource] 95 create failed (client error): Invalid input for operation: 'None' is not a integer. Looks like the underlying problem is that protocol_port is removed as a requirement when there are no instances available to be selected as members. 
However, user can still choose the other way of entering the member ip address and will run into this problem (protocol_port.required will still be set to False as there are no servers). To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1521360/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1521394] Re: Loadbalancer Add member fails silently
** Project changed: django-openstack-auth => horizon -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1521394 Title: Loadbalancer Add member fails silently Status in OpenStack Dashboard (Horizon): New Bug description: The instance to be added should not have any ports i.e. it is still in scheduling state. In Project->Networks->Load Balancers: 1. Click on Members tab 2. Click on 'Add Member' button 3. Enter member details and pick the instance with no ports You will get a success message that a member has been added but you won't see it in the table. The AddMember workflow calls neutron to retrieve a port list. If the port list is empty the function returns True and fails silently. The function should return False with a failure message if there are no ports so that the error message can be displayed to the user. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1521394/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1518390] Re: Scrub python 2.6 compatibility from python-novaclient
Moved to python-novaclient, change is up at https://review.openstack.org/#/c/248874/ ** Also affects: python-novaclient Importance: Undecided Status: New ** No longer affects: nova ** Changed in: python-novaclient Importance: Undecided => Medium ** Changed in: python-novaclient Status: New => In Progress ** Changed in: python-novaclient Assignee: (unassigned) => Chuck Carmack (chuckcarmack75) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1518390 Title: Scrub python 2.6 compatibility from python-novaclient Status in python-novaclient: In Progress Bug description: Oslo is removing python 2.6 support, so nova needs to scrub python 2.6 compatibility code. This bug is for scrubbing python-novaclient. To manage notifications about this bug go to: https://bugs.launchpad.net/python-novaclient/+bug/1518390/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1521422] [NEW] glance image-create should not work without parameters
Public bug reported: when I run glance image-create with no arguments, it creates an empty image, and an image without a name shows up in image-list. An image should not be created without parameters such as the container format and image path - or is the intent that you can create an empty image and update it later? ** Affects: glance Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1521422 Title: glance image-create should not work without parameters Status in Glance: New Bug description: when I run glance image-create with no arguments, it creates an empty image, and an image without a name shows up in image-list. An image should not be created without parameters such as the container format and image path - or is the intent that you can create an empty image and update it later? To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1521422/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
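For context on the closing question: in the Glance v2 workflow an image record can indeed be created first (status "queued") and its data uploaded later with image-upload. A toy model of that lifecycle, not the Glance code itself:

```python
def create_image(name=None):
    # Mirrors `glance image-create` with no data: the record is accepted
    # and sits in "queued" state with zero size.
    return {"name": name, "status": "queued", "size": 0}


def upload_image(image, data):
    # Mirrors a later `glance image-upload`: supplying the bits
    # activates the image and sets its size.
    image["status"] = "active"
    image["size"] = len(data)
    return image
```

This is why an empty create is accepted by design, though the reporter's point that an unnamed, empty record is confusing in image-list still stands.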
[Yahoo-eng-team] [Bug 1521423] [NEW] is-public = true should be removed from manual seems
Public bug reported: when I tried creating an image using --is-public True, it gave me the following message: unrecognized arguments: --is-public True ubuntu-14.04-server-cloudimg-amd64-disk1.img It should be removed from the manuals: http://docs.openstack.org/cli-reference/content/glanceclient_commands.html ** Affects: glance Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1521423 Title: is-public = true should be removed from manual seems Status in Glance: New Bug description: when I tried creating an image using --is-public True, it gave me the following message: unrecognized arguments: --is-public True ubuntu-14.04-server-cloudimg-amd64-disk1.img It should be removed from the manuals: http://docs.openstack.org/cli-reference/content/glanceclient_commands.html To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1521423/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1521423] Re: is-public = true should be removed from manual seems
In v1, we use "--is-public True (or False)". In v2, we use "--visibility public (or private)". The client's default API version is now 2, so you should use --visibility. The doc is OK: you can find both "glance image-create (v1)" and "glance image-create (v2)" in it. Please check again. ** Changed in: glance Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1521423 Title: is-public = true should be removed from manual seems Status in Glance: Invalid Bug description: when I tried creating an image using --is-public True, it gave me the following message: unrecognized arguments: --is-public True ubuntu-14.04-server-cloudimg-amd64-disk1.img It should be removed from the manuals: http://docs.openstack.org/cli-reference/content/glanceclient_commands.html To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1521423/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
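The v1-to-v2 flag mapping described in this reply is small enough to state as code; this is an illustrative helper, not part of glanceclient:

```python
def to_v2_visibility(is_public):
    """Map the v1 `--is-public` boolean onto the v2 `--visibility` value.

    v1: --is-public True/False
    v2: --visibility public/private
    """
    return "public" if is_public else "private"
```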
[Yahoo-eng-team] [Bug 1373337] Re: Updating quotas path issue
Reviewed: https://review.openstack.org/245230 Committed: https://git.openstack.org/cgit/openstack/api-site/commit/?id=256610411a7826f67bd9ed79438b6da0529eae41 Submitter: Jenkins Branch: master commit 256610411a7826f67bd9ed79438b6da0529eae41 Author: Diane Fleming Date: Fri Nov 13 10:56:51 2015 -0600 Update quotas extension Add current code samples Update descriptions for clarity Change-Id: Ibafb2a49ca8c680ef6d29f20ba9e2f23fcbe61c1 Closes-Bug: #1373337 ** Changed in: openstack-api-site Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1373337 Title: Updating quotas path issue Status in neutron: Incomplete Status in openstack-api-site: Fix Released Bug description: In the docs (http://developer.openstack.org/api-ref-networking-v2.html#quotas-ext), it clearly says that to update quota values, the request should be: PUT /v2.0/quotas But I'm getting a 404 when you do this. If you do this instead: PUT /v2.0/quotas/foo it works as expected, where "foo" can literally be anything. I looked at how the python-neutronclient handles this, and they seem to append the tenant_id to the end - which is completely undocumented. So: 1. Is this a bug with the Neutron API or with the Neutron docs? 2. Why does any arbitrary string get accepted? I'm using Neutron on devstack, Icehouse release. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1373337/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
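The working request path from the bug report can be summarized with an illustrative helper (not part of python-neutronclient):

```python
def quota_update_path(tenant_id):
    """Build the quota-update path that actually works.

    Per the report, PUT /v2.0/quotas alone returns 404; the tenant id
    must be appended, which is what python-neutronclient does under
    the hood (undocumented at the time the bug was filed).
    """
    return "/v2.0/quotas/%s" % tenant_id
```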
[Yahoo-eng-team] [Bug 1291157] Re: idp deletion should trigger token revocation
unassigning due to inactivity and failing patch sets, also this doesn't affect keystoneclient ** Changed in: python-keystoneclient Status: Triaged => Invalid ** Changed in: keystone Status: In Progress => Confirmed ** Changed in: keystone Assignee: Marek Denis (marek-denis) => (unassigned) ** Changed in: python-keystoneclient Assignee: Paweł Pamuła (pawel-pamula) => (unassigned) ** Changed in: keystone Milestone: next => None -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone). https://bugs.launchpad.net/bugs/1291157 Title: idp deletion should trigger token revocation Status in OpenStack Identity (keystone): Confirmed Status in python-keystoneclient: Invalid Bug description: When a federation IdP is deleted, the tokens that were issued (and still active) and associated with the IdP should be deleted. To prevent unwarranted access. The fix should delete any tokens that are associated with the idp, upon deletion (and possibly update, too). To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1291157/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1521458] [NEW] some unsupported vmware adapter types cannot be filtered out
Public bug reported: Based on the API (http://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.wssdk.apiref.doc/vim.VirtualDiskManager.VirtualDiskAdapterType.html), there is no paraVirtualscsi constant, so it is rejected by the API with "A specified parameter was not correct".

In nova/virt/vmwareapi/vm_util.py:

    def get_vmdk_adapter_type(adapter_type):
        """Return the adapter type to be used in vmdk descriptor.

        Adapter type in vmdk descriptor is same for LSI-SAS, LSILogic
        & ParaVirtual because Virtual Disk Manager API does not
        recognize the newer controller types.
        """
        if adapter_type in [constants.ADAPTER_TYPE_LSILOGICSAS,
                            constants.ADAPTER_TYPE_PARAVIRTUAL]:
            vmdk_adapter_type = constants.DEFAULT_ADAPTER_TYPE
        else:
            vmdk_adapter_type = adapter_type
        return vmdk_adapter_type

In nova/virt/vmwareapi/constants.py:

    DEFAULT_ADAPTER_TYPE = "lsiLogic"
    ...
    ADAPTER_TYPE_BUSLOGIC = "busLogic"
    ADAPTER_TYPE_IDE = "ide"
    ADAPTER_TYPE_LSILOGICSAS = "lsiLogicsas"
    ADAPTER_TYPE_PARAVIRTUAL = "paraVirtual"

Here we can see that only "lsiLogicsas" and "paraVirtual" are filtered out as unsupported adapter types and switched to "lsiLogic". But there are still other unsupported adapter types, such as paraVirtualscsi. It seems the filter list should be extended, or errors will occur when other unsupported adapter types are used.

** Affects: nova Importance: Undecided Status: New

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1521458

Title: some unsupported vmware adapter types cannot be filtered out

Status in OpenStack Compute (nova): New

Bug description: Based on the API (http://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.wssdk.apiref.doc/vim.VirtualDiskManager.VirtualDiskAdapterType.html), there is no paraVirtualscsi constant, so it is rejected by the API with "A specified parameter was not correct".

In nova/virt/vmwareapi/vm_util.py:

    def get_vmdk_adapter_type(adapter_type):
        """Return the adapter type to be used in vmdk descriptor.

        Adapter type in vmdk descriptor is same for LSI-SAS, LSILogic
        & ParaVirtual because Virtual Disk Manager API does not
        recognize the newer controller types.
        """
        if adapter_type in [constants.ADAPTER_TYPE_LSILOGICSAS,
                            constants.ADAPTER_TYPE_PARAVIRTUAL]:
            vmdk_adapter_type = constants.DEFAULT_ADAPTER_TYPE
        else:
            vmdk_adapter_type = adapter_type
        return vmdk_adapter_type

In nova/virt/vmwareapi/constants.py:

    DEFAULT_ADAPTER_TYPE = "lsiLogic"
    ...
    ADAPTER_TYPE_BUSLOGIC = "busLogic"
    ADAPTER_TYPE_IDE = "ide"
    ADAPTER_TYPE_LSILOGICSAS = "lsiLogicsas"
    ADAPTER_TYPE_PARAVIRTUAL = "paraVirtual"

Here we can see that only "lsiLogicsas" and "paraVirtual" are filtered out as unsupported adapter types and switched to "lsiLogic". But there are still other unsupported adapter types, such as paraVirtualscsi. It seems the filter list should be extended, or errors will occur when other unsupported adapter types are used.

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1521458/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
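A hedged sketch of the suggested extension: keep the unsupported types as data so new controller types can be added in one place. The "paraVirtualscsi" entry below is the hypothetical value named in this report, not an existing nova constant:

```python
DEFAULT_ADAPTER_TYPE = "lsiLogic"

# Controller types the Virtual Disk Manager API does not recognize in a
# vmdk descriptor.  Extending this set (rather than an inline list)
# makes it easy to add further unsupported values as they are found.
VMDK_UNSUPPORTED_ADAPTER_TYPES = frozenset([
    "lsiLogicsas",
    "paraVirtual",
    "paraVirtualscsi",  # hypothetical addition from this bug report
])


def get_vmdk_adapter_type(adapter_type):
    """Fall back to lsiLogic for any controller type the Virtual Disk
    Manager API does not recognize, instead of only the two known
    constants."""
    if adapter_type in VMDK_UNSUPPORTED_ADAPTER_TYPES:
        return DEFAULT_ADAPTER_TYPE
    return adapter_type
```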
[Yahoo-eng-team] [Bug 1521463] [NEW] Openstack RC file downloaded in LDAP environment did not support domain setting
Public bug reported: Enabled a read-only LDAP backend, and accessed horizon using the LDAP user and a domain named domain1. But I found that the function Project -> Access & Security -> API Access -> Download OpenStack RC File does not support domain settings. Using the downloaded RC file, since it does not include domain information, I cannot access the project and get the error below.

[liwbj@zcu13 ~]$ ./openrcv3-ldap
Please enter your OpenStack Password:
[liwbj@zcu13 ~]$ nova list
ERROR (BadRequest): KS-EE09F51 Expecting to find domain in project - the server could not comply with the request since it is either malformed or otherwise incorrect. The client is assumed to be in error. (HTTP 400)

The downloaded RC file:

#!/bin/bash
# To use an OpenStack cloud you need to authenticate against the Identity
# service named keystone, which returns a **Token** and **Service Catalog**.
# The catalog contains the endpoints for all services the user/tenant has
# access to - such as Compute, Image Service, Identity, Object Storage, Block
# Storage, and Networking (code-named nova, glance, keystone, swift,
# cinder, and neutron).
#
# *NOTE*: Using the 2.0 *Identity API* does not necessarily mean any other
# OpenStack API is version 2.0. For example, your cloud provider may implement
# Image API v1.1, Block Storage API v2, and Compute API v2.0. OS_AUTH_URL is
# only for the Identity API served through keystone.
export OS_AUTH_URL=http://9.12.35.139:5000/v2.0
# With the addition of Keystone we have standardized on the term **tenant**
# as the entity that owns the resources.
export OS_TENANT_ID=28e2c7b82c6742219edb66181a5e94e3
export OS_TENANT_NAME="admin"
export OS_PROJECT_NAME="admin"
# In addition to the owning entity (tenant), OpenStack stores the entity
# performing the action as the **user**.
export OS_USERNAME="admin"
# With Keystone you pass the keystone password.
echo "Please enter your OpenStack Password: "
read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT
# If your configuration has multiple regions, we set that information here.
# OS_REGION_NAME is optional and only valid in certain environments.
export OS_REGION_NAME="RegionOne"
# Don't leave a blank variable, unset it if it was empty
if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi

** Affects: horizon Importance: Undecided Status: New

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1521463

Title: Openstack RC file downloaded in LDAP environment did not support domain setting

Status in OpenStack Dashboard (Horizon): New

Bug description: Enabled a read-only LDAP backend, and accessed horizon using the LDAP user and a domain named domain1. But I found that the function Project -> Access & Security -> API Access -> Download OpenStack RC File does not support domain settings. Using the downloaded RC file, since it does not include domain information, I cannot access the project and get the error below.

[liwbj@zcu13 ~]$ ./openrcv3-ldap
Please enter your OpenStack Password:
[liwbj@zcu13 ~]$ nova list
ERROR (BadRequest): KS-EE09F51 Expecting to find domain in project - the server could not comply with the request since it is either malformed or otherwise incorrect. The client is assumed to be in error. (HTTP 400)

The downloaded RC file:

#!/bin/bash
# To use an OpenStack cloud you need to authenticate against the Identity
# service named keystone, which returns a **Token** and **Service Catalog**.
# The catalog contains the endpoints for all services the user/tenant has
# access to - such as Compute, Image Service, Identity, Object Storage, Block
# Storage, and Networking (code-named nova, glance, keystone, swift,
# cinder, and neutron).
#
# *NOTE*: Using the 2.0 *Identity API* does not necessarily mean any other
# OpenStack API is version 2.0. For example, your cloud provider may implement
# Image API v1.1, Block Storage API v2, and Compute API v2.0. OS_AUTH_URL is
# only for the Identity API served through keystone.
export OS_AUTH_URL=http://9.12.35.139:5000/v2.0
# With the addition of Keystone we have standardized on the term **tenant**
# as the entity that owns the resources.
export OS_TENANT_ID=28e2c7b82c6742219edb66181a5e94e3
export OS_TENANT_NAME="admin"
export OS_PROJECT_NAME="admin"
# In addition to the owning entity (tenant), OpenStack stores the entity
# performing the action as the **user**.
export OS_USERNAME="admin"
# With Keystone you pass the keystone password.
echo "Please enter your OpenStack Password: "
read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT
# If your configuration has multiple regions, we set that information here.
# OS_REGION_NAME is optional and only valid in certain environments.
export OS_REGION_NAME="RegionOne"
# Don't leave a blank variable, unset it if it was empty
if [ -
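A hedged sketch of what a keystone-v3-aware RC file would need to add. The OS_* variable names are the standard keystone v3 ones; the endpoint and domain values below are illustrative, taken from this report's environment:

```shell
# Hypothetical v3 fragment: point at the v3 endpoint and add the domain
# scoping that the generated v2 file omits.
export OS_AUTH_URL=http://9.12.35.139:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_PROJECT_NAME="admin"
export OS_USERNAME="admin"
# Domain scoping required for an LDAP-backed, domain-scoped user:
export OS_USER_DOMAIN_NAME="domain1"
export OS_PROJECT_DOMAIN_NAME="domain1"
```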
[Yahoo-eng-team] [Bug 1521464] [NEW] create an instance with ephemeral disk or swap disk error
Public bug reported: I boot an instance with an ephemeral disk (or swap disk) on Kilo, but it returns an HTTP 400 error: eg: nova boot --flavor 2 --image 5905bd7e-a87f-4856-8401-b8eb7211c84d --nic net-id=12ace164-d996-4261-9228-23ca0680f7a8 --ephemeral size=5,format=ext3 test_vm1 ERROR (BadRequest): Block Device Mapping is Invalid: Boot sequence for the instance and image/block device mapping combination is not valid. (HTTP 400) (Request-ID: req-b571662f-e554-49a7-979f-763f34b4b162) I think an instance with an ephemeral disk should be able to boot from the image, instead of failing with an invalid boot sequence. The log is at: http://paste.openstack.org/show/480452/ ** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1521464 Title: create an instance with ephemeral disk or swap disk error Status in OpenStack Compute (nova): New Bug description: I boot an instance with an ephemeral disk (or swap disk) on Kilo, but it returns an HTTP 400 error: eg: nova boot --flavor 2 --image 5905bd7e-a87f-4856-8401-b8eb7211c84d --nic net-id=12ace164-d996-4261-9228-23ca0680f7a8 --ephemeral size=5,format=ext3 test_vm1 ERROR (BadRequest): Block Device Mapping is Invalid: Boot sequence for the instance and image/block device mapping combination is not valid. (HTTP 400) (Request-ID: req-b571662f-e554-49a7-979f-763f34b4b162) I think an instance with an ephemeral disk should be able to boot from the image, instead of failing with an invalid boot sequence. The log is at: http://paste.openstack.org/show/480452/ To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1521464/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1521466] [NEW] volume snapshots created by snapshotting a volume-backed instance are not deleted automatically
Public bug reported:

Steps to reproduce:
1. Create an instance holly-vm1 backed by a storage volume (XIV used here, with 2 volumes).
2. Create a snapshot of the instance from step 1.
3. The snapshot holly-vm1-snapshot appears in the image list.
4. Two volume snapshots (for holly-vm1-snapshot) appear on the volume snapshot page.
5. Delete the snapshot holly-vm1-snapshot.
6. The 2 volume snapshots (for holly-vm1-snapshot) are still there; they should have been deleted along with the snapshot image.

** Affects: glance
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1521466

Title:
  volume snapshots created by snapshotting a volume-backed instance are not deleted automatically

Status in Glance:
  New

Bug description:
  Steps to reproduce:
  1. Create an instance holly-vm1 backed by a storage volume (XIV used here, with 2 volumes).
  2. Create a snapshot of the instance from step 1.
  3. The snapshot holly-vm1-snapshot appears in the image list.
  4. Two volume snapshots (for holly-vm1-snapshot) appear on the volume snapshot page.
  5. Delete the snapshot holly-vm1-snapshot.
  6. The 2 volume snapshots (for holly-vm1-snapshot) are still there; they should have been deleted along with the snapshot image.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1521466/+subscriptions
[Yahoo-eng-team] [Bug 1521467] [NEW] snapshot of a volume-backed instance shows type as Image (not Snapshot) and size as 0
Public bug reported:

Steps to reproduce:
1. Create an instance holly-vm1 backed by a storage volume (XIV used here).
2. Create a snapshot of the instance from step 1.
3. List the images; there are 2 problems:
   a. the type of holly-vm1-snapshot is "Image", but it should be "Snapshot";
   b. the size of holly-vm1-snapshot is "0".
4. For comparison, I created a snapshot holly-vm4-snapshot from an instance backed by local disk and attached a screenshot. You can see the different values shown in the table.

Seen on the Kilo release of OpenStack; not sure whether the same happens on the latest code.

** Affects: glance
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1521467

Title:
  snapshot of a volume-backed instance shows type as Image (not Snapshot) and size as 0

Status in Glance:
  New

Bug description:
  Steps to reproduce:
  1. Create an instance holly-vm1 backed by a storage volume (XIV used here).
  2. Create a snapshot of the instance from step 1.
  3. List the images; there are 2 problems:
     a. the type of holly-vm1-snapshot is "Image", but it should be "Snapshot";
     b. the size of holly-vm1-snapshot is "0".
  4. For comparison, I created a snapshot holly-vm4-snapshot from an instance backed by local disk and attached a screenshot. You can see the different values shown in the table.

  Seen on the Kilo release of OpenStack; not sure whether the same happens on the latest code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1521467/+subscriptions
[Yahoo-eng-team] [Bug 1009215] Re: Resuming instances fails with "could not open /dev/net/tun: Operation not permitted"
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1009215

Title:
  Resuming instances fails with "could not open /dev/net/tun: Operation not permitted"

Status in neutron:
  Expired

Bug description:
  Hello all from rainy Kiev! I'm playing with a nova+quantum+openvswitch lab (without nova-volume) on two Ubuntu 12.04 nodes: one controller running everything except nova-compute, and a second dedicated compute node running nova-compute.

  In /etc/libvirt/qemu.conf I have:

  cgroup_device_acl = [ "/dev/null", "/dev/full", "/dev/zero", "/dev/random", "/dev/urandom", "/dev/ptmx", "/dev/kvm", "/dev/kqemu", "/dev/rtc", "/dev/hpet", "/dev/net/tun", ]

  But if I suspend instances, reboot the host, and try to resume them, I get this error in /var/log/nova/nova-compute.log:

  2012-05-27 05:30:01 TRACE nova.rpc.amqp kvm: -netdev tap,ifname=tap4362ce16-32,script=,id=hostnet0: could not configure /dev/net/tun (tap4362ce16-32): Operation not permitted
  2012-05-27 05:30:01 TRACE nova.rpc.amqp kvm: -netdev tap,ifname=tap4362ce16-32,script=,id=hostnet0: Device 'tap' could not be initialized

  Has anybody else seen this behavior?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1009215/+subscriptions
[Yahoo-eng-team] [Bug 1521467] Re: snapshot of a volume-backed instance shows type as Image (not Snapshot) and size as 0
When creating an image from a volume-backed instance, the image's size is 0. As far as I know this is not a bug: in this operation, Cinder creates a snapshot of the volume, and Glance then creates an image whose properties contain that snapshot's information. I'm not sure whether the behavior "the type is Image, not Snapshot" is a bug or not. In any case, this is a Nova problem, not a Glance one. :)

** Also affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1521467

Title:
  snapshot of a volume-backed instance shows type as Image (not Snapshot) and size as 0

Status in Glance:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  Steps to reproduce:
  1. Create an instance holly-vm1 backed by a storage volume (XIV used here).
  2. Create a snapshot of the instance from step 1.
  3. List the images; there are 2 problems:
     a. the type of holly-vm1-snapshot is "Image", but it should be "Snapshot";
     b. the size of holly-vm1-snapshot is "0".
  4. For comparison, I created a snapshot holly-vm4-snapshot from an instance backed by local disk and attached a screenshot. You can see the different values shown in the table.

  Seen on the Kilo release of OpenStack; not sure whether the same happens on the latest code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1521467/+subscriptions
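To illustrate the distinction the comment above draws: a dashboard that wanted to show "Snapshot" rather than "Image" could look at the image's properties instead of its byte size. The sketch below assumes nova's convention of tagging instance snapshots with an 'image_type' property and of storing a JSON 'block_device_mapping' property (alongside a zero byte size) on snapshots of volume-backed instances; the helper name and the example data are illustrative.

```python
import json


def classify_image(image):
    """Classify a Glance image dict the way a dashboard might.

    Returns a (kind, volume_backed) pair: kind is "Snapshot" when the
    image carries nova's image_type=snapshot property, and
    volume_backed is True when its block_device_mapping property
    references Cinder snapshots (which is why the byte size is 0).
    """
    props = image.get("properties", {})
    kind = "Snapshot" if props.get("image_type") == "snapshot" else "Image"
    bdm = json.loads(props.get("block_device_mapping", "[]"))
    volume_backed = any(entry.get("snapshot_id") for entry in bdm)
    return kind, volume_backed


# A snapshot of a volume-backed instance: size 0, but the properties
# still reveal what it is.
img = {
    "size": 0,
    "properties": {
        "image_type": "snapshot",
        "block_device_mapping": json.dumps(
            [{"boot_index": 0, "snapshot_id": "abc-123"}]
        ),
    },
}
print(classify_image(img))  # ('Snapshot', True)
```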
[Yahoo-eng-team] [Bug 1521489] [NEW] Support interchangeable datasources
Public bug reported:

It would be highly valuable for a user to be able to run cloud-init with custom user-data via the local datasource after deployment, e.g.:

1# juju deploy myservice
2# juju scp user-data.yaml myservice/0:/var/lib/cloud/seed/nocloud-net/user-data2
3# juju run --service myservice "/usr/bin/cloud-init --local -f /var/lib/cloud/seed/nocloud-net/user-data2"

Support for something along these lines would also allow charming up cloud-init, something I've been dreaming about for a while now! Thanks!

** Affects: cloud-init
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1521489

Title:
  Support interchangeable datasources

Status in cloud-init:
  New

Bug description:
  It would be highly valuable for a user to be able to run cloud-init with custom user-data via the local datasource after deployment, e.g.:

  1# juju deploy myservice
  2# juju scp user-data.yaml myservice/0:/var/lib/cloud/seed/nocloud-net/user-data2
  3# juju run --service myservice "/usr/bin/cloud-init --local -f /var/lib/cloud/seed/nocloud-net/user-data2"

  Support for something along these lines would also allow charming up cloud-init, something I've been dreaming about for a while now! Thanks!

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1521489/+subscriptions
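For context on step 2 of the request above: the NoCloud datasource reads its seed from a directory containing both a meta-data and a user-data file. The sketch below lays out such a seed, using a scratch directory in place of /var/lib/cloud/seed/nocloud-net; the instance-id and the cloud-config content are illustrative, not taken from the report.

```python
import tempfile
from pathlib import Path

# A scratch directory stands in for /var/lib/cloud/seed/nocloud-net.
seed = Path(tempfile.mkdtemp()) / "nocloud-net"
seed.mkdir(parents=True)

# NoCloud expects both files; meta-data must at least carry an
# instance-id so cloud-init can tell instances apart.
(seed / "meta-data").write_text("instance-id: myservice-0\n")
(seed / "user-data").write_text(
    "#cloud-config\n"
    "runcmd:\n"
    "  - echo 'custom post-deploy user-data ran'\n"
)

print(sorted(p.name for p in seed.iterdir()))  # ['meta-data', 'user-data']
```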
[Yahoo-eng-team] [Bug 1521466] Re: volume snapshots created by snapshotting a volume-backed instance are not deleted automatically
** Also affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1521466

Title:
  volume snapshots created by snapshotting a volume-backed instance are not deleted automatically

Status in Glance:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  Steps to reproduce:
  1. Create an instance holly-vm1 backed by a storage volume (XIV used here, with 2 volumes).
  2. Create a snapshot of the instance from step 1.
  3. The snapshot holly-vm1-snapshot appears in the image list.
  4. Two volume snapshots (for holly-vm1-snapshot) appear on the volume snapshot page.
  5. Delete the snapshot holly-vm1-snapshot.
  6. The 2 volume snapshots (for holly-vm1-snapshot) are still there; they should have been deleted along with the snapshot image.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1521466/+subscriptions
[Yahoo-eng-team] [Bug 1521491] Re: OSC doesn't support listing servers by flavor or image name
Well, sorry, wrong place... Please ignore this; it should be filed against OSC. :)

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Status: Confirmed => Invalid

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1521491

Title:
  OSC doesn't support listing servers by flavor or image name

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  The Nova API only supports listing servers filtered by flavor and image ID, so OSC only supports that. But the command's help text doesn't tell users that only IDs can be specified.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1521491/+subscriptions
[Yahoo-eng-team] [Bug 1521491] [NEW] OSC doesn't support listing servers by flavor or image name
Public bug reported:

The Nova API only supports listing servers filtered by flavor and image ID, so OSC only supports that. But the command's help text doesn't tell users that only IDs can be specified.

** Affects: nova
   Importance: Undecided
   Status: Invalid

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1521491

Title:
  OSC doesn't support listing servers by flavor or image name

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  The Nova API only supports listing servers filtered by flavor and image ID, so OSC only supports that. But the command's help text doesn't tell users that only IDs can be specified.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1521491/+subscriptions
[Yahoo-eng-team] [Bug 1521467] Re: snapshot of a volume-backed instance shows type as Image (not Snapshot) and size as 0
** Changed in: glance
   Status: New => Invalid

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1521467

Title:
  snapshot of a volume-backed instance shows type as Image (not Snapshot) and size as 0

Status in Glance:
  Invalid
Status in OpenStack Compute (nova):
  New

Bug description:
  Steps to reproduce:
  1. Create an instance holly-vm1 backed by a storage volume (XIV used here).
  2. Create a snapshot of the instance from step 1.
  3. List the images; there are 2 problems:
     a. the type of holly-vm1-snapshot is "Image", but it should be "Snapshot";
     b. the size of holly-vm1-snapshot is "0".
  4. For comparison, I created a snapshot holly-vm4-snapshot from an instance backed by local disk and attached a screenshot. You can see the different values shown in the table.

  Seen on the Kilo release of OpenStack; not sure whether the same happens on the latest code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1521467/+subscriptions
[Yahoo-eng-team] [Bug 1521466] Re: volume snapshots created by snapshotting a volume-backed instance are not deleted automatically
** Changed in: glance
   Status: New => Opinion

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1521466

Title:
  volume snapshots created by snapshotting a volume-backed instance are not deleted automatically

Status in Glance:
  Opinion
Status in OpenStack Compute (nova):
  New

Bug description:
  Steps to reproduce:
  1. Create an instance holly-vm1 backed by a storage volume (XIV used here, with 2 volumes).
  2. Create a snapshot of the instance from step 1.
  3. The snapshot holly-vm1-snapshot appears in the image list.
  4. Two volume snapshots (for holly-vm1-snapshot) appear on the volume snapshot page.
  5. Delete the snapshot holly-vm1-snapshot.
  6. The 2 volume snapshots (for holly-vm1-snapshot) are still there; they should have been deleted along with the snapshot image.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1521466/+subscriptions
[Yahoo-eng-team] [Bug 1521466] Re: volume snapshots created by snapshotting a volume-backed instance are not deleted automatically
Right now, if a user wants to delete a snapshot created from a volume-backed instance, they have to delete both the image in Glance and the snapshots in Cinder. So do you think that when the image is deleted, Glance should also delete the snapshots in Cinder?

** No longer affects: nova

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1521466

Title:
  volume snapshots created by snapshotting a volume-backed instance are not deleted automatically

Status in Glance:
  Opinion

Bug description:
  Steps to reproduce:
  1. Create an instance holly-vm1 backed by a storage volume (XIV used here, with 2 volumes).
  2. Create a snapshot of the instance from step 1.
  3. The snapshot holly-vm1-snapshot appears in the image list.
  4. Two volume snapshots (for holly-vm1-snapshot) appear on the volume snapshot page.
  5. Delete the snapshot holly-vm1-snapshot.
  6. The 2 volume snapshots (for holly-vm1-snapshot) are still there; they should have been deleted along with the snapshot image.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1521466/+subscriptions
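On the cleanup the comment above asks about: the Cinder snapshots left behind can at least be located from the snapshot image itself, assuming nova's convention of storing a JSON 'block_device_mapping' property on snapshots of volume-backed instances. A sketch of the lookup only (actually deleting them would be a Cinder API call, which is the open question); the helper name and example IDs are illustrative.

```python
import json


def cinder_snapshots_for_image(image_properties):
    """Collect the Cinder snapshot IDs referenced by a snapshot image.

    Entries in the image's block_device_mapping property carry a
    'snapshot_id'; those are the volume snapshots that step 6 of the
    report finds orphaned after the Glance image is deleted.
    """
    bdm = json.loads(image_properties.get("block_device_mapping", "[]"))
    return [e["snapshot_id"] for e in bdm if e.get("snapshot_id")]


# Two BDM entries, matching the two orphaned snapshots in the report.
props = {
    "block_device_mapping": json.dumps([
        {"boot_index": 0, "snapshot_id": "snap-1"},
        {"boot_index": -1, "snapshot_id": "snap-2"},
    ])
}
print(cinder_snapshots_for_image(props))  # ['snap-1', 'snap-2']
```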