[Yahoo-eng-team] [Bug 1964940] Re: Compute tests are failing with failed to reach ACTIVE status and task state "None" within the required time.

2023-02-14 Thread Alan Pevec
closing old promotion-blocker 
fixed in Neutron

** Changed in: tripleo
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1964940

Title:
  Compute tests are failing with failed to reach ACTIVE status and task
  state "None" within the required time.

Status in neutron:
  Fix Released
Status in tripleo:
  Invalid

Bug description:
  On Fs001 CentOS Stream 9 Wallaby, multiple compute server tempest tests are failing with the following error [1][2]:
  ```
  {1} tempest.api.compute.images.test_images.ImagesTestJSON.test_create_image_from_paused_server [335.060967s] ... FAILED

  Captured traceback:
  ~~~
  Traceback (most recent call last):
    File "/usr/lib/python3.9/site-packages/tempest/api/compute/images/test_images.py", line 99, in test_create_image_from_paused_server
      server = self.create_test_server(wait_until='ACTIVE')
    File "/usr/lib/python3.9/site-packages/tempest/api/compute/base.py", line 270, in create_test_server
      body, servers = compute.create_test_server(
    File "/usr/lib/python3.9/site-packages/tempest/common/compute.py", line 267, in create_test_server
      LOG.exception('Server %s failed to delete in time',
    File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
      self.force_reraise()
    File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
      raise self.value
    File "/usr/lib/python3.9/site-packages/tempest/common/compute.py", line 237, in create_test_server
      waiters.wait_for_server_status(
    File "/usr/lib/python3.9/site-packages/tempest/common/waiters.py", line 100, in wait_for_server_status
      raise lib_exc.TimeoutException(message)
  tempest.lib.exceptions.TimeoutException: Request timed out
  Details: (ImagesTestJSON:test_create_image_from_paused_server) Server 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1 failed to reach ACTIVE status and task state "None" within the required time (300 s). Server boot request ID: req-4930f047-7f5f-4d08-9ebb-8ac99b29ad7b. Current status: BUILD. Current task state: spawning.
  ```

  Below is the list of other tempest tests failing on the same job.[2]
  ```
  tempest.api.compute.images.test_images.ImagesTestJSON.test_create_image_from_paused_server[id-71bcb732-0261-11e7-9086-fa163e4fa634]
  tempest.api.compute.admin.test_volume.AttachSCSIVolumeTestJSON.test_attach_scsi_disk_with_config_drive[id-777e468f-17ca-4da4-b93d-b7dbf56c0494]
  tempest.api.compute.servers.test_delete_server.DeleteServersTestJSON.test_delete_server_while_in_attached_volume[id-d0f3f0d6-d9b6-4a32-8da4-23015dcab23c,volume]
  tempest.api.compute.servers.test_attach_interfaces.AttachInterfacesV270Test.test_create_get_list_interfaces[id-2853f095-8277-4067-92bd-9f10bd4f8e0c,network]
  tempest.api.compute.servers.test_delete_server.DeleteServersTestJSON.test_delete_server_while_in_shelved_state[id-bb0cb402-09dd-4947-b6e5-5e7e1cfa61ad]
  setUpClass (tempest.api.compute.images.test_images_oneserver_negative.ImagesOneServerNegativeTestJSON)
  tempest.api.compute.servers.test_device_tagging.TaggedBootDevicesTest_v242.test_tagged_boot_devices[id-a2e65a6c-66f1-4442-aaa8-498c31778d96,image,network,slow,volume]
  tempest.api.compute.servers.test_delete_server.DeleteServersTestJSON.test_delete_server_while_in_suspended_state[id-1f82ebd3-8253-4f4e-b93f-de9b7df56d8b]
  tempest.api.compute.servers.test_attach_interfaces.AttachInterfacesTestJSON.test_create_list_show_delete_interfaces_by_network_port[id-73fe8f02-590d-4bf1-b184-e9ca81065051,network]
  setUpClass (tempest.api.compute.servers.test_server_rescue.ServerRescueTestJSONUnderV235)
  ```

  Here is the traceback from nova-compute logs [3],
  ```
  2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [req-4930f047-7f5f-4d08-9ebb-8ac99b29ad7b d5ea6c724785473b8ea1104d70fb0d14 64c7d31d84284a28bc9aaa4eaad2b9fb - default default] [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1] Instance failed to spawn: nova.exception.VirtualInterfaceCreateException: Virtual Interface creation failed
  2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1] Traceback (most recent call last):
  2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 7231, in _create_guest_with_network
  2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]     guest = self._create_guest(
  2022-03-15 09:05:39.011 2 ERROR nova.compute.manager [instance: 6d1d8906-46fd-42ad-8b4e-0f89adb25ed1]   File "/usr/lib64/python3.9/contextlib.py", line 126, in __exit__
  2022-03-15 09:05:39.011 2 ERROR nova.compute.manager

[Yahoo-eng-team] [Bug 1960902] Re: Wallaby ovb fs001 failing on tempest.api.compute.servers.test_delete_server.DeleteServersTestJSON.test_delete_server_while_in_building_state

2022-02-16 Thread Alan Pevec
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1960902

Title:
  Wallaby ovb fs001 failing on
  
tempest.api.compute.servers.test_delete_server.DeleteServersTestJSON.test_delete_server_while_in_building_state

Status in OpenStack Compute (nova):
  New
Status in tripleo:
  Triaged

Bug description:
  Reporting because the test fails and the failure message says this is
  a nova internal error that should be reported as a bug:

  
  Logs:
  
https://logserver.rdoproject.org/49/39449/2/check/periodic-tripleo-ci-centos-9-ovb-3ctlr_1comp-featureset001-wallaby/6607433/logs/

  
  Error on tempest side:

  ft1.3: 
tempest.api.compute.servers.test_delete_server.DeleteServersTestJSON.test_delete_server_while_in_building_state[id-9e6e0c87-3352-42f7-9faf-5d6210dbd159]testtools.testresult.real._StringException:
 pythonlogging:'': {{{
  2022-02-14 17:49:07,053 254588 INFO [tempest.lib.common.rest_client] 
Request (DeleteServersTestJSON:test_delete_server_while_in_building_state): 201 
POST https://10.0.0.5:13000/v3/auth/tokens 0.474s
  2022-02-14 17:49:07,054 254588 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json'}
  Body: 
  Response - Headers: {'date': 'Mon, 14 Feb 2022 22:49:06 GMT', 'server': 
'Apache', 'content-length': '5989', 'x-subject-token': '', 'vary': 
'X-Auth-Token', 'x-openstack-request-id': 
'req-07da4513-88c6-4ade-a4f7-b1f8b75595c2', 'content-type': 'application/json', 
'connection': 'close', 'status': '201', 'content-location': 
'https://10.0.0.5:13000/v3/auth/tokens'}
  Body: b'{"token": {"methods": ["password"], "user": {"domain": {"id": 
"default", "name": "Default"}, "id": "10d3ad43a61641ce8182ca7275eadae3", 
"name": "tempest-DeleteServersTestJSON-1760533763-project", 
"password_expires_at": null}, "audit_ids": ["lZA50RCXTlqRxKgejUylog"], 
"expires_at": "2022-02-14T23:49:07.00Z", "issued_at": 
"2022-02-14T22:49:07.00Z", "project": {"domain": {"id": "default", "name": 
"Default"}, "id": "833c1ddd2dfb4db8a31719eba1705a4b", "name": 
"tempest-DeleteServersTestJSON-1760533763"}, "is_domain": false, "roles": 
[{"id": "69eeb16b59ff4b6f9cb6e2eb34025513", "name": "reader"}, {"id": 
"946f9c5be3ca413c9f8ae3261ed391c5", "name": "member"}], "catalog": 
[{"endpoints": [{"id": "20fa93c3887648949dfeb21c594b7c0b", "interface": 
"admin", "region_id": "regionOne", "url": "http://172.17.0.173:9696;, "region": 
"regionOne"}, {"id": "a710cf1fd64e4293bb60d54e29074a99", "interface": "public", 
"region_id": "regionOne", "url": "https://10.0.0.5:13696;, "region": 
"regionOne"}, {"id": "f7f132e73d1243f984bd2d4a6db0bedb", "interface": 
"internal", "region_id": "regionOne", "url": "http://172.17.0.173:9696;, 
"region": "regionOne"}], "id": "0bce0bee2c80453d9b8fe1d47b36a2d0", "type": 
"network", "name": "neutron"}, {"endpoints": [{"id": 
"3c2c1cdd6852421d9905869844fabd34", "interface": "internal", "region_id": 
"regionOne", "url": "http://172.17.0.173:8000/v1;, "region": "regionOne"}, 
{"id": "d948ad8956a642d5a016f164c8d53c8f", "interface": "admin", "region_id": 
"regionOne", "url": "http://172.17.0.173:8000/v1;, "region": "regionOne"}, 
{"id": "e7fa7141606c428a9c582ecd93100f3e", "interface": "public", "region_id": 
"regionOne", "url": "https://10.0.0.5:13005/v1;, "region": "regionOne"}], "id": 
"247706a8fccf414e8e79aed9573e4e4c", "type": "cloudformation", "name": 
"heat-cfn"}, {"endpoints": [{"id": "3847d57ab18a413a99629fabf6cfbf95", 
"interface": "internal", "region_id": "regionOne", "url": 
"http://172.17.0.173:8778/placement;, "region": "regionOne"}, {"id": 
"73f48d783ddd4c658399e9c5ca4e4524", "interface": "admin", "region_id": 
"regionOne", "url": "http://172.17.0.173:8778/placement;, "region": 
"regionOne"}, {"id": "ae5a64a560c54899a1d56ec8755e4692", "interface": "public", 
"region_id": "regionOne", "url": "https://10.0.0.5:13778/placement;, "region": 
"regionOne"}], "id": "3b3d32f96dc2455fa19ebaa1fe46a318", "type": "placement", 
"name": "placement"}, {"endpoints": [{"id": "47412821c43b464790d3b9310a27f298", 
"interface": "internal", "region_id": "regionOne", "url": 
"http://172.17.0.173:8776/v3/833c1ddd2dfb4db8a31719eba1705a4b;, "region": 
"regionOne"}, {"id": "ae54f76d2c2a4e518ac09c38094e36d0", "interface": "public", 
"region_id": "regionOne", "url": 
"https://10.0.0.5:13776/v3/833c1ddd2dfb4db8a31719eba1705a4b;, "region": 
"regionOne"}, {"id": "cead87f50b9040658aa8897f38cb8ff0", "interface": "admin", 
"region_id": "regionOne", "url": 
"http://172.17.0.173:8776/v3/833c1ddd2dfb4db8a31719eba1705a4b;, "region": 
"regionOne"}], "id": "53dc49ca65b447ba943e5def068e8859", "type": "volumev3", 
"name": "cinderv3"}, {"endpoints": [{"id": "1b82f0b12b474c75bb9c3e4d31fe5ec4", 
"interface": "public", "region_id": "regionOne", "url": 

[Yahoo-eng-team] [Bug 1749747] Re: Queens promotion - error on ControllerDeployment_Step3.0 - error running glance_api_db_sync

2018-02-20 Thread Alan Pevec
*** This bug is a duplicate of bug 1749640 ***
https://bugs.launchpad.net/bugs/1749640

** This bug is no longer a duplicate of bug 1749641
   Overcloud deployment failing in promotion jobs at glance-manage db_sync
** This bug has been marked a duplicate of bug 1749640
   db sync fails for mysql while adding triggers

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1749747

Title:
  Queens promotion - error on ControllerDeployment_Step3.0 - error
  running glance_api_db_sync

Status in Glance:
  New
Status in tripleo:
  Triaged

Bug description:
  Multinode jobs are failing in the Queens promotion pipeline with the
  following error:

  overcloud.AllNodesDeploySteps.ControllerDeployment_Step3.0:
    resource_type: OS::Heat::StructuredDeployment
    physical_resource_id: f8c4b43d-b5b0-48ea-a124-ecc2a10be1be
    status: CREATE_FAILED
    status_reason: |
  Error: resources[0]: Deployment to server failed: deploy_status_code : 
Deployment exited with non-zero status code: 2

  .

  Error running ['docker', 'run', '--name', 'glance_api_db_sync', '--
  label', 'config_id=tripleo_step3', '--label',
  'container_name=glance_api_db_sync', '--label', 'managed_by=paunch', '
  --label', 'config_data={\"image\": \"192.168.24.1:8787/queens/centos-
  binary-glance-api:813c7290c3a8d77eef397526d1ea6dc108943b0d_90604cd8\",
  \"environment\": ...],

  ...
   "DBError: (pymysql.err.InternalError) (1419, u'You do not have the SUPER 
privilege and binary logging is enabled (you *might* want to use the less safe 
log_bin_trust_function_creators variable)') ...

  see full logs at:

  https://logs.rdoproject.org/openstack-periodic/periodic-tripleo-ci-
  centos-7-multinode-1ctlr-
  
featureset017-queens/3977a1f/undercloud/home/jenkins/failed_deployment_list.log.txt.gz

  and

  https://logs.rdoproject.org/openstack-periodic/periodic-tripleo-ci-
  centos-7-multinode-1ctlr-
  
featureset010-queens/904f18d/undercloud/home/jenkins/overcloud_deploy.log.txt.gz#_2018-02-15_15_57_01

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1749747/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1747690] Re: master promotion: Failed to call refresh: glance-manage db_sync

2018-02-07 Thread Alan Pevec
** Changed in: tripleo
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1747690

Title:
  master promotion: Failed to call refresh: glance-manage  db_sync

Status in Glance:
  Fix Released
Status in tripleo:
  Invalid

Bug description:
  periodic-tripleo-centos-7-master-containers-build

  undercloud install fails on glance_manage db_sync:

  https://logs.rdoproject.org/openstack-periodic/periodic-tripleo-
  centos-7-master-containers-
  
build/e7c61cd/undercloud/home/jenkins/undercloud_install.log.txt.gz#_2018-02-06_14_23_33

  2018-02-06 14:23:33 | 2018-02-06 14:23:33,117 INFO: Error: 
/Stage[main]/Glance::Db::Sync/Exec[glance-manage db_sync]: Failed to call 
refresh: glance-manage  db_sync returned 1 instead of one of [0]
  2018-02-06 14:23:33 | 2018-02-06 14:23:33,118 INFO: Error: 
/Stage[main]/Glance::Db::Sync/Exec[glance-manage db_sync]: glance-manage  
db_sync returned 1 instead of one of [0]

  
  
https://logs.rdoproject.org/openstack-periodic/periodic-tripleo-centos-7-master-containers-build/e7c61cd/undercloud/var/log/extra/errors.txt.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1747690/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1742827] [NEW] nova-scheduler reports dead compute nodes but nova-compute is enabled and up

2018-01-11 Thread Alan Pevec
Public bug reported:

(originally reported by David Manchado in
https://bugzilla.redhat.com/show_bug.cgi?id=1533196 )

Description of problem:
We are seeing that the nova scheduler is removing compute nodes because it considers them dead, but openstack compute service list reports nova-compute to be up and running.
We can see entries in the nova-scheduler log with the following pattern:
- Removing dead compute node XXX from scheduler
- Filter ComputeFilter returned 0 hosts
- Filtering removed all hosts for the request with instance ID 
'11feeba9-f46c-416d-a97e-7c0c9d565b5a'. Filter results: 
['AggregateInstanceExtraSpecsFilter: (start: 19, end: 2)', 
'AggregateCoreFilter: (start: 2, end: 2)', 'AggregateDiskFilter: (start: 2, 
end: 2)', 'AggregateRamFilter: (start: 2, end: 2)', 'RetryFilter: (start: 2, 
end: 2)', 'AvailabilityZoneFilter: (start: 2, end: 2)', 'ComputeFilter: (start: 
2, end: 0)']

Version-Release number of selected component (if applicable):
Ocata

How reproducible:
N/A

Actual results:
Instances are not being spawned reporting 'no valid host found' because of 

Additional info:
This has been happening for a week.
We did an upgrade from Newton three weeks ago.
We have also done a minor update and the issue still persists.

Nova related RPMs
openstack-nova-scheduler-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch
python2-novaclient-7.1.2-1.el7.noarch
openstack-nova-novncproxy-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch
openstack-nova-cert-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch
openstack-nova-console-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch
openstack-nova-conductor-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch
openstack-nova-common-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch
openstack-nova-compute-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch
openstack-nova-placement-api-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch
puppet-nova-10.4.2-0.2018010220.f4bc1f0.el7.centos.noarch
openstack-nova-api-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch
python-nova-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1742827

Title:
  nova-scheduler reports dead compute nodes but nova-compute is enabled
  and up

Status in OpenStack Compute (nova):
  New

Bug description:
  (originally reported by David Manchado in
  https://bugzilla.redhat.com/show_bug.cgi?id=1533196 )

  Description of problem:
  We are seeing that the nova scheduler is removing compute nodes because it considers them dead, but openstack compute service list reports nova-compute to be up and running.
  We can see entries in the nova-scheduler log with the following pattern:
  - Removing dead compute node XXX from scheduler
  - Filter ComputeFilter returned 0 hosts
  - Filtering removed all hosts for the request with instance ID 
'11feeba9-f46c-416d-a97e-7c0c9d565b5a'. Filter results: 
['AggregateInstanceExtraSpecsFilter: (start: 19, end: 2)', 
'AggregateCoreFilter: (start: 2, end: 2)', 'AggregateDiskFilter: (start: 2, 
end: 2)', 'AggregateRamFilter: (start: 2, end: 2)', 'RetryFilter: (start: 2, 
end: 2)', 'AvailabilityZoneFilter: (start: 2, end: 2)', 'ComputeFilter: (start: 
2, end: 0)']

  Version-Release number of selected component (if applicable):
  Ocata

  How reproducible:
  N/A

  Actual results:
  Instances are not being spawned reporting 'no valid host found' because of 

  Additional info:
  This has been happening for a week.
  We did an upgrade from Newton three weeks ago.
  We have also done a minor update and the issue still persists.

  Nova related RPMs
  openstack-nova-scheduler-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch
  python2-novaclient-7.1.2-1.el7.noarch
  openstack-nova-novncproxy-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch
  openstack-nova-cert-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch
  openstack-nova-console-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch
  openstack-nova-conductor-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch
  openstack-nova-common-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch
  openstack-nova-compute-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch
  openstack-nova-placement-api-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch
  puppet-nova-10.4.2-0.2018010220.f4bc1f0.el7.centos.noarch
  openstack-nova-api-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch
  python-nova-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1742827/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1742826] [NEW] Nova reports wrong quota usage

2018-01-11 Thread Alan Pevec
Public bug reported:

(originally reported by David Manchado in
https://bugzilla.redhat.com/show_bug.cgi?id=1528643 )

Description of problem:
Nova reports inaccurate quota usage. This can even prevent new instances from being spawned when the project should still have room for more resources.


Version-Release number of selected component (if applicable):
Ocata.
Nova related RPMs:
openstack-nova-scheduler-15.0.9-0.20171201203754.bbfc423.el7.centos.noarch
openstack-nova-console-15.0.9-0.20171201203754.bbfc423.el7.centos.noarch
python-nova-15.0.9-0.20171201203754.bbfc423.el7.centos.noarch
openstack-nova-conductor-15.0.9-0.20171201203754.bbfc423.el7.centos.noarch
puppet-nova-10.4.2-0.20171127233709.eb1fafa.el7.centos.noarch
openstack-nova-api-15.0.9-0.20171201203754.bbfc423.el7.centos.noarch
openstack-nova-compute-15.0.9-0.20171201203754.bbfc423.el7.centos.noarch
openstack-nova-placement-api-15.0.9-0.20171201203754.bbfc423.el7.centos.noarch
openstack-nova-cert-15.0.9-0.20171201203754.bbfc423.el7.centos.noarch
openstack-nova-common-15.0.9-0.20171201203754.bbfc423.el7.centos.noarch
python2-novaclient-7.1.2-1.el7.noarch
openstack-nova-novncproxy-15.0.9-0.20171201203754.bbfc423.el7.centos.noarch

How reproducible:
Not sure. We got to this situation during an upgrade from Newton to Ocata.
I guess it might be due to instance deletion while some services like galera 
and/or rabbit were not behaving properly

Actual results:
Nova reports a given project to be using 39 instances while openstack server 
list reports 17.

openstack limits show --absolute --project  | grep Instances
| maxTotalInstances| 48 |
| totalInstancesUsed   | 39 |

openstack server list --project   --format csv | wc -l
18 (note there is an extra line for the csv header)

Expected results:
openstack limits show (currently inaccurate) should match openstack server list (accurate)

Additional info:
While doing some troubleshooting on nova.quota_usages I have found several 
projects and resources defined more than once.
SELECT * FROM (SELECT project_id,resource,COUNT(*) times FROM nova.quota_usages 
GROUP BY project_id, resource) as T WHERE times > 1;
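
For illustration, one quick way to see the mismatch programmatically is to compare the usage reported by "openstack limits show" with the number of servers actually listed. This is a hypothetical helper built on the same CLI commands shown above (and assumes their CSV output format); it is not part of nova:

```
import csv
import io
import subprocess

def quota_vs_reality(project_id):
    # Reported usage, from "openstack limits show --absolute".
    limits_csv = subprocess.check_output(
        ['openstack', 'limits', 'show', '--absolute',
         '--project', project_id, '-f', 'csv'],
        universal_newlines=True)
    reported = {row['Name']: int(row['Value'])
                for row in csv.DictReader(io.StringIO(limits_csv))}

    # Actual instances, counted from "openstack server list".
    servers_csv = subprocess.check_output(
        ['openstack', 'server', 'list', '--project', project_id, '-f', 'csv'],
        universal_newlines=True)
    actual = sum(1 for _ in csv.DictReader(io.StringIO(servers_csv)))

    return reported.get('totalInstancesUsed'), actual
```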

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1742826

Title:
  Nova reports wrong quota usage

Status in OpenStack Compute (nova):
  New

Bug description:
  (originally reported by David Manchado in
  https://bugzilla.redhat.com/show_bug.cgi?id=1528643 )

  Description of problem:
  Nova reports inaccurate quota usage. This can even prevent new instances from being spawned when the project should still have room for more resources.

  
  Version-Release number of selected component (if applicable):
  Ocata.
  Nova related RPMs:
  openstack-nova-scheduler-15.0.9-0.20171201203754.bbfc423.el7.centos.noarch
  openstack-nova-console-15.0.9-0.20171201203754.bbfc423.el7.centos.noarch
  python-nova-15.0.9-0.20171201203754.bbfc423.el7.centos.noarch
  openstack-nova-conductor-15.0.9-0.20171201203754.bbfc423.el7.centos.noarch
  puppet-nova-10.4.2-0.20171127233709.eb1fafa.el7.centos.noarch
  openstack-nova-api-15.0.9-0.20171201203754.bbfc423.el7.centos.noarch
  openstack-nova-compute-15.0.9-0.20171201203754.bbfc423.el7.centos.noarch
  openstack-nova-placement-api-15.0.9-0.20171201203754.bbfc423.el7.centos.noarch
  openstack-nova-cert-15.0.9-0.20171201203754.bbfc423.el7.centos.noarch
  openstack-nova-common-15.0.9-0.20171201203754.bbfc423.el7.centos.noarch
  python2-novaclient-7.1.2-1.el7.noarch
  openstack-nova-novncproxy-15.0.9-0.20171201203754.bbfc423.el7.centos.noarch

  How reproducible:
  Not sure. We got to this situation during an upgrade from Newton to Ocata.
  I guess it might be due to instance deletion while some services like galera 
and/or rabbit were not behaving properly

  Actual results:
  Nova reports a given project to be using 39 instances while openstack server 
list reports 17.

  openstack limits show --absolute --project  | grep Instances
  | maxTotalInstances| 48 |
  | totalInstancesUsed   | 39 |

  openstack server list --project   --format csv | wc -l
  18 (note there is an extra line for the csv header)

  Expected results:
  openstack limits show (currently inaccurate) should match openstack server list (accurate)

  Additional info:
  While doing some troubleshooting on nova.quota_usages I have found several 
projects and resources defined more than once.
  SELECT * FROM (SELECT project_id,resource,COUNT(*) times FROM 
nova.quota_usages GROUP BY project_id, resource) as T WHERE times > 1;

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1742826/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : 

[Yahoo-eng-team] [Bug 1696094] Re: CI: ovb-ha promotion job fails with 504 gateway timeout

2017-06-06 Thread Alan Pevec
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1696094

Title:
  CI: ovb-ha promotion job fails with 504 gateway timeout

Status in neutron:
  New
Status in tripleo:
  Triaged

Bug description:
  http://logs.openstack.org/15/359215/106/experimental-tripleo/gate-
  tripleo-ci-centos-7-ovb-
  ha/2ea94ab/console.html#_2017-06-05_23_52_38_539282

  2017-06-05 23:50:34.148537 | 
+---+--+
  2017-06-05 23:50:35.545475 | neutron CLI is deprecated and will be removed in 
the future. Use openstack CLI instead.
  2017-06-05 23:52:38.539282 | 504 Gateway Time-out
  2017-06-05 23:52:38.539408 | The server didn't respond in time.
  2017-06-05 23:52:38.539437 | 

  It happens at the point where subnet creation should take place.
  I see an ovs-vsctl failure in the logs, but I am not sure it isn't a red herring.

  http://logs.openstack.org/15/359215/106/experimental-tripleo/gate-
  tripleo-ci-centos-7-ovb-ha/2ea94ab/logs/controller-1-tripleo-
  ci-b-bar/var/log/messages

  Jun  5 23:48:22 localhost ovs-vsctl: ovs|1|vsctl|INFO|Called as 
/bin/ovs-vsctl --timeout=5 --id=@manager -- create Manager 
"target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
  Jun  5 23:48:22 localhost ovs-vsctl: ovs|2|db_ctl_base|ERR|transaction 
error: {"details":"Transaction causes multiple rows in \"Manager\" table to 
have identical values (\"ptcp:6640:127.0.0.1\") for index on column \"target\". 
 First row, with UUID 7e2b866a-40d5-4f9c-9e08-0be3bb34b199, existed in the 
database before this transaction and was not modified by the transaction.  
Second row, with UUID 49488cff-271a-457a-b1e7-e6ca3da6f069, was inserted by 
this transaction.","error":"constraint violation"}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1696094/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1656276] Re: Error running nova-manage cell_v2 simple_cell_setup when configuring nova with puppet-nova

2017-01-14 Thread Alan Pevec
** Also affects: packstack
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1656276

Title:
  Error running nova-manage  cell_v2 simple_cell_setup when configuring
  nova with puppet-nova

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) newton series:
  New
Status in Packstack:
  New
Status in puppet-nova:
  New
Status in tripleo:
  Triaged

Bug description:
  When installing and configuring nova with puppet-nova (with either
  tripleo, packstack or puppet-openstack-integration), we are getting
  following errors:

  Debug: Executing: '/usr/bin/nova-manage  cell_v2 simple_cell_setup 
--transport-url=rabbit://guest:guest@172.19.2.159:5672/?ssl=0'
  Debug: 
/Stage[main]/Nova::Db::Sync_cell_v2/Exec[nova-cell_v2-simple-cell-setup]/returns:
 Sleeping for 5 seconds between tries
  Notice: 
/Stage[main]/Nova::Db::Sync_cell_v2/Exec[nova-cell_v2-simple-cell-setup]/returns:
 Cell0 is already setup.
  Notice: 
/Stage[main]/Nova::Db::Sync_cell_v2/Exec[nova-cell_v2-simple-cell-setup]/returns:
 No hosts found to map to cell, exiting.

  The issue seems to be that it's running "nova-manage  cell_v2
  simple_cell_setup" as part of the nova database initialization when no
  compute nodes have been created but it returns 1 in that case [1].
  However, note that the previous steps (Cell0 mapping and schema
  migration) were successfully run.

  I think for nova bootstrap a reasonable orchestrated workflow would
  be:

  1. Create required databases (including the one for cell0).
  2. Nova db sync
  3. nova cell0 mapping and schema creation.
  4. Adding compute nodes
  5. mapping compute nodes (by running nova-manage cell_v2 discover_hosts)

  For step 3 we'd need to get simple_cell_setup to return 0 when not
  having compute nodes, or having a different command.

  With current behavior of nova-manage the only working workflow we can
  do is:

  1. Create required databases (including the one for cell0).
  2. Nova db sync
  3. Adding all compute nodes
  4. nova cell0 mapping and schema creation with "nova-manage cell_v2 
simple_cell_setup".

  Am I right?, Is there any better alternative?

  
  [1] 
https://github.com/openstack/nova/blob/master/nova/cmd/manage.py#L1112-L1114
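
  As an illustration of the orchestration problem described above, a deployment tool following the proposed order might look roughly like the sketch below. The nova-manage sub-commands are the ones discussed in this report; treating the "no compute hosts yet" exit code as non-fatal is an assumption about the desired behaviour, not what simple_cell_setup does today:

  ```
  import subprocess

  def bootstrap_nova_cells(transport_url):
      # Sync the main nova database schema.
      subprocess.check_call(['nova-manage', 'db', 'sync'])

      # Map cell0 and create its schema. simple_cell_setup currently also
      # tries to map compute hosts and returns 1 when none exist yet, so an
      # orchestrator has to tolerate that exit code at this stage.
      rc = subprocess.call(['nova-manage', 'cell_v2', 'simple_cell_setup',
                            '--transport-url', transport_url])
      if rc not in (0, 1):
          raise RuntimeError('simple_cell_setup failed with exit code %d' % rc)

      # Later, once compute nodes have registered:
      #   nova-manage cell_v2 discover_hosts
  ```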

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1656276/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588003] Re: Skip host to guest CPU compatibility check for emulated (QEMU "TCG" mode) guests during live migration

2016-06-07 Thread Alan Pevec
** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1588003

Title:
  Skip host to guest CPU compatibility check for emulated (QEMU "TCG"
  mode) guests  during live migration

Status in OpenStack Compute (nova):
  Invalid
Status in OpenStack Compute (nova) liberty series:
  In Progress
Status in OpenStack Compute (nova) mitaka series:
  Fix Committed

Bug description:
  The _compare_cpu() method of Nova's libvirt driver performs guest vCPU
  model to destination host CPU model comparison (during live migration)
  even in the case of emulated (QEMU "TCG" mode) guests, where the CPU
  instructions are emulated completely in software and no hardware
  acceleration, such as KVM, is involved.

  From nova/virt/libvirt/driver.py:

     [...]
     5464     def _compare_cpu(self, guest_cpu, host_cpu_str, instance):
     5465         """Check the host is compatible with the requested CPU
     [...]
     5481         if CONF.libvirt.virt_type not in ['qemu', 'kvm']:
     5482             return
     5483

  The fix is to skip the comparison for the 'qemu' case shown above.
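
  A rough sketch of the intended behaviour is below (illustration of the idea only; the actual patch is the review linked next):

  ```
  def should_compare_cpu(virt_type):
      # Illustration only: CPU model comparison is only meaningful for
      # hardware-accelerated guests. Emulated (QEMU "TCG" mode) guests
      # execute instructions in software and do not depend on host CPU
      # features, so the check can be skipped for them.
      return virt_type == 'kvm'
  ```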

  Fix for master branch is here:

  https://review.openstack.org/#/c/323467/ -- 
  libvirt: Skip CPU compatibility check for emulated guests

  
  This bug is for stable branch backports: Mitaka and Liberty.

  [Thanks: Daniel P. Berrange for the pointer.]

  
  Related context and references
  --

  (a) This upstream discussion thread where using the custom CPU model 
  ("gate64") is causing live migration CI jobs to fail.

  http://lists.openstack.org/pipermail/openstack-dev/2016-May/095811.html 
  -- "[gate] [nova] live migration, libvirt 1.3, and the gate"

  (b) Gate DevStack change to avoid setting the custom CPU model in 
  nova.conf

  https://review.openstack.org/#/c/320925/4 -- don't set libvirt 
  cpu_model

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1588003/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1380965] Re: Floating IPs don't have instance ids in Juno

2015-11-24 Thread Alan Pevec
** Tags removed: in-stable-juno juno-backport-potential

** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1380965

Title:
  Floating IPs don't have instance ids in Juno

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in nova package in Ubuntu:
  Fix Released

Bug description:
  In Icehouse, when I associate a floating IP with an instance, the Nova
  API for listing floating IPs (/os-floating-ips) gives you the instance
  ID of the associated instance:

    {"floating_ips": [{"instance_id": "82c2aff3-511b-4e9e-8353-79da86281dfd", "ip": "10.1.151.1", "fixed_ip": "10.10.0.4", "id": "8113e71b-7194-447a-ad37-98182f7be80a", "pool": "ext_net"}]}

  With the latest RC for Juno, the instance_id always seems to be null:

    {"floating_ips": [{"instance_id": null, "ip": "10.96.201.0", "fixed_ip": "10.10.0.8", "id": "00ffd9a0-5afe-4221-8913-7e275da7f82a", "pool": "ext_net"}]}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1380965/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1379077] Re: Tenants can be created with invalid ids

2015-11-24 Thread Alan Pevec
** Changed in: keystone/juno
   Status: Confirmed => Won't Fix

** Tags removed: juno-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1379077

Title:
  Tenants can be created with invalid ids

Status in OpenStack Identity (keystone):
  In Progress
Status in OpenStack Identity (keystone) icehouse series:
  Won't Fix
Status in OpenStack Identity (keystone) juno series:
  Won't Fix
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  When creating a new tenant, there is an optional argument 'id' that
  may be passed:

  
https://github.com/openstack/keystone/blob/9025b64a8f2bf5cf01a18453d6728e081bd2c3b9/keystone/assignment/controllers.py#L114

  If not passed, this just creates a uuid and proceeds.  If a value is
  passed, it will use that value.  So a user with priv's to create a
  tenant can pass something like "../../../../../" as the id.  If this
  is done, then the project can't be deleted without manually removing
  the value from the database. This can lead to a DoS that could fill
  the db and take down the cloud, in the worst of circumstances.

  I believe the proper fix here would be to just remove this feature
  altogether.  But this is because I'm not clear about why we would ever
  want to allow someone to set the id manually.  If there's a valid use
  case here, then we should at least do some input validation.
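
  As a sketch of what such input validation could look like (a hypothetical helper, not keystone's actual code), the id could be restricted to a UUID-like pattern before it is stored:

  ```
  import re
  import uuid

  # Hypothetical validation helper: accept only hex/UUID-style ids so values
  # such as "../../../../../" are rejected instead of ending up in the DB.
  _SAFE_ID = re.compile(r'^[a-fA-F0-9-]{1,64}$')

  def normalize_project_id(project_id=None):
      if project_id is None:
          return uuid.uuid4().hex
      if not _SAFE_ID.match(project_id):
          raise ValueError('Invalid project id: %r' % project_id)
      return project_id
  ```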

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1379077/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1408498] Re: Can't delete when control Juno and compute icehouse

2015-11-24 Thread Alan Pevec
** Tags removed: juno-backport-potential

** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1408498

Title:
  Can't delete when control Juno and compute icehouse

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) juno series:
  Won't Fix

Bug description:
  When I have a Juno control node and Icehouse compute and network nodes,
  deleting an instance doesn't work.

  This is due to the Fixed IP object having an embedded version of the
  network object that is too new for Icehouse. This causes an infinite
  loop.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1408498/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1415087] Re: [OSSA 2015-011] Format-guessing and file disclosure in image convert (CVE-2015-1850, CVE-2015-1851)

2015-11-19 Thread Alan Pevec
** Changed in: cinder/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1415087

Title:
  [OSSA 2015-011] Format-guessing and file disclosure in image convert
  (CVE-2015-1850, CVE-2015-1851)

Status in Cinder:
  Fix Released
Status in Cinder icehouse series:
  Fix Released
Status in Cinder juno series:
  Fix Released
Status in Cinder kilo series:
  Fix Released
Status in OpenStack Compute (nova):
  Incomplete
Status in OpenStack Security Advisory:
  Fix Released

Bug description:
  Cinder does not provide the input format to several calls of "qemu-img
  convert". This allows an attacker to exploit format guessing by
  providing a volume with a qcow2 signature. If this signature references
  a base file, that file will be read by a process running as root and
  embedded in the output. This bug is similar to CVE-2013-1922.

  Tested with: lvm backed volume storage, it may apply to others as well
  Steps to reproduce:
  - create volume and attach to vm,
  - create a qcow2 signature with base-file[1] from within the vm and
  - trigger upload to glance with "cinder upload-to-image --disk-type qcow2"[2].
  The image uploaded to glance will have /etc/passwd from the cinder-volume 
host embedded.
  Affected versions: tested on 2014.1.3, found while reading 2014.2.1

  Fix: Always specify both input "-f" and output format "-O" to "qemu-
  img convert". The code is in module cinder.image.image_utils.

  Bastian Blank

  [1]: qemu-img create -f qcow2 -b /etc/passwd /dev/vdb
  [2]: The disk-type != raw triggers the use of "qemu-img convert"
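
  A minimal sketch of the fix described above (an assumed wrapper, not the actual cinder.image.image_utils code) always passes both formats explicitly:

  ```
  import subprocess

  def convert_image(source, dest, src_format, out_format):
      # Passing -f disables qemu-img's format guessing on the input, so a
      # crafted qcow2 signature in a raw volume cannot pull in a base file.
      subprocess.check_call([
          'qemu-img', 'convert',
          '-f', src_format,   # explicit input format
          '-O', out_format,   # explicit output format
          source, dest,
      ])
  ```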

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1415087/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460741] Re: security groups iptables can block legitimate traffic as INVALID

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460741

Title:
  security groups iptables can block legitimate traffic as INVALID

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  The iptables implementation of security groups includes a default rule
  to drop any INVALID packets (according to the Linux connection state
  tracking system.)  It looks like this:

  -A neutron-openvswi-od0518220-e -m state --state INVALID -j DROP

  This is placed near the top of the rule stack, before any security
  group rules added by the user.  See:

  
https://github.com/openstack/neutron/blob/stable/kilo/neutron/agent/linux/iptables_firewall.py#L495
  
https://github.com/openstack/neutron/blob/stable/kilo/neutron/agent/linux/iptables_firewall.py#L506-L510

  However, there are some cases where you would not want traffic marked
  as INVALID to be dropped here.  Specifically, our use case:

  We have a load balancing scheme where requests from the LB are
  tunneled as IP-in-IP encapsulation between the LB and the VM.
  Response traffic is configured for DSR, so the responses go directly
  out the default gateway of the VM.

  The results of this are iptables on the hypervisor does not see the
  initial SYN from the LB to VM (because it is encapsulated in IP-in-
  IP), and thus it does not make it into the connection table.  The
  response that comes out of the VM (not encapsulated) hits iptables on
  the hypervisor and is dropped as invalid.

  I'd like to see a Neutron option to enable/disable the population of
  this INVALID state rule, so that operators (such as us) can disable it
  if desired.  Obviously it's better in general to keep it in there to
  drop invalid packets, but there are cases where you would like to not
  do this.
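
  For illustration, the kind of operator toggle described above could look roughly like this; the option name and helper are hypothetical, not an existing neutron setting:

  ```
  from oslo_config import cfg

  CONF = cfg.CONF
  CONF.register_opts([
      cfg.BoolOpt('drop_invalid_state_packets', default=True,
                  help='Drop packets in conntrack state INVALID in the '
                       'per-port security group chains.'),
  ], group='SECURITYGROUP')

  def invalid_state_rules():
      # Emit the DROP rule only when the operator has not disabled it,
      # e.g. for the DSR-style asymmetric traffic described above.
      if CONF.SECURITYGROUP.drop_invalid_state_packets:
          return ['-m state --state INVALID -j DROP']
      return []
  ```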

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1460741/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460562] Re: ipset can't be destroyed when last sg rule is deleted

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460562

Title:
  ipset can't be destroyed when last sg rule is deleted

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  Steps to reproduce:
  1. VM A is in the default security group
  2. the default security group has two rules: 1. allow all traffic out; 2. allow the group itself as remote_group for ingress
  3. first delete rule 1, then delete rule 2

  I found that the iptables rules on the compute node where VM A resides
  were not reloaded, and the relevant ipset was not destroyed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1460562/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461054] Re: [OSSA 2015-012] Adding 0.0.0.0/0 to allowed address pairs breaks l2 agent (CVE-2015-3221)

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1461054

Title:
  [OSSA 2015-012] Adding 0.0.0.0/0 to allowed address pairs breaks l2
  agent (CVE-2015-3221)

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in neutron kilo series:
  Fix Committed
Status in OpenStack Security Advisory:
  Fix Released

Bug description:
  vagrant@node1:~$ neutron port-update $PORT_ID --allowed_address_pairs 
list=true type=dict ip_address=0.0.0.0/0
  Updated port: 28dc7eb1-6f95-429f-8e30-adaefffcec70

  This does not work - the ipset man page says that a zero prefix size is not
  allowed for type hash:net.
  But it also breaks the l2 agent and so affects other ports/VMs/tenants ... so
  this is being opened as a security vulnerability.
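
  A minimal validation sketch for the API layer (a hypothetical helper, not the actual fix) would reject a zero prefix length before it ever reaches ipset; the agent failure it avoids is shown below:

  ```
  import netaddr

  def validate_allowed_address_pair(ip_address):
      # hash:net ipsets cannot hold a zero prefix length, so refuse
      # 0.0.0.0/0 (and ::/0) up front instead of breaking the L2 agent.
      net = netaddr.IPNetwork(ip_address)
      if net.prefixlen == 0:
          raise ValueError('Unsupported prefix length 0 in allowed '
                           'address pair: %s' % ip_address)
      return str(net)
  ```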

  2015-06-02 11:02:31.897 ERROR neutron.agent.linux.utils 
[req-6dfc4e3b-7162-4528-b821-295de80aa7ed None None]
  Command: ['ipset', 'add', '-exist', u'NETIPv48a445928-2f41-43de-a', 
u'0.0.0.0/0']
  Exit code: 1
  Stdin:
  Stdout:
  Stderr: ipset v6.20.1: The value of the CIDR parameter of the IP address is 
invalid

  2015-06-02 11:02:31.898 DEBUG oslo_concurrency.lockutils 
[req-6dfc4e3b-7162-4528-b821-295de80aa7ed None None] Releasing file lock 
"/opt/stack/data/neutron/lock/neutron-ipset" after holding it for 0.006s 
release /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:227
  2015-06-02 11:02:31.898 DEBUG oslo_concurrency.lockutils 
[req-6dfc4e3b-7162-4528-b821-295de80aa7ed None None] Lock "ipset" released by 
"set_members" :: held 0.006s inner 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:456
  2015-06-02 11:02:31.898 ERROR 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-6dfc4e3b-7162-4528-b821-295de80aa7ed None None] Error while processing VIF 
ports
  2015-06-02 11:02:31.898 3679 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Traceback (most recent call 
last):
  2015-06-02 11:02:31.898 3679 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", 
line 1640, in rpc_loop
  2015-06-02 11:02:31.898 3679 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent ovs_restarted)
  2015-06-02 11:02:31.898 3679 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", 
line 1434, in process_network_ports
  2015-06-02 11:02:31.898 3679 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
port_info.get('updated', set()))
  2015-06-02 11:02:31.898 3679 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/neutron/neutron/agent/securitygroups_rpc.py", line 302, in 
setup_port_filters
  2015-06-02 11:02:31.898 3679 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
self.prepare_devices_filter(new_devices)
  2015-06-02 11:02:31.898 3679 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/neutron/neutron/agent/securitygroups_rpc.py", line 159, in 
decorated_function
  2015-06-02 11:02:31.898 3679 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent *args, **kwargs)
  2015-06-02 11:02:31.898 3679 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/neutron/neutron/agent/securitygroups_rpc.py", line 185, in 
prepare_devices_filter
  2015-06-02 11:02:31.898 3679 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent security_groups, 
security_group_member_ips)
  2015-06-02 11:02:31.898 3679 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/contextlib.py", line 24, in __exit__
  2015-06-02 11:02:31.898 3679 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent self.gen.next()
  2015-06-02 11:02:31.898 3679 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/neutron/neutron/agent/firewall.py", line 106, in defer_apply
  2015-06-02 11:02:31.898 3679 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
self.filter_defer_apply_off()
  2015-06-02 11:02:31.898 3679 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/neutron/neutron/agent/linux/iptables_firewall.py", line 671, in 
filter_defer_apply_off
  2015-06-02 11:02:31.898 3679 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent self.unfiltered_ports)
  2015-06-02 11:02:31.898 3679 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/neutron/neutron/agent/linux/iptables_firewall.py", line 155, in 
_setup_chains_apply
  2015-06-02 11:02:31.898 3679 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent self._setup_chain(port, 
INGRESS_DIRECTION)
  2015-06-02 11:02:31.898 3679 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 

[Yahoo-eng-team] [Bug 1468828] Re: HA router-create breaks ML2 drivers that implement create_network such as Arista

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1468828

Title:
  HA router-create  breaks ML2 drivers that implement create_network
  such as Arista

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  This issue was discovered with Arista ML2 driver, when an HA router
  was created. However, this will impact any ML2 driver that implements
  create_network.

  When an admin creates an HA router (neutron router-create --ha ), the HA
  framework invokes network_create() and sets tenant-id to '' (the empty string).
  The network_create() ML2 mech driver API expects tenant-id to be set to a valid
  ID.
  Any ML2 driver that relies on tenant-id will fail/reject the network_create()
  request, causing router-create to fail.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1468828/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1477253] Re: ovs arp_responder unsuccessfully inserts IPv6 address into arp table

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1477253

Title:
  ovs arp_responder unsuccessfully inserts IPv6 address into arp table

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  The ml2 openvswitch arp_responder agent attempts to install IPv6
  addresses into the OVS arp response tables. The action obviously
  fails, reporting:

  ovs-ofctl: -:4: 2001:db8::x:x:x:x invalid IP address

  The end result is that the OVS br-tun arp tables are incomplete.

  The submitted patch verifies that the address is IPv4 before
  attempting to add the address to the table.
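
  The idea behind the patch can be illustrated with a small guard (a sketch only; the real change lives in the OVS agent):

  ```
  import netaddr

  def is_arp_responder_candidate(ip_address):
      # The OVS ARP responder table only makes sense for IPv4 addresses;
      # IPv6 uses neighbour discovery rather than ARP, so skip those.
      return netaddr.IPAddress(ip_address).version == 4
  ```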

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1477253/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464377] Re: Keystone v2.0 api accepts tokens deleted with v3 api

2015-11-19 Thread Alan Pevec
** Changed in: keystone/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1464377

Title:
  Keystone v2.0 api accepts tokens deleted with v3 api

Status in OpenStack Identity (keystone):
  Expired
Status in OpenStack Identity (keystone) juno series:
  Fix Released

Bug description:
  Keystone tokens that are deleted using the v3 api are still accepted by
  the v2 api. Steps to reproduce:

  1. Request a scoped token as a member of a tenant.
  2. Delete it using DELETE /v3/auth/tokens
  3. Request the tenants you can access with GET v2.0/tenants
  4. The token is accepted and keystone returns the list of tenants

  The token was a PKI token. Admin tokens appear to be deleted correctly.
  This could be a problem if a user's access needs to be revoked but they
  are still able to access v2 functions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1464377/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472458] Re: Arista ML2 VLAN driver should ignore non-VLAN network types

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1472458

Title:
  Arista ML2 VLAN driver should ignore non-VLAN network types

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  Arista ML2 VLAN driver should process only VLAN based networks. Any
  other network type (e.g. vxlan) should be ignored.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1472458/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1465922] Re: Password visible in clear text in keystone.log when user created and keystone debug logging is enabled

2015-11-19 Thread Alan Pevec
** Changed in: keystone/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1465922

Title:
  Password visible in clear text in keystone.log when user created and
  keystone debug logging is enabled

Status in Bandit:
  New
Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Identity (keystone) juno series:
  Fix Released
Status in OpenStack Identity (keystone) kilo series:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  grep CLEARTEXTPASSWORD keystone.log

  2015-06-16 06:44:39.770 20986 DEBUG keystone.common.controller [-]
  RBAC: Authorizing identity:create_user(user={u'domain_id': u'default',
  u'password': u'CLEARTEXTPASSWORD', u'enabled': True,
  u'default_project_id': u'0175b43419064ae38c4b74006baaeb8d', u'name':
  u'DermotJ'}) _build_policy_check_credentials /usr/lib/python2.7/site-
  packages/keystone/common/controller.py:57

  Issue code:
  
https://github.com/openstack/keystone/blob/master/keystone/common/controller.py#L57

  LOG.debug('RBAC: Authorizing %(action)s(%(kwargs)s)', {
  'action': action,
  'kwargs': ', '.join(['%s=%s' % (k, kwargs[k]) for k in kwargs])})

  Shadowing the values of sensitive fields like 'password' with some
  meaningless text such as "X" is one way to fix this.
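
  A minimal sketch of that masking approach (a hypothetical helper, not the actual keystone fix) is:

  ```
  SENSITIVE_KEYS = ('password', 'token', 'secret')

  def mask_sensitive(value):
      # Recursively replace sensitive values (e.g. the nested
      # user={'password': ...} dict shown above) before logging.
      if isinstance(value, dict):
          return {k: ('XXX' if k in SENSITIVE_KEYS else mask_sensitive(v))
                  for k, v in value.items()}
      return value
  ```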

  Well, in addition to this, I think we should never pass the 'password'
  with its original value along the code and save it in any persistence,
  instead we should convert it to a strong hash value as early as
  possible. With the help of a good hash system, we never have to need
  the original value of the password, right?

To manage notifications about this bug go to:
https://bugs.launchpad.net/bandit/+bug/1465922/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1462974] Re: Network gateway vlan connection fails because of int conversion

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1462974

Title:
  Network gateway vlan connection fails because of int conversion

Status in neutron:
  Invalid
Status in neutron juno series:
  Fix Released
Status in vmware-nsx:
  Fix Committed

Bug description:
  So far there has been an implicit assumption that segmentation_id would be an
  integer.
  In fact, it is a string value, which was being passed down to NSX.

  This means that passing a string value like "xyz" would have triggered a
  backend error rather than a validation error.
  Moreover, the check for validity of the VLAN tag is in the form min < tag
  < max, and this does not work unless the tag is converted to an integer.
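
  For illustration, the conversion plus range check could look like this (assumed helper and bounds, not the exact patch):

  ```
  MIN_VLAN_TAG = 1
  MAX_VLAN_TAG = 4094

  def validate_vlan_tag(segmentation_id):
      # Coerce to int first, so both "xyz" and out-of-range values produce
      # a validation error instead of a backend error.
      try:
          tag = int(segmentation_id)
      except (TypeError, ValueError):
          raise ValueError('segmentation_id %r is not an integer'
                           % (segmentation_id,))
      if not MIN_VLAN_TAG <= tag <= MAX_VLAN_TAG:
          raise ValueError('VLAN tag %d is out of range (%d-%d)'
                           % (tag, MIN_VLAN_TAG, MAX_VLAN_TAG))
      return tag
  ```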

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1462974/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1454434] Re: NoNetworkFoundInMaximumAllowedAttempts during concurrent network creation

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1454434

Title:
  NoNetworkFoundInMaximumAllowedAttempts during concurrent network
  creation

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  NoNetworkFoundInMaximumAllowedAttempts could be thrown if networks are
  created by multiple threads simultaneously.
  This is related to https://bugs.launchpad.net/bugs/1382064
  Currently the DB logic works correctly; however, the 11 attempts the code
  makes right now might not be enough in some rare unlucky cases under
  extreme concurrency.

  We need to randomize segmentation_id selection to avoid such issues, as
  sketched below.
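
  Roughly what randomized selection amounts to (a sketch with assumed helper
  names, not the actual Neutron patch):

  ```
  import random


  def pick_segmentation_id(available_ids, try_allocate, max_attempts=11):
      """Try candidates in random order so concurrent threads rarely collide."""
      candidates = list(available_ids)
      random.shuffle(candidates)
      for seg_id in candidates[:max_attempts]:
          if try_allocate(seg_id):  # assumed callback doing the DB insert
              return seg_id
      raise RuntimeError("no free segmentation_id found in %d attempts"
                         % max_attempts)
  ```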

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1454434/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1438331] Re: Nova fails to delete rbd image, puts guest in to ERROR state

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1438331

Title:
  Nova fails to delete rbd image, puts guest in to ERROR state

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  When removing guests that have been booted on Ceph, Nova will
  occasionally put guests into ERROR state with the following ...

  Reported to the controller:

  | fault | {"message": "error removing image", "code": 500, "details": "
      File \"/usr/lib/python2.7/site-packages/nova/compute/manager.py\", line 314, in decorated_function
        return function(self, context, *args, **kwargs)
      File \"/usr/lib/python2.7/site-packages/nova/compute/manager.py\", line 2525, in terminate_instance
        do_terminate_instance(instance, bdms)
      File \"/usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py\", line 272, in inner
        return f(*args, **kwargs)
      File \"/usr/lib/python2.7/site-packages/nova/compute/manager.py\", line 2523, in do_terminate_instance
        self._set_instance_error_state(context, instance)
      File \"/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py\", line 82, in __exit__
        six.reraise(self.type_, self.value, self.tb)
      File \"/usr/lib/python2.7/site-packages/nova/compute/manager.py\", line 2513, in do_terminate_instance
        self._delete_instance(context, instance, bdms, quotas)
      File \"/usr/lib/python2.7/site-packages/nova/hooks.py\", line 131, in inner
        rv = f(*args, **kwargs)
      File \"/usr/lib/python2.7/site-packages/nova/compute/manager.py\", line 2482, in _delete_instance
        quotas.rollback()
      File \"/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py\", line 82, in __exit__
        six.reraise(self.type_, self.value, self.tb)
      File \"/usr/lib/python2.7/site-packages/nova/compute/manager.py\", line 2459, in _delete_instance
        self._shutdown_instance(context, instance, bdms)
      File \"/usr/lib/python2.7/site-packages/nova/compute/manager.py\", line 2389, in _shutdown_instance

[Yahoo-eng-team] [Bug 1439223] Re: misleading power state logging in _sync_instance_power_state

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1439223

Title:
  misleading power state logging in _sync_instance_power_state

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  Commit aa1792eb4c1d10e9a192142ce7e20d37871d916a added more verbose
  logging of the various database and hypervisor states when
  _sync_instance_power_state is called (which can be called from
  handle_lifecycle_event - triggered by the libvirt driver, or from the
  _sync_power_states periodic task).

  The current instance power_state from the DB's POV and the power state
  from the hypervisor's POV (via handle_lifecycle_event) can be
  different and if they are different, the database is updated with the
  power_state from the hypervisor and the local db_power_state variable
  is updated to be the same as the vm_power_state (from the hypervisor).

  Then later, the db_power_state value is used to log the different
  states when we have conditions like the database says an instance is
  running / active but the hypervisor says it's stopped, so we call
  compute_api.stop().

  We should be logging the original database power state and the
  power_state from the hypervisor to more accurately debug when we're
  out of sync.
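
  In other words, something along these lines (a sketch with assumed names,
  not the actual patch): remember the DB value before it is overwritten and
  use that original value in the log message.

  ```
  def log_power_state_mismatch(LOG, db_instance, vm_power_state):
      """Log the DB's original power state next to the hypervisor's view."""
      orig_db_power_state = db_instance.power_state   # value before the sync
      if orig_db_power_state != vm_power_state:
          db_instance.power_state = vm_power_state    # DB now mirrors hypervisor
          db_instance.save()
      LOG.info("Power state mismatch for instance: db=%s, hypervisor=%s",
               orig_db_power_state, vm_power_state)
  ```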

  This is already fixed on master:
  https://review.openstack.org/#/c/159263/

  I'm reporting the bug so that this can be backported to stable/juno.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1439223/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1430239] Re: Hyper-V: *DataRoot paths are not set for instances

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1430239

Title:
  Hyper-V: *DataRoot paths are not set for instances

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  The Nova Hyper-V driver does not set the data root path locations for
  newly created instances to the same location as the instances. By
  default, Hyper-V sets the location on C:\. This can cause issues for
  small C:\ partitions, as some of these files can be large.

  The path locations that need to be set are: ConfigurationDataRoot,
  LogDataRoot, SnapshotDataRoot, SuspendDataRoot, SwapFileDataRoot.
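
  A purely illustrative sketch of the idea (the helper and its parameters
  are assumptions; applying the modified setting data through the Hyper-V
  WMI management service is omitted):

  ```
  DATA_ROOT_PROPERTIES = ('ConfigurationDataRoot', 'LogDataRoot',
                          'SnapshotDataRoot', 'SuspendDataRoot',
                          'SwapFileDataRoot')


  def set_instance_data_roots(vm_setting_data, instance_dir):
      """Point every data root of the VM at the instance directory."""
      for prop in DATA_ROOT_PROPERTIES:
          setattr(vm_setting_data, prop, instance_dir)
      return vm_setting_data
  ```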

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1430239/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1439302] Re: "FixedIpNotFoundForAddress: Fixed ip not found for address None." traces in gate runs

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1439302

Title:
  "FixedIpNotFoundForAddress: Fixed ip not found for address None."
  traces in gate runs

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  Seeing this quite a bit in normal gate runs:

  http://logs.openstack.org/53/169753/2/check/check-tempest-dsvm-full-
  ceph/07dcae0/logs/screen-n-cpu.txt.gz?level=TRACE#_2015-04-01_14_34_37_110

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRml4ZWRJcE5vdEZvdW5kRm9yQWRkcmVzczogRml4ZWQgaXAgbm90IGZvdW5kIGZvciBhZGRyZXNzIE5vbmUuXCIgQU5EIHRhZ3M6XCJzY3JlZW4tbi1jcHUudHh0XCIgQU5EIHRhZ3M6XCJtdWx0aWxpbmVcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQyNzkwMjQ0NTg4OSwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d] FixedIpNotFoundForAddress: Fixed ip not 
found for address None.
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d] Traceback (most recent call last):
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
142, in _dispatch_and_reply
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d] executor_callback))
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
186, in _dispatch
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d] executor_callback)
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
130, in _do_dispatch
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d] result = func(ctxt, **new_args)
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]   File 
"/opt/stack/new/nova/nova/network/floating_ips.py", line 186, in 
deallocate_for_instance
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d] super(FloatingIP, 
self).deallocate_for_instance(context, **kwargs)
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]   File 
"/opt/stack/new/nova/nova/network/manager.py", line 558, in 
deallocate_for_instance
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d] instance=instance)
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]   File 
"/opt/stack/new/nova/nova/network/manager.py", line 214, in deallocate_fixed_ip
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d] context, address, 
expected_attrs=['network'])
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]   File 
"/opt/stack/new/nova/nova/objects/base.py", line 161, in wrapper
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d] args, kwargs)
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]   File 
"/opt/stack/new/nova/nova/conductor/rpcapi.py", line 329, in 

[Yahoo-eng-team] [Bug 1431404] Re: Don't trace when @reverts_task_state fails on InstanceNotFound

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1431404

Title:
  Don't trace when @reverts_task_state fails on InstanceNotFound

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  This change https://review.openstack.org/#/c/163515/ added a warning
  when the @reverts_task_state decorator in the compute manager fails
  rather than just pass, because we were getting KeyErrors and never
  noticing them which broke the decorator.

  However, now we're tracing on InstanceNotFound which is a normal case
  if we're deleting the instance after a failure (tempest will delete
  the instance immediately after failures when tearing down a test):

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRmFpbGVkIHRvIHJldmVydCB0YXNrIHN0YXRlIGZvciBpbnN0YW5jZVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI4NjQwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MjYxNzA3MDE2OTV9

  http://logs.openstack.org/98/163798/1/check/check-tempest-dsvm-
  postgres-
  full/6eff665/logs/screen-n-cpu.txt.gz#_2015-03-12_13_11_36_304

  2015-03-12 13:11:36.304 WARNING nova.compute.manager 
[req-a5f3b37e-19e9-4e1d-9be7-bbb9a8e7f4c1 DeleteServersTestJSON-706956764 
DeleteServersTestJSON-535578435] [instance: 
6de2ad51-3155-4538-830d-f02de39b4be3] Failed to revert task state for instance. 
Error: Instance 6de2ad51-3155-4538-830d-f02de39b4be3 could not be found.
  Traceback (most recent call last):

File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", 
line 142, in inner
  return func(*args, **kwargs)

File "/opt/stack/new/nova/nova/conductor/manager.py", line 134, in 
instance_update
  columns_to_join=['system_metadata'])

File "/opt/stack/new/nova/nova/db/api.py", line 774, in 
instance_update_and_get_original
  columns_to_join=columns_to_join)

File "/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 143, in wrapper
  return f(*args, **kwargs)

File "/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 2395, in 
instance_update_and_get_original
  columns_to_join=columns_to_join)

File "/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 181, in wrapped
  return f(*args, **kwargs)

File "/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 2434, in 
_instance_update
  columns_to_join=columns_to_join)

File "/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 1670, in 
_instance_get_by_uuid
  raise exception.InstanceNotFound(instance_id=uuid)

  InstanceNotFound: Instance 6de2ad51-3155-4538-830d-f02de39b4be3 could
  not be found.
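
  A sketch of the suggested behaviour (helper names are assumptions; the
  real decorator lives in nova.compute.manager): treat InstanceNotFound as
  an expected case and keep the warning only for everything else.

  ```
  import logging

  from nova import exception

  LOG = logging.getLogger(__name__)


  def _safe_revert_task_state(self, context, instance):
      try:
          self._instance_update(context, instance, task_state=None)
      except exception.InstanceNotFound:
          # The instance was deleted underneath us; nothing left to revert.
          LOG.debug("Instance already gone while reverting task state")
      except Exception as e:
          LOG.warning("Failed to revert task state for instance. Error: %s", e)
  ```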

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1431404/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1439857] Re: live-migration failure leave the port to BUILD state

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1439857

Title:
  live-migration failure leave the port to BUILD state

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  I've set up a lab where live migration can occur in block mode

  It seems that if I leave the default config, block live-migration
  fails;

  I can see that the port is left in BUILD state after the failure, but
  the VM is still running on the source host.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1439857/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1439817] Re: IP set full error in kernel log

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1439817

Title:
  IP set full error in kernel log

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released

Bug description:
  This is appearing in some logs upstream:
  http://logs.openstack.org/73/170073/1/experimental/check-tempest-dsvm-
  neutron-full-non-
  isolated/ac882e3/logs/kern_log.txt.gz#_Apr__2_13_03_06

  And it has also been reported by andreaf in IRC as having been
  observed downstream.

  Logstash is not very helpful, as this manifests only with a job currently
  in the experimental queue.
  As said job runs in non-isolated mode, accrual of elements in the IP set
  until it reaches saturation is one of the things that might need to be
  investigated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1439817/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1417745] Re: Cells connecting pool tracking

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1417745

Title:
  Cells connecting pool tracking

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  Cells has an rpc driver for inter-cell communication. An
  oslo.messaging.Transport is created for each inter-cell message.

  In previous versions of oslo.messaging, connection pool references
  were maintained within the RabbitMQ driver abstraction in
  oslo.messaging.  As of oslo.messaging commit
  f3370da11a867bae287d7f549a671811e8b399ef, the application must
  maintain a single reference to Transport or references to the
  connection pool will be lost.

  The net effect of this is that cells constructs a new broker
  connection pool  (and a connection) on every message sent between
  cells.  This is leaking references to connections.
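
  A sketch of the shape of the fix (assumed structure, not the actual cells
  rpc driver): create the Transport once per transport URL and reuse it, so
  the connection pool behind it is shared instead of rebuilt per message.

  ```
  import oslo_messaging as messaging

  _TRANSPORT_CACHE = {}


  def get_cached_transport(conf, transport_url):
      """Return a shared Transport for transport_url, creating it on first use."""
      transport = _TRANSPORT_CACHE.get(transport_url)
      if transport is None:
          transport = messaging.get_transport(conf, url=transport_url)
          _TRANSPORT_CACHE[transport_url] = transport
      return transport
  ```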

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1417745/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1408176] Re: Nova instance not boot after host restart but still show as Running

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1408176

Title:
  Nova instance not boot after host restart but still show as Running

Status in OpenStack Compute (nova):
  Confirmed
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  The nova host lost power; after it restarted, the previously running
  instance is still shown in the "Running" state but is actually not
  started:

  root@allinone-controller0-esenfmnxzcvk:~# nova list
  
  +--------------------------------------+------------------------------------+--------+------------+-------------+----------------------------------------+
  | ID                                   | Name                               | Status | Task State | Power State | Networks                               |
  +--------------------------------------+------------------------------------+--------+------------+-------------+----------------------------------------+
  | 13d9eead-191e-434e-8813-2d3bf8d3aae4 | alexcloud-controller0-rr5kdtqmv7qz | ACTIVE | -          | Running     | default-net=172.16.0.15, 30.168.98.61  |
  +--------------------------------------+------------------------------------+--------+------------+-------------+----------------------------------------+
  root@allinone-controller0-esenfmnxzcvk:~# ps -ef |grep -i qemu
  root      95513  90291  0 14:46 pts/0    00:00:00 grep --color=auto -i qemu

  
  Please note the resume_guests_state_on_host_boot flag is False. Log file is 
attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1408176/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1415768] Re: the pci deivce assigned to instance is inconsistent with DB record when restarting nova-compute

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1415768

Title:
  the pci deivce assigned to instance is inconsistent with DB record
  when restarting nova-compute

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  After restarting the nova-compute process, I found that the pci device
  assigned to the instance in libvirt.xml was different from the record in
  the 'pci_devices' DB table.

  Every time nova-compute was restarted, pci_tracker.allocations was reset
  to an empty dict; it didn't contain the pci devices that had already been
  allocated to instances, so some pci devices could be reallocated to the
  instances and recorded in the DB, possibly inconsistent with libvirt.xml.

  IOW, nova-compute would reallocate the pci devices for instances with pci
  requests when restarting.

  See details:
  
http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/resource_tracker.py#n347

  This is a probabilistic problem and cannot always be reproduced. If the
  instance has a lot of pci devices, it happens more often.

  Faced this bug on kilo master.
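
  The gist of what a fix needs to do, as an illustrative sketch (the helper
  is an assumption; the status/instance_uuid fields follow the PciDevice
  object): seed the tracker's allocations from devices the DB already
  reports as allocated instead of starting from an empty dict.

  ```
  import collections


  def rebuild_allocations(pci_devs):
      """Group already-allocated PCI devices by the instance that owns them."""
      allocations = collections.defaultdict(list)
      for dev in pci_devs:
          if dev.status == 'allocated' and dev.instance_uuid:
              allocations[dev.instance_uuid].append(dev)
      return allocations
  ```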

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1415768/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1411383] Re: Arista ML2 plugin incorrectly syncs with EOS

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1411383

Title:
  Arista ML2 plugin incorrectly syncs with EOS

Status in neutron:
  In Progress
Status in neutron juno series:
  Fix Released

Bug description:
  The Arista ML2 plugin periodically compares the data in the Neutron DB
  with EOS to ensure that they are in sync. If EOS reboots, then the
  data might be out of sync and the plugin needs to push data from
  Neutron DB to EOS. As an optimization, the plugin gets and stores the
  time at which the data on EOS was modified. Just before a sync, the
  plugin compares the stored time with the timestamp on EOS and performs
  the sync only if the timestamps differ.

  Due to a bug, the timestamp is stored incorrectly in the plugin, because
  of which the sync never takes place; the only way to force a sync is to
  restart the neutron server.
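
  Schematically, the check is meant to work like this (method names are
  assumptions, not the actual driver API):

  ```
  def synchronize_if_needed(self):
      """Push the Neutron DB to EOS only when the EOS content has diverged."""
      eos_time = self._get_eos_modified_time()     # assumed EAPI helper
      if eos_time != self._last_synced_eos_time:
          self._push_neutron_db_to_eos()           # assumed full-sync helper
          self._last_synced_eos_time = eos_time    # record only after success
  ```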

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1411383/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1414065] Re: Nova can lose track of running VM if live migration raises an exception

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1414065

Title:
  Nova can lose track of running VM if live migration raises an
  exception

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  There is a fairly serious bug in VM state handling during live
  migration, with the result that if libvirt raises an error *after* the
  VM has successfully live migrated to the target host, Nova can end up
  thinking the VM is shutoff everywhere, despite it still being active.
  The consequences of this are quite dire, as the user can then manually
  start the VM again and corrupt any data in shared volumes and the
  like.

  The fun starts in the _live_migration method in
  nova.virt.libvirt.driver, if the 'migrateToURI2' method fails *after*
  the guest has completed migration.

  At start of migration, we see an event received by Nova for the new
  QEMU process starting on target host

  2015-01-23 15:39:57.743 DEBUG nova.compute.manager [-] [instance:
  12bac45e-aca8-40d1-8f39-941bc6bb59f0] Synchronizing instance power
  state after lifecycle event "Started"; current vm_state: active,
  current task_state: migrating, current DB power_state: 1, VM
  power_state: 1 from (pid=19494) handle_lifecycle_event
  /home/berrange/src/cloud/nova/nova/compute/manager.py:1134

  
  Upon migration completion we see CPUs start running on the target host

  2015-01-23 15:40:02.794 DEBUG nova.compute.manager [-] [instance:
  12bac45e-aca8-40d1-8f39-941bc6bb59f0] Synchronizing instance power
  state after lifecycle event "Resumed"; current vm_state: active,
  current task_state: migrating, current DB power_state: 1, VM
  power_state: 1 from (pid=19494) handle_lifecycle_event
  /home/berrange/src/cloud/nova/nova/compute/manager.py:1134

  And finally an event saying that the QEMU on the source host has
  stopped

  2015-01-23 15:40:03.629 DEBUG nova.compute.manager [-] [instance:
  12bac45e-aca8-40d1-8f39-941bc6bb59f0] Synchronizing instance power
  state after lifecycle event "Stopped"; current vm_state: active,
  current task_state: migrating, current DB power_state: 1, VM
  power_state: 4 from (pid=23081) handle_lifecycle_event
  /home/berrange/src/cloud/nova/nova/compute/manager.py:1134

  
  It is the last event that causes the trouble.  It causes Nova to mark the VM 
as shutoff at this point.

  Normally the '_live_migrate' method would succeed and so Nova would
  then immediately & explicitly mark the guest as running on the target
  host. If an exception occurs though, this explicit update of VM
  state doesn't happen, so Nova considers the guest shutoff, even though
  it is still running :-(

  
  The lifecycle events from libvirt have an associated "reason", so we could 
see that the shutoff event from libvirt corresponds to a migration being 
completed, and so not mark the VM as shutoff in Nova.  We would also have to 
make sure the target host processes the 'resume' event upon migrate completion.

  A safer approach, though, might be to just mark the VM as in an ERROR
  state if any exception occurs during migration.
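
  A sketch of the first idea (the event constants and accessors are
  assumptions, not the actual libvirt driver API): inspect the reason
  attached to the Stopped lifecycle event and skip the power-state sync
  when the guest stopped because it was migrated away.

  ```
  import logging

  LOG = logging.getLogger(__name__)

  EVENT_LIFECYCLE_STOPPED = 'Stopped'   # placeholder constants for the sketch
  REASON_MIGRATED = 'migrated'


  def handle_lifecycle_event(self, event):
      if (event.get_transition() == EVENT_LIFECYCLE_STOPPED and
              event.get_reason() == REASON_MIGRATED):
          # The guest is now running on the target host; don't mark it shutoff.
          LOG.debug("Ignoring Stopped event caused by live migration")
          return
      self._sync_instance_power_state_from_event(event)
  ```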

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1414065/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1407664] Re: Race: instance nw_info cache is updated to empty list because of nova/neutron event mechanism

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1407664

Title:
  Race: instance nw_info cache is updated to empty list because of
  nova/neutron event mechanism

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  This applies only when the nova/neutron event reporting mechanism is
  enabled.

  Boot instance, like this:
  nova boot --image xxx --flavor xxx --nic port-id=xxx test_vm

  Booting the instance succeeds, but the instance nw_info cache is empty.
  This is a probabilistic problem and cannot always be reproduced.

  After analyzing the instance boot and the nova/neutron event mechanism
  workflow, I got the following reproduction timeline:

  1. neutronv2.api.allocate_for_instance is called when booting the instance.
  2. neutronclient.update_port triggers a neutron network_change event.
  3. nova gets the port change event and starts to process it.
  4. instance.get_by_uuid runs in external_instance_event; at this time
     instance.nw_info_cache is empty, because the nw_info cache hadn't been
     saved to the db yet by the booting thread.
  5. The booting thread saves the instance nw_info cache into the db.
  6. The event-processing thread updates the instance nw_info cache to empty.

  Faced this issue in Juno.
  I added some breakpoints in order to reproduce this bug in my devstack.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1407664/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1408480] Re: PciDevTracker passes context module instead of instance

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1408480

Title:
  PciDevTracker passes context module instead of instance

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  Currently, the code in the PciDevTracker.__init__() method of
  nova/pci/manager.py reads:

  ```
  def __init__(self, node_id=None):
      """Create a pci device tracker.

      If a node_id is passed in, it will fetch pci devices information
      from database, otherwise, it will create an empty devices list
      and the resource tracker will update the node_id information later.
      """

      super(PciDevTracker, self).__init__()
      self.stale = {}
      self.node_id = node_id
      self.stats = stats.PciDeviceStats()
      if node_id:
          self.pci_devs = list(
              objects.PciDeviceList.get_by_compute_node(context, node_id))
      else:
          self.pci_devs = []
      self._initial_instance_usage()
  ```

  The problem is that in the call to
  `objects.PciDeviceList.get_by_compute_node(context, node_id)`, there
  is no local value for the 'context' parameter, so as a result, the
  context module defined in the imports is what is passed.

  Instead, the parameter should be changed to
  `context.get_admin_context()`.
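
  In isolation, the suggested change looks like this (a sketch of just the
  relevant lines):

  ```
  from nova import context as nova_context

  if node_id:
      admin_ctxt = nova_context.get_admin_context()
      self.pci_devs = list(
          objects.PciDeviceList.get_by_compute_node(admin_ctxt, node_id))
  else:
      self.pci_devs = []
  ```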

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1408480/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361186] Re: nova service-delete fails for services on non-child (top) cell

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361186

Title:
  nova service-delete fails for services on non-child (top) cell

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  Nova service-delete fails for services on non-child (top) cell.

  How to reproduce:

  $ nova --os-username admin service-list

  
  +----------------+------------------+---------------------+----------+---------+-------+------------------------+-----------------+
  | Id             | Binary           | Host                | Zone     | Status  | State | Updated_at             | Disabled Reason |
  +----------------+------------------+---------------------+----------+---------+-------+------------------------+-----------------+
  | region!child@1 | nova-conductor   | region!child@ubuntu | internal | enabled | up    | 2014-08-18T06:06:56.00 | -               |
  | region!child@2 | nova-compute     | region!child@ubuntu | nova     | enabled | up    | 2014-08-18T06:06:55.00 | -               |
  | region!child@3 | nova-cells       | region!child@ubuntu | internal | enabled | up    | 2014-08-18T06:06:59.00 | -               |
  | region!child@4 | nova-scheduler   | region!child@ubuntu | internal | enabled | up    | 2014-08-18T06:06:50.00 | -               |
  | region@1       | nova-cells       | region@ubuntu       | internal | enabled | up    | 2014-08-18T06:06:59.00 | -               |
  | region@2       | nova-cert        | region@ubuntu       | internal | enabled | up    | 2014-08-18T06:06:58.00 | -               |
  | region@3       | nova-consoleauth | region@ubuntu       | internal | enabled | up    | 2014-08-18T06:06:57.00 | -               |
  +----------------+------------------+---------------------+----------+---------+-------+------------------------+-----------------+

  Stop one of the services on top cell (e.g. nova-cert).

  $ nova --os-username admin service-list

  
  +----------------+------------------+---------------------+----------+---------+-------+------------------------+-----------------+
  | Id             | Binary           | Host                | Zone     | Status  | State | Updated_at             | Disabled Reason |
  +----------------+------------------+---------------------+----------+---------+-------+------------------------+-----------------+
  | region!child@1 | nova-conductor   | region!child@ubuntu | internal | enabled | up    | 2014-08-18T06:09:26.00 | -               |
  | region!child@2 | nova-compute     | region!child@ubuntu | nova     | enabled | up    | 2014-08-18T06:09:25.00 | -               |
  | region!child@3 | nova-cells       | region!child@ubuntu | internal | enabled | up    | 2014-08-18T06:09:19.00 | -               |
  | region!child@4 | nova-scheduler   | region!child@ubuntu | internal | enabled | up    | 2014-08-18T06:09:20.00 | -               |
  | region@1       | nova-cells       | region@ubuntu       | internal | enabled | up    | 2014-08-18T06:09:19.00 | -               |
  | region@2       | nova-cert        | region@ubuntu       | internal | enabled | down  | 2014-08-18T06:08:28.00 | -               |
  | region@3       | nova-consoleauth | region@ubuntu       | internal | enabled | up    | 2014-08-18T06:09:27.00 | -               |
  +----------------+------------------+---------------------+----------+---------+-------+------------------------+-----------------+

  Nova service-delete:
  $ nova --os-username admin service-delete 'region@2'

  Check the request id from nova-api.log:

  2014-08-18 15:10:23.491 INFO nova.osapi_compute.wsgi.server [req-
  e134d915-ad66-41ba-a6f8-33ec51b7daee admin demo] 192.168.101.31
  "DELETE /v2/d66804d2e78549cd8f5efcedd0abecb2/os-services/region@2
  HTTP/1.1" status: 204 len: 179 time: 0.1334069

  Error log in n-cell-region service:

  2014-08-18 15:10:23.464 ERROR nova.cells.messaging 
[req-e134d915-ad66-41ba-a6f8-33ec51b7daee admin demo] Error locating next hop 
for message: 'NoneType' object has no attribute 'count'
  2014-08-18 15:10:23.464 TRACE nova.cells.messaging Traceback (most recent 
call last):
  2014-08-18 15:10:23.464 TRACE nova.cells.messaging   File 
"/opt/stack/nova/nova/cells/messaging.py", line 406, in process
  2014-08-18 15:10:23.464 TRACE nova.cells.messaging next_hop = 
self._get_next_hop()
  2014-08-18 15:10:23.464 TRACE nova.cells.messaging   File 
"/opt/stack/nova/nova/cells/messaging.py", line 361, in _get_next_hop
  2014-08-18 15:10:23.464 TRACE nova.cells.messaging dest_hops = 
target_cell.count(_PATH_CELL_SEP)
  2014-08-18 15:10:23.464 TRACE nova.cells.messaging AttributeError: 'NoneType' 
object has no attribute 

[Yahoo-eng-team] [Bug 1367189] Re: multipath not working with Storwize backend if CHAP enabled

2015-11-19 Thread Alan Pevec
** Changed in: cinder/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367189

Title:
  multipath not working with Storwize backend if CHAP enabled

Status in Cinder:
  Fix Released
Status in Cinder juno series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Committed
Status in os-brick:
  Fix Released

Bug description:
  if I try to attach a volume to a VM while having multipath enabled in
  nova and CHAP enabled in the storwize backend, it fails:

  2014-09-09 11:37:14.038 22944 ERROR nova.virt.block_device 
[req-f271874a-9720-4779-96a8-01575641a939 a315717e20174b10a39db36b722325d6 
76d25b1928e7407392a69735a894c7fc] [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] Driver failed to attach volume 
c460f8b7-0f1d-4657-bdf7-e142ad34a132 at /dev/vdb
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] Traceback (most recent call last):
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/block_device.py", line 239, in 
attach
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] device_type=self['device_type'], 
encryption=encryption)
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1235, in 
attach_volume
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] disk_info)
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1194, in 
volume_driver_method
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] return method(connection_info, *args, 
**kwargs)
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py", line 
249, in inner
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] return f(*args, **kwargs)
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/volume.py", line 280, in 
connect_volume
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] check_exit_code=[0, 255])[0] \
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/volume.py", line 579, in 
_run_iscsiadm_bare
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] check_exit_code=check_exit_code)
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/utils.py", line 165, in execute
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] return processutils.execute(*cmd, 
**kwargs)
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py", line 
193, in execute
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] cmd=' '.join(cmd))
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] ProcessExecutionError: Unexpected error 
while running command.
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] Command: sudo nova-rootwrap 
/etc/nova/rootwrap.conf iscsiadm -m discovery -t sendtargets -p 
192.168.1.252:3260
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] Exit code: 5
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] Stdout: ''
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] Stderr: 'iscsiadm: Connection to 
Discovery Address 192.168.1.252 closed\niscsiadm: Login I/O error, failed to 
receive a PDU\niscsiadm: retrying discovery login to 192.168.1.252\niscsiadm: 
Connection to Discovery Address 

[Yahoo-eng-team] [Bug 1466547] Re: Hyper-V: Cannot add ICMPv6 security group rule

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1466547

Title:
  Hyper-V: Cannot add ICMPv6 security group rule

Status in networking-hyperv:
  Fix Committed
Status in neutron:
  Invalid
Status in neutron juno series:
  Fix Released

Bug description:
  Security Group rules created with ethertype 'IPv6' and protocol 'icmp'
  cannot be added by the Hyper-V Security Groups Driver, as it cannot
  add rules with the protocol 'icmpv6'.

  This can be easily fixed by having the Hyper-V Security Groups Driver
  create rules with protocol '58' instead. [1] These rules will also
  have to be stateless, as ICMP rules cannot be stateful on Hyper-V.
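
  The translation the fix boils down to is tiny (sketch, helper name
  assumed):

  ```
  ICMPV6_PROTOCOL_NUMBER = '58'   # IANA protocol number for IPv6-ICMP


  def translate_protocol(ethertype, protocol):
      """Map 'icmp' to protocol 58 for IPv6 rules before creating the ACL."""
      if ethertype == 'IPv6' and protocol == 'icmp':
          return ICMPV6_PROTOCOL_NUMBER
      return protocol
  ```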

  This bug is causing the test
  tempest.scenario.test_network_v6.TestGettingAddress.test_slaac_from_os
  to fail on Hyper-V.

  [1] http://www.iana.org/assignments/protocol-numbers/protocol-
  numbers.xhtml

  Log: http://paste.openstack.org/show/301866/

  Security Groups: http://paste.openstack.org/show/301870/

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-hyperv/+bug/1466547/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1476360] Re: stable/juno gate is failing on oslo import

2015-11-19 Thread Alan Pevec
** Changed in: horizon/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1476360

Title:
  stable/juno gate is failing on oslo import

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in OpenStack Dashboard (Horizon) juno series:
  Fix Released

Bug description:
  File 
"/home/jenkins/workspace/gate-horizon-python26/.tox/py26/lib/python2.6/site-packages/openstack_auth/utils.py",
 line 24, in 
  2015-07-20 18:48:01.107 | from keystoneclient.v2_0 import client as 
client_v2
  2015-07-20 18:48:01.107 |   File 
"/home/jenkins/workspace/gate-horizon-python26/.tox/py26/lib/python2.6/site-packages/keystoneclient/__init__.py",
 line 33, in 
  2015-07-20 18:48:01.107 | from keystoneclient import access
  2015-07-20 18:48:01.107 |   File 
"/home/jenkins/workspace/gate-horizon-python26/.tox/py26/lib/python2.6/site-packages/keystoneclient/access.py",
 line 20, in 
  2015-07-20 18:48:01.107 | from oslo.utils import timeutils
  2015-07-20 18:48:01.107 | ImportError: No module named utils

  The error is due to the oslo namespace.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1476360/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1481346] Re: MH: router delete might return a 500 error

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1481346

Title:
  MH: router delete might return a 500 error

Status in neutron:
  New
Status in neutron juno series:
  Fix Released
Status in vmware-nsx:
  New

Bug description:
  If a logical router has been removed from the backend, and the DB is in
  an inconsistent state where no NSX mapping is stored for the neutron
  logical router, the backend will fail when attempting deletion of the
  router, causing the neutron operation to return a 500.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1481346/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1443598] Re: [OSSA 2015-008] backend_argument containing a password leaked in logs (CVE-2015-3646)

2015-11-19 Thread Alan Pevec
** Changed in: keystone/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1443598

Title:
  [OSSA 2015-008] backend_argument containing a password leaked in logs
  (CVE-2015-3646)

Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Identity (keystone) icehouse series:
  Fix Released
Status in OpenStack Identity (keystone) juno series:
  Fix Released
Status in OpenStack Identity (keystone) kilo series:
  Fix Released
Status in OpenStack Security Advisory:
  Fix Released

Bug description:
  The keystone.conf has an option backend_argument to set various
  options for the caching backend.  As documented, some of the potential
  values can contain a password.

  Snippet from
  http://docs.openstack.org/developer/keystone/developing.html#dogpile-
  cache-based-mongodb-nosql-backend

  [cache]
  # Global cache functionality toggle.
  enabled = True

  # Referring to specific cache backend
  backend = keystone.cache.mongo

  # Backend specific configuration arguments
  backend_argument = db_hosts:localhost:27017
  backend_argument = db_name:ks_cache
  backend_argument = cache_collection:cache
  backend_argument = username:test_user
  backend_argument = password:test_password

  As a result, passwords can be leaked to the keystone logs since the
  config option is not marked secret.
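
  One way to avoid this class of leak is to mark the option as secret when
  it is registered, so oslo.config masks its value in debug output; a
  sketch (not the exact keystone option definition):

  ```
  from oslo_config import cfg

  backend_argument_opt = cfg.MultiStrOpt(
      'backend_argument',
      default=[],
      secret=True,   # logged as '****' instead of the real value
      help='Arguments supplied to the cache backend, as key:value pairs.')
  ```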

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1443598/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449260] Re: [OSSA 2015-009] Sanitation of metadata label (CVE-2015-3988)

2015-11-19 Thread Alan Pevec
** Changed in: horizon/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1449260

Title:
  [OSSA 2015-009] Sanitation of metadata label (CVE-2015-3988)

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) juno series:
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released
Status in OpenStack Security Advisory:
  Fix Released

Bug description:
  1) Start up Horizon
  2) Go to Images
  3) Next to an image, pick "Update Metadata"
  4) From the dropdown button, select "Update Metadata"
  5) In the Custom box, enter a value with some HTML like 
'alert(1)//', click +
  6) On the right-hand side, give it a value, like "ee"
  7) Click "Save"
  8) Pick "Update Metadata" for the image again, the page will fail to load, 
and the JavaScript console says:

  SyntaxError: invalid property id
  var existing_metadata = {"

  An alternative is if you change the URL to update_metadata for the
  image (for example,
  
http://192.168.122.239/admin/images/fa62ba27-e731-4ab9-8487-f31bac355b4c/update_metadata/),
  it will actually display the alert box and a bunch of junk.

  I'm not sure if update_metadata is actually a page, though... can't
  figure out how to get to it other than typing it in.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1449260/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453074] Re: [OSSA 2015-010] help_text parameter of fields is vulnerable to arbitrary html injection (CVE-2015-3219)

2015-11-19 Thread Alan Pevec
** Changed in: horizon/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1453074

Title:
  [OSSA 2015-010] help_text parameter of fields is vulnerable to
  arbitrary html injection (CVE-2015-3219)

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) juno series:
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released
Status in OpenStack Security Advisory:
  Fix Released

Bug description:
  The Field class help_text attribute is vulnerable to code injection if
  the text is somehow taken from the user input.

  The Heat UI allows creating stacks from user input which defines
  parameters. Those parameters are then converted to input fields, which
  are vulnerable.

  The heat stack example exploit:

  description: Does not matter
  heat_template_version: '2013-05-23'
  outputs: {}
  parameters:
    param1:
      type: string
      label: normal_label
      description: hack=">alert('YOUR HORIZON IS PWNED')"
  resources: {}
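
  A sketch of the kind of fix this needs (a Django form field with assumed
  variables `name` and `param`): escape the user-supplied text before it
  becomes the field's help_text.

  ```
  from django import forms
  from django.utils.html import escape


  def build_parameter_field(name, param):
      """Build a form field for a template parameter, escaping its text."""
      return forms.CharField(
          label=escape(param.get('label', name)),
          help_text=escape(param.get('description', '')),
          required=False)
  ```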

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1453074/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441523] Re: changing flavor details on running instances will result in errors popping up for users

2015-11-19 Thread Alan Pevec
** Changed in: horizon/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1441523

Title:
  changing flavor details on running instances will result in errors
  popping up for users

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) juno series:
  Fix Released

Bug description:
  1. Install/use an all-in-one w/ demo project
  2. As admin, create a flavor and assign to the demo project
  3. Log out as admin and log in as demo (must not have admin privs)
  4. As demo, launch an instance on this flavor in the demo project
  5. Log out as demo and log in as admin
  6. As admin, change the amount of RAM for the flavor
  7. Log out as admin, log in as demo
  8. Check the instances page and size should show "Not available" and there 
should be an error in the upper right saying "Error: Unable to retrieve 
instance size information."

  The error is only shown for non-admin users.

  What happens here:
  when editing flavors, nova silently deletes the old flavor and creates a
  new one. Running instances are not touched. The old flavor is marked as
  deleted, and normal users cannot get the specifics of that flavor any more.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1441523/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1374108] Re: Hyper-V agent cannot disconnect orphaned switch ports

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1374108

Title:
  Hyper-V agent cannot disconnect orphaned switch ports

Status in networking-hyperv:
  Fix Released
Status in neutron:
  Invalid
Status in neutron juno series:
  Fix Released

Bug description:
  On Windows / Hyper-V Server 2008 R2, when a switch port has to be
  disconnected because the VM using it was removed, DisconnectSwitchPort
  will fail, returning an error code, and a HyperVException is raised. If
  the exception is raised, the switch port is not removed, which will make
  the WMI operations more expensive.

  If the VM's VNIC has been removed, disconnecting the switch port is no
  longer necessary and the port should simply be removed.

  Trace:
  http://paste.openstack.org/show/115297/

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-hyperv/+bug/1374108/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1378558] Re: Plugin panel not listed in configured panel group

2015-11-19 Thread Alan Pevec
** Changed in: horizon/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1378558

Title:
  Plugin panel not listed in configured panel group

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) juno series:
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  When adding panel Foo to the Admin dashboard's System panel group via
  the openstack_dashboard/local/enabled/ directory, with something like:

  PANEL = 'foo'
  PANEL_DASHBOARD = 'admin'
  PANEL_GROUP = 'admin'
  ADD_PANEL = 'openstack_dashboard.dashboards.admin.foo.panel.Foo'

  Foo appears under the panel group Other instead of System. This is the
  error in the Apache log:

  Could not process panel foo: 'tuple' object has no attribute 'append'
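
  The failure itself is easy to reproduce in isolation (illustrative
  snippet; the panel names are made up):

  ```
  panels = ('info', 'hypervisors', 'aggregates')  # group declared as a tuple

  # panels.append('foo')  # AttributeError: 'tuple' object has no attribute 'append'
  panels = list(panels)   # defensive fix: coerce to a mutable list first
  panels.append('foo')
  ```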

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1378558/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1392316] Re: Hypervisors returns TemplateSyntaxError instead of error message

2015-11-19 Thread Alan Pevec
** Changed in: horizon/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1392316

Title:
  Hypervisors returns TemplateSyntaxError instead of error message

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) juno series:
  Fix Released

Bug description:
  When trying to list hypervisors at /admin/hypervisors/
  I got a TemplateSyntaxError. It happens when novaclient (nova-api)
  cannot fulfil the request.

  The exception in Horizon:

  Error while rendering table rows.
  Traceback (most recent call last):
File "/opt/stack/horizon/horizon/tables/base.py", line 1751, in get_rows
  for datum in self.filtered_data:
  TypeError: 'NoneType' object is not iterable
  Internal Server Error: /admin/hypervisors/
  Traceback (most recent call last):
...
File "/opt/stack/horizon/horizon/tables/base.py", line 1751, in get_rows
  for datum in self.filtered_data:
  TemplateSyntaxError: 'NoneType' object is not iterable

  
  IMO it should be more robust and just return an error message. That would
  be more consistent with how other views handle unavailable services.
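
  What "more robust" could look like in the table view (a sketch using
  horizon's exceptions.handle; the surrounding view code is assumed):

  ```
  from django.utils.translation import ugettext_lazy as _

  from horizon import exceptions

  from openstack_dashboard import api


  def get_data(self):
      try:
          return api.nova.hypervisor_list(self.request)
      except Exception:
          exceptions.handle(self.request,
                            _('Unable to retrieve hypervisor information.'))
          return []
  ```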

  To reproduce the error it is enough that novaclient raise exception. Example 
for this 
  is my case was when zookeeper as servicegroup driver is used, but 
  nova-conductor hasn't yet prepared the required namespace (because of bug 
[1]) - which 
  ends that nova-api had internal error:

  nova.api.openstack ServiceGroupUnavailable: The service from servicegroup driver ZooKeeperDriver is temporarily unavailable.

  The overall result is that the whole hypervisor list page was
  inaccessible only because it was not possible to list nova services.

  [1] https://bugs.launchpad.net/nova/+bug/1389782
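  A hedged sketch of the more defensive behaviour suggested above (not the merged
  fix; the nova API wrapper and Horizon's exceptions.handle are used the way other
  views normally use them, and the function name here is illustrative):

  from horizon import exceptions
  from openstack_dashboard import api


  def get_hypervisors_data(request):
      try:
          return api.nova.hypervisor_list(request)
      except Exception:
          exceptions.handle(request,
                            'Unable to retrieve hypervisor information.')
          return []   # an empty list keeps the table renderable instead of None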

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1392316/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394051] Re: Can't display port list on a shared network in "Manage Floating IP Associations" page

2015-11-19 Thread Alan Pevec
** Changed in: horizon/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1394051

Title:
  Can't display port list on a shared network in "Manage Floating IP
  Associations" page

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) juno series:
  Fix Released

Bug description:
  
  I used the commands below to configure floating IP. Juno on CentOS 7.

  neutron net-create public --shared --router:external True \
    --provider:network_type vlan --provider:physical_network physnet2 \
    --provider:segmentation_id 125

  neutron subnet-create public --name public-subnet \
    --allocation-pool start=125.2.249.170,end=125.2.249.248 \
    --disable-dhcp --gateway 125.2.249.1 --dns-nameserver 125.1.166.20 \
    125.2.249.0/24

  neutron net-create --shared OAM120 \
    --provider:network_type vlan --provider:physical_network physnet2 \
    --provider:segmentation_id 120

  neutron subnet-create --name oam120-subnet \
    --allocation-pool start=192.168.120.1,end=192.168.120.200 \
    --gateway 192.168.120.254 --dns-nameserver 10.1.1.1 --dns-nameserver 125.1.166.20 \
    OAM120 192.168.120.0/24

  neutron router-create my-router

  neutron router-interface-add my-router oam120-subnet

  neutron router-gateway-set my-router public

  
  Just checked the dashboard code; it seems there are some errors in the code below.

  /usr/share/openstack-dashboard/openstack_dashboard/api/neutron.py
  def _get_reachable_subnets(self, ports):
      # Retrieve subnet list reachable from external network
      ext_net_ids = [ext_net.id for ext_net in self.list_pools()]
      gw_routers = [r.id for r in router_list(self.request)
                    if (r.external_gateway_info and
                        r.external_gateway_info.get('network_id')
                        in ext_net_ids)]
      reachable_subnets = set([p.fixed_ips[0]['subnet_id'] for p in ports
                               if ((p.device_owner ==
                                    'network:router_interface')
                                   and (p.device_id in gw_routers))])
      return reachable_subnets

  
  Why does it only list ports with device_owner = 'network:router_interface'?
  I guess it should also list all ports with device_owner = 'compute:xxx'.

  Here is my workaround (diff output, run from /usr/share/openstack-dashboard):

  [root@jn-controller openstack-dashboard]# diff ./openstack_dashboard/api/neutron.py.orig ./openstack_dashboard/api/neutron.py
  413,415c415
  <                              if ((p.device_owner ==
  <                                   'network:router_interface')
  <                                  and (p.device_id in gw_routers))])
  ---
  >                              if (p.device_owner.startswith('compute:'))])

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1394051/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1457900] Re: dhcp_agents_per_network > 1 cause conflicts (NACKs) from dnsmasqs (break networks)

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1457900

Title:
  dhcp_agents_per_network > 1 cause conflicts (NACKs) from dnsmasqs
  (break networks)

Status in neutron:
  Fix Released
Status in neutron icehouse series:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  If neutron is configured to have more than one DHCP agent per network
  (option dhcp_agents_per_network=2), each dnsmasq rejects leases handed out
  by the other dnsmasqs, creating a mess and preventing instances from booting
  normally.

  Symptoms:

  Cirros (at the log):
  Sending discover...
  Sending select for 188.42.216.146...
  Received DHCP NAK
  Usage: /sbin/cirros-dhcpc 
  Sending discover...
  Sending select for 188.42.216.146...
  Received DHCP NAK
  Usage: /sbin/cirros-dhcpc 
  Sending discover...
  Sending select for 188.42.216.146...
  Received DHCP NAK

  Steps to reproduce:
  1. Set up neutron with VLANs and the dhcp_agents_per_network=2 option in neutron.conf (see the fragment after this list)
  2. Set up two or more different nodes with neutron-dhcp-agent enabled
  3. Create a VLAN neutron network with the --enable-dhcp option
  4. Create an instance with that network
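  Illustrative neutron.conf fragment for step 1 (value taken from this report):

  # neutron.conf
  [DEFAULT]
  dhcp_agents_per_network = 2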

  Expected behaviour:

  The instance receives an IP address via DHCP without problems or delays.

  Actual behaviour:

  The instance is stuck in network boot for a long time.
  There are complaints about NACKs in the DHCP client's logs.
  tcpdump on the interfaces shows multiple NACKs.

  Additional analysis: It is very complex, so I attach example of two
  parallel tcpdumps from two dhcp namespaces in HTML format.

  
  Version: 2014.2.3

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1457900/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1459467] Re: port update multiple fixed IPs anticipating allocation fails with mac address error

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1459467

Title:
  port update multiple fixed IPs anticipating allocation fails with mac
  address error

Status in neutron:
  Fix Released
Status in neutron icehouse series:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  A port update with multiple fixed IP specifications, one giving a subnet ID
  and one giving a fixed IP that conflicts with the address picked for the
  subnet-ID entry, results in a DB duplicate entry which is presented to the
  user as a MAC address error.

  ~$ neutron port-update 7521786b-6c7f-4385-b5e1-fb9565552696 --fixed-ips type=dict {subnet_id=ca9dd2f0-cbaf-4997-9f59-dee9a39f6a7d,ip_address=42.42.42.42}
  Unable to complete operation for network 0897a051-bf56-43c1-9083-3ac38ffef84e. The mac address None is in use.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1459467/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1462973] Re: Network gateway flat connection fail because of None tenant_id

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1462973

Title:
  Network gateway flat connection fail because of None tenant_id

Status in neutron:
  Invalid
Status in neutron juno series:
  Fix Released
Status in vmware-nsx:
  Fix Committed

Bug description:
  The NSX-mh backend does not accept "None" values for tags. Tags are applied
  to all NSX-mh ports; in particular there is always a tag with the neutron
  tenant_id (q_tenant_id).

  _get_tenant_id_for_create, in admin context, now returns the tenant_id of the
  resource being created, if there is one; otherwise it still returns
  context.tenant_id. The default L2 gateway unfortunately does not have a
  tenant_id, but it does have the tenant_id attribute in its data structure.
  This means that _get_tenant_id_for_create will return None, and NSX-mh will
  reject the request.
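  A rough sketch (simplified and assumed, not the actual neutron code) of the
  helper's behaviour as described:

  def _get_tenant_id_for_create(self, context, resource):
      if context.is_admin and 'tenant_id' in resource:
          # The default L2 gateway carries the attribute but its value is
          # None, which the NSX-mh backend then rejects as a tag value.
          return resource['tenant_id']
      return context.tenant_id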

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1462973/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461777] Re: Random NUMA cell selection can leave NUMA cells unused

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1461777

Title:
  Random NUMA cell selection can leave NUMA cells unused

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  NUMA cell overcommit can leave NUMA cells unused

  When no NUMA configuration is defined for the guest (no flavor extra specs),
  nova identifies the NUMA topology of the host and tries to match the cpu 
  placement to a NUMA cell ("cpuset"). 

  The cpuset is selected randomly.
  pin_cpuset = random.choice(viable_cells_cpus) #nova/virt/libvirt/driver.py

  However, this can lead to NUMA cells not being used.
  This is particularly noticeable when the flavor has the same number of vcpus
  as a host NUMA cell and the host CPUs are not overcommitted
  (cpu_allocation_ratio = 1).

  ###
  Particular use case:

  Compute nodes with the NUMA topology:
  

  No CPU overcommit: cpu_allocation_ratio = 1
  Boot instances using a flavor with 8 vcpus. 
  (No NUMA topology defined for the guest in the flavor)

  In this particular case the host can hold 2 instances (no CPU overcommit).
  Both instances can be allocated (randomly) the same cpuset out of the 2 options:
  8
  8

  As a consequence, half of the host CPUs are not used.
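  A minimal illustration (not nova code) of how two independent random choices
  can land both guests on the same cell:

  import random

  host_cells = [list(range(0, 8)), list(range(8, 16))]   # two 8-CPU NUMA cells
  placements = [random.choice(host_cells) for _ in range(2)]
  # With probability 1/2 both guests get the same cell's CPUs,
  # leaving the other cell completely unused.
  print(placements)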

  
  ###
  How to reproduce:

  Using: nova 2014.2.2
  (not tested in trunk however the code path looks similar)

  1. set cpu_allocation_ratio = 1
  2. Identify the NUMA topology of the compute node
  3. Using a flavor with a number of vcpus that matches a NUMA cell in the compute node,
  boot instances until the compute node is full.
  4. Check the cpu placement "cpuset" used by each instance.

  Notes: 
  - at this point instances can use the same "cpuset" leaving NUMA cells unused.
  - the selection of the cpuset is random. Different tries may be needed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1461777/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460220] Re: ipset functional tests assume system capability

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460220

Title:
  ipset functional tests assume system capability

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  Production code uses ipset in the root namespace, but functional
  testing uses them in non-root namespaces. As it turns out, that
  functionality requires versions of the kernel and ipset not found in
  all versions of all distributions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1460220/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444397] Re: single allowed address pair rule can exhaust entire ipset space

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1444397

Title:
  single allowed address pair rule can exhaust entire ipset space

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  The hash type used by the ipsets is 'ip' which explodes a CIDR into
  every member address (i.e. 10.100.0.0/16 becomes 65k entries). The
  allowed address pairs extension allows CIDRs so a single allowed
  address pair set can exhaust the entire IPset and break the security
  group rules for a tenant.
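  Illustration of the difference (shell, not neutron code; the set names are
  made up):

  ipset create sg-ip hash:ip
  ipset add sg-ip 10.100.0.0/16    # expanded to one entry per address (~65k),
                                   # which can hit the set's maxelem limit
  ipset create sg-net hash:net
  ipset add sg-net 10.100.0.0/16   # stored as a single entry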

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1444397/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1440762] Re: Rebuild an instance with attached volume fails

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1440762

Title:
  Rebuild an instance with attached volume fails

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  When trying to rebuild an instance with attached volume, it fails with
  the errors:

  2015-02-04 08:41:27.477 22000 TRACE oslo.messaging.rpc.dispatcher 
libvirtError: Failed to terminate process 22913 with SIGKILL: Device or 
resource busy
  2015-02-04 08:41:27.477 22000 TRACE oslo.messaging.rpc.dispatcher
  <180>Feb 4 08:43:12 node-2 nova-compute Periodic task is updating the host 
stats, it is trying to get disk info for instance-0003, but the backing 
volume block device was removed by concurrent operations such as resize. Error: 
No volume Block Device Mapping at path: 
/dev/disk/by-path/ip-192.168.0.4:3260-iscsi-iqn.2010-10.org.openstack:volume-82ba5653-3e07-4f0f-b44d-a946f4dedde9-lun-1
  <182>Feb 4 08:43:13 node-2 nova-compute VM Stopped (Lifecycle Event)

  The full log of rebuild process is here:
  http://paste.openstack.org/show/166892/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1440762/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1442494] Re: test_add_list_remove_router_on_l3_agent race-y for dvr

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1442494

Title:
  test_add_list_remove_router_on_l3_agent race-y for dvr

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  Logstash:

  message:"in test_add_list_remove_router_on_l3_agent" AND build_name
  :"check-tempest-dsvm-neutron-dvr"

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiaW4gdGVzdF9hZGRfbGlzdF9yZW1vdmVfcm91dGVyX29uX2wzX2FnZW50XCIgQU5EIGJ1aWxkX25hbWU6XCJjaGVjay10ZW1wZXN0LWRzdm0tbmV1dHJvbi1kdnJcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQyODY0OTgxNDY3MSwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

  Change [1], enabled by [2], exposed an intermittent failure when
  determining whether an agent is eligible for binding or not.

  [1] https://review.openstack.org/#/c/154289/
  [2] https://review.openstack.org/#/c/165246/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1442494/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1406486] Re: Suspending an instance fails when using vnic_type=direct

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1406486

Title:
  Suspending an instance fails when using vnic_type=direct

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in python-glanceclient:
  New

Bug description:
  When launching an instance with a pre-created port with
  binding:vnic_type='direct', suspending the instance
  fails with the error 'NoneType' object has no attribute 'encode'.

  Nova compute log:
  http://paste.openstack.org/show/155141/

  Version
  ==
  openstack-nova-common-2014.2.1-3.el7ost.noarch
  openstack-nova-compute-2014.2.1-3.el7ost.noarch
  python-novaclient-2.20.0-1.el7ost.noarch
  python-nova-2014.2.1-3.el7ost.noarch

  How to Reproduce
  ===
  # neutron port-create tenant1-net1 --binding:vnic-type direct
  # nova boot --flavor m1.small --image rhel7 --nic port-id= vm1
  # nova suspend 
  # nova show 

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1406486/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399244] Re: rbd resize revert fails

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1399244

Title:
  rbd resize revert fails

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  In Ceph CI, the revert-resize server test is failing.  It appears that
  revert_resize() does not take shared storage into account and deletes
  the original volume, which causes the start of the original instance to
  fail.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1399244/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1378132] Re: Hard-reboots ignore root_device_name

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1378132

Title:
  Hard-reboots ignore root_device_name

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  Hard-rebooting an instance causes the root_device_name to get
  ignored/reset, which can cause wailing and gnashing of teeth if the
  guest operating system is expecting it to not do that.

  Steps to reproduce:

  1. Stand up a devstack
  2. Load the openrc with admin credentials
  3. glance image-update --property root_device_name=sda SOME_CIRROS_IMAGE
  4. Spawn a cirros instance using the above image. The root filesystem should 
present as being mounted on /dev/sda1, and the libvirt.xml should show the disk 
with a target of "scsi"
  5. Hard-reboot the instance

  Expected Behaviour

  The instance comes back up with the same hardware configuration as it
  had when initially spawned, i.e., with its root filesystem attached to
  a SCSI bus

  Actual Behaviour

  The instance comes back with its root filesystem attached to an IDE
  bus.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1378132/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367189] Re: multipath not working with Storwize backend if CHAP enabled

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367189

Title:
  multipath not working with Storwize backend if CHAP enabled

Status in Cinder:
  Fix Released
Status in Cinder juno series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in os-brick:
  Fix Released

Bug description:
  If I try to attach a volume to a VM while having multipath enabled in
  nova and CHAP enabled in the Storwize backend, it fails:

  2014-09-09 11:37:14.038 22944 ERROR nova.virt.block_device 
[req-f271874a-9720-4779-96a8-01575641a939 a315717e20174b10a39db36b722325d6 
76d25b1928e7407392a69735a894c7fc] [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] Driver failed to attach volume 
c460f8b7-0f1d-4657-bdf7-e142ad34a132 at /dev/vdb
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] Traceback (most recent call last):
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/block_device.py", line 239, in 
attach
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] device_type=self['device_type'], 
encryption=encryption)
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1235, in 
attach_volume
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] disk_info)
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1194, in 
volume_driver_method
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] return method(connection_info, *args, 
**kwargs)
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py", line 
249, in inner
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] return f(*args, **kwargs)
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/volume.py", line 280, in 
connect_volume
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] check_exit_code=[0, 255])[0] \
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/volume.py", line 579, in 
_run_iscsiadm_bare
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] check_exit_code=check_exit_code)
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/utils.py", line 165, in execute
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] return processutils.execute(*cmd, 
**kwargs)
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py", line 
193, in execute
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] cmd=' '.join(cmd))
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] ProcessExecutionError: Unexpected error 
while running command.
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] Command: sudo nova-rootwrap 
/etc/nova/rootwrap.conf iscsiadm -m discovery -t sendtargets -p 
192.168.1.252:3260
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] Exit code: 5
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] Stdout: ''
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] Stderr: 'iscsiadm: Connection to 
Discovery Address 192.168.1.252 closed\niscsiadm: Login I/O error, failed to 
receive a PDU\niscsiadm: retrying discovery login to 192.168.1.252\niscsiadm: 
Connection to Discovery Address 

[Yahoo-eng-team] [Bug 1374473] Re: 500 error on router-gateway-set for DVR on second external network

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1374473

Title:
  500 error on router-gateway-set for DVR on second external network

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released

Bug description:
  Under some circumstances this operation may fail.

  Steps to reproduce:

  1) Run Devstack with DVR *on* (devstack by default creates an external 
network and sets the gateway to the router)
  2) Create an external network
  3) Create a router
  4) Set the gateway to the router
  5) Observe the Internal Server Error

  Expected outcome: the gateway is correctly set.

  This occurs with the latest Juno code. The underlying error is an
  attempted double binding of the router to the L3 agent.

  More details in:

  http://paste.openstack.org/show/115614/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1374473/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362676] Re: Hyper-V agent doesn't create stateful security group rules

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1362676

Title:
  Hyper-V agent doesn't create stateful security group rules

Status in networking-hyperv:
  Fix Released
Status in neutron:
  New
Status in neutron juno series:
  Fix Released

Bug description:
  Hyper-V agent does not create stateful security group rules (ACLs),
  meaning it doesn't allow any response traffic to pass through.

  For example, the following security group rule:

  {"direction": "ingress", "remote_ip_prefix": null, "protocol": "tcp",
   "port_range_max": 22, "port_range_min": 22, "ethertype": "IPv4"}

  allows inbound TCP traffic on port 22, but since the Hyper-V agent does not
  add this rule as stateful, the reply traffic is never received unless an
  egress security group rule is specifically added as well.
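  For illustration, a broad egress rule of roughly this shape would be needed
  as a workaround (not taken from this report):

  {"direction": "egress", "remote_ip_prefix": null, "protocol": "tcp",
   "port_range_max": null, "port_range_min": null, "ethertype": "IPv4"}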

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-hyperv/+bug/1362676/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1293480] Re: Reboot host didn't restart instances due to libvirt lifecycle event change instance's power_stat as shutdown

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1293480

Title:
  Reboot host  didn't restart instances due to  libvirt lifecycle event
  change instance's power_stat as shutdown

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  1. The libvirt driver can receive libvirt lifecycle events (registered in
  https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L1004)
  and then handles them in
  https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L969.
  That means shutting down a domain will send out a shutdown lifecycle event,
  and nova-compute will try to sync the instance's power_state.

  2. When the compute service is restarted, it tries to restart the instances
  that were running before the reboot
  (https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L911).
  The compute service only checks the power_state in the database, and that
  value can be changed by the sequence in 3. The result is that, after rebooting
  the host, some instances which were running before the reboot are not restarted.

  3. When the host is rebooted, the code path is roughly: 1) libvirt-guests
  shuts down all the domains, 2) a lifecycle event is sent out, 3) nova-compute
  receives it, 4) saves power_state 'shutoff' in the DB, and 5) tries to stop
  the instance. The compute service may be killed at any step. In my test
  environment, with two running instances, only one instance was restarted
  successfully; the other had power_state set to 'shutoff' and task_state set
  to 'power off' in step 4), so it can't pass the check in
  https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L911
  and won't be restarted.

  
  Not sure if this is a bug; I wonder whether there is a solution for this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1293480/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332917] Re: Deadlock when deleting from ipavailabilityranges

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332917

Title:
  Deadlock when deleting from ipavailabilityranges

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released

Bug description:
  Traceback:
   TRACE neutron.api.v2.resource Traceback (most recent call last):
   TRACE neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/resource.py", line 87, in resource
   TRACE neutron.api.v2.resource result = method(request=request, **args)
   TRACE neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/base.py", line 477, in delete
   TRACE neutron.api.v2.resource obj_deleter(request.context, id, **kwargs)
   TRACE neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py", line 608, in 
delete_subnet
   TRACE neutron.api.v2.resource break
   TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 463, 
in __exit__
   TRACE neutron.api.v2.resource self.rollback()
   TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", line 
57, in __exit__
   TRACE neutron.api.v2.resource compat.reraise(exc_type, exc_value, exc_tb)
   TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 460, 
in __exit__
   TRACE neutron.api.v2.resource self.commit()
   TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 370, 
in commit
   TRACE neutron.api.v2.resource self._prepare_impl()
   TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 350, 
in _prepare_impl
   TRACE neutron.api.v2.resource self.session.flush()
   TRACE neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/openstack/common/db/sqlalchemy/session.py", 
line 444, in _wrap
   TRACE neutron.api.v2.resource _raise_if_deadlock_error(e, 
self.bind.dialect.name)
   TRACE neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/openstack/common/db/sqlalchemy/session.py", 
line 427, in _raise_if_deadlock_error
   TRACE neutron.api.v2.resource raise 
exception.DBDeadlock(operational_error)
   TRACE neutron.api.v2.resource DBDeadlock: (OperationalError) (1213, 
'Deadlock found when trying to get lock; try restarting transaction') 'DELETE 
FROM ipavailabilityranges WHERE ipavailabilityranges.allocation_pool_id = %s 
AND ipavailabilityranges.first_ip = %s AND ipavailabilityranges.last_ip = %s' 
('b19b08b6-90f2-43d6-bfe1-9cbe6e0e1d93', '10.100.0.2', '10.100.0.14')

  http://logs.openstack.org/21/76021/12/check/check-tempest-dsvm-neutron-full/7577c27/logs/screen-q-svc.txt.gz?level=TRACE#_2014-06-21_18_39_47_122

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1332917/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1327218] Re: Volume detach failure because of invalid bdm.connection_info

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1327218

Title:
  Volume detach failure because of invalid bdm.connection_info

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Trusty:
  Confirmed

Bug description:
  Example of this here:

  http://logs.openstack.org/33/97233/1/check/check-grenade-dsvm/f7b8a11/logs/old/screen-n-cpu.txt.gz?level=TRACE#_2014-06-02_14_13_51_125

     File "/opt/stack/old/nova/nova/compute/manager.py", line 4153, in 
_detach_volume
   connection_info = jsonutils.loads(bdm.connection_info)
     File "/opt/stack/old/nova/nova/openstack/common/jsonutils.py", line 164, 
in loads
   return json.loads(s)
     File "/usr/lib/python2.7/json/__init__.py", line 326, in loads
   return _default_decoder.decode(s)
     File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
   obj, end = self.raw_decode(s, idx=_w(s, 0).end())
   TypeError: expected string or buffer

  and this was in grenade with stable/icehouse nova commit 7431cb9

  There's nothing unusual about the test which triggers this - simply
  attaches a volume to an instance, waits for it to show up in the
  instance and then tries to detach it

  logstash query for this:

    message:"Exception during message handling" AND message:"expected
  string or buffer" AND message:"connection_info =
  jsonutils.loads(bdm.connection_info)" AND tags:"screen-n-cpu.txt"

  but it seems to be very rare

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1327218/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1313573] Re: nova backup fails to backup an instance with attached volume (libvirt, LVM backed)

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1313573

Title:
  nova backup fails to backup an instance with attached volume (libvirt,
  LVM backed)

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  Description of problem:
  An instance has an attached volume. After running the command:
  # nova backup   snapshot  
  an image is created (type backup) but its status is stuck in 'queued'.

  Version-Release number of selected component (if applicable):
  openstack-nova-compute-2013.2.3-6.el6ost.noarch
  openstack-nova-conductor-2013.2.3-6.el6ost.noarch
  openstack-nova-novncproxy-2013.2.3-6.el6ost.noarch
  openstack-nova-scheduler-2013.2.3-6.el6ost.noarch
  openstack-nova-api-2013.2.3-6.el6ost.noarch
  openstack-nova-cert-2013.2.3-6.el6ost.noarch

  python-glance-2013.2.3-2.el6ost.noarch
  python-glanceclient-0.12.0-2.el6ost.noarch
  openstack-glance-2013.2.3-2.el6ost.noarch

  How reproducible:
  100%

  Steps to Reproduce:
  1. launch an instance from a volume.
  2. backup the instance.

  
  Actual results:
  The backup is stuck in queued state.

  Expected results:
  the backup should be available as an image in Glance.

  Additional info:
  The nova-compute error & the glance logs are attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1313573/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1187102] Re: quantum-ns-metadata-proxy listens on external interfaces too

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1187102

Title:
  quantum-ns-metadata-proxy listens on external interfaces too

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in OpenStack Security Advisory:
  Invalid

Bug description:
  Running Grizzly 2013.1 on Ubuntu 12.04. Three nodes: controller,
  network and compute.

  netnode# ip netns exec qrouter-7a44de32-3ac0-4f3e-92cc-1a37d8211db8 netstat -anp
  Active Internet connections (servers and established)
  Proto Recv-Q Send-Q Local Address    Foreign Address  State    PID/Program name
  tcp        0      0 0.0.0.0:9697     0.0.0.0:*        LISTEN   18462/python

  So this router is uplinked to an external network:

  netnode# ip netns exec qrouter-7a44de32-3ac0-4f3e-92cc-1a37d8211db8 ip -4 a
  14: lo:  mtu 16436 qdisc noqueue state UNKNOWN
  inet 127.0.0.1/8 scope host lo
  23: qr-123f9b7f-43:  mtu 1500 qdisc noqueue state UNKNOWN
  inet 172.17.17.1/24 brd 172.17.17.255 scope global qr-123f9b7f-43
  24: qg-c8a6a6cd-6d:  mtu 1500 qdisc noqueue state UNKNOWN
  inet 192.168.101.2/24 brd 192.168.101.255 scope global qg-c8a6a6cd-6d

  Now from outside can do:

  $ nmap 192.168.101.2 -p 9697
  Starting Nmap 6.00 ( http://nmap.org ) at 2013-06-03 13:45 IST
  Nmap scan report for 192.168.101.2
  Host is up (0.0018s latency).
  PORT STATE SERVICE
  9697/tcp open  unknown

  As a test I tried changing namespace_proxy.py so it would not bind to
  0.0.0.0

  proxy.start(handler, self.port, host='127.0.0.1')

  but the metadata stopped working. In iptables this rule is being hit:

    -A quantum-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697

  I'm guessing the intention of that rule is also to change the destination
  address to 127.0.0.1? As there is this:

    -A quantum-l3-agent-INPUT -d 127.0.0.1/32 -p tcp -m tcp --dport 9697
  -j ACCEPT

  but the counters show that this rule is not being hit. Anyway the
  default policy for INPUT is ACCEPT.

  From the iptables man page:
    REDIRECT
     "... It redirects the packet to the machine itself by changing the 
destination IP to the primary address  of  the  incoming  interface
     (locally-generated packets are mapped to the 127.0.0.1 address).  ..."

  so the primary address of the incoming interface is 172.17.17.1, not
  127.0.0.1.

  So I manually deleted the "-j REDIRECT --to-ports 9697" and added "-j DNAT --to-destination 127.0.0.1:9697" but that didn't work - seems like it is not possible: http://serverfault.com/questions/351816/dnat-to-127-0-0-1-with-iptables-destination-access-control-for-transparent-soc

  So I tried changing the ns proxy to listen on 172.17.17.1. I think
  this is the one and only address it should bind to anyway.

  proxy.start(handler, self.port, host='172.17.17.1')  # hardwire as a test

  Stopped the l3-agent, killed the quantum-ns-metadata-proxy and
  restarted the l3-agent. But the ns proxy gave an error:

  Stderr: 'cat: /proc/10850/cmdline: No such file or directory\n'
  2013-06-03 15:05:18 ERROR [quantum.wsgi] Unable to listen on 172.17.17.1:9697
  Traceback (most recent call last):
    File "/usr/lib/python2.7/dist-packages/quantum/wsgi.py", line 72, in start
  backlog=backlog)
    File "/usr/lib/python2.7/dist-packages/eventlet/convenience.py", line 38, 
in listen
  sock.bind(addr)
    File "/usr/lib/python2.7/socket.py", line 224, in meth
  return getattr(self._sock,name)(*args)
  error: [Errno 99] Cannot assign requested address

  The l3-agent.log shows the agent deleted the port qr-123f9b7f-43 at
  15:05:10 and did not recreate it until 15:05:19 - i.e. a second too late
  for the ns proxy. From looking at the code it seems the l3-agent
  spawns the ns proxy just before it plugs its ports. I was able to
  start the ns proxy manually with the command line from the l3-agent
  log, and the metadata worked and was not reachable from outside.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1187102/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296414] Re: quotas not updated when periodic tasks or startup finish deletes

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1296414

Title:
  quotas not updated when periodic tasks or startup finish deletes

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  There are a couple of cases in the compute manager where we don't pass
  reservations to _delete_instance().  For example, one of them is
  cleaning up when we see a delete that is stuck in DELETING.

  The only place we ever update quotas as part of delete should be when
  the instance DB record is removed. If something is stuck in DELETING,
  it means that the quota was not updated.  We should make sure we're
  always updating the quota when the instance DB record is removed.

  Soft delete kinda throws a wrench in this, though, because I think you
  want soft deleted instances to not count against quotas -- yet their
  DB records will still exist. In this case, it seems we may have a race
  condition in _delete_instance() -> _complete_deletion() where if the
  instance somehow was SOFT_DELETED, quotas would have updated twice
  (once in soft_delete and once in _complete_deletion).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1296414/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1305897] Re: Hyper-V driver failing with dynamic memory due to virtual NUMA

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1305897

Title:
  Hyper-V driver failing with dynamic memory due to virtual NUMA

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  Starting with Windows Server 2012, Hyper-V provides the Virtual NUMA
  functionality. This option is enabled by default in the VMs depending
  on the underlying hardware.

  However, it's not compatible with dynamic memory. The Hyper-V driver
  is not aware of this constraint and it's not possible to boot new VMs
  if the nova.conf parameter 'dynamic_memory_ratio' > 1.

  The error in the logs looks like the following:
  2014-04-09 16:33:43.615 18600 TRACE nova.virt.hyperv.vmops HyperVException: 
WMI job failed with status 10. Error details: Failed to modify device 'Memory'.
  2014-04-09 16:33:43.615 18600 TRACE nova.virt.hyperv.vmops
  2014-04-09 16:33:43.615 18600 TRACE nova.virt.hyperv.vmops Dynamic memory and 
virtual NUMA cannot be enabled on the same virtual machine. - 
'instance-0001c90c' failed to modify device 'Memory'. (Virtual machine ID 
F4CB4E4D-CA06-4149-9FA3-CAD2E0C6CEDA)
  2014-04-09 16:33:43.615 18600 TRACE nova.virt.hyperv.vmops
  2014-04-09 16:33:43.615 18600 TRACE nova.virt.hyperv.vmops Dynamic memory and 
virtual NUMA cannot be enabled on the virtual machine 'instance-0001c90c' 
because the features are mutually exclusive. (Virtual machine ID 
F4CB4E4D-CA06-4149-9FA3-CAD2E0C6CEDA) - Error code: 32773

  In order to solve this problem, it's required to change the field
  'VirtualNumaEnabled' in 'Msvm_VirtualSystemSettingData' (option
  available only in v2 namespace) while creating the VM when dynamic
  memory is used.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1305897/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1288039] Re: live-migration cinder boot volume target_lun id incorrect

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1288039

Title:
  live-migration cinder boot volume target_lun id incorrect

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  When nova goes to cleanup _post_live_migration on the source host, the
  block_device_mapping has incorrect data.

  I can reproduce this 100% of the time with a cinder iSCSI backend,
  such as 3PAR.

  This is a Fresh install on 2 new servers with no attached storage from Cinder 
and no VMs.
  I create a cinder volume from an image. 
  I create a VM booted from that Cinder volume.  That vm shows up on host1 with 
a LUN id of 0.
  I live migrate that vm.   The vm moves to host 2 and has a LUN id of 0.   The 
LUN on host1 is now gone.

  I create another cinder volume from image.
  I create another VM booted from the 2nd cinder volume.  The vm shows up on 
host1 with a LUN id of 0.  
  I live migrate that vm.  The VM moves to host 2 and has a LUN id of 1.  
  _post_live_migrate is called on host1 to clean up, and gets failures, because 
it's asking cinder to delete the volume
  on host1 with a target_lun id of 1, which doesn't exist.  It's supposed to be 
asking cinder to detach LUN 0.

  First migrate
  HOST2
  2014-03-04 19:02:07.870 WARNING nova.compute.manager 
[req-24521cb1-8719-4bc5-b488-73a4980d7110 admin admin] pre_live_migrate: 
{'block_device_mapping': [{'guest_format': None, 'boot_index': 0, 
'mount_device': u'vda', 'connection_info': {u'd
  river_volume_type': u'iscsi', 'serial': 
u'83fb6f13-905e-45f8-a465-508cb343b721', u'data': {u'target_discovered': True, 
u'qos_specs': None, u'target_iqn': 
u'iqn.2000-05.com.3pardata:20810002ac00383d', u'target_portal': 
u'10.10.120.253:3260'
  , u'target_lun': 0, u'access_mode': u'rw'}}, 'disk_bus': u'virtio', 
'device_type': u'disk', 'delete_on_termination': False}]}
  HOST1
  2014-03-04 19:02:16.775 WARNING nova.compute.manager [-] 
_post_live_migration: block_device_info {'block_device_mapping': 
[{'guest_format': None, 'boot_index': 0, 'mount_device': u'vda', 
'connection_info': {u'driver_volume_type': u'iscsi',
   u'serial': u'83fb6f13-905e-45f8-a465-508cb343b721', u'data': 
{u'target_discovered': True, u'qos_specs': None, u'target_iqn': 
u'iqn.2000-05.com.3pardata:20810002ac00383d', u'target_portal': 
u'10.10.120.253:3260', u'target_lun': 0, u'access_mode': u'rw'}}, 'disk_bus': 
u'virtio', 'device_type': u'disk', 'delete_on_termination': False}]}



  Second Migration
  This is in _post_live_migration on the host1.  It calls libvirt's driver.py 
post_live_migration with the volume information returned from the new volume on 
host2, hence the target_lun = 1.   It should be calling libvirt's driver.py to 
clean up the original volume on the source host, which has a target_lun = 0.
  2014-03-04 19:24:51.626 WARNING nova.compute.manager [-] 
_post_live_migration: block_device_info {'block_device_mapping': 
[{'guest_format': None, 'boot_index': 0, 'mount_device': u'vda', 
'connection_info': {u'driver_volume_type': u'iscsi', u'serial': 
u'f0087595-804d-4bdb-9bad-0da2166313ea', u'data': {u'target_discovered': True, 
u'qos_specs': None, u'target_iqn': 
u'iqn.2000-05.com.3pardata:20810002ac00383d', u'target_portal': 
u'10.10.120.253:3260', u'target_lun': 1, u'access_mode': u'rw'}}, 'disk_bus': 
u'virtio', 'device_type': u'disk', 'delete_on_termination': False}]}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1288039/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1482371] Re: [OSSA 2015-019] Image status can be changed by passing header 'x-image-meta-status' with PUT operation using v1 (CVE-2015-5251)

2015-11-19 Thread Alan Pevec
** Changed in: glance/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1482371

Title:
  [OSSA 2015-019] Image status can be changed by passing header 'x
  -image-meta-status' with PUT operation using v1 (CVE-2015-5251)

Status in Glance:
  Fix Released
Status in Glance juno series:
  Fix Released
Status in OpenStack Security Advisory:
  Fix Released

Bug description:
  Using Glance v1, one is able to change the status of an image to any
  one of the valid statuses by passing the header 'x-image-meta-status'
  with PUT on /images/.  This bug provides a way for an image
  to transition states that are otherwise not possible in an image's
  lifecycle.

  See http://paste.openstack.org/show/pNL7kvIZUz7cWJQwX64d/ for a
  reproduction of this behavior on devstack.
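  A hedged sketch of the v1 request described above (GLANCE_HOST, IMAGE_ID and
  TOKEN are placeholders; the header name is the one from this report):

  # GLANCE_HOST, IMAGE_ID and TOKEN are placeholders
  curl -X PUT http://GLANCE_HOST:9292/v1/images/IMAGE_ID \
       -H "X-Auth-Token: $TOKEN" \
       -H "x-image-meta-status: queued"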

  As shown in the above paste, though one is able to change the status
  of an active image to queued, uploading data after re-setting the
  status to queued fails with a 400[1].  Though the purpose of [1]
  appears to be slightly different, it's fortunately saving us from
  badly breaking the immutability guarantees of glance images.

  [1]
  
https://github.com/openstack/glance/blob/master/glance/api/v1/images.py#L760-L765

  NOTE: Marking this as a security vulnerability for now as users would
  be able to activate the deactivated images on their own. This probably
  affects deployments only where v1 is exposed publicly. However, it's
  probably worth discussing this from a security perspective as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1482371/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1479309] Re: Wrong pre-delete checks for distributed routers

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1479309

Title:
  Wrong pre-delete checks for distributed routers

Status in neutron:
  New
Status in neutron juno series:
  Fix Released
Status in vmware-nsx:
  Fix Committed

Bug description:
  The pre-delete checks [1] do not take into account DVR interfaces.
  This means that they will fail to raise an error when deleting a
  router with DVR interfaces on it, thus causing the router to be
  removed from the backend and leaving the system in an inconsistent
  state (as the subsequent db operation will fail)

  
  [1] 
http://git.openstack.org/cgit/openstack/vmware-nsx/tree/vmware_nsx/neutron/plugins/vmware/plugins/base.py#n1573

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1479309/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1481692] Re: Neutron usage_audit's router and floating IP reporting doesn't work with ML2 plugin

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1481692

Title:
  Neutron usage_audit's router and floating IP reporting doesn't work
  with ML2 plugin

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released

Bug description:
  Neutron usage_audit's router and floating IP reporting doesn't work
  with ML2 plugin as router functionality has been moved to L3 plugin.

  The bug has been noted earlier
  http://lists.openstack.org/pipermail/openstack/2014-September/009371.html
  but I couldn't find a bug report from launchpad.

  The error in neutron-usage-audit.log looks like this
  2015-08-05 12:00:04.295 30126 CRITICAL neutron 
[req-74df5d30-7070-4152-86d3-cc4e2ef4fefa None] 'Ml2Plugin' object has no 
attribute 'get_routers'
  2015-08-05 12:00:04.295 30126 TRACE neutron Traceback (most recent call last):
  2015-08-05 12:00:04.295 30126 TRACE neutron   File 
"/usr/bin/neutron-usage-audit", line 10, in 
  2015-08-05 12:00:04.295 30126 TRACE neutron sys.exit(main())
  2015-08-05 12:00:04.295 30126 TRACE neutron   File 
"/usr/lib/python2.6/site-packages/neutron/cmd/usage_audit.py", line 55, in main
  2015-08-05 12:00:04.295 30126 TRACE neutron for router in 
plugin.get_routers(cxt):
  2015-08-05 12:00:04.295 30126 TRACE neutron AttributeError: 'Ml2Plugin' 
object has no attribute 'get_routers'
  2015-08-05 12:00:04.295 30126 TRACE neutron 

  I found the bug on icehouse but the relevant code is  same in HEAD. My
  plan is to submit a patch to fix the bug, the fix is quite trivial.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1481692/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489111] Re: [OSSA 2015-018] IP, MAC, and DHCP spoofing rules can be bypassed by changing device_owner (CVE-2015-5240)

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489111

Title:
  [OSSA 2015-018] IP, MAC, and DHCP spoofing rules can be bypassed by
  changing device_owner (CVE-2015-5240)

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in OpenStack Security Advisory:
  Fix Released

Bug description:
  This issue is being treated as a potential security risk under
  embargo. Please do not make any public mention of embargoed (private)
  security vulnerabilities before their coordinated publication by the
  OpenStack Vulnerability Management Team in the form of an official
  OpenStack Security Advisory. This includes discussion of the bug or
  associated fixes in public forums such as mailing lists, code review
  systems and bug trackers. Please also avoid private disclosure to
  other individuals not already approved for access to this information,
  and provide this same reminder to those who are made aware of the
  issue prior to publication. All discussion should remain confined to
  this private bug report, and any proposed fixes should be added to the
  bug as attachments.

  --

  The anti-IP spoofing rules, anti-MAC spoofing rules, and anti-DHCP
  spoofing rules can be bypassed by changing the device_owner field of a
  compute node's port to something that starts with 'network:'.

  Steps to reproduce:

  Create a port on the target network:

  neutron port-create some_network

  Start a repeated update of the device_owner field to immediately
  change it back after nova sets it to 'compute:' on VM
  attachment. (This has to be done quickly because the owner has to be
  set to 'network:something' before the L2 agent wires up the security
  group rules.)

  watch neutron port-update  --device-owner
  network:hello

  Then boot the VM with the port UUID:

  nova boot test --nic port-id= --flavor m1.tiny
  --image cirros-0.3.4-x86_64-uec

  This VM will now have no iptables rules applied because it will be
  treated as a network owned port (e.g. router interface, DHCP
  interface, etc).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1489111/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1491131] Re: Ipset race condition

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1491131

Title:
  Ipset race condition

Status in neutron:
  In Progress
Status in neutron juno series:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  Hello,

  We have been using ipsets in neutron since juno.  We upgraded our
  install to kilo a month or so ago and we have experienced 3 issues with
  ipsets.

  The issues are as follows:
  1.) Iptables attempts to apply rules for an ipset that was not added
  2.) Iptables attempts to apply rules for an ipset that was removed, but is still
referenced in the iptables config
  3.) ipset churns trying to remove an ipset that has already been removed.

  For issues one and two I am unable to get the logs for these issues
  because neutron was dumping the full iptables-restore entries to the log
  once every second for a few hours, which eventually filled up the disk,
  so we removed the file to get things working again.

  For issue 3.) I have the start of the logs here:
  2015-08-31 12:17:00.100 6840 INFO neutron.agent.common.ovs_lib 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] Port 
29355e52-bae1-44b2-ace6-5bc7ce497d32 not present in bridge br-int
  2015-08-31 12:17:00.101 6840 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] port_unbound(): net_uuid None not 
in local_vlan_map
  2015-08-31 12:17:00.101 6840 INFO neutron.agent.securitygroups_rpc 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] Remove device filter for 
[u'2aa0f79d-4983-4c7a-b489-e0612c482e36']
  2015-08-31 12:17:00.861 4581 INFO neutron.agent.common.ovs_lib 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] Port 
2aa0f79d-4983-4c7a-b489-e0612c482e36 not present in bridge br-int
  2015-08-31 12:17:00.862 4581 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] port_unbound(): net_uuid None not 
in local_vlan_map
  2015-08-31 12:17:00.862 4581 INFO neutron.agent.securitygroups_rpc 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] Remove device filter for 
[u'c616328a-e44c-4cf8-bc8e-83058c5635dd']
  2015-08-31 12:17:01.499 4581 INFO neutron.agent.securitygroups_rpc 
[req-b5b95389-52b6-4051-ab35-ae383df56a0b ] Security group member updated 
[u'b05f4fa6-f1ec-41c0-8ba6-80b859dc23b0']
  2015-08-31 12:17:01.500 6840 INFO neutron.agent.securitygroups_rpc 
[req-b5b95389-52b6-4051-ab35-ae383df56a0b ] Security group member updated 
[u'b05f4fa6-f1ec-41c0-8ba6-80b859dc23b0']
  2015-08-31 12:17:01.608 6840 INFO neutron.agent.common.ovs_lib 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] Port 
2aa0f79d-4983-4c7a-b489-e0612c482e36 not present in bridge br-int
  2015-08-31 12:17:01.609 6840 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] port_unbound(): net_uuid None not 
in local_vlan_map
  2015-08-31 12:17:01.609 6840 INFO neutron.agent.securitygroups_rpc 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] Remove device filter for 
[u'c616328a-e44c-4cf8-bc8e-83058c5635dd']
  2015-08-31 12:17:02.358 4581 INFO neutron.agent.common.ovs_lib 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] Port 
c616328a-e44c-4cf8-bc8e-83058c5635dd not present in bridge br-int
  2015-08-31 12:17:02.359 4581 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] port_unbound(): net_uuid None not 
in local_vlan_map
  2015-08-31 12:17:02.359 4581 INFO neutron.agent.securitygroups_rpc 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] Remove device filter for 
[u'fddff586-9903-47ad-92e1-b334e02e9d1c']
  2015-08-31 12:17:03.108 6840 INFO neutron.agent.common.ovs_lib 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] Port 
c616328a-e44c-4cf8-bc8e-83058c5635dd not present in bridge br-int
  2015-08-31 12:17:03.109 6840 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] port_unbound(): net_uuid None not 
in local_vlan_map
  2015-08-31 12:17:03.109 6840 INFO neutron.agent.securitygroups_rpc 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] Remove device filter for 
[u'fddff586-9903-47ad-92e1-b334e02e9d1c']
  2015-08-31 12:17:03.855 4581 INFO neutron.agent.common.ovs_lib 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] Port 
fddff586-9903-47ad-92e1-b334e02e9d1c not present in bridge br-int
  2015-08-31 12:17:03.855 4581 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] port_unbound(): net_uuid None not 
in local_vlan_map
  2015-08-31 12:17:03.856 4581 INFO neutron.agent.securitygroups_rpc 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] Remove device filter for 
[u'3f706749-f8bb-41ab-aa4c-a0925dc67bd4']
  2015-08-31 12:17:03.919 4581 INFO neutron.agent.securitygroups_rpc 
[req-1872b212-b537-41cc-96af-0c6ad380824c ] Security group 

[Yahoo-eng-team] [Bug 1478879] Re: Enable extra dhcp opt extension

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1478879

Title:
  Enable extra dhcp opt extension

Status in neutron:
  New
Status in neutron juno series:
  Fix Released
Status in vmware-nsx:
  New

Bug description:
  This extension can be supported without effort by the NSX-mh plugin
  and it should be added to supported extension aliases.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1478879/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1498163] Re: [OSSA 2015-020] Glance storage quota bypass when token is expired (CVE-2015-5286)

2015-11-19 Thread Alan Pevec
** Changed in: glance/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1498163

Title:
  [OSSA 2015-020] Glance storage quota bypass when token is expired
  (CVE-2015-5286)

Status in Glance:
  Fix Released
Status in Glance juno series:
  Fix Released
Status in OpenStack Security Advisory:
  Fix Released

Bug description:
  About a year ago there was a vulnerability called 'Glance user storage quota 
bypass': https://security.openstack.org/ossa/OSSA-2015-003.html, where any user 
could overcome the quota and clog up the storage.
  The fix was proposed on master and all the stable branches, but it turned 
out that it doesn't completely remove the issue and any user can still exceed 
the quota.

  It happens when the user token expires during file upload, at the point
  where glance tries to update the image status from 'saving' to 'active'.
  Glance then gets an Unauthenticated exception from the registry server
  and fails with a 500 error, while the garbage file is left in the
  storage backend.

  The steps to reproduce mostly coincide with those from the previous bug, 
but in general:
  1. Set some value (like 1Gb) for user_storage_quota in glance-api.conf and 
restart the server.
  2. Make sure that your token will expire soon enough that you can still create 
an image record in the DB and begin the upload, but the token expires during 
the upload.
  3. Create an image, begin the upload, and quickly remove the image with 
'glance image-delete'.
  4. After the upload, check that the image is not in the list, i.e. it's deleted, 
while the file is still located in the store.
  5. Perform steps 2-4 several times to confirm that the user quota is exceeded.

  The related script (test_images.py from
  https://bugs.launchpad.net/glance/+bug/1398830) works fine too, but
  it's better to reduce the token lifetime in the keystone config to 1 or 2
  minutes so you don't have to wait an hour.

  Glance api v2 is affected as well, but only if registry db_api is
  enabled.
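
  A hedged sketch of the cleanup this report points at (helper and client
  names are illustrative, not the released fix): if the status update fails
  after the upload, e.g. because the token expired, the staged data should
  be removed so it cannot bypass the user quota.

  def activate_or_cleanup(registry_client, store_api, image_id, location,
                          context):
      try:
          registry_client.update_image(image_id, {'status': 'active'})
      except Exception:
          # token expired / registry unreachable: drop the orphaned data
          store_api.delete_from_backend(location, context=context)
          raise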

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1498163/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501451] Re: Inconsistency in dhcp-agent when filling hosts and opts files

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501451

Title:
  Inconsistency in dhcp-agent when filling hosts and opts files

Status in neutron:
  Fix Committed
Status in neutron juno series:
  Fix Released

Bug description:
  We have a bunch of subnets created in the pre-Icehouse era that have
  ipv6_address_mode and ipv6_ra_mode unset.  For DHCPv6 functionality we
  rely on the enable_dhcp setting of a subnet.  However, in _iter_hosts a
  port is skipped only if ipv6_address_mode is set to SLAAC, but in
  _generate_opts_per_subnet the subnet is skipped when ipv6_address_mode is
  SLAAC or unset.

  Since we cannot update the ipv6_address_mode attribute on existing
  subnets (allow_put is False), this breaks DHCPv6 for these VMs.
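
  A minimal sketch (not the actual neutron code; attribute names follow the
  description above) of a reconciled check that both _iter_hosts and
  _generate_opts_per_subnet could share, so subnets with an unset
  ipv6_address_mode are handled the same way in the hosts and opts files:

  def _skip_subnet_for_dhcpv6(subnet):
      # Today: _iter_hosts skips only when the mode is explicitly SLAAC,
      # while _generate_opts_per_subnet also skips when it is unset.
      # One possible reconciliation: when the mode is unset, fall back to
      # the subnet's enable_dhcp flag, which pre-Icehouse subnets rely on.
      if subnet.ipv6_address_mode is None:
          return not subnet.enable_dhcp
      return subnet.ipv6_address_mode == 'slaac'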

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501451/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490581] Re: the items will never be deleted from metering_info

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1490581

Title:
  the items will never be deleted from metering_info

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  The function _purge_metering_info of the MeteringAgent class has a bug: the items 
of the metering_info dictionary will never be deleted:
  if info['last_update'] > ts + report_interval:
  del self.metering_info[label_id]
  In this situation last_update will always be less than the current timestamp.
  Also, this function is not covered by the unit tests.
  Also, the _purge_metering_info function uses the metering_info dict but it 
should use the metering_infos dict.
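
  A small sketch of the corrected purge logic implied by the report (written
  as a standalone helper for illustration; the real method lives on the
  MeteringAgent class):

  import time

  def purge_stale_entries(metering_infos, report_interval):
      # An entry is stale when the time elapsed since its last update
      # exceeds report_interval; the original comparison was inverted and
      # iterated the wrong dictionary (metering_info vs metering_infos).
      ts = int(time.time())
      for label_id, info in list(metering_infos.items()):
          if ts - info['last_update'] > report_interval:
              del metering_infos[label_id]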

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1490581/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1403068] Re: Tests fail with python 2.7.9

2015-11-19 Thread Alan Pevec
** Changed in: keystone/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1403068

Title:
  Tests fail with python 2.7.9

Status in Cinder:
  Fix Released
Status in Cinder icehouse series:
  Fix Released
Status in Cinder juno series:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Identity (keystone) icehouse series:
  Fix Released
Status in OpenStack Identity (keystone) juno series:
  Fix Released
Status in Manila:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron icehouse series:
  Fix Released
Status in neutron juno series:
  Fix Released

Bug description:
  Tests that require SSL fail on python 2.7.9 due to the change in how
  python uses SSL certificates.

  
  ==
  FAIL: cinder.tests.test_wsgi.TestWSGIServer.test_app_using_ipv6_and_ssl
  cinder.tests.test_wsgi.TestWSGIServer.test_app_using_ipv6_and_ssl
  --
  _StringException: Traceback (most recent call last):
File "/tmp/buildd/cinder-2014.2/cinder/tests/test_wsgi.py", line 237, in 
test_app_using_ipv6_and_ssl
  response = open_no_proxy('https://[::1]:%d/' % server.port)
File "/tmp/buildd/cinder-2014.2/cinder/tests/test_wsgi.py", line 47, in 
open_no_proxy
  return opener.open(*args, **kwargs)
File "/usr/lib/python2.7/urllib2.py", line 431, in open
  response = self._open(req, data)
File "/usr/lib/python2.7/urllib2.py", line 449, in _open
  '_open', req)
File "/usr/lib/python2.7/urllib2.py", line 409, in _call_chain
  result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 1240, in https_open
  context=self._context)
File "/usr/lib/python2.7/urllib2.py", line 1197, in do_open
  raise URLError(err)
  URLError: 
  Traceback (most recent call last):
  _StringException: Empty attachments:
stderr
stdout

  Traceback (most recent call last):
File "/tmp/buildd/cinder-2014.2/cinder/tests/test_wsgi.py", line 237, in 
test_app_using_ipv6_and_ssl
  response = open_no_proxy('https://[::1]:%d/' % server.port)
File "/tmp/buildd/cinder-2014.2/cinder/tests/test_wsgi.py", line 47, in 
open_no_proxy
  return opener.open(*args, **kwargs)
File "/usr/lib/python2.7/urllib2.py", line 431, in open
  response = self._open(req, data)
File "/usr/lib/python2.7/urllib2.py", line 449, in _open
  '_open', req)
File "/usr/lib/python2.7/urllib2.py", line 409, in _call_chain
  result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 1240, in https_open
  context=self._context)
File "/usr/lib/python2.7/urllib2.py", line 1197, in do_open
  raise URLError(err)
  URLError: 

  Traceback (most recent call last):
  _StringException: Empty attachments:
stderr
stdout

  Traceback (most recent call last):
File "/tmp/buildd/cinder-2014.2/cinder/tests/test_wsgi.py", line 237, in 
test_app_using_ipv6_and_ssl
  response = open_no_proxy('https://[::1]:%d/' % server.port)
File "/tmp/buildd/cinder-2014.2/cinder/tests/test_wsgi.py", line 47, in 
open_no_proxy
  return opener.open(*args, **kwargs)
File "/usr/lib/python2.7/urllib2.py", line 431, in open
  response = self._open(req, data)
File "/usr/lib/python2.7/urllib2.py", line 449, in _open
  '_open', req)
File "/usr/lib/python2.7/urllib2.py", line 409, in _call_chain
  result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 1240, in https_open
  context=self._context)
File "/usr/lib/python2.7/urllib2.py", line 1197, in do_open
  raise URLError(err)
  URLError: 

  
  ==
  FAIL: cinder.tests.test_wsgi.TestWSGIServer.test_app_using_ssl
  cinder.tests.test_wsgi.TestWSGIServer.test_app_using_ssl
  --
  _StringException: Traceback (most recent call last):
File "/tmp/buildd/cinder-2014.2/cinder/tests/test_wsgi.py", line 212, in 
test_app_using_ssl
  response = open_no_proxy('https://127.0.0.1:%d/' % server.port)
File "/tmp/buildd/cinder-2014.2/cinder/tests/test_wsgi.py", line 47, in 
open_no_proxy
  return opener.open(*args, **kwargs)
File "/usr/lib/python2.7/urllib2.py", line 431, in open
  response = self._open(req, data)
File "/usr/lib/python2.7/urllib2.py", line 449, in _open
  '_open', req)
File "/usr/lib/python2.7/urllib2.py", line 409, in _call_chain
  result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 1240, in https_open
  context=self._context)
File "/usr/lib/python2.7/urllib2.py", line 1197, in do_open
  raise URLError(err)
  URLError: 
  Traceback (most recent call last):
  _StringException: 

[Yahoo-eng-team] [Bug 1414532] Re: asserts used in cache.py

2015-11-19 Thread Alan Pevec
** Changed in: glance/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1414532

Title:
  asserts used in cache.py

Status in Glance:
  Fix Released
Status in Glance juno series:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix
Status in OpenStack Security Notes:
  Fix Released

Bug description:
  The asserts in the snippet below check at #2 whether the HTTP method
  matches the HTTP method actually specified in the patterns at #1.

  /opt/stack/glance/glance/api/middleware/cache.py

  PATTERNS = {   <--- #1
  ('v1', 'GET'): re.compile(r'^/v1/images/([^\/]+)$'),
  ('v1', 'DELETE'): re.compile(r'^/v1/images/([^\/]+)$'),
  ('v2', 'GET'): re.compile(r'^/v2/images/([^\/]+)/file$'),
  ('v2', 'DELETE'): re.compile(r'^/v2/images/([^\/]+)$')
  }

  ...

      @staticmethod
      def _match_request(request):
          """Determine the version of the url and extract the image id

          :returns tuple of version and image id if the url is a cacheable,
                   otherwise None
          """
          for ((version, method), pattern) in PATTERNS.items():
              match = pattern.match(request.path_info)
              try:
                  assert request.method == method  <--- #2
                  image_id = match.group(1)
                  # Ensure the image id we got looks like an image id to
                  # filter out a URI like /images/detail. See LP Bug #879136
                  assert image_id != 'detail'
              except (AttributeError, AssertionError):
                  continue
              else:
                  return (version, method, image_id)

  As stated in the Python documentation assert statements will not be evaluated
  when the Python code is compiled with optimization flags. This means that 
these
  checks will not be properly executed and one can in that case call a specific
  method with a completely different HTTP verb. This can result in security
  issues.

  For example think of having some filtering in place in front of the glance API
  to maybe allow only certain API queries to come from certain IP addresses. For
  example: 'the HTTP verb DELETE may only be executed from this IP range'.  An
  attacker can now specify a completely different HTTP verb such as GET and make
  sure he still matches regular expressions at #1 and then bypass the firewall.

  It's a bit of a hypothetical scenario, but in general one should never do
  error checking with assert statements. Asserts should only be used for things
  which can never realistically fail, and that is simply not an assumption one
  can hold when it comes to untrusted input from the network.

  For more information see
  https://docs.python.org/2/reference/simple_stmts.html#the-assert-statement and
  https://docs.python.org/2/using/cmdline.html#envvar-PYTHONOPTIMIZE

  
  This seems to be related to https://bugs.launchpad.net/cinder/+bug/1199354,
  but it's not fixed, and maybe it should even be a security issue, hence why I
  reported it again and tagged it as a security vulnerability. I am not familiar
  enough with the code base to make that call.
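
  A hedged sketch of the same lookup written with explicit checks instead of
  asserts, so the method and image-id validation still runs under python -O
  (this mirrors the snippet quoted above; it is not the merged patch):

  @staticmethod
  def _match_request(request):
      for (version, method), pattern in PATTERNS.items():
          if request.method != method:
              continue
          match = pattern.match(request.path_info)
          if match is None:
              continue
          image_id = match.group(1)
          # Filter out URIs like /images/detail (see LP Bug #879136).
          if image_id == 'detail':
              continue
          return (version, method, image_id)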

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1414532/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1433553] Re: DVR: remove interface fails on NSX-mh

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1433553

Title:
  DVR: remove interface fails on NSX-mh

Status in neutron juno series:
  Fix Released
Status in vmware-nsx:
  Fix Committed

Bug description:
  The DVR mixin, which the MH plugin is now using, assumes that routers
  are deployed on l3 agents, which is not the case for VMware plugins.

  While it is generally wrong that a backend agnostic management layer
  makes assumptions about the backend, the VMware plugins should work
  around this issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/juno/+bug/1433553/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1414218] Re: Remove extraneous trace in linux/dhcp.py

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1414218

Title:
  Remove extraneous trace in linux/dhcp.py

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released

Bug description:
  The debug tracepoint in Dnsmasq._output_hosts_file is extraneous and
  causes unnecessary performance overhead due to string formatting when
  creating lots (> 1000) of ports at one time.

  The trace point is unnecessary since the data is being written to disk
  and the file can be examined in a worst-case scenario. The added
  performance overhead is an order of magnitude in difference (~0.5
  seconds versus ~0.05 seconds at 1500 ports).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1414218/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398468] Re: Unable to terminate instance from Network Topology screen

2015-11-19 Thread Alan Pevec
** Changed in: horizon/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1398468

Title:
  Unable to terminate instance from Network Topology screen

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) juno series:
  Fix Released

Bug description:
  I get Server error in JS console and the following traceback in web
  server:

  Traceback (most recent call last):
File "/usr/lib/python2.7/wsgiref/handlers.py", line 85, in run
  self.result = application(self.environ, self.start_response)
File 
"/home/timur/develop/horizon/.venv/local/lib/python2.7/site-packages/django/contrib/staticfiles/handlers.py",
 line 67, in __call__
  return self.application(environ, start_response)
File 
"/home/timur/develop/horizon/.venv/local/lib/python2.7/site-packages/django/core/handlers/wsgi.py",
 line 206, in __call__
  response = self.get_response(request)
File 
"/home/timur/develop/horizon/.venv/local/lib/python2.7/site-packages/django/core/handlers/base.py",
 line 194, in get_response
  response = self.handle_uncaught_exception(request, resolver, 
sys.exc_info())
File 
"/home/timur/develop/horizon/.venv/local/lib/python2.7/site-packages/django/core/handlers/base.py",
 line 112, in get_response
  response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/timur/develop/horizon/horizon/decorators.py", line 36, in dec
  return view_func(request, *args, **kwargs)
File "/home/timur/develop/horizon/horizon/decorators.py", line 52, in dec
  return view_func(request, *args, **kwargs)
File "/home/timur/develop/horizon/horizon/decorators.py", line 36, in dec
  return view_func(request, *args, **kwargs)
File "/home/timur/develop/horizon/horizon/decorators.py", line 84, in dec
  return view_func(request, *args, **kwargs)
File 
"/home/timur/develop/horizon/.venv/local/lib/python2.7/site-packages/django/views/generic/base.py",
 line 69, in view
  return self.dispatch(request, *args, **kwargs)
File 
"/home/timur/develop/horizon/.venv/local/lib/python2.7/site-packages/django/views/generic/base.py",
 line 87, in dispatch
  return handler(request, *args, **kwargs)
File "/home/timur/develop/horizon/horizon/tables/views.py", line 157, in get
  handled = self.construct_tables()
File "/home/timur/develop/horizon/horizon/tables/views.py", line 148, in 
construct_tables
  handled = self.handle_table(table)
File "/home/timur/develop/horizon/horizon/tables/views.py", line 120, in 
handle_table
  data = self._get_data_dict()
File "/home/timur/develop/horizon/horizon/tables/views.py", line 185, in 
_get_data_dict
  self._data = {self.table_class._meta.name: self.get_data()}
File 
"/home/timur/develop/horizon/openstack_dashboard/dashboards/project/instances/views.py",
 line 60, in get_data
  search_opts = self.get_filters({'marker': marker, 'paginate': True})
File 
"/home/timur/develop/horizon/openstack_dashboard/dashboards/project/instances/views.py",
 line 124, in get_filters
  filter_field = self.table.get_filter_field()
File "/home/timur/develop/horizon/horizon/tables/base.py", line 1239, in 
get_filter_field
  param_name = '%s_field' % filter_action.get_param_name()
  AttributeError: 'NoneType' object has no attribute 'get_param_name'
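
  A purely hypothetical sketch of a defensive guard at the point where the
  traceback ends (attribute and method names are invented for illustration,
  not horizon's actual fix): tables reached without a configured filter
  action, such as the instances table rendered from the Network Topology
  page, should return no filter field instead of raising AttributeError.

  def get_filter_field(self):
      # '_filter_action' stands in for however the table stores its
      # filter action; the point is the None check before use
      filter_action = getattr(self, '_filter_action', None)
      if filter_action is None:
          return None
      return '%s_field' % filter_action.get_param_name()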

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1398468/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1405049] Re: Can't see the router in the network topology page, if neutron l3 agent HA is enabled.

2015-11-19 Thread Alan Pevec
** Changed in: horizon/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1405049

Title:
  Can't see the router in the network topology page, if neutron l3 agent
  HA is enabled.

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) juno series:
  Fix Released

Bug description:
  When I enabled neutron l3 agent HA by setting the properties in
  neutron.conf, I created a router from horizon, but I can't see the
  router on the "Network Topology" page.

  Everything else works fine, for example adding a gateway or adding an
  interface.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1405049/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475411] Re: During post_live_migration the nova libvirt driver assumes that the destination connection info is the same as the source, which is not always true

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1475411

Title:
  During post_live_migration the nova libvirt driver assumes that the
  destination connection info is the same as the source, which is not
  always true

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  The post_live_migration step for the Nova libvirt driver currently makes
  a bad assumption about the source and destination connector
  information. The destination connection info may be different from the
  source's, which ends up causing LUNs to be left dangling on the source
  as the BDM has overridden the connection info with that of the
  destination.

  Code section where this problem is occurring:

  
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L6036

  At line 6038 the potentially wrong connection info will be passed to
  _disconnect_volume which then ends up not finding the proper LUNs to
  remove (and potentially removes the LUNs for a different volume
  instead).

  By adding debug logging after line 6036 and then comparing that to the
  connection info of the source host (by making a call to Cinder's
  initialize_connection API) you can see that the connection info does
  not match:

  http://paste.openstack.org/show/TjBHyPhidRuLlrxuGktz/
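
  A hedged sketch of the idea only (names approximate the Juno/Kilo-era code
  paths, and _host_connector is an invented helper; this is not the merged
  fix): when cleaning up the source host, resolve connection info for the
  source connector instead of reusing the BDM entry, which at this point
  already holds the destination's connection info.

  connector = self._host_connector()        # hypothetical: source host's view
  for bdm in block_device_mapping:
      src_conn_info = self.volume_api.initialize_connection(
          context, bdm['volume_id'], connector)
      self._disconnect_volume(src_conn_info, bdm['device_name'])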

  Version of nova being used:

  commit 35375133398d862a61334783c1e7a90b95f34cdb
  Merge: 83623dd b2c5542
  Author: Jenkins 
  Date:   Thu Jul 16 02:01:05 2015 +

  Merge "Port crypto to Python 3"

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1475411/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474785] Re: NSX-mh: agentless modes are available only for 4.1

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1474785

Title:
  NSX-mh: agentless modes are available only for 4.1

Status in neutron:
  Invalid
Status in neutron juno series:
  Fix Released
Status in vmware-nsx:
  Fix Committed

Bug description:
  DHCP and Metadata agentless modes are unfortunately available only in
  NSX-mh 4.1.

  The version requirements for enabling the agentless mode should be
  amended.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1474785/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1473556] Re: Error log is generated when API operation is PolicyNotAuthorized and returns 404

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1473556

Title:
  Error log is generated when API operation is PolicyNotAuthorized and
  returns 404

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released

Bug description:
  The neutron.policy module can raise webob.exc.HTTPNotFound when
  PolicyNotAuthorized is raised. In this case, neutron.api.resource
  logs it at ERROR level. It should be INFO level, as it is triggered
  by user API requests.
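
  A small sketch of the requested logging behaviour (not the actual patch to
  neutron.api.v2.resource): user-triggered policy rejections that surface as
  HTTP 4xx faults are logged at INFO, keeping ERROR for genuine server-side
  failures.

  import webob.exc

  def log_api_fault(log, exc):
      if isinstance(exc, (webob.exc.HTTPNotFound, webob.exc.HTTPForbidden)):
          # user-facing rejection, e.g. policy said no
          log.info('Operation rejected: %s', exc)
      else:
          log.error('Unexpected API error: %s', exc)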

  One of the easiest ways to reproduce this bug is as follows:

  (1) Create a shared network as an admin user
  (2) Try to delete the shared network as a regular user

  (A regular user can learn the ID of the shared network, so the user can
  request its deletion.)

  As a result we get the following log, which is confusing from the
  point of view of log monitoring.

  2015-07-11 05:28:33.914 DEBUG neutron.policy 
[req-5aef6df6-1fb7-4187-9980-4e41fc648ad7 demo 
1e942c3c210b42ff8c45f42962da33b4] Enforcing rules: ['delete_network', 
'delete_network:provider:physical_network
  ', 'delete_network:shared', 'delete_network:provider:network_type', 
'delete_network:provider:segmentation_id'] from (pid=1439) log_rule_list 
/opt/stack/neutron/neutron/policy.py:319
  2015-07-11 05:28:33.914 DEBUG neutron.policy 
[req-5aef6df6-1fb7-4187-9980-4e41fc648ad7 demo 
1e942c3c210b42ff8c45f42962da33b4] Failed policy check for 'delete_network' from 
(pid=1439) enforce /opt/stack/n
  eutron/neutron/policy.py:393
  2015-07-11 05:28:33.914 ERROR neutron.api.v2.resource 
[req-5aef6df6-1fb7-4187-9980-4e41fc648ad7 demo 
1e942c3c210b42ff8c45f42962da33b4] delete failed
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 83, in resource
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 146, in wrapper
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource ectxt.value = 
e.inner_exc
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 119, in 
__exit__
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 136, in wrapper
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource return f(*args, 
**kwargs)
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 495, in delete
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource raise 
webob.exc.HTTPNotFound(msg)
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource HTTPNotFound: The 
resource could not be found.
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1473556/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470443] Re: ICMP rules not getting deleted on the hyperv network adapter extended acl set

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1470443

Title:
  ICMP rules not getting deleted on the hyperv network adapter extended
  acl set

Status in networking-hyperv:
  Fix Committed
Status in neutron:
  Invalid
Status in neutron juno series:
  Fix Released

Bug description:
  1. Create a security group with an icmp rule
  2. Spawn a VM with the above security-group rule
  3. ping works from the dhcp namespace
  4. Delete the rule from the security group, which triggers a port-update
  5. However, the rule is still present on the compute node for the VM even after 
the port-update

  Root cause: the icmp rule is created with the local port empty ('').
  However, during remove_security_rule the rule is matched for port "ANY", which 
does not match any rule, hence the rule is not deleted.
  Solution: introduce a check that matches an empty local port when deleting an 
icmp rule.
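
  A minimal sketch of the described workaround (function name is
  illustrative, not the networking-hyperv patch):

  def _local_port_for_removal(protocol, local_port):
      # ICMP ACLs are created with an empty local port, so a deletion that
      # matches 'ANY' never finds them; match '' for icmp instead.
      if protocol.lower() == 'icmp':
          return ''
      return local_port or 'ANY'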

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-hyperv/+bug/1470443/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463363] Re: NSX-mh: Decimal RXTX factor not honoured

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463363

Title:
  NSX-mh: Decimal RXTX factor not honoured

Status in neutron:
  In Progress
Status in neutron juno series:
  Fix Released
Status in neutron kilo series:
  New
Status in vmware-nsx:
  Fix Committed

Bug description:
  A decimal RXTX factor, which is allowed by nova flavors, is not
  honoured by the NSX-mh plugin, but simply truncated to an integer
  (a small illustration of the expected arithmetic follows the steps below).

  To reproduce:

  * Create a neutron queue
  * Create a neutron net / subnet using the queue
  * Create a new flavor which uses an RXTX factor other than an integer value
  * Boot a VM on the net above using the flavor
  * View the NSX queue for the VM's VIF -- notice it does not have the RXTX 
factor applied correctly (for instance if it's 1.2 it does not multiply it at 
all, if it's 3.4 it applies a RXTX factor of 3)
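
  The expected arithmetic, as a tiny illustration (not plugin code): the
  factor should scale the queue bandwidth as a float, with rounding applied
  only to the final value.

  def scaled_max_bandwidth(queue_kbps, rxtx_factor):
      # 1000 kbps with rxtx_factor=1.2 should become 1200 kbps;
      # int(1.2) == 1 would silently drop the fractional part.
      return int(round(queue_kbps * float(rxtx_factor)))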

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1463363/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1485883] Re: NSX-mh: bad retry behaviour on controller connection issues

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1485883

Title:
  NSX-mh: bad retry behaviour on controller connection issues

Status in neutron:
  Invalid
Status in neutron juno series:
  Fix Released
Status in vmware-nsx:
  Fix Committed

Bug description:
  If the connection to an NSX-mh controller fails - for instance because
  there is a network issue or the controller is unreachable - the
  neutron plugin keeps retrying the connection to the same controller
  until it times out, whereas the correct behaviour would be to try to
  connect to the other controllers in the cluster.

  The issue can be reproduced with the following steps:
  1. Three Controllers in the cluster 10.25.56.223,10.25.101.133,10.25.56.222
  2. Neutron net-create dummy-1 from openstack cli
  3. Vnc into controller-1, ifconfig eth0 down
  4. Do neutron net-create dummy-2 from openstack cli

  The API requests were originally forwarded to 10.25.56.223. The eth0
  interface was shut down on 10.25.56.223, but the requests continued to
  be forwarded to the same controller and timed out.
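
  A generic sketch of the expected retry behaviour (this is not the
  vmware-nsx API client, just an illustration): try each configured
  controller before giving up instead of retrying the unreachable one
  until timeout.

  def request_with_failover(controllers, send_fn):
      last_error = None
      for endpoint in controllers:   # fail over across the cluster
          try:
              return send_fn(endpoint)
          except (IOError, OSError) as exc:
              last_error = exc
      raise last_error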

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1485883/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1484738] Re: keyerror when refreshing instance security groups

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1484738

Title:
  keyerror when refreshing instance security groups

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  On a clean kilo install using source security groups I am seeing the
  following trace on boot and delete


  a2413f7] Deallocating network for instance _deallocate_network 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py:2098
  2015-08-14 09:46:06.688 11618 ERROR oslo_messaging.rpc.dispatcher 
[req-b8f44d34-96b2-4e40-ac22-15ccc6e44e59 - - - - -] Exception during message 
handling: 'metadata'
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 142, 
in _dispatch_and_reply
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 186, 
in _dispatch
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 130, 
in _do_dispatch
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher result 
= func(ctxt, **new_args)
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6772, in 
refresh_instance_security_rules
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher return 
self.manager.refresh_instance_security_rules(ctxt, instance)
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 434, in 
decorated_function
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher args = 
(_load_instance(args[0]),) + args[1:]
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 425, in 
_load_instance
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher 
expected_attrs=metas)
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/objects/instance.py", line 506, in 
_from_db_object
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher 
instance['metadata'] = utils.instance_meta(db_inst)
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/utils.py", line 817, in instance_meta
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher if 
isinstance(instance['metadata'], dict):
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher KeyError: 
'metadata'

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1484738/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1471050] Re: VLANs are not configured on VM migration

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1471050

Title:
  VLANs are not configured on VM migration

Status in networking-arista:
  New
Status in neutron:
  Invalid
Status in neutron juno series:
  Fix Released

Bug description:
  Whenever a VM migrates from one compute node to another, the VLAN is
  not provisioned on the new compute node. The correct behaviour should
  be to remove the VLAN from the old switch interface and provision the
  VLAN on the new switch interface.
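
  A sketch of that expected behaviour (switch_api is an illustrative
  stand-in for whatever client programs the switches, not the
  networking-arista code):

  def handle_port_host_change(switch_api, segmentation_id, old_host, new_host):
      # move the VLAN binding along with the migrated VM
      switch_api.unplug_vlan(old_host, segmentation_id)
      switch_api.plug_vlan(new_host, segmentation_id)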

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-arista/+bug/1471050/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1482699] Re: glance requests from nova fail if there are too many endpoints in the service catalog

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1482699

Title:
  glance requests from nova fail if there are too many endpoints in the
  service catalog

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  Nova sends the entire serialized service catalog in an HTTP header on
  requests to glance:

  https://github.com/openstack/nova/blob/icehouse-
  eol/nova/image/glance.py#L136

  If you have a lot of endpoints in your service catalog this can make
  glance fail with "400 Header Line TooLong".

  Per bknudson: "Any service using the auth_token middleware has no use
  for the x-service-catalog header. All that auth_token middleware uses
  is x-auth-token. The auth_token middleware will actually strip the x
  -service-catalog from the request before it sends the request on to
  the rest of the pipeline, so the application will never see it."

  If glance needs the service catalog it will get it from keystone when
  it auths the tokens, so nova shouldn't be sending this.
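
  A small sketch of the change implied here (not the exact nova patch):
  forward only the token to glance, since the auth_token middleware strips
  and never uses x-service-catalog anyway.

  def glance_request_headers(context):
      # a large catalog serialized into x-service-catalog is what triggers
      # the "400 Header Line TooLong" failure
      return {'x-auth-token': getattr(context, 'auth_token', None)}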

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1482699/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483920] Re: NSX-mh: honour distributed_router config flag

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1483920

Title:
  NSX-mh: honour distributed_router config flag

Status in neutron:
  Invalid
Status in neutron juno series:
  Fix Released
Status in vmware-nsx:
  Fix Committed

Bug description:
  The VMware NSX plugin is not honoring the "router_distributed = True"
  flag when set in /etc/neutron.conf.  If the router_distributed
  parameter is set to "True", all routers created by tenants should
  default to distributed routers.  For example, the CLI command below
  should create a distributed logical router, but instead it creates a
  non-distributed router.

  neutron router-create --tenant-id $TENANT tenant-router

  In order to create a distributed router the "--distributed True"
  option must be passed, as shown below.

  neutron router-create --tenant-id $TENANT csinfra-router-test
  --distributed True

  This happens because the NSX-mh plugin relies on the default value
  implemented in the backend rather than the one in the neutron
  configuration; it should be changed to ensure the plugin behaves like
  the reference implementation.
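
  A sketch of the requested behaviour (written as a standalone helper for
  illustration): when the API request does not specify 'distributed', fall
  back to neutron's router_distributed option instead of letting the NSX
  backend choose its own default.

  def resolve_distributed(router_data, conf):
      distributed = router_data.get('distributed')
      if distributed is None:
          return conf.router_distributed
      return distributed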

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1483920/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1423772] Re: During live-migration Nova expects identical IQN from attached volume(s)

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1423772

Title:
  During live-migration Nova expects identical IQN from attached
  volume(s)

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  When attempting to do a live-migration on an instance with one or more
  attached volumes, Nova expects that the IQN will be exactly the same
  when attaching the volume(s) to the new host. This conflicts with
  Cinder settings such as "hp3par_iscsi_ips", which allow multiple IPs
  for the purpose of load balancing.

  Example:
  An instance on Host A has a volume attached at 
"/dev/disk/by-path/ip-10.10.220.244:3260-iscsi-iqn.2000-05.com.3pardata:22210002ac002a13-lun-2"
  An attempt is made to migrate the instance to Host B.
  Cinder sends the request to attach the volume to the new host.
  Cinder gives the new host 
"/dev/disk/by-path/ip-10.10.120.244:3260-iscsi-iqn.2000-05.com.3pardata:22210002ac002a13-lun-2"
  Nova looks for the volume on the new host at the old location 
"/dev/disk/by-path/ip-10.10.220.244:3260-iscsi-iqn.2000-05.com.3pardata:22210002ac002a13-lun-2"

  The following error appears in n-cpu in this case:

  2015-02-19 17:09:05.574 ERROR nova.virt.libvirt.driver [-] [instance: 
b6fa616f-4e78-42b1-a747-9d081a4701df] Live Migration failure: Failed to open 
file 
'/dev/disk/by-path/ip-10.10.220.244:3260-iscsi-iqn.2000-05.com.3pardata:22210002ac002a13-lun-2':
 No such file or directory
  Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/poll.py", line 
115, in wait
  listener.cb(fileno)
File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 
212, in main
  result = function(*args, **kwargs)
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5426, in 
_live_migration
  recover_method(context, instance, dest, block_migration)
File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in 
__exit__
  six.reraise(self.type_, self.value, self.tb)
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5393, in 
_live_migration
  CONF.libvirt.live_migration_bandwidth)
File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 183, 
in doit
  result = proxy_call(self._autowrap, f, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 141, 
in proxy_call
  rv = execute(f, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 122, 
in execute
  six.reraise(c, e, tb)
File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 80, 
in tworker
  rv = meth(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/libvirt.py", line 1582, in 
migrateToURI2
  if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed', 
dom=self)
  libvirtError: Failed to open file 
'/dev/disk/by-path/ip-10.10.220.244:3260-iscsi-iqn.2000-05.com.3pardata:22210002ac002a13-lun-2':
 No such file or directory
  Removing descriptor: 3

  
  When looking at the nova DB, this is the state of block_device_mapping prior 
to the migration attempt:

  mysql> select * from block_device_mapping where 
instance_uuid='b6fa616f-4e78-42b1-a747-9d081a4701df' and deleted=0;
  
  [The block_device_mapping query output is truncated in this digest; the
  visible column headers were: created_at, updated_at, deleted_at, id,
  device_name, delete_on_termination, snapshot_id, volume_id, volume_size,
  no_device, connection_info, ...]

[Yahoo-eng-team] [Bug 1423427] Re: tempest baremetal client is creating node with wrong property keys

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1423427

Title:
  tempest baremetal client is creating node with wrong property keys

Status in OpenStack Compute (nova):
  Invalid
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in tempest:
  Fix Released

Bug description:
  A new test has been added to tempest to stress the os-baremetal-nodes
  API extension.  The test periodically fails in the gate with a traceback
  in the n-api log:

  [req-01dcd35b-55f4-4688-ba18-7fe0c6defd52 
BaremetalNodesAdminTestJSON-1864409967 BaremetalNodesAdminTestJSON-1481542636] 
Caught error: 'cpus'
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack Traceback (most recent 
call last):
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/opt/stack/new/nova/nova/api/openstack/__init__.py", line 125, in __call__
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack return 
req.get_response(self.application)
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/request.py", line 1320, in send
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/request.py", line 1284, in 
call_application
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py", line 
977, in __call__
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack return 
self._call_app(env, start_response)
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py", line 
902, in _call_app
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack return 
self._app(env, _fake_start_response)
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/routes/middleware.py", line 136, in 
__call__
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack response = 
self.app(environ, start_response)
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack return 
self.func(req, *args, **kwargs)
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/opt/stack/new/nova/nova/api/openstack/wsgi.py", line 749, in __call__
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack content_type, body, 
accept)
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/opt/stack/new/nova/nova/api/openstack/wsgi.py", line 814, in _process_stack
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/opt/stack/new/nova/nova/api/openstack/wsgi.py", line 904, in dispatch
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack return 
method(req=request, **action_args)
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/opt/stack/new/nova/nova/api/openstack/compute/contrib/baremetal_nodes.py", 
line 123, in index
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack 'cpus': 
inode.properties['cpus'],
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack KeyError: 'cpus'
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack

  This hits only intermittently, and only when another tempest baremetal
  test is running in parallel with the new test.  The other tests
  (tempest.api.baremetal.*) create some nodes in Ironic with node
  properties that are not the standard resource properties the
  nova->ironic proxy expects (from
  nova/api/openstack/compute/contrib/baremetal_nodes.py:201):

  for inode in ironic_nodes:
      node = {'id': inode.uuid,
              'interfaces': [],
              'host': 'IRONIC MANAGED',
              'task_state': inode.provision_state,
              'cpus': inode.properties['cpus'],
              'memory_mb': inode.properties['memory_mb'],
              ...
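
  A hedged sketch (not the patch that was merged; the actual fix landed in
  tempest by registering test nodes with the standard property keys) of
  how the proxy loop above could instead tolerate nodes that lack those
  keys, falling back to defaults rather than raising KeyError:

    nodes = []
    for inode in ironic_nodes:
        props = inode.properties or {}
        node = {'id': inode.uuid,
                'interfaces': [],
                'host': 'IRONIC MANAGED',
                'task_state': inode.provision_state,
                # .get() with a default avoids the KeyError seen in the
                # traceback when a node was created with non-standard
                # properties by a parallel baremetal test.
                'cpus': props.get('cpus', 0),
                'memory_mb': props.get('memory_mb', 0)}
        nodes.append(node)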
  

[Yahoo-eng-team] [Bug 1429093] Re: nova allows to boot images with virtual size > root_gb specified in flavor

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1429093

Title:
  nova allows to boot images with virtual size > root_gb specified in
  flavor

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  It's currently possible to boot an instance from a QCOW2 image whose
  virtual size is larger than the root_gb size specified in the given
  flavor.

  Steps to reproduce:

  1. Download a QCOW2 image (e.g. Cirros -
  https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-i386-disk.img)

  2. Resize the image to a reasonable size:

  qemu-img resize cirros-0.3.0-i386-disk.img +9G

  3. Upload the image to Glance:

  glance image-create --file cirros-0.3.0-i386-disk.img --name cirros-
  10GB --is-public True --progress --container-format bare --disk-format
  qcow2

  4. Boot the first VM using a 'correct' flavor (root_gb > virtual size
  of the Cirros image), e.g. m1.small (root_gb = 20)

  nova boot --image cirros-10GB --flavor m1.small demo-ok

  5. Wait until the VM boots.

  6. Boot the second VM using an 'incorrect' flavor (root_gb < virtual
  size of the Cirros image), e.g. m1.tiny (root_gb = 1):

  nova boot --image cirros-10GB --flavor m1.tiny demo-should-fail

  7. Wait until the VM boots.

  Expected result:

  demo-ok is in ACTIVE state
  demo-should-fail is in ERROR state (failed with FlavorDiskTooSmall)

  Actual result:

  demo-ok is in ACTIVE state
  demo-should-fail is in ACTIVE state
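
  A standalone sketch of the kind of check the fix introduces (the helper
  name and the qemu-img call are illustrative assumptions, not nova's
  actual code): read the image's virtual size and refuse the boot when it
  does not fit in the flavor's root_gb:

    # Compare the QCOW2 virtual size (not the on-disk file size) against
    # the flavor's root disk, in bytes.
    import json
    import subprocess

    def image_fits_flavor(image_path, root_gb):
        out = subprocess.check_output(
            ['qemu-img', 'info', '--output=json', image_path])
        virtual_size = json.loads(out.decode())['virtual-size']  # bytes
        # root_gb == 0 is treated here as "size taken from the image",
        # i.e. no check is applied for this sketch.
        return root_gb == 0 or virtual_size <= root_gb * 1024 ** 3

    if not image_fits_flavor('cirros-0.3.0-i386-disk.img', 1):
        print('expected: reject with FlavorDiskTooSmall instead of ACTIVE')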

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1429093/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

