[Yahoo-eng-team] [Bug 1368661] Re: Unit tests sometimes fail because of stale pyc files

2015-12-08 Thread janonymous
** Also affects: swift
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368661

Title:
  Unit tests sometimes fail because of stale pyc files

Status in congress:
  Fix Released
Status in Ironic:
  Fix Released
Status in Magnum:
  Fix Released
Status in Mistral:
  Fix Released
Status in Monasca:
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in oslo.service:
  In Progress
Status in python-cinderclient:
  Fix Committed
Status in python-congressclient:
  In Progress
Status in python-glanceclient:
  In Progress
Status in python-heatclient:
  Fix Committed
Status in python-keystoneclient:
  Fix Committed
Status in python-magnumclient:
  Fix Released
Status in python-mistralclient:
  New
Status in python-neutronclient:
  In Progress
Status in Python client library for Sahara:
  Fix Committed
Status in python-solumclient:
  In Progress
Status in python-swiftclient:
  In Progress
Status in python-troveclient:
  Fix Committed
Status in Python client library for Zaqar:
  Fix Committed
Status in Solum:
  In Progress
Status in OpenStack Object Storage (swift):
  New
Status in Trove:
  Fix Released
Status in zaqar:
  In Progress

Bug description:
  Because python creates pyc files during tox runs, certain changes in
  the tree, like deletes of files, or switching branches, can create
  spurious errors. This can be suppressed by PYTHONDONTWRITEBYTECODE=1
  in the tox.ini.
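
  As an aside, a minimal sketch of applying that suppression when driving
  the tests from Python (the test command here is illustrative, not the
  projects' actual tox invocation):

    import os
    import subprocess

    # ask the interpreter not to write .pyc files for this run
    env = dict(os.environ, PYTHONDONTWRITEBYTECODE='1')
    subprocess.check_call(['python', '-m', 'unittest', 'discover'], env=env)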

To manage notifications about this bug go to:
https://bugs.launchpad.net/congress/+bug/1368661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368661] Re: Unit tests sometimes fail because of stale pyc files

2015-12-08 Thread hardik
** No longer affects: ceilometer

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368661

Title:
  Unit tests sometimes fail because of stale pyc files

Status in congress:
  Fix Released
Status in Ironic:
  Fix Released
Status in Magnum:
  Fix Released
Status in Mistral:
  Fix Released
Status in Monasca:
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in oslo.service:
  In Progress
Status in python-cinderclient:
  Fix Committed
Status in python-congressclient:
  In Progress
Status in python-glanceclient:
  In Progress
Status in python-heatclient:
  Fix Committed
Status in python-keystoneclient:
  Fix Committed
Status in python-magnumclient:
  Fix Released
Status in python-mistralclient:
  New
Status in python-neutronclient:
  In Progress
Status in Python client library for Sahara:
  Fix Committed
Status in python-solumclient:
  In Progress
Status in python-swiftclient:
  In Progress
Status in python-troveclient:
  Fix Committed
Status in Python client library for Zaqar:
  Fix Committed
Status in Solum:
  In Progress
Status in Trove:
  Fix Released
Status in zaqar:
  In Progress

Bug description:
  Because python creates pyc files during tox runs, certain changes in
  the tree, like deletes of files, or switching branches, can create
  spurious errors. This can be suppressed by PYTHONDONTWRITEBYTECODE=1
  in the tox.ini.

To manage notifications about this bug go to:
https://bugs.launchpad.net/congress/+bug/1368661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368661] Re: Unit tests sometimes fail because of stale pyc files

2015-12-08 Thread sonu
** Also affects: solum
   Importance: Undecided
   Status: New

** Changed in: solum
   Status: New => In Progress

** Changed in: solum
 Assignee: (unassigned) => sonu (sonu-bhumca11)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368661

Title:
  Unit tests sometimes fail because of stale pyc files

Status in Ceilometer:
  New
Status in congress:
  Fix Released
Status in Ironic:
  Fix Released
Status in Magnum:
  Fix Released
Status in Mistral:
  Fix Released
Status in Monasca:
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in oslo.service:
  In Progress
Status in python-cinderclient:
  Fix Committed
Status in python-congressclient:
  In Progress
Status in python-glanceclient:
  In Progress
Status in python-heatclient:
  Fix Committed
Status in python-keystoneclient:
  Fix Committed
Status in python-magnumclient:
  Fix Released
Status in python-mistralclient:
  New
Status in python-neutronclient:
  In Progress
Status in Python client library for Sahara:
  Fix Committed
Status in python-solumclient:
  In Progress
Status in python-swiftclient:
  In Progress
Status in python-troveclient:
  Fix Committed
Status in Python client library for Zaqar:
  Fix Committed
Status in Solum:
  In Progress
Status in Trove:
  Fix Released
Status in zaqar:
  In Progress

Bug description:
  Because python creates pyc files during tox runs, certain changes in
  the tree, like deletes of files, or switching branches, can create
  spurious errors. This can be suppressed by PYTHONDONTWRITEBYTECODE=1
  in the tox.ini.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1368661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524205] [NEW] Setting "image_cache_subdirectory_name=_base$my_ip" causes instance creation to fail

2015-12-08 Thread Xiaoyu Wang
Public bug reported:

If I set "mage_cache_subdirectory_name=_base$my_ip" in nova.conf  for
using glance cache function.  Then I create a instance failed.

[DEFAULT]
verbose = True
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
libvirt_use_virtio_for_bridges=True
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
enabled_apis=ec2,osapi_compute,metadata
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.10.10.30
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver

image_cache_subdirectory_name = _base$my_ip
[oslo_messaging_rabbit]
rabbit_host = 10.10.10.10
rabbit_userid = openstack
rabbit_password = 123qwe

[keystone_authtoken]
auth_uri = http://10.10.10.10:5000
auth_url = http://10.10.10.10:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = 123qwe

[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 10.10.10.30
novncproxy_base_url = http://10.10.10.10:6080/vnc_auto.html

[glance]
host = 10.10.10.10

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[neutron]
url = http://10.10.10.10:9696
auth_url = http://10.10.10.10:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = neutron
password = 123qwe

[xenserver]
cache_images = all
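
For reference, oslo.config expands $option references inside string
option values, so the effective value should be the IP-suffixed
directory name. A minimal sketch (the option registrations below are
illustrative, not nova's actual definitions):

    from oslo_config import cfg

    conf = cfg.ConfigOpts()
    conf.register_opts([
        cfg.StrOpt('my_ip', default='10.10.10.30'),
        cfg.StrOpt('image_cache_subdirectory_name', default='_base$my_ip'),
    ])
    conf([])  # parse with no CLI arguments or config files
    print(conf.image_cache_subdirectory_name)  # -> _base10.10.10.30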


I found the following errors in /var/log/nova/nova-compute.log:

2015-10-27 10:23:16.719 39365 ERROR nova.compute.manager [instance: 
1e208e90-a6cd-415f-be9c-7f7af3cec69a] Traceback (most recent call last):
2015-10-27 10:23:16.719 39365 ERROR nova.compute.manager [instance: 
1e208e90-a6cd-415f-be9c-7f7af3cec69a]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2155, in 
_build_resources
2015-10-27 10:23:16.719 39365 ERROR nova.compute.manager [instance: 
1e208e90-a6cd-415f-be9c-7f7af3cec69a] yield resources
2015-10-27 10:23:16.719 39365 ERROR nova.compute.manager [instance: 
1e208e90-a6cd-415f-be9c-7f7af3cec69a]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2009, in 
_build_and_run_instance
2015-10-27 10:23:16.719 39365 ERROR nova.compute.manager [instance: 
1e208e90-a6cd-415f-be9c-7f7af3cec69a] block_device_info=block_device_info)
2015-10-27 10:23:16.719 39365 ERROR nova.compute.manager [instance: 
1e208e90-a6cd-415f-be9c-7f7af3cec69a]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2444, in 
spawn
2015-10-27 10:23:16.719 39365 ERROR nova.compute.manager [instance: 
1e208e90-a6cd-415f-be9c-7f7af3cec69a] block_device_info=block_device_info)
2015-10-27 10:23:16.719 39365 ERROR nova.compute.manager [instance: 
1e208e90-a6cd-415f-be9c-7f7af3cec69a]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 4519, in 
_create_domain_and_network
2015-10-27 10:23:16.719 39365 ERROR nova.compute.manager [instance: 
1e208e90-a6cd-415f-be9c-7f7af3cec69a] xml, pause=pause, power_on=power_on)
2015-10-27 10:23:16.719 39365 ERROR nova.compute.manager [instance: 
1e208e90-a6cd-415f-be9c-7f7af3cec69a]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 4449, in 
_create_domain
2015-10-27 10:23:16.719 39365 ERROR nova.compute.manager [instance: 
1e208e90-a6cd-415f-be9c-7f7af3cec69a] guest.launch(pause=pause)
2015-10-27 10:23:16.719 39365 ERROR nova.compute.manager [instance: 
1e208e90-a6cd-415f-be9c-7f7af3cec69a]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/guest.py", line 141, in 
launch
2015-10-27 10:23:16.719 39365 ERROR nova.compute.manager [instance: 
1e208e90-a6cd-415f-be9c-7f7af3cec69a] self._encoded_xml, errors='ignore')
2015-10-27 10:23:16.719 39365 ERROR nova.compute.manager [instance: 
1e208e90-a6cd-415f-be9c-7f7af3cec69a]   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in __exit__
2015-10-27 10:23:16.719 39365 ERROR nova.compute.manager [instance: 
1e208e90-a6cd-415f-be9c-7f7af3cec69a] six.reraise(self.type_, self.value, 
self.tb)
2015-10-27 10:23:16.719 39365 ERROR nova.compute.manager [instance: 
1e208e90-a6cd-415f-be9c-7f7af3cec69a]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/guest.py", line 136, in 
launch
2015-10-27 10:23:16.719 39365 ERROR nova.compute.manager [instance: 
1e208e90-a6cd-415f-be9c-7f7af3cec69a] return 
self._domain.createWithFlags(flags)
2015-10-27 10:23:16.719 39365 ERROR nova.compute.manager [instance: 
1e208e90-a6cd-415f-be9c-7f7af3cec69a]   File 
"/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 183, in doit
2015-10-27 10:23:16.719 39365 ERROR nova.compute.manager [instance: 
1e208e90-a6cd-415f-be9c-7f7af3cec69a] result = proxy_ca

[Yahoo-eng-team] [Bug 1520358] Re: Remove iso8601 dependency

2015-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/246565
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=d0ad3e6d7dca22fcab82baa0ebfb7ec467bc4458
Submitter: Jenkins
Branch:master

commit d0ad3e6d7dca22fcab82baa0ebfb7ec467bc4458
Author: Bertrand Lallau 
Date:   Tue Nov 17 20:30:12 2015 +0100

Remove iso8601 dependency

Glance does not import and use this module directly, no need to list
it in the requirements and no need to set debug level.

Closes-Bug: #1520358
Change-Id: Idc7581903b8b6962c3fb6c51dace3142c707d420


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1520358

Title:
  Remove iso8601 dependency

Status in Glance:
  Fix Released

Bug description:
  Glance does not import and use this module directly, no need to list
  it in the requirements and no need to set debug level.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1520358/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368661] Re: Unit tests sometimes fail because of stale pyc files

2015-12-08 Thread sonu
** Changed in: oslo.service
   Status: New => In Progress

** Also affects: python-solumclient
   Importance: Undecided
   Status: New

** Changed in: python-solumclient
   Status: New => In Progress

** Changed in: python-solumclient
 Assignee: (unassigned) => sonu (sonu-bhumca11)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368661

Title:
  Unit tests sometimes fail because of stale pyc files

Status in Ceilometer:
  New
Status in congress:
  Fix Released
Status in Ironic:
  Fix Released
Status in Magnum:
  Fix Released
Status in Mistral:
  Fix Released
Status in Monasca:
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in oslo.service:
  In Progress
Status in python-cinderclient:
  Fix Committed
Status in python-congressclient:
  In Progress
Status in python-glanceclient:
  In Progress
Status in python-heatclient:
  Fix Committed
Status in python-keystoneclient:
  Fix Committed
Status in python-magnumclient:
  Fix Released
Status in python-mistralclient:
  New
Status in python-neutronclient:
  In Progress
Status in Python client library for Sahara:
  Fix Committed
Status in python-solumclient:
  In Progress
Status in python-swiftclient:
  In Progress
Status in python-troveclient:
  Fix Committed
Status in Python client library for Zaqar:
  Fix Committed
Status in Trove:
  Fix Released
Status in zaqar:
  In Progress

Bug description:
  Because python creates pyc files during tox runs, certain changes in
  the tree, like deletes of files, or switching branches, can create
  spurious errors. This can be suppressed by PYTHONDONTWRITEBYTECODE=1
  in the tox.ini.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1368661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496135] Re: libvirt live-migration will not honor destination vcpu_pin_set config

2015-12-08 Thread Nan Zhang
In this case:
compute01
vcpu_pin_set=4-31
compute02
vcpu_pin_set=8-31

Boot instance01 on compute01, then grep the virsh XML:
virsh dumpxml instance-006b | grep cpu
  <vcpu cpuset='4-31'>2</vcpu>

Live-migrate instance01 to compute02, then grep the virsh XML:
virsh dumpxml instance-006b | grep cpu
  <vcpu cpuset='4-31'>2</vcpu>

After live migration, the instance's cpuset is not changed to match
compute02's vcpu_pin_set config; I think it is a bug.


** Changed in: nova
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1496135

Title:
  libvirt live-migration will not honor destination vcpu_pin_set config

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  Reporting this based on code inspection of the current master (commit:
  9f61d1eb642785734f19b5b23365f80f033c3d9a)

  When we attempt to live-migrate an instance onto a host that has a
  different vcpu_pin_set than the one that was on the source host, we
  may either break the policy set by the destination host or fail (as we
  will not recalculate the vcpu cpuset attribute to match that of the
  destination host, so we may end up with an invalid range).

  The first solution that jumps out is to make sure the XML is updated
  in
  
https://github.com/openstack/nova/blob/6d68462c4f20a0b93a04828cb829e86b7680d8a4/nova/virt/libvirt/driver.py#L5422

  However that would mean passing over the requested info from the
  destination host.
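
  As an illustration only (this is not nova's code), a sketch of
  recomputing the <vcpu cpuset=...> attribute against the destination
  host's vcpu_pin_set before the XML is handed to the target; the helper
  name and the flat pin-set string are assumptions:

    import xml.etree.ElementTree as ET

    def apply_dest_cpuset(domain_xml, dest_pin_set):
        # dest_pin_set is the destination's vcpu_pin_set string, e.g. '8-31'
        root = ET.fromstring(domain_xml)
        vcpu = root.find('vcpu')
        if vcpu is not None:
            vcpu.set('cpuset', dest_pin_set)
        return ET.tostring(root, encoding='unicode')

    print(apply_dest_cpuset("<domain><vcpu cpuset='4-31'>2</vcpu></domain>", '8-31'))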

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1496135/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524187] [NEW] Volume is not automatically detached after destroy instance

2015-12-08 Thread yangbo
Public bug reported:

stack@ubuntu:~$ nova show 2313788b-f5cf-4183-be65-9ac93015c6dc
ERROR (CommandError): No server with a name or ID of 
'2313788b-f5cf-4183-be65-9ac93015c6dc' exists.
stack@ubuntu:~$ cinder list |grep 2313788b-f5cf-4183-be65-9ac93015c6dc
| 7ef3590d-2ecb-4e32-8623-1e26ead8c889 | in-use | - | a | 1 | - | true | False | 2313788b-f5cf-4183-be65-9ac93015c6dc |


The volume is still in the "in-use" state after the instance was destroyed.

screen-n-cpu.log:2015-12-08 05:29:23.448 DEBUG nova.compute.manager 
[req-c574b833-e506-414d-ae25-ff5d56c62b1b admin admin] [instance: 
2313788b-f5cf-4183-be65-9ac93015c6dc] Start destroying the instance on the 
hypervisor. _shutdown_instance /opt/stack/nova/nova/compute/manager.py:2281
screen-n-cpu.log:2015-12-08 05:29:23.460 4324 INFO nova.virt.libvirt.driver [-] 
[instance: 2313788b-f5cf-4183-be65-9ac93015c6dc] During wait destroy, instance 
disappeared.
screen-n-cpu.log:2015-12-08 05:29:23.461 DEBUG nova.virt.libvirt.vif 
[req-c574b833-e506-414d-ae25-ff5d56c62b1b admin admin] vif_type=ovs 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone=None,cell_name=None,cleaned=False,config_drive='',created_at=2015-12-08T04:33:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,disable_terminate=False,display_description='instance_1_144956711519',display_name='instance_1_144956711519',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(2),host='ubuntu',hostname='instance-1-144956711519',id=8,image_ref='9121970b-4af5-4538-a659-0d16711efafb',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,launch_index=0,launched_at=None,launched_on='ubuntu',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=,new_flavor=None,node='ubuntu',numa_topology=,old_flavor=None,os_type=N
 
one,pci_devices=PciDeviceList,pci_requests=,power_state=0,progress=0,project_id='10ad966fa76c4536ab0d85d41a260631',ramdisk_id='',reservation_id='r-wagg3ytl',root_device_name='/dev/vda1',root_gb=1,security_groups=SecurityGroupList,shutdown_terminate=False,system_metadata={image_base_image_ref='9121970b-4af5-4538-a659-0d16711efafb',image_container_format='bare',image_disk_format='qcow2',image_min_disk='1',image_min_ram='0',network_allocated='True'},tags=,task_state='deleting',terminated_at=None,updated_at=2015-12-08T05:21:07Z,user_data=None,user_id='4c0600df133d44feae965ec6ae44a6e6',uuid=2313788b-f5cf-4183-be65-9ac93015c6dc,vcpu_model=,vcpus=1,vm_mode=None,vm_state='error')
 vif=VIF({'profile': {}, 'ovs_interfaceid': 
u'aca27420-ef84-45a6-9d90-0d37d0069338', 'preserve_on_delete': False, 
'network': Network({'bridge': u'br-int', 'subnets': [Subnet({'ips': 
[FixedIP({'meta': {}, 'version': 6, 'type': u'fixed', 'floating_ips': [], 
'address': u'2001:db8::a'})], 'version': 6, 'meta': 
 {}, 'dns': [], 'routes': [], 'cidr': u'2001:db8::/64', 'gateway': IP({'meta': 
{}, 'version': 6, 'type': u'gateway', 'address': u'2001:db8::2'})}), 
Subnet({'ips': [FixedIP({'meta': {}, 'version': 4, 'type': u'fixed', 
'floating_ips': [], 'address': u'172.24.4.9'})], 'version': 4, 'meta': {}, 
'dns': [], 'routes': [], 'cidr': u'172.24.4.0/24', 'gateway': IP({'meta': {}, 
'version': 4, 'type': u'gateway', 'address': u'172.24.4.1'})})], 'meta': 
{u'injected': False, u'tenant_id': u'10ad966fa76c4536ab0d85d41a260631'}, 'id': 
u'3677c113-d47b-44ff-ab10-d3a59357413d', 'label': u'public'}), 'devname': 
u'tapaca27420-ef', 'vnic_type': u'normal', 'qbh_params': None, 'meta': {}, 
'details': {u'port_filter': True, u'ovs_hybrid_plug': True}, 'address': 
u'fa:16:3e:a2:64:6b', 'active': False, 'type': u'ovs', 'id': 
u'aca27420-ef84-45a6-9d90-0d37d0069338', 'qbg_params': None}) unplug 
/opt/stack/nova/nova/virt/libvirt/vif.py:877
screen-n-cpu.log:2015-12-08 05:29:37.648 DEBUG nova.compute.manager 
[req-09c067d3-2e8c-492b-bf7e-09a9db593a25 None None] Triggering sync for uuid 
2313788b-f5cf-4183-be65-9ac93015c6dc _sync_power_states 
/opt/stack/nova/nova/compute/manager.py:5997
screen-n-cpu.log:2015-12-08 05:29:51.744 DEBUG oslo_concurrency.processutils 
[req-c574b833-e506-414d-ae25-ff5d56c62b1b admin admin] Running cmd 
(subprocess): mv 
/opt/stack/data/nova/instances/2313788b-f5cf-4183-be65-9ac93015c6dc 
/opt/stack/data/nova/instances/2313788b-f5cf-4183-be65-9ac93015c6dc_del execute 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:267
screen-n-cpu.log:2015-12-08 05:29:51.766 DEBUG oslo_concurrency.processutils 
[req-c574b833-e506-414d-ae25-ff5d56c62b1b admin admin] CMD "mv 
/opt/stack/data/nova/instances/2313788b-f5cf-4183-be65-9ac93015c6dc 
/opt/stack/data/nova/instances/2313788b-f5cf-4183-be65-9ac93015c6dc_del" 
returned: 0 in 0.022s execute 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:297
screen-n-cpu.log:2015-12-08 05:29:51.768 INFO nova.virt.libvirt.driver 
[req-c574b833-

[Yahoo-eng-team] [Bug 1489059] Re: "db type could not be determined" running py34

2015-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/252857
Committed: 
https://git.openstack.org/cgit/openstack/python-glanceclient/commit/?id=22ef7be20d84d81be08796629a0c2067f5b247a8
Submitter: Jenkins
Branch:master

commit 22ef7be20d84d81be08796629a0c2067f5b247a8
Author: kairat_kushaev 
Date:   Thu Dec 3 12:54:42 2015 +0300

Run py34 env first when launching tests

To resolve "db type could not be determined" when running tox
for the first time py34 environment need to laucnhed before py27.

Change-Id: Id46e9e9400974fcb6eebb4516bf4b3e69a6570e2
Closes-Bug: #1489059


** Changed in: python-glanceclient
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489059

Title:
  "db type could not be determined" running py34

Status in Aodh:
  Fix Released
Status in Barbican:
  In Progress
Status in cloudkitty:
  Fix Committed
Status in Glance:
  Fix Committed
Status in glance_store:
  Fix Committed
Status in heat:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in Ironic:
  Fix Released
Status in ironic-lib:
  In Progress
Status in OpenStack Identity (keystone):
  Fix Committed
Status in keystoneauth:
  Fix Released
Status in keystonemiddleware:
  Fix Committed
Status in Manila:
  Fix Released
Status in networking-midonet:
  In Progress
Status in networking-ofagent:
  New
Status in neutron:
  Fix Released
Status in python-glanceclient:
  Fix Released
Status in python-keystoneclient:
  Fix Committed
Status in Sahara:
  Fix Released
Status in senlin:
  Fix Committed
Status in tap-as-a-service:
  New
Status in tempest:
  In Progress
Status in zaqar:
  In Progress

Bug description:
  When running tox for the first time, the py34 execution fails with an
  error saying "db type could not be determined".

  This issue is known to be caused when the py27 run precedes py34, and
  can be solved by erasing .testrepository and running "tox -e py34"
  first.
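
  A minimal sketch of that workaround driven from Python, assuming it is
  run from the repository root:

    import shutil
    import subprocess

    shutil.rmtree('.testrepository', ignore_errors=True)  # drop the repo built under py27
    subprocess.check_call(['tox', '-e', 'py34'])          # let py34 create the test DB first
    subprocess.check_call(['tox', '-e', 'py27'])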

To manage notifications about this bug go to:
https://bugs.launchpad.net/aodh/+bug/1489059/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338938] Re: dhcp scheduler should stop redundant agent

2015-12-08 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1338938

Title:
  dhcp scheduler should stop redundant agent

Status in neutron:
  Expired

Bug description:
  We schedule a network onto DHCP agents based on the number of active
  hosts and cfg.CONF.dhcp_agents_per_network. Suppose we start the DHCP
  agents correctly, and then some of them go down (the host goes down or
  the dhcp-agent is killed); during this period the network is
  rescheduled and recovered onto the healthy DHCP agents. But when the
  downed DHCP agents restart, some of the agents become redundant.

  if len(dhcp_agents) >= agents_per_network:
      LOG.debug(_('Network %s is hosted already'),
                network['id'])
      return

  IMO, we need to stop the redundant agents in the above case.
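
  A hedged sketch of that proposed behaviour (this is not neutron's
  scheduler code; the unbind callback is an assumption):

    def prune_redundant_agents(dhcp_agents, agents_per_network, unbind_agent):
        # keep only the configured number of agents; unbind the network
        # from the redundant ones
        for agent in dhcp_agents[agents_per_network:]:
            unbind_agent(agent)

    prune_redundant_agents(['agent-1', 'agent-2', 'agent-3'], 2, print)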

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1338938/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1342961] Re: Exception during message handling: Pool FOO could not be found

2015-12-08 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1342961

Title:
  Exception during message handling: Pool  FOO could not be found

Status in neutron:
  Expired

Bug description:
  The $subject-style exception appears in both successful and failed jobs.

  message: "Exception during message handling" AND message:"Pool" AND
  message:"could not be found" AND filename:"logs/screen-q-svc.txt"

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIkV4Y2VwdGlvbiBkdXJpbmcgbWVzc2FnZSBoYW5kbGluZ1wiIEFORCBtZXNzYWdlOlwiUG9vbFwiIEFORCBtZXNzYWdlOlwiY291bGQgbm90IGJlIGZvdW5kXCIgQU5EIGZpbGVuYW1lOlwibG9ncy9zY3JlZW4tcS1zdmMudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6Ijg2NDAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwNTUzMzU3ODE2NCwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

[req-201dcd14-dc9d-4fb5-8eb5-c66c35991cb3 ] Exception during message 
handling: Pool 7fa63738-9030-4136-9b4e-9eb7ffb79f68 could not be found
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
134, in _dispatch_and_reply
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
177, in _dispatch
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/neutron/neutron/services/loadbalancer/drivers/common/agent_driver_base.py",
 line 232, in update_pool_stats
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher 
self.plugin.update_pool_stats(context, pool_id, data=stats)
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/neutron/neutron/db/loadbalancer/loadbalancer_db.py", line 512, 
in update_pool_stats
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher pool_db 
= self._get_resource(context, Pool, pool_id)
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/neutron/neutron/db/loadbalancer/loadbalancer_db.py", line 218, 
in _get_resource
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher raise 
loadbalancer.PoolNotFound(pool_id=id)
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher 
PoolNotFound: Pool 7fa63738-9030-4136-9b4e-9eb7ffb79f68 could not be found
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1342961/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1334798] Re: Gate test is masking failure details

2015-12-08 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1334798

Title:
  Gate test is masking failure details

Status in neutron:
  Expired

Bug description:
  Both the 2.6 and 2.7 gate tests are failing; the console log indicates
  two failures but only shows one, 'process-retcode'. Looking at
  testr_results shows a failure that only mentions:

  ft1.12873:
  
neutron.tests.unit.services.vpn.service_drivers.test_ipsec.TestValidatorSelection.test_reference_driver_used_StringException

  Here is 2.7 logs: http://logs.openstack.org/51/102351/4/check/gate-
  neutron-python27/a757b36/

  Upon further investigation, it was found that there were non-printable
  characters in an oslo file. Manual testing shows the error:

  $ python -m neutron.openstack.common.lockutils python -m unittest 
neutron.tests.unit.services.vpn.service_drivers.test_ipsec.TestValidatorSelection.test_reference_driver_used
  F
  ==
  FAIL: test_reference_driver_used 
(neutron.tests.unit.services.vpn.service_drivers.test_ipsec.TestValidatorSelection)
  
neutron.tests.unit.services.vpn.service_drivers.test_ipsec.TestValidatorSelection.test_reference_driver_used
  --
  _StringException: Empty attachments:
pythonlogging:''
pythonlogging:'neutron.api.extensions'

  traceback-1: {{{
  Traceback (most recent call last):
File "neutron/common/rpc.py", line 63, in cleanup
  assert TRANSPORT is not None
  AssertionError
  }}}

  Traceback (most recent call last):
File "neutron/tests/unit/services/vpn/service_drivers/test_ipsec.py", line 
52, in setUp
  super(TestValidatorSelection, self).setUp()
File "neutron/tests/base.py", line 188, in setUp
  n_rpc.init(CONF)
File "neutron/common/rpc.py", line 56, in init
  aliases=TRANSPORT_ALIASES)
File "/opt/stack/oslo.messaging/oslo/messaging/transport.py", line 185, in 
get_transport
  invoke_kwds=kwargs)
File "/opt/stack/stevedore/stevedore/driver.py", line 45, in __init__
  verify_requirements=verify_requirements,
File "/opt/stack/stevedore/stevedore/named.py", line 55, in __init__
  verify_requirements)
File "/opt/stack/stevedore/stevedore/extension.py", line 170, in 
_load_plugins
  self._on_load_failure_callback(self, ep, err)
File "/opt/stack/stevedore/stevedore/driver.py", line 50, in 
_default_on_load_failure
  raise err
File "/opt/stack/oslo.messaging/oslo/messaging/_drivers/impl_fake.py", line 
48
  SyntaxError: Non-ASCII character '\xc2' in file 
/opt/stack/oslo.messaging/oslo/messaging/_drivers/impl_fake.py on line 48, but 
no encoding declared; see http://www.python.org/peps/pep-0263.html for details

  A fix will be done for the oslo error, but we need to investigate why
  the gate test does not show any information on the error.
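
  A minimal sketch of the kind of manual check described above, using
  the file path from the traceback:

    path = '/opt/stack/oslo.messaging/oslo/messaging/_drivers/impl_fake.py'
    with open(path, 'rb') as f:
        for lineno, line in enumerate(f, 1):
            non_ascii = [hex(b) for b in bytearray(line) if b > 127]
            if non_ascii:
                print(lineno, non_ascii)  # line 48 should report 0xc2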

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1334798/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1334323] Re: Check ips availability before adding network to DHCP agent

2015-12-08 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1334323

Title:
  Check ips availability before adding network to DHCP agent

Status in neutron:
  Expired

Bug description:
  Hi there,

  How to reproduce ?
  ===

  First of all, it's better to use HA DHCP agents, i.e. running more
  than one DHCP agent and setting dhcp_agents_per_network to the number
  of DHCP agents that you're running.

  Now for the sake of this example let's say that
  dhcp_agents_per_network=3.

  Now create a network with a small subnet, e.g. a /30, or a big subnet,
  e.g. a /24, but with a small allocation pool that contains only 1 or 2
  IPs.

  
  What happens ?
  ===

  A lot of exceptions start showing up in the logs, in the form:

 IpAddressGenerationFailure: No more IP addresses available on
  network

  
  What really happens ?
  ===

  Our small network was basically scheduled to all DHCP agents that are
  up and active, and each one of them will try to create a port for
  itself. But because our small network has fewer IPs than
  dhcp_agents_per_network, some of these ports will fail to be created,
  and this will happen on each iteration of the DHCP agent main loop.

  Another case is where you have more than one subnet in a network, and
  one of them is pretty small, e.g.

  net1 -> subnet1 10.0.0.0/24
          subnet2 10.10.0.0/30

  Then errors also start to happen in every iteration of the DHCP agent.

  What is expected ?
  ===

  IMHO, only agents that can actually handle the network should host it,
  and a direct call to add a network to a DHCP agent should also fail if
  there are no IPs left to satisfy the new DHCP port creation.
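
  A hedged sketch of the check being asked for (not neutron's actual
  scheduler; the helper names are assumptions):

    def agents_to_schedule(active_agents, agents_per_network, free_ips):
        # never bind more agents than there are free IPs for their DHCP ports
        wanted = min(agents_per_network, free_ips, len(active_agents))
        return active_agents[:wanted]

    print(agents_to_schedule(['a1', 'a2', 'a3'], 3, 2))  # -> ['a1', 'a2']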

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1334323/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1372305] Re: Haproxy restart leads to incorrect Ceilometer LBaas related statistics

2015-12-08 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1372305

Title:
  Haproxy restart leads to incorrect Ceilometer LBaas related statistics

Status in neutron:
  Expired

Bug description:
  Ceilometer uses the LBaaS API to collect load-balancing statistics
  such as bytes-in and bytes-out; the LBaaS plugin collects these
  counters from the haproxy process via its stats socket. However, when
  an LBaaS object is updated, the LBaaS agent reconfigures haproxy and
  then restarts the haproxy process. All the counters are cleared, which
  leads to incorrect statistics.
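
  One way to make the statistics robust, sketched here purely as an
  illustration (this is neither the ceilometer nor the LBaaS code):
  accumulate deltas and treat a counter drop as a restart.

    class MonotonicCounter(object):
        """Accumulate haproxy counter samples across process restarts."""

        def __init__(self):
            self.total = 0
            self.last = 0

        def update(self, sample):
            if sample < self.last:  # haproxy restarted and reset its counters
                self.total += sample
            else:
                self.total += sample - self.last
            self.last = sample
            return self.total

    c = MonotonicCounter()
    print([c.update(v) for v in (100, 250, 30)])  # -> [100, 250, 280]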

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1372305/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1369266] Re: HA router priority should be according to configurable priority of L3-agent

2015-12-08 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1369266

Title:
  HA router priority should be according to configurable priority of
  L3-agent

Status in neutron:
  Expired

Bug description:
  Currently all instances have the same priority (hard-coded 50).
  Admins should be able to assign a priority to l3-agents so that the
  Master will be chosen accordingly (suppose you have an agent with less
  bandwidth than the others; you would like it to host as few active
  (Master) instances as possible).
  This will require extending the L3-agent API.

  This is blocked by bug:
  https://bugs.launchpad.net/neutron/+bug/1365429

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1369266/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1418786] Re: more than one port got created for VM

2015-12-08 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1418786

Title:
  more than one port got created for VM

Status in neutron:
  Expired

Bug description:
  If the server running the neutron-server service does not have enough
  resources to process requests quickly, there is a high risk of
  multiple ports being created for a VM during the scheduling/networking
  process.

  How to reproduce:

  Just get an environment with a not-so-fast neutron-server node and/or
  MySQL server, and try to spawn a bunch of VMs. Some of them will get
  two ports created (in the neutron DB they will have the same
  device_id). If the system is very slow they could get three. If such a
  VM does spawn, it will have only the last port which nova got from
  neutron, and that port will be the only active one.
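
  A self-contained sketch of spotting the symptom (several ports sharing
  one device_id); the table and rows below are illustrative, not
  neutron's schema:

    import sqlite3

    db = sqlite3.connect(':memory:')
    db.execute('CREATE TABLE ports (id TEXT, device_id TEXT)')
    db.executemany('INSERT INTO ports VALUES (?, ?)',
                   [('p1', 'vm-1'), ('p2', 'vm-1'), ('p3', 'vm-2')])
    query = ('SELECT device_id, COUNT(*) FROM ports '
             'GROUP BY device_id HAVING COUNT(*) > 1')
    for device_id, count in db.execute(query):
        print(device_id, count)  # -> vm-1 2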

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1418786/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1372792] Re: Inconsistent timestamp formats in ceilometer metering messages

2015-12-08 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1372792

Title:
  Inconsistent timestamp formats in ceilometer metering messages

Status in neutron:
  Expired

Bug description:
  The messages generated by neutron-metering-agent contain timestamps in
  a different format than the other messages received through UDP from
  ceilometer-agent-notification. This creates unnecessary troubles for
  whoever is trying to decode the messages and do something useful with
  them.

  In particular, up to now I have only found the issue in the timestamp
  field of the bandwidth message.

  They contain UTC dates (I hope), but there is no Z at the end, and
  they contain a space instead of a T between date and time. In short,
  they are not in ISO8601 as the timestamps in the other messages. I
  found out about them because elasticsearch tries to parse them and
  fails, throwing away the message.
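
  A minimal sketch of normalizing such a timestamp into ISO 8601 on the
  consumer side; the sample value is illustrative:

    from datetime import datetime

    raw = '2014-09-22 14:03:01.123456'  # space separator, no trailing Z
    iso = datetime.strptime(raw, '%Y-%m-%d %H:%M:%S.%f').isoformat() + 'Z'
    print(iso)  # -> 2014-09-22T14:03:01.123456Z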

  This bug was filed against Ceilometer, but I have been redirected here:
  https://bugs.launchpad.net/ceilometer/+bug/1370607

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1372792/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1413231] Re: Traceback when creating VxLAN network using CSR plugin

2015-12-08 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1413231

Title:
  Traceback when creating VxLAN network using CSR plugin

Status in neutron:
  Expired

Bug description:
  OpenStack Version: Kilo

  localadmin@qa1:~$ nova-manage version
  2015.1
  localadmin@qa1:~$ neutron --version
  2.3.10

  I’m trying to run the vxlan tests on my multi node setup and I’m
  seeing the following  error/traceback in the
  screen-q-ciscocfgagent.log when creating a network with a vxlan
  profile.

  The error complains that it can’t find the nrouter-56f2cf VRF but it
  is present on the CSR.

  VRF is configured on the CSR – regular VLAN works fine

  csr#show run | inc vrf
  vrf definition Mgmt-intf
  vrf definition nrouter-56f2cf
   vrf forwarding nrouter-56f2cf
   vrf forwarding nrouter-56f2cf
   vrf forwarding nrouter-56f2cf
  ip nat inside source list acl_756 interface GigabitEthernet3.757 vrf 
nrouter-56f2cf overload
  ip nat inside source list acl_758 interface GigabitEthernet3.757 vrf 
nrouter-56f2cf overload
  ip nat inside source static 10.11.12.2 172.29.75.232 vrf nrouter-56f2cf 
match-in-vrf
  ip nat inside source static 10.11.12.4 172.29.75.233 vrf nrouter-56f2cf 
match-in-vrf
  ip nat inside source static 10.11.12.5 172.29.75.234 vrf nrouter-56f2cf 
match-in-vrf
  ip nat inside source static 210.168.1.2 172.29.75.235 vrf nrouter-56f2cf 
match-in-vrf
  ip nat inside source static 210.168.1.4 172.29.75.236 vrf nrouter-56f2cf 
match-in-vrf
  ip nat inside source static 210.168.1.5 172.29.75.237 vrf nrouter-56f2cf 
match-in-vrf
  ip route vrf nrouter-56f2cf 0.0.0.0 0.0.0.0 172.29.75.225
  csr#


  2015-01-19 12:22:09.896 DEBUG neutron.agent.linux.utils [-] 
  Command: ['ping', '-c', '5', '-W', '1', '-i', '0.2', '10.0.100.10']
  Exit code: 0
  Stdout: 'PING 10.0.100.10 (10.0.100.10) 56(84) bytes of data.\n64 bytes from 
10.0.100.10: icmp_seq=1 ttl=255 time=1.74 ms\n64 bytes from 10.0.100.10: 
icmp_seq=2 ttl=255 time=1.09 ms\n64 bytes from 10.0.100.10: icmp_seq=3 ttl=255 
time=0.994 ms\n64 bytes from 10.0.100.10: icmp_seq=4 ttl=255 time=0.852 ms\n64 
bytes 
  from 10.0.100.10: icmp_seq=5 ttl=255 time=0.892 ms\n\n--- 10.0.100.10 ping 
statistics ---\n5 packets transmitted, 5 received, 0% packet loss, time 
801ms\nrtt min/avg/max/mdev = 0.852/1.116/1.748/0.328 ms\n'
  Stderr: '' from (pid=13719) execute 
/opt/stack/neutron/neutron/agent/linux/utils.py:79
  2015-01-19 12:22:09.897 DEBUG neutron.plugins.cisco.cfg_agent.device_status 
[-] Hosting device: 27b14fc6-b1c9-4deb-8abe-ae3703a4af2d@10.0.100.10 is 
reachable. from (pid=13719) is_hosting_device_reachable 
/opt/stack/neutron/neutron/plugins/cisco/cfg_agent/device_status.py:115
  2015-01-19 12:22:10.121 INFO 
neutron.plugins.cisco.cfg_agent.device_drivers.csr1kv.csr1kv_routing_driver [-] 
VRFs:[]
  2015-01-19 12:22:10.122 ERROR 
neutron.plugins.cisco.cfg_agent.device_drivers.csr1kv.csr1kv_routing_driver [-] 
VRF nrouter-56f2cf not present
  2015-01-19 12:22:10.237 DEBUG 
neutron.plugins.cisco.cfg_agent.device_drivers.csr1kv.csr1kv_routing_driver [-] 
RPCReply for CREATE_SUBINTERFACE is protocoloperation-failederror
 from (pid=13719) _check_response 
/opt/stack/neutron/neutron/plugins/cisco/cfg_agent/device_drivers/csr1kv/csr1kv_routing_driver.py:676
  2015-01-19 12:22:10.238 ERROR 
neutron.plugins.cisco.cfg_agent.service_helpers.routing_svc_helper [-] Error 
executing snippet:CREATE_SUBINTERFACE. ErrorType:protocol 
ErrorTag:operation-failed.
  2015-01-19 12:22:10.238 ERROR 
neutron.plugins.cisco.cfg_agent.service_helpers.routing_svc_helper [-] Driver 
Exception on router:56f2cfbc-61c6-45dc-94d5-0cbb08b05053. Error is Error 
executing snippet:CREATE_SUBINTERFACE. ErrorType:protocol 
ErrorTag:operation-failed.

  
  2015-01-19 12:22:10.238 TRACE 
neutron.plugins.cisco.cfg_agent.service_helpers.routing_svc_helper Traceback 
(most recent call last):
  2015-01-19 12:22:10.238 TRACE 
neutron.plugins.cisco.cfg_agent.service_helpers.routing_svc_helper   File 
"/opt/stack/neutron/neutron/plugins/cisco/cfg_agent/service_helpers/routing_svc_helper.py",
 line 379, in _process_routers
  2015-01-19 12:22:10.238 TRACE 
neutron.plugins.cisco.cfg_agent.service_helpers.routing_svc_helper 
self._process_router(ri)
  2015-01-19 12:22:10.238 TRACE 
neutron.plugins.cisco.cfg_agent.service_helpers.routing_svc_helper   File 
"/opt/stack/neutron/neutron/plugins/cisco/cfg_agent/service_helpers/routing_svc_helper.py",
 line 452, in _process_router
  2015-01-19 12:22:10.238 TRACE 
neutron.plugins.cisco.cfg_agent.service_helpers.routing_svc_helper 
LOG.error(e)
  2015-01-19 12:22:10.238 TRACE 
neutron.plugins.cisco.cfg_agent.service_helpers.routing_svc_helper   File 
"/usr/local/lib/python2.7/dist-packages/oslo/utils/excutils.py", line 82, in 
__exit

[Yahoo-eng-team] [Bug 1437762] Re: portbindingsport does not have all ports causing ml2 migration failure

2015-12-08 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1437762

Title:
  portbindingsport does not have all ports causing ml2 migration failure

Status in neutron:
  Expired

Bug description:
  I am trying to move from havana to icehouse on Ubuntu 14.04. The
  migration is failing because the ml2 migration expects the
  portbindingsport table to contain all the ports. However, my record
  count in ports is 460 and in portbindingsport just 192, so only 192
  records get added to ml2_port_bindings.

  The consequence of this is that the network node and the compute nodes
  add "unbound" interfaces to the ml2_port_bindings table. Additionally,
  nova-compute updates its network info with wrong information, causing
  a subsequent restart of nova-compute to fail with a
  "vif_type=unbound" error. Besides that, the instances on the nodes do
  not get network connectivity.

  Let's just say I am happy that I made a backup, because the DB now
  gets into an inconsistent state every time.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1437762/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1503428] Re: Support keystone V3 API

2015-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/253782
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=9670dbd93ce7246282ca4343c3f701e8c20a232b
Submitter: Jenkins
Branch:master

commit 9670dbd93ce7246282ca4343c3f701e8c20a232b
Author: Monty Taylor 
Date:   Fri Dec 4 23:54:22 2015 -0500

Pull project out of request in addition to tenant

Keystone V3 renamed tenant to project. In order to deal with keystone
V3, start pulling X-Project-Id from the headers.

Since keystonemiddleware authtoken sets both X-Project-* and
X-Tenant-*, we don't need to look up X-Tenant-*.

Don't do anything with renaming the internal variables - that will come
later.

Change-Id: I5e27cf6a54fb603b81d41b8b4f085d59354627fb
Depends-On: I1f754a9a949ef92f4e427a91bbd1b1e73e86c8c4
Closes-Bug: #1503428
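
A hedged sketch of the header handling the commit describes (not the
actual neutron code; the fallback is shown only for illustration):

    def project_id_from_headers(headers):
        # keystonemiddleware sets both X-Project-Id and the legacy
        # X-Tenant-Id; prefer the keystone V3 name
        return headers.get('X-Project-Id') or headers.get('X-Tenant-Id')

    print(project_id_from_headers({'X-Tenant-Id': 'abc123'}))  # -> abc123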


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1503428

Title:
  Support keystone V3 API

Status in neutron:
  Fix Released

Bug description:
  The python-neutronclient has keystone V3 support as of Juno [1], it's
  unclear if we need to do anything for the server. Filing this bug to
  track this so we don't forget to address this during Mitaka.

  [1] https://github.com/openstack/neutron-
  specs/blob/master/specs/juno/keystone-v3-api-support.rst

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1503428/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368661] Re: Unit tests sometimes fail because of stale pyc files

2015-12-08 Thread hardik
** Also affects: ceilometer
   Importance: Undecided
   Status: New

** Changed in: ceilometer
 Assignee: (unassigned) => hardik (hardik-parekh047)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368661

Title:
  Unit tests sometimes fail because of stale pyc files

Status in Ceilometer:
  New
Status in congress:
  Fix Released
Status in Ironic:
  Fix Released
Status in Magnum:
  Fix Released
Status in Mistral:
  Fix Released
Status in Monasca:
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in oslo.service:
  New
Status in python-cinderclient:
  Fix Committed
Status in python-congressclient:
  In Progress
Status in python-glanceclient:
  In Progress
Status in python-heatclient:
  Fix Committed
Status in python-keystoneclient:
  Fix Committed
Status in python-magnumclient:
  Fix Released
Status in python-mistralclient:
  New
Status in python-neutronclient:
  In Progress
Status in Python client library for Sahara:
  Fix Committed
Status in python-swiftclient:
  In Progress
Status in python-troveclient:
  Fix Committed
Status in Python client library for Zaqar:
  Fix Committed
Status in Trove:
  Fix Released
Status in zaqar:
  In Progress

Bug description:
  Because python creates pyc files during tox runs, certain changes in
  the tree, like deletes of files, or switching branches, can create
  spurious errors. This can be suppressed by PYTHONDONTWRITEBYTECODE=1
  in the tox.ini.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1368661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1511167] Re: Cinder client incorrectly list the volume size units in GB, the API is actually in GiB

2015-12-08 Thread Sean McGinnis
** Changed in: python-cinderclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1511167

Title:
  Cinder client incorrectly list the volume size units in GB, the API is
  actually in GiB

Status in OpenStack Dashboard (Horizon):
  In Progress
Status in python-cinderclient:
  Fix Released

Bug description:
  Both Horizon and the cinder client document the size parameter in
  gigabytes (GB), but the API docs (both v1 and v2) list the size units
  as gibibytes (GiB). The correct unit should be gibibytes (GiB).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1511167/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368674] Re: Horizon multi regions but different service endpoints will cause Invalid service catalog service exception

2015-12-08 Thread Lin Hua Cheng
this is already fixed by the bug mentioned by Justin

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1368674

Title:
  Horizon multi regions but different service endpoints will cause
  Invalid service catalog service exception

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Horizon multi regions but different service endpoints will cause
  Invalid service catalog service exception.

  Description:
  1.There are two regions.
  2. RegionOne has service keystone, glance, nova, neutron, cinder, ceilometer, 
heat endpoints.
  RegionTwo only has keystone, glance, nova. (Actually these 3 endpoints
  are the same as in RegionOne; the difference is only the region.)

  3. Then log in to the dashboard.
  4. Switch the region; you will notice that the RegionTwo dashboard also
  has the neutron panel group, cinder panel, ceilometer panel, and heat
  panel group.
  5. Click one of these panels, e.g. network panel group --> network; you
  will get an exception "Invalid service catalog service: network".

  Potential solution:
  Load the horizon_dashboard_nav according to the available endpoints of
  the current user's active region.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1368674/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1494682] Re: l3 agent avoid unnecessary full_sync

2015-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/224019
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=4957b5b43521a61873a041fe3e8989ed399903d9
Submitter: Jenkins
Branch:master

commit 4957b5b43521a61873a041fe3e8989ed399903d9
Author: Sudhakar Babu Gariganti 
Date:   Wed Sep 16 15:53:57 2015 +0530

Avoid full_sync in l3_agent for router updates

While processing a router update in _process_router_update method,
if an exception occurs, we try to do a full_sync.

We only need to re-sync the router whose update failed.

Addressed a TODO in the same method, which falls in similar lines.

Change-Id: I7c43a508adf46d8524f1cc48b83f1e1c276a2de0
Closes-Bug: #1494682


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1494682

Title:
  l3 agent avoid unnecessary full_sync

Status in neutron:
  Fix Released

Bug description:
  In the _process_router_update method, we set full_sync to true in a
  couple of places where it can be avoided.

  There is even a TODO from Carl saying so.

  # TODO(Carl) Stop this fullsync non-sense.  Just retry this
  # one router by sticking the update at the end of the queue
  # at a lower priority.
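
  A hedged sketch of the idea in that TODO (not the l3-agent's actual
  code; the names and priority value are assumptions): on failure, put
  only the failed update back on the queue instead of flagging a full
  sync.

    import queue

    LOW_PRIORITY = 10  # hypothetical: larger number == handled later

    def process_router_update(update, updates, sync_router):
        try:
            sync_router(update)
        except Exception:
            # retry just this router instead of a full resync
            updates.put((LOW_PRIORITY, update))

    updates = queue.PriorityQueue()

    def failing_sync(update):
        raise RuntimeError('transient failure')

    process_router_update('router-1', updates, failing_sync)
    print(updates.get())  # -> (10, 'router-1')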

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1494682/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524124] [NEW] unscalable database schema design

2015-12-08 Thread Cristian Sava
Public bug reported:

I have noticed the keystone SQL database schema design is not scalable.
It can hold maybe a few hundred or at most a few thousand entries, but
beyond this it is certainly going to create very serious efficiency
problems, both in terms of storage space and query response time. Here
are the main problem points I have spotted:

i) most of the tables use primary keys of varchar(64) type: role,
domain, project, token, user, group etc., supposed to contain unique hex
identifiers. I am not exactly sure about the rationale behind this
design. If the idea is to be able to accommodate up to 16**64=10**77
distinct records, then this is clearly flawed, as the tables won't hold
more than a few thousand entries given the current length of the primary
key (and foreign keys, for those minor entity tables that refer to the
major entity).

ii) some tables have composite keys on multiple varchar(64) fields:

Create Table: CREATE TABLE `assignment` (
  `type` enum('UserProject','GroupProject','UserDomain','GroupDomain') NOT NULL,
  `actor_id` varchar(64) NOT NULL,
  `target_id` varchar(64) NOT NULL,
  `role_id` varchar(64) NOT NULL,
  `inherited` tinyint(1) NOT NULL,
  PRIMARY KEY (`type`,`actor_id`,`target_id`,`role_id`),
  KEY `ix_actor_id` (`actor_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
1 row in set (0.00 sec)

iii) some tables have unique keys defined on varchar(255) columns:

Create Table: CREATE TABLE `role` (
  `id` varchar(64) NOT NULL,
  `name` varchar(255) NOT NULL,
  `extra` text,
  PRIMARY KEY (`id`),
  UNIQUE KEY `name` (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8

iv) the generated public id for (user,domain) entities is currently 64
hex chars, while only 32 hex chars are needed to ensure uniqueness up to
16**16=2**64=10**19 entries, which should be more than sufficient for
any practical installation.

In order to remedy these problems, I propose the following improvements:

i) replace the varchar(64) hex primary key with an auto-incremented
integer(4) column. This will hold up to 4 billion records and will greatly
reduce the storage requirements and improve query performance.

ii) reduce the generated public id for (user, domain) entities to 32 hex
chars, stored in binary form as two bigint(8) columns.

iii) reduce the "name" field length to a more manageable length, or reduce
the index size using a hash function.
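
As a rough illustration of proposal (i) only (this is not the schema keystone
actually uses), a SQLAlchemy model with a compact auto-increment surrogate key
and the public id kept as an indexed lookup column might look like:

    from sqlalchemy import Column, Integer, String, Text, create_engine
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Role(Base):
        __tablename__ = 'role'
        # 4-byte auto-increment surrogate primary key instead of varchar(64).
        id = Column(Integer, primary_key=True, autoincrement=True)
        # Public identifier exposed through the API, 32 hex chars.
        public_id = Column(String(32), unique=True, nullable=False)
        name = Column(String(255), unique=True, nullable=False)
        extra = Column(Text)

    if __name__ == '__main__':
        engine = create_engine('sqlite://')
        Base.metadata.create_all(engine)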

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1524124

Title:
  unscalable database schema design

Status in OpenStack Identity (keystone):
  New

Bug description:
  I have noticed that the keystone SQL database schema design is not
  scalable. It can hold maybe a few hundred or at most a few thousand
  entries, but beyond this it is certainly going to create very serious
  efficiency problems, both in terms of storage space and query response
  time. Here are the main problem points I have spotted:

  i) most of the tables use primary keys of varchar(64) type: role,
  domain, project, token, user, group etc., supposed to contain unique
  hex identifiers. I am not exactly sure about the rationale behind this
  design. If the idea is to be able to accommodate up to 16**64=10**77
  distinct records, then this is clearly flawed, as the tables won't
  hold more than a few thousand entries given the current length of the
  primary key (and of the foreign keys, for those minor entity tables
  that refer to the major entity).

  ii) some tables have composite keys on multiple varchar(64) fields:

  Create Table: CREATE TABLE `assignment` (
`type` enum('UserProject','GroupProject','UserDomain','GroupDomain') NOT 
NULL,
`actor_id` varchar(64) NOT NULL,
`target_id` varchar(64) NOT NULL,
`role_id` varchar(64) NOT NULL,
`inherited` tinyint(1) NOT NULL,
PRIMARY KEY (`type`,`actor_id`,`target_id`,`role_id`),
KEY `ix_actor_id` (`actor_id`)
  ) ENGINE=InnoDB DEFAULT CHARSET=utf8
  1 row in set (0.00 sec)

  iii) some tables have unique keys defined on varchar(255) columns:

  Create Table: CREATE TABLE `role` (
`id` varchar(64) NOT NULL,
`name` varchar(255) NOT NULL,
`extra` text,
PRIMARY KEY (`id`),
UNIQUE KEY `name` (`name`)
  ) ENGINE=InnoDB DEFAULT CHARSET=utf8

  iv) the generated public id for (user,domain) entities is currently 64
  hex chars, while only 32 hex chars are needed to ensure uniqueness up
  to 16**16=2**64=10**19 entries, which should be more than sufficient
  for any practical installation.

  In order to remedy these problems, I propose the following
  improvements:

  i) replace the varchar(64) hex primary key with an auto-incremented
  integer(4) column. This will hold up to 4 billion records and will
  greatly reduce the storage requirements and improve query
  performance.

  ii) reduce the generated public id for (user, domain) entities to 32 hex
  chars, stored in binary form as two bigint(8) columns.

  iii) reduce the "name" field length to a more manageable length, or reduce
  the index size using a hash function.

[Yahoo-eng-team] [Bug 1508879] Re: glance-artifacts doesn't exist

2015-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/252378
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=a0846362d70a24b4609d55504c57883c44d535bd
Submitter: Jenkins
Branch:master

commit a0846362d70a24b4609d55504c57883c44d535bd
Author: kairat_kushaev 
Date:   Wed Dec 2 16:12:50 2015 +0300

Remove artifact entry point

Artifact entry point doesn't exist in glance. So it can be safely
removed.

Change-Id: If7baac6b0e8685843900157717ff1c6c7c8032c0
Closes-bug: #1508879


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1508879

Title:
  glance-artifacts doesn't exist

Status in Glance:
  Fix Released

Bug description:
  Using Liberty release, if I run

  
  $ 
/nix/store/rgmjikgzkvcrpd7w5v28p6w01bhb9yq1-glance-11.0.0/bin/glance-artifacts
  Traceback (most recent call last):
File 
"/nix/store/rgmjikgzkvcrpd7w5v28p6w01bhb9yq1-glance-11.0.0/bin/glance-artifacts",
 line 6, in 
  from glance.cmd.artifacts import main
  ImportError: No module named artifacts

  
  I can't spot an artifacts module inside the glance/cmd/ folder. Should it be removed?

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1508879/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1516021] Re: Breadcrumb header styling should be theme-specific

2015-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/245182
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=ebd2768350a3e2b3a32b2b7c4436336ebc267e08
Submitter: Jenkins
Branch:master

commit ebd2768350a3e2b3a32b2b7c4436336ebc267e08
Author: Rob Cresswell 
Date:   Fri Nov 13 14:38:09 2015 +

Move Detail page styling into theme

This patch creates a breadcrumb header component and moves the detail
styling into the theme. This should make it easier for themes to
override.

- Alter the breadcrumb header padding, so that everything is aligned 
correctly
  after the bootstrap tabs patch: https://review.openstack.org/#/c/246004/
- Add a small margin under tabs, to account for both details and
  workflows; remove the previous workflows-only padding.

Change-Id: I11034d30de900eda50bbc92c275194b6ec731c0d
Closes-Bug: 1516021


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1516021

Title:
  Breadcrumb header styling should be theme-specific

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The breadcrumb styling used in the Details page headers should be
  theme-specific, so it doesn't interfere with other themes

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1516021/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523759] Re: Nova (liberty): Unexpected API Error.

2015-12-08 Thread aginwala
This is a config issue. Please verify the passwords in the database and in the
config files for glance, nova and keystone. This is a duplicate bug, as many
similar bugs have already been updated in the past. Please run nova image-list,
glance image-list and other commands to see whether they return the details
properly.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1523759

Title:
  Nova (liberty): Unexpected API Error.

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Nova unexpected API Error when launching an instance from image:
   (HTTP 500) (Request-ID:
  req-b0bb0881-64a2-44be-8edb-832b1e2e5abc)

  yum list installed | grep nova
  openstack-nova-api.noarch   1:12.0.0-1.el7 
@centos-openstack-liberty
  openstack-nova-cert.noarch  1:12.0.0-1.el7 
@centos-openstack-liberty
  openstack-nova-common.noarch1:12.0.0-1.el7 
@centos-openstack-liberty
  openstack-nova-conductor.noarch 1:12.0.0-1.el7 
@centos-openstack-liberty
  openstack-nova-console.noarch   1:12.0.0-1.el7 
@centos-openstack-liberty
  openstack-nova-novncproxy.noarch
  openstack-nova-scheduler.noarch 1:12.0.0-1.el7 
@centos-openstack-liberty
  python-nova.noarch  1:12.0.0-1.el7 
@centos-openstack-liberty
  python-novaclient.noarch1:2.30.1-1.el7 
@centos-openstack-liberty

  nova-api.log:
  [req-b0bb0881-64a2-44be-8edb-832b1e2e5abc d12d5a6f008b4291a56564c74f5d0731 
0db49a1c163e4deca9d7d86b403df394 - - -] Unexpected exception in API method
  2015-12-08 05:58:44.749 17421 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
  2015-12-08 05:58:44.749 17421 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/extensions.py", line 478, 
in wrapped
  2015-12-08 05:58:44.749 17421 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
  2015-12-08 05:58:44.749 17421 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
  2015-12-08 05:58:44.749 17421 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
  2015-12-08 05:58:44.749 17421 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
  2015-12-08 05:58:44.749 17421 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
  2015-12-08 05:58:44.749 17421 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/compute/servers.py", line 
611, in create
  2015-12-08 05:58:44.749 17421 ERROR nova.api.openstack.extensions 
**create_kwargs)
  2015-12-08 05:58:44.749 17421 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/hooks.py", line 149, in inner
  2015-12-08 05:58:44.749 17421 ERROR nova.api.openstack.extensions rv = 
f(*args, **kwargs)
  2015-12-08 05:58:44.749 17421 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 1581, in create
  2015-12-08 05:58:44.749 17421 ERROR nova.api.openstack.extensions 
check_server_group_quota=check_server_group_quota)
  2015-12-08 05:58:44.749 17421 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 1181, in 
_create_instance
  2015-12-08 05:58:44.749 17421 ERROR nova.api.openstack.extensions 
auto_disk_config, reservation_id, max_count)
  2015-12-08 05:58:44.749 17421 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 955, in 
_validate_and_build_base_options
  2015-12-08 05:58:44.749 17421 ERROR nova.api.openstack.extensions 
pci_request_info, requested_networks)
  2015-12-08 05:58:44.749 17421 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 1059, in 
create_pci_requests_for_sriov_ports
  2015-12-08 05:58:44.749 17421 ERROR nova.api.openstack.extensions neutron 
= get_client(context, admin=True)
  2015-12-08 05:58:44.749 17421 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 237, in 
get_client
  2015-12-08 05:58:44.749 17421 ERROR nova.api.openstack.extensions 
auth_token = _ADMIN_AUTH.get_token(_SESSION)
  2015-12-08 05:58:44.749 17421 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/keystoneclient/auth/identity/base.py", line 
200, in get_token
  2015-12-08 05:58:44.749 17421 ERROR nova.api.openstack.extensions return 
self.get_access(session).auth_token
  2015-12-08 05:58:44.749 17421 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/keystoneclient/auth/identity/base.py", line 
240

[Yahoo-eng-team] [Bug 1523889] Re: nova image-list say class 'glanceclient.exc.HTTPInternalServerError'

2015-12-08 Thread aginwala
Please make sure all the passwords in the glance configs, nova configs and
databases are correct. This is a config issue. Please see the duplicate bugs
for this 500 error.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1523889

Title:
  nova image-list say class 'glanceclient.exc.HTTPInternalServerError'

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  When I use nova image-list,
  the shell shows:

  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-770ce041-380d-4dcf-95d2-1c844c70945d)

  I get the log, like this:

  2015-12-08 06:31:11.501 2112 INFO nova.osapi_compute.wsgi.server 
[req-d2d814a0-d9bc-4aba-a34b-e7fc4bce0363 db6f17743bd342fe9615dce76aff1556 
089850858b4049a589bf00a6cd2373fd - - -] 192.168.60.122 "GET /v2/ HTTP/1.1" 
status: 200 len: 576 time: 3.4160471
  2015-12-08 06:31:12.355 2112 INFO nova.osapi_compute.wsgi.server [-] 
192.168.60.122 "OPTIONS / HTTP/1.0" status: 200 len: 505 time: 0.0006258
  2015-12-08 06:31:14.358 2112 INFO nova.osapi_compute.wsgi.server [-] 
192.168.60.122 "OPTIONS / HTTP/1.0" status: 200 len: 505 time: 0.0006728
  2015-12-08 06:31:16.361 2112 INFO nova.osapi_compute.wsgi.server [-] 
192.168.60.122 "OPTIONS / HTTP/1.0" status: 200 len: 505 time: 0.0005569
  2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions 
[req-770ce041-380d-4dcf-95d2-1c844c70945d db6f17743bd342fe9615dce76aff1556 
089850858b4049a589bf00a6cd2373fd - - -] Unexpected exception in API method
  2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
  2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/extensions.py", line 478, 
in wrapped
  2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
  2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/compute/images.py", line 
145, in detail
  2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions 
**page_params)
  2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/image/api.py", line 68, in get_all
  2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions return 
session.detail(context, **kwargs)
  2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/image/glance.py", line 284, in detail
  2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions for 
image in images:
  2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/glanceclient/v1/images.py", line 254, in list
  2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions for 
image in paginate(params, return_request_id):
  2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/glanceclient/v1/images.py", line 238, in 
paginate
  2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions images, 
resp = self._list(url, "images")
  2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/glanceclient/v1/images.py", line 63, in _list
  2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions resp, 
body = self.client.get(url)
  2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 280, in get
  2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions return 
self._request('GET', url, **kwargs)
  2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 272, in 
_request
  2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions resp, 
body_iter = self._handle_response(resp)
  2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 93, in 
_handle_response
  2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions raise 
exc.from_response(resp, resp.content)
  2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions 
HTTPInternalServerError: 500 Internal Server Error: The server has either erred 
or is incapable of performing the requested operation. (HTTP 500)
  2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions 
  2015-12-08 06:31:18.161 2112 INFO nova.api.openstack.wsgi 
[req-770ce041-380d-4dcf-95d2-1c844c70945d db6f17743bd342fe9615dce76aff1556 
089850858b4049a589bf00a6cd2373fd - - -] HTTP exception th

[Yahoo-eng-team] [Bug 1501202] Re: Error spelling of "explicitely"

2015-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/232827
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=712366f7614f4d85bf36515c13a0e5b7a5b038c8
Submitter: Jenkins
Branch:master

commit 712366f7614f4d85bf36515c13a0e5b7a5b038c8
Author: JuPing 
Date:   Fri Oct 9 00:06:35 2015 +0800

Fix the bug of "Error spelling of 'explicitely'"

The word "explicitely" should be spelled as "explicitly".
So it is changed.

(df) Ran spell-check on entire file and fixed a few more
misspellings.

Change-Id: I25d2accc2e0a1aaff71011944a62dadceb508b47
Closes-Bug: #1501202


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1501202

Title:
  Error spelling of "explicitely"

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  There are some incorrect spellings in the below files:
nova/etc/nova/rootwrap.conf
  #Line10: # explicitely specify a full path (separated by ',')

nova/nova/virt/xenapi/vmops.py
  #Line1961: Missing paths are ignored, unless explicitely stated not to...
   
nova/nova/objects/instance_group.py
  #Line138: # field explicitely, we prefer to raise an Exception so the 
developer...

nova/nova/objects/pci_device.py
  #Line137: # obj_what_changed, set it explicitely...

  I think the word "explicitely" should be spelled as "explicitly".

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1501202/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524115] [NEW] Split Buttons dropdowns stopped working

2015-12-08 Thread Rajat Vig
Public bug reported:

The split button template needs to have the 'dropdown-toggle' directive
to allow it to be clicked to show the menu options.

** Affects: horizon
 Importance: Undecided
 Assignee: Rajat Vig (rajatv)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1524115

Title:
  Split Buttons dropdowns stopped working

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The split button template needs to have the 'dropdown-toggle'
  directive to allow it to be clicked to show the menu options.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1524115/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524114] [NEW] nova-scheduler also loads deleted instances at startup

2015-12-08 Thread Belmiro Moreira
Public bug reported:

nova-scheduler is loading all instances (including deleted) at startup.

We experienced problems when each node has >6000 deleted instances, even when
using batches of 10 nodes.
Each query can take several minutes and transfer several GB of data.
This prevented nova-scheduler from connecting to rabbitmq.


###
When nova-scheduler starts it calls "_async_init_instance_info()", which does
an "InstanceList.get_by_filters" in batches of 10 nodes. This uses
"instance_get_all_by_filters_sort"; however, "Deleted instances will be
returned by default, unless there's a filter that says otherwise".
Adding the filter {"deleted": False} fixes the problem.
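
A sketch of the shape of that fix; the surrounding helper is an assumption for
illustration, the explicit deleted filter is the only point being made:

    from nova import objects

    def _get_instances_for_hosts(context, hosts):
        filters = {
            'host': hosts,
            # Without this, soft-deleted instances are returned as well.
            'deleted': False,
        }
        return objects.InstanceList.get_by_filters(context, filters)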

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1524114

Title:
  nova-scheduler also loads deleted instances at startup

Status in OpenStack Compute (nova):
  New

Bug description:
  nova-scheduler is loading all instances (including deleted) at
  startup.

  We experienced problems when each node has >6000 deleted instances, even when
  using batches of 10 nodes.
  Each query can take several minutes and transfer several GB of data.
  This prevented nova-scheduler from connecting to rabbitmq.

  
  ###
  When nova-scheduler starts it calls "_async_init_instance_info()", which does
  an "InstanceList.get_by_filters" in batches of 10 nodes. This uses
  "instance_get_all_by_filters_sort"; however, "Deleted instances will be
  returned by default, unless there's a filter that says otherwise".
  Adding the filter {"deleted": False} fixes the problem.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1524114/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1522709] Re: external network can always be seen by other tenant users even not shared or rbac rules allowed

2015-12-08 Thread Kevin Benton
External networks should always be visible. How do you expect users to
attach routers to them otherwise?

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1522709

Title:
  external network can always be seen by other tenant users even not
  shared or rbac rules allowed

Status in neutron:
  Invalid

Bug description:
  ### env ###
  upstream code
  two tenant users: admin, demo

  
  ### steps(default by admin) ###
  1, create an external network that is shared at the beginning;
  2, verify the external network can be seen in the non-admin tenant: as the
  demo user run "neutron net-list";
  3, update the external network from shared to not shared;
  4, verify the external network can no longer be seen in the non-admin
  tenant: as the demo user run "neutron net-list";
  expected: the output of "net-list" doesn't contain the external network;
  observed: the output of "net-list" contains the external network;

  5, additionally, verify the external network is not shared: as the demo user
  run "neutron net-show EXTERNAL-NETWORK";
  the network info will show that the external network's shared field is False.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1522709/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524081] [NEW] rbac extension has a bad updated timestamp

2015-12-08 Thread Kevin Benton
Public bug reported:

The timestamp the rbac extension provides has a bad field for the
timezone:
https://github.com/openstack/neutron/blob/c51f56f57b5cf67fc5e174b2d7a219990b666809/neutron/extensions/rbac.py#L100

This could cause issues for clients that try to parse that date.
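
A purely illustrative example of the failure mode (the concrete value in the
rbac extension is not reproduced here): clients commonly parse an extension's
"updated" field as an ISO 8601 timestamp with a numeric offset, and a
malformed timezone suffix makes that parse fail.

    from datetime import datetime

    FMT = '%Y-%m-%dT%H:%M:%S%z'          # ISO 8601 with a numeric UTC offset

    print(datetime.strptime('2015-06-17T10:00:00-0000', FMT))   # parses fine

    try:
        # Hypothetical malformed value with a broken timezone field.
        datetime.strptime('2015-06-17T10:00:00-00-00', FMT)
    except ValueError as exc:
        print('client-side parse error:', exc)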

** Affects: neutron
 Importance: Low
 Assignee: Kevin Benton (kevinbenton)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

** Changed in: neutron
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1524081

Title:
  rbac extension has a bad updated timestamp

Status in neutron:
  In Progress

Bug description:
  The timestamp the rbac extension provides has a bad field for the
  timezone:
  
https://github.com/openstack/neutron/blob/c51f56f57b5cf67fc5e174b2d7a219990b666809/neutron/extensions/rbac.py#L100

  This could cause issues for clients that try to parse that date.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1524081/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1218682] Re: User's email format hasn't been checked

2015-12-08 Thread Steve Martinelli
Marked this as invalid for keystoneclient, since the server should respond
with validation.

This can be completed by extending the current JSON schema for user
create and update:
https://github.com/openstack/keystone/blob/master/keystone/identity/schema.py#L24-L33

This should be for V3 only.
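
A small sketch of that approach using the jsonschema library directly; the
property layout below is modeled on, but is not, keystone's actual identity
schema. Note that jsonschema's built-in "email" format checker is deliberately
loose (it only requires an "@"), so a stricter pattern may still be wanted.

    import jsonschema

    user_create = {
        'type': 'object',
        'properties': {
            'name': {'type': 'string', 'minLength': 1, 'maxLength': 255},
            'email': {'type': 'string', 'format': 'email'},
            'enabled': {'type': 'boolean'},
        },
        'required': ['name'],
    }

    validator = jsonschema.Draft4Validator(
        user_create, format_checker=jsonschema.FormatChecker())

    validator.validate({'name': 'testuser', 'email': 'test@example.com'})  # ok
    try:
        validator.validate({'name': 'testuser', 'email': 'not-an-email'})
    except jsonschema.ValidationError as exc:
        print('rejected:', exc.message)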

** Changed in: keystone
   Status: In Progress => Triaged

** Changed in: python-keystoneclient
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1218682

Title:
  User's email format hasn't been checked

Status in OpenStack Identity (keystone):
  Triaged
Status in oslo-incubator:
  Won't Fix
Status in python-keystoneclient:
  Invalid

Bug description:
  When a user is created, the email attribute can be in any format.
  I think it's necessary to validate the email format.

  $ keystone user-list
  
  +----------------------------------+----------+---------+--------------+
  |                id                |   name   | enabled |     email    |
  +----------------------------------+----------+---------+--------------+
  | 4d0458857e604f0e9ceeefdea61f92a2 | testuser |   True  | test_user@@  |
  +----------------------------------+----------+---------+--------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1218682/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521788] Fix merged to nova (master)

2015-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/252659
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=bf027800f7badf547681d64e124b60523712b4be
Submitter: Jenkins
Branch:master

commit bf027800f7badf547681d64e124b60523712b4be
Author: Matt Riedemann 
Date:   Wed Dec 2 13:54:50 2015 -0800

neutron: only list ports if there is a quota limit when validating

The list_ports call can take awhile if the project has a lot of ports.
If it turns out that there is unlimited quota, then we don't even need
to list the ports, so move that after the show_quota call.

Partial-Bug: #1521788

Change-Id: I4d128f182283ffa4479934f640a67d9c536824b5


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1521788

Title:
  nova.network.neutronv2.api.validate_networks could be smarter when
  listing ports

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  There are two things we can do to make this more efficient:

  
https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L1182

  1. Move the list_ports call after the unlimited quota check - if the
  quota is unlimited, we don't need to list ports.

  2. Filter the list_ports response to only return the port id, we don't
  need the other port details in the response since we don't use those
  fields, we're just getting a count.
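
  A condensed sketch of both points against the neutron client API (the helper
  itself is illustrative, not the nova code):

      def ports_needed_ok(neutron, project_id, ports_requested):
          quota = neutron.show_quota(project_id).get('quota', {})
          port_limit = quota.get('port', -1)
          if port_limit < 0:
              # Unlimited quota: no need to count existing ports at all.
              return True
          # Only the ids are needed for counting, so don't fetch full ports.
          ports = neutron.list_ports(tenant_id=project_id,
                                     fields=['id']).get('ports', [])
          return len(ports) + ports_requested <= port_limit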

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1521788/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1522349] Re: FixedIpNotFoundForAddress should be catched when create instance

2015-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/252879
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=e1cd13f862b6937458782666a8ebcc7cd4f351d8
Submitter: Jenkins
Branch:master

commit e1cd13f862b6937458782666a8ebcc7cd4f351d8
Author: jichenjc 
Date:   Tue Nov 24 01:54:40 2015 +0800

Catch FixedIpNotFoundForAddress when create server

FixedIpNotFoundForAddress is not catched during server creation,
so 500 error is reported to client side.

Change-Id: I3cd45cd63c962225d6a893ad0fdeb01ecb94e784
Closes-Bug: 1522349
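
Illustrative only, not the actual patch: the general shape of such a fix is to
catch the exception at the API layer and translate it into a 400 instead of
letting it escape as a 500.

    import webob.exc

    from nova import exception

    def create_server(compute_api, context, **create_kwargs):
        try:
            return compute_api.create(context, **create_kwargs)
        except exception.FixedIpNotFoundForAddress as err:
            raise webob.exc.HTTPBadRequest(explanation=err.format_message())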


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1522349

Title:
  FixedIpNotFoundForAddress should be catched when create instance

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  
  nova --debug boot --image 9eee793a-25e5-4f42-bd9e-b869e60d3dbd --flavor 
m1.micro --user-data 't.py' --nic 
net-id=7b2133b4-57a7-4010-ad60-68e07ceb298a,v4-fixed-ip=10.10.20.20 t2

  
  RESP BODY: {"computeFault": {"message": "Unexpected API Error. Please report 
this at http://bugs.launchpad.net/nova/ and attach the Nova API log if 
possible.\n", "code": 500}}

  DEBUG (shell:916) Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-eee88d6a-a602-4893-b232-842981d6bd6e)
  Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/novaclient/shell.py", line 
914, in main
  OpenStackComputeShell().main(argv)
File "/usr/local/lib/python2.7/dist-packages/novaclient/shell.py", line 
841, in main
  args.func(self.cs, args)
File "/usr/local/lib/python2.7/dist-packages/novaclient/v2/shell.py", line 
521, in do_boot
  server = cs.servers.create(*boot_args, **boot_kwargs)
File "/usr/local/lib/python2.7/dist-packages/novaclient/v2/servers.py", 
line 1011, in create
  **boot_kwargs)
File "/usr/local/lib/python2.7/dist-packages/novaclient/v2/servers.py", 
line 544, in _boot
  return_raw=return_raw, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/novaclient/base.py", line 172, 
in _create
  _resp, body = self.api.client.post(url, body=body)
File "/usr/local/lib/python2.7/dist-packages/keystoneclient/adapter.py", 
line 176, in post
  return self.request(url, 'POST', **kwargs)
File "/usr/local/lib/python2.7/dist-packages/novaclient/client.py", line 
93, in request
  raise exceptions.from_response(resp, body, url, method)
  ClientException: Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-eee88d6a-a602-4893-b232-842981d6bd6e)
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-eee88d6a-a602-4893-b232-842981d6bd6e)
  (reverse-i-search)`vim /': sudo ^Cm 
/usr/local/lib/python2.7/dist-packages/novaclient/__init__.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1522349/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1522681] Re: neutron-usage-audit doesn't work properly

2015-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/254054
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=5b6992db49cf6e69b05411ed7d170198434f39e9
Submitter: Jenkins
Branch:master

commit 5b6992db49cf6e69b05411ed7d170198434f39e9
Author: niusmallnan 
Date:   Mon Dec 7 07:47:02 2015 +

move usage_audit to cmd/eventlet package

setup_rpc func need the eventlet monkey_patch,
otherwise the main process will be blocked.

Change-Id: I9f4a0b7c957b7dc7740e3cf6e75f18778ad562d0
Closes-Bug: #1522681


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1522681

Title:
  neutron-usage-audit doesn't work properly

Status in neutron:
  Fix Released

Bug description:
  neutron-usage-audit code:

  cxt = context.get_admin_context()
  plugin = manager.NeutronManager.get_plugin()
  l3_plugin = manager.NeutronManager.get_service_plugins().get(
  constants.L3_ROUTER_NAT)

  when it gets l3_plugin  instance,  setup_rpc func will be executed:

  @log_helpers.log_method_call
  def setup_rpc(self):
  # RPC support
  self.topic = topics.L3PLUGIN
  self.conn = n_rpc.create_connection(new=True)
  self.agent_notifiers.update(
  {n_const.AGENT_TYPE_L3: l3_rpc_agent_api.L3AgentNotifyAPI()})
  self.endpoints = [l3_rpc.L3RpcCallback()]
  self.conn.create_consumer(self.topic, self.endpoints,
fanout=False)
  self.conn.consume_in_threads()

  The consume_in_threads func needs the eventlet monkey_patch, otherwise the
  main process will be blocked.
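
  The fix moves the entry point under neutron's cmd/eventlet package; the
  essential pattern is simply to monkey patch before anything creates RPC
  consumers, roughly:

      import eventlet
      eventlet.monkey_patch()   # must run before any RPC consumers are created

      def main():
          # Import the actual service code only after patching, so that the
          # consumer threads created by setup_rpc() can cooperatively yield.
          from neutron.cmd import usage_audit   # illustrative import path
          usage_audit.main()

      if __name__ == '__main__':
          main()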

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1522681/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523849] Re: Incorrect ID for theme preview navigation

2015-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/254651
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=c932f9d00097ef75455a9cdb8696f0feee0ce6c2
Submitter: Jenkins
Branch:master

commit c932f9d00097ef75455a9cdb8696f0feee0ce6c2
Author: Rob Cresswell 
Date:   Tue Dec 8 10:01:58 2015 +

Fix Dialogs section ID in theme preview

'dialos' -> 'dialogs'

Change-Id: I80c07641058b18cae11b369ce9d3b02696a641fa
Closes-Bug: 1523849


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1523849

Title:
  Incorrect ID for theme preview navigation

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The "Dialogs" section has an ID of 'dialos', meaning the nav doesn't
  work correctly for that section.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1523849/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506701] Re: metadata service security-groups doesn't work with neutron

2015-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/235788
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=8a9a457743396055e790a708aa5c533951410b79
Submitter: Jenkins
Branch:master

commit 8a9a457743396055e790a708aa5c533951410b79
Author: Hans Lindgren 
Date:   Fri Oct 16 10:28:33 2015 +0200

Fix metadata service security-groups when using Neutron

The metadata service was hard-coded to lookup security groups from the
DB when it really should go through the security group API.

Change-Id: I8ab4a0e091de1e7ca02eec6417e1cff5f1d0260f
Closes-Bug: #1506701


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1506701

Title:
  metadata service security-groups doesn't work with neutron

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Using the metadata service to get the security groups for an instance via

  curl http://169.254.169.254/latest/meta-data/security-groups

  doesn't work when you are using neutron. This is because the metadata
  server is hard-coded to look for security groups in the nova DB.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1506701/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523503] Re: Scheduling does not work with a hybid cloud

2015-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/254206
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=f33dfa0647f0cd289bce19f7c06b3d0a88070a92
Submitter: Jenkins
Branch:master

commit f33dfa0647f0cd289bce19f7c06b3d0a88070a92
Author: Gary Kotton 
Date:   Mon Dec 7 05:48:37 2015 -0800

Scheduler: honor the glance metadata for hypervisor details

The scheduler would not honor the glance image metadata for
'hypervisor_type' and 'hypervisor_version_requires'. This was due
to the fact that this was not part if the image_meta object.

Change-Id: I03742cc9330814e2758184d8189a5be93a2b978b
Closes-bug: #1523503


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1523503

Title:
  Scheduling does not work with a hybid cloud

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  Confirmed

Bug description:
  The glance image metadata 'hypervisor_type' and 'hypervisor_version_requires'
  are not honored. The reason is that these are replaced by img_hv_type and
  img_hv_requested_version, so the scheduler will not take them into account.
  This breaks scheduling in a hybrid cloud.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1523503/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524061] [NEW] Ceilometer - resource grid - hover plot point box is floating left shrinking box

2015-12-08 Thread German Rivera
Public bug reported:

On Resource usage > Stats, the hover plot-point box is floated left,
shrinking the background box.

Expected: the box background should cover its content.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: ceilometer ux

** Tags added: ceilometer ux

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1524061

Title:
  Ceilometer - resource grid - hover plot point box is floating left
  shrinking box

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  On Resource usage > Stats, the hover plot-point box is floated left,
  shrinking the background box.

  Expected: the box background should cover its content.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1524061/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524060] [NEW] Ceilometer - time period - hide range field until user select other

2015-12-08 Thread German Rivera
Public bug reported:

On the Resource usage > Stats tab, the range fields appear even when the
period option "other" is not selected.

Expected: the from/to fields should only appear when "other" is selected.

Actual: the from/to fields always appear.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: ceilometer ux

** Attachment added: "ceilometer-ranges.PNG"
   
https://bugs.launchpad.net/bugs/1524060/+attachment/4531600/+files/ceilometer-ranges.PNG

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1524060

Title:
  Ceilometer - time period - hide range field until user select other

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  On the Resource usage > Stats tab, the range fields appear even when the
  period option "other" is not selected.

  Expected: the from/to fields should only appear when "other" is selected.

  Actual: the from/to fields always appear.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1524060/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524044] [NEW] Deleting a VM with the name "vm_ln_[1" leads to a traceback

2015-12-08 Thread Prinika
Public bug reported:

1. Exact version of Nova/OpenStack you are running:
ii  nova-api 2:12.0.0~rc1-0ubuntu1~cloud0  all  
OpenStack Compute - API frontend
ii  nova-cert2:12.0.0~rc1-0ubuntu1~cloud0  all  
OpenStack Compute - certificate management
ii  nova-common  2:12.0.0~rc1-0ubuntu1~cloud0  all  
OpenStack Compute - common files
ii  nova-conductor   2:12.0.0~rc1-0ubuntu1~cloud0  all  
OpenStack Compute - conductor service
ii  nova-consoleauth 2:12.0.0~rc1-0ubuntu1~cloud0  all  
OpenStack Compute - Console Authenticator
ii  nova-novncproxy  2:12.0.0~rc1-0ubuntu1~cloud0  all  
OpenStack Compute - NoVNC proxy
ii  nova-scheduler   2:12.0.0~rc1-0ubuntu1~cloud0  all  
OpenStack Compute - virtual machine scheduler
ii  python-nova  2:12.0.0~rc1-0ubuntu1~cloud0  all  
OpenStack Compute Python libraries
ii  python-novaclient2:2.29.0-1~cloud0 all  
client library for OpenStack Compute API


2. Relevant log files:


2015-12-08 10:54:46.428 5561 INFO nova.osapi_compute.wsgi.server 
[req-9e819344-85a6-4bf7-a225-88cdf59e235e bfb9feab857949bda1d1a40d5da4350d 
61909ec3cd0f4d7fb6c641d71d01e106 - - -] 10.100.100.20 "GET /v2/ HTTP/1.1" 
status: 200 len: 575 time: 0.0552752
2015-12-08 10:54:46.559 5561 ERROR oslo_db.sqlalchemy.exc_filters 
[req-6ca54a98-8477-4207-8aa6-c7c2e84de3bb bfb9feab857949bda1d1a40d5da4350d 
61909ec3cd0f4d7fb6c641d71d01e106 - - -] DBAPIError exception wrapped from 
(pymysql.err.InternalError) (1139, u"Got error 'brackets ([ ]) not balanced' 
from regexp") [SQL: u'SELECT anon_1.instances_created_at AS 
anon_1_instances_created_at, anon_1.instances_updated_at AS 
anon_1_instances_updated_at, anon_1.instances_deleted_at AS 
anon_1_instances_deleted_at, anon_1.instances_deleted AS 
anon_1_instances_deleted, anon_1.instances_id AS anon_1_instances_id, 
anon_1.instances_user_id AS anon_1_instances_user_id, 
anon_1.instances_project_id AS anon_1_instances_project_id, 
anon_1.instances_image_ref AS anon_1_instances_image_ref, 
anon_1.instances_kernel_id AS anon_1_instances_kernel_id, 
anon_1.instances_ramdisk_id AS anon_1_instances_ramdisk_id, 
anon_1.instances_hostname AS anon_1_instances_hostname, 
anon_1.instances_launch_index AS anon_1_instances_lau
 nch_index, anon_1.instances_key_name AS anon_1_instances_key_name, 
anon_1.instances_key_data AS anon_1_instances_key_data, 
anon_1.instances_power_state AS anon_1_instances_power_state, 
anon_1.instances_vm_state AS anon_1_instances_vm_state, 
anon_1.instances_task_state AS anon_1_instances_task_state, 
anon_1.instances_memory_mb AS anon_1_instances_memory_mb, 
anon_1.instances_vcpus AS anon_1_instances_vcpus, anon_1.instances_root_gb AS 
anon_1_instances_root_gb, anon_1.instances_ephemeral_gb AS 
anon_1_instances_ephemeral_gb, anon_1.instances_ephemeral_key_uuid AS 
anon_1_instances_ephemeral_key_uuid, anon_1.instances_host AS 
anon_1_instances_host, anon_1.instances_node AS anon_1_instances_node, 
anon_1.instances_instance_type_id AS anon_1_instances_instance_type_id, 
anon_1.instances_user_data AS anon_1_instances_user_data, 
anon_1.instances_reservation_id AS anon_1_instances_reservation_id, 
anon_1.instances_launched_at AS anon_1_instances_launched_at, 
anon_1.instances_terminated_at AS anon
 _1_instances_terminated_at, anon_1.instances_availability_zone AS 
anon_1_instances_availability_zone, anon_1.instances_display_name AS 
anon_1_instances_display_name, anon_1.instances_display_description AS 
anon_1_instances_display_description, anon_1.instances_launched_on AS 
anon_1_instances_launched_on, anon_1.instances_locked AS 
anon_1_instances_locked, anon_1.instances_locked_by AS 
anon_1_instances_locked_by, anon_1.instances_os_type AS 
anon_1_instances_os_type, anon_1.instances_architecture AS 
anon_1_instances_architecture, anon_1.instances_vm_mode AS 
anon_1_instances_vm_mode, anon_1.instances_uuid AS anon_1_instances_uuid, 
anon_1.instances_root_device_name AS anon_1_instances_root_device_name, 
anon_1.instances_default_ephemeral_device AS 
anon_1_instances_default_ephemeral_device, anon_1.instances_default_swap_device 
AS anon_1_instances_default_swap_device, anon_1.instances_config_drive AS 
anon_1_instances_config_drive, anon_1.instances_access_ip_v4 AS 
anon_1_instances_access_ip
 _v4, anon_1.instances_access_ip_v6 AS anon_1_instances_access_ip_v6, 
anon_1.instances_auto_disk_config AS anon_1_instances_auto_disk_config, 
anon_1.instances_progress AS anon_1_instances_progress, 
anon_1.instances_shutdown_terminate AS anon_1_instances_shutdown_terminate, 
anon_1.instances_disable_terminate AS anon_1_instances_disable_terminate, 
anon_1.instances_cell_name AS anon_1_instances_cell_name, 
anon_1.instances_internal_id AS anon_1_instanc
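
The error above ("brackets ([ ]) not balanced' from regexp") appears to come
from the instance name being used as a regular-expression filter. Independent
of nova, the usual defence for user-supplied names is to escape regex
metacharacters before building such a pattern, e.g.:

    import re

    name = 'vm_ln_[1'
    pattern = re.escape(name)      # '[' is escaped, so it is no longer special
    print(re.search(pattern, 'vm_ln_[1-foo') is not None)   # True: literal match
    print(re.search(pattern, 'other_vm') is not None)       # False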

[Yahoo-eng-team] [Bug 1468236] Re: enable neutron support distributed DHCP agents

2015-12-08 Thread Armando Migliaccio
** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1468236

Title:
  enable neutron support distributed DHCP agents

Status in neutron:
  Won't Fix

Bug description:
  The current DHCP service in neutron is centralized, and it suffers from
  several ailments in large-scale scenarios:
  1. VMs can't get an IP at boot time; most seriously, this means the metadata
  service can't work.
  2. The DHCP agent needs a lot of time to restart if it has been serving a
  large number of VMs.
  3. The network node has a large number of namespaces, especially in a public
  cloud where there are many tenants and private networks.

  I think we can run the dhcp-agent across all compute nodes, but not exactly
  like the current DVR; the main differences are as below:
  1. It simplifies the dhcp-agent scheduler in neutron-server: when we create a
  VM, neutron-server just sends the RPC message based on the port's host id.
  2. The dhcp-agent running on a compute node serves only the VMs on that
  compute node; if this dhcp-agent goes down, it only affects the VMs running
  on that node.
  3. Move the network-binding-to-dhcp-agent logic from neutron-server to the
  dhcp-agent; this removes the race that happens between neutron-server
  multi-workers.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1468236/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524038] [NEW] Determining glance version fails with https

2015-12-08 Thread Samuel Matzek
Public bug reported:

The nova.image.glance.py method _determine_curr_major_version fails when
using https with certificate validation to communicate with the glance
server.  The stack looks like this:

2015-12-08 12:26:57.336 31751 ERROR nova.image.glance Traceback (most recent 
call last):
2015-12-08 12:26:57.336 31751 ERROR nova.image.glance   File 
"/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 170, in 
_determine_curr_major_version
2015-12-08 12:26:57.336 31751 ERROR nova.image.glance response, content = 
http_client.get('/versions')
2015-12-08 12:26:57.336 31751 ERROR nova.image.glance   File 
"/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 280, in get
2015-12-08 12:26:57.336 31751 ERROR nova.image.glance return 
self._request('GET', url, **kwargs)
2015-12-08 12:26:57.336 31751 ERROR nova.image.glance   File 
"/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 261, in 
_request
2015-12-08 12:26:57.336 31751 ERROR nova.image.glance raise 
exc.CommunicationError(message=message)
2015-12-08 12:26:57.336 31751 ERROR nova.image.glance CommunicationError: Error 
finding address for https://my.glance.server:9292/versions: [SSL: 
CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)

The root cause is that this method creates an HttpClient to fetch the
versions URI and does not pass in the cert validation information.
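
A standalone illustration of what the version probe has to do over https
(endpoint and CA bundle path are placeholders): the request must carry the CA
bundle, or an explicit insecure opt-out, otherwise verification fails exactly
as in the traceback above.

    import requests

    GLANCE_ENDPOINT = 'https://my.glance.server:9292'
    CA_BUNDLE = '/etc/ssl/certs/my-cloud-ca.pem'    # placeholder path

    # verify= points requests at the CA that signed the glance server cert.
    resp = requests.get(GLANCE_ENDPOINT + '/versions', verify=CA_BUNDLE)
    print(resp.json()['versions'][0]['id'])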

** Affects: nova
 Importance: Undecided
 Assignee: Samuel Matzek (smatzek)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Samuel Matzek (smatzek)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1524038

Title:
  Determining glance version fails with https

Status in OpenStack Compute (nova):
  New

Bug description:
  The nova.image.glance.py method _determine_curr_major_version fails
  when using https with certificate validation to communicate with the
  glance server.  The stack looks like this:

  2015-12-08 12:26:57.336 31751 ERROR nova.image.glance Traceback (most recent 
call last):
  2015-12-08 12:26:57.336 31751 ERROR nova.image.glance   File 
"/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 170, in 
_determine_curr_major_version
  2015-12-08 12:26:57.336 31751 ERROR nova.image.glance response, content = 
http_client.get('/versions')
  2015-12-08 12:26:57.336 31751 ERROR nova.image.glance   File 
"/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 280, in get
  2015-12-08 12:26:57.336 31751 ERROR nova.image.glance return 
self._request('GET', url, **kwargs)
  2015-12-08 12:26:57.336 31751 ERROR nova.image.glance   File 
"/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 261, in 
_request
  2015-12-08 12:26:57.336 31751 ERROR nova.image.glance raise 
exc.CommunicationError(message=message)
  2015-12-08 12:26:57.336 31751 ERROR nova.image.glance CommunicationError: 
Error finding address for https://my.glance.server:9292/versions: [SSL: 
CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)

  The root cause is that this method creates an HttpClient to fetch the
  versions URI and does not pass in the cert validation information.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1524038/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524035] [NEW] nova.virt.block_device.DriverBlockDevice cannot save to DB if bdm passed in was not already an object

2015-12-08 Thread Matt Riedemann
Public bug reported:

This code doesn't work:

https://github.com/openstack/nova/blob/master/nova/virt/block_device.py#L128-L131

That will create a BlockDeviceMapping object to be wrapped in the
DriverBlockDevice object (if a BDM object was not passed in). The
problem is that self._bdm_obj.save() will fail with something like this:

Captured traceback:
~~~
Traceback (most recent call last):
  File "nova/tests/unit/virt/test_block_device.py", line 909, in 
test_blank_attach_volume_cinder_cross_az_attach_false
self.virt_driver)
  File "nova/virt/block_device.py", line 456, in attach
self.save()
  File "nova/virt/block_device.py", line 363, in save
super(DriverVolumeBlockDevice, self).save()
  File "nova/virt/block_device.py", line 176, in save
self._bdm_obj.save()
  File 
"/home/mriedem/git/nova/.tox/py27/local/lib/python2.7/site-packages/oslo_versionedobjects/base.py",
 line 203, in wrapper
objtype=self.obj_name())
oslo_versionedobjects.exception.OrphanedObjectError: Cannot call save on 
orphaned BlockDeviceMapping object

That's because the BDM object that was created doesn't have a context
set on it.

And we can't pass context to self._bdm_obj.save() because we removed
that here: https://review.openstack.org/#/c/164268/

We've apparently never had a problem with this in runtime because we
must always be constructing the DriverBlockDevice with a real BDM object
in the compute code, we just weren't doing it properly in the tests -
and the tests mock out nova.objects.BlockDeviceMapping.save() so we
never knew it was a problem.

** Affects: nova
 Importance: Medium
 Assignee: Matt Riedemann (mriedem)
 Status: Triaged

** Changed in: nova
   Status: New => Triaged

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova
 Assignee: (unassigned) => Matt Riedemann (mriedem)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1524035

Title:
  nova.virt.block_device.DriverBlockDevice cannot save to DB if bdm
  passed in was not already an object

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  This code doesn't work:

  
https://github.com/openstack/nova/blob/master/nova/virt/block_device.py#L128-L131

  That will create a BlockDeviceMapping object to be wrapped in the
  DriverBlockDevice object (if a BDM object was not passed in). The
  problem is that self._bdm_obj.save() will fail with something like this:

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "nova/tests/unit/virt/test_block_device.py", line 909, in 
test_blank_attach_volume_cinder_cross_az_attach_false
  self.virt_driver)
File "nova/virt/block_device.py", line 456, in attach
  self.save()
File "nova/virt/block_device.py", line 363, in save
  super(DriverVolumeBlockDevice, self).save()
File "nova/virt/block_device.py", line 176, in save
  self._bdm_obj.save()
File 
"/home/mriedem/git/nova/.tox/py27/local/lib/python2.7/site-packages/oslo_versionedobjects/base.py",
 line 203, in wrapper
  objtype=self.obj_name())
  oslo_versionedobjects.exception.OrphanedObjectError: Cannot call save on 
orphaned BlockDeviceMapping object

  That's because the BDM object that was created doesn't have a context
  set on it.

  And we can't pass context to self._bdm_obj.save() because we removed
  that here: https://review.openstack.org/#/c/164268/

  We've apparently never hit this at runtime because the compute code
  always constructs the DriverBlockDevice with a real BDM object; we
  just weren't doing it properly in the tests - and the tests mock out
  nova.objects.BlockDeviceMapping.save(), so we never knew it was a
  problem.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1524035/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1481154] Re: Glance returns HTTPInternalServerError when using postgresql

2015-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/208851
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=d13f088cac92988f3360363d9d34b60b518d18e9
Submitter: Jenkins
Branch:master

commit d13f088cac92988f3360363d9d34b60b518d18e9
Author: wangxiyuan 
Date:   Tue Aug 4 14:35:46 2015 +0800

Fix default value with postgreSQL

When using 'sort-key=size' for multi-page listing with a postgreSQL
DB, it raises an error, DBError (psycopg2.DataError), and returns
HTTPInternalServerError (HTTP 500) to users. The reason is that
postgreSQL does not accept a string as the default value for the
integer 'size' column. This patch sets the size's default value to zero.

Change-Id: Ib3b5f3f57be3683ba274f0122e6314978a79e75f
Closes-bug:#1481154
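To illustrate the fix described in the commit message, here is a small,
framework-free sketch (the helper name and the set of integer sort keys are
assumptions, not the glance code): marker-based pagination substitutes a
default when the sort-key value is NULL, and that default has to match the
column type or PostgreSQL rejects the comparison.

    INTEGER_SORT_KEYS = {'size', 'min_ram', 'min_disk'}  # assumed for illustration

    def marker_default(sort_key, value):
        """Return a type-correct stand-in for a NULL sort-key value."""
        if value is not None:
            return value
        return 0 if sort_key in INTEGER_SORT_KEYS else ''

    print(marker_default('size', None))   # 0  -> comparable with an integer column
    print(marker_default('name', None))   # '' -> fine for a text column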


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1481154

Title:
  Glance returns HTTPInternalServerError when using postgresql

Status in Glance:
  Fix Released

Bug description:
  version: master,  db: postgreSQL

  reproduce:

  1. Create one or more images without data, like:
  glance image-create --name xxx  or  glance image-create

  then we will get image information through 'glance image-list', like:

  
  +--------------------------------------+--------------------------+-------------+------------------+---------+--------+
  | ID                                   | Name                     | Disk Format | Container Format | Size    | Status |
  +--------------------------------------+--------------------------+-------------+------------------+---------+--------+
  | 67b47035-3e70-4dd0-a5a7-3fc3350a26ee | cirros-0.3.0-x86_64-disk | qcow2       | bare             | 9761280 | active |
  | bf8022ef-3e3a-4bef-8d13-ccea9000828f | test                     |             |                  |         | queued |
  | 0180e9f2-6506-46c7-be07-e67ee6d5dab8 | test2                    |             |                  |         | queued |
  | 505bbc97-f470-44b7-bb59-315a5d92d8e0 | test3                    |             |                  |         | queued |
  | 03ef4387-f276-40dc-9aa6-ad811eb3c466 | test5                    |             |                  |         | queued |
  | 5af5d95d-8c2f-4bec-9fdd-74ce9ce3a4ca |                          |             |                  |         | queued |
  +--------------------------------------+--------------------------+-------------+------------------+---------+--------+

  2. List images like: glance image-list --sort-key=size --page-size=3
  (sort-key must be 'size', and page-size must be less than or equal to
  the number of images)

  then, glance will return an error:
  HTTPInternalServerError (HTTP 500)

  P.S.: When using a MySQL DB, I did not encounter this problem.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1481154/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524030] [NEW] Reduce revocation events for performance improvement

2015-12-08 Thread Jorge Munoz
Public bug reported:

Keystone performance is reduced as revocation events grow. In an effort
to reduce the number of revocation events written to the
revocation_event table, keystone has to explicitly check whether the
domain or project associated with the token is enabled.

Follow up patches:
1. Remove revocation events for deleted domains or projects
2. Remove revocation events for deleted grants
3. Bug 1511775
4. Delete unused columns (revocation_table - project and domains)
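To illustrate the check described above, here is a rough, self-contained
sketch (invented data structures and names - not the keystone code): instead
of recording a revocation event when a project or domain is disabled, token
validation consults the enabled flags directly.

    PROJECTS = {'p1': {'enabled': True, 'domain_id': 'd1'}}
    DOMAINS = {'d1': {'enabled': True}}

    def validate_token(token):
        project = PROJECTS[token['project_id']]
        domain = DOMAINS[project['domain_id']]
        if not (project['enabled'] and domain['enabled']):
            raise ValueError('token is no longer valid')
        return token

    token = {'project_id': 'p1'}
    validate_token(token)              # passes
    DOMAINS['d1']['enabled'] = False   # disabling writes no revocation event
    # validate_token(token) would now raise ValueError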

** Affects: keystone
 Importance: Undecided
 Assignee: Jorge Munoz (jorge-munoz)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1524030

Title:
  Reduce revocation events for performance improvement

Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  Keystone performance is reduced as revocation events grow. In an effort
  to reduce the number of revocation events written to the
  revocation_event table, keystone has to explicitly check whether the
  domain or project associated with the token is enabled.

  Follow up patches:
  1. Remove revocation events for deleted domains or projects
  2. Remove revocation events for deleted grants
  3. Bug 1511775
  4. Delete unused columns (revocation_table - project and domains)

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1524030/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524020] [NEW] DVRImpact: dvr_vmarp_table_update and dvr_update_router_add_vm is called for every port update instead of only when host binding or mac-address changes occur

2015-12-08 Thread Swaminathan Vasudevan
Public bug reported:

DVR ARP update (dvr_vmarp_table_update) and dvr_update_router_add_vm
are called for every update_port if the mac_address changes or when
update_device_up is true.

These functions should be called from _notify_l3_agent_port_update, only
when a host binding for a service port changes or when a mac_address for
the service port changes.
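A hypothetical guard (simplified - not the neutron implementation) showing
the intended behaviour: only trigger the DVR ARP/router updates when the
port's host binding or MAC address actually changed.

    def should_notify_dvr(original_port, updated_port):
        host_changed = (original_port.get('binding:host_id')
                        != updated_port.get('binding:host_id'))
        mac_changed = (original_port.get('mac_address')
                       != updated_port.get('mac_address'))
        return host_changed or mac_changed

    old = {'binding:host_id': 'compute-1', 'mac_address': 'fa:16:3e:00:00:01'}
    new = dict(old, name='renamed-port')  # an unrelated update
    print(should_notify_dvr(old, new))    # False -> no dvr_vmarp_table_update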

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-dvr-backlog

** Summary changed:

- DVR Arp update and dvr_update_router_add_vm is called for every port update 
instead of only when host binding or mac-address changes occur
+ DVRImpact:  dvr_vmarp_table_update and dvr_update_router_add_vm is called for 
every port update instead of only when host binding or mac-address changes occur

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1524020

Title:
  DVRImpact:  dvr_vmarp_table_update and dvr_update_router_add_vm is
  called for every port update instead of only when host binding or mac-
  address changes occur

Status in neutron:
  New

Bug description:
  DVR ARP update (dvr_vmarp_table_update) and dvr_update_router_add_vm
  are called for every update_port if the mac_address changes or when
  update_device_up is true.

  These functions should be called from _notify_l3_agent_port_update,
  only when a host binding for a service port changes or when a
  mac_address for the service port changes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1524020/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1514869] Re: Missing documentation for SCSS code design

2015-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/244652
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=738cc3473545e929f8a86b2e3bc40d0d42f24d3c
Submitter: Jenkins
Branch:master

commit 738cc3473545e929f8a86b2e3bc40d0d42f24d3c
Author: Rob Cresswell 
Date:   Tue Nov 10 15:04:05 2015 +

Add dev docs for SCSS/and styling in Horizon

This patch adds a topic guide explaining how developers should write and
organise SCSS within Horizon.

Change-Id: I4de2d1da73f6c7977a16fba9d09678923d7c443c
Closes-Bug: 1514869


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1514869

Title:
  Missing documentation for SCSS code design

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  There is a patch proposed for deployer documentation[1] around the
  styling in Horizon, but still little explaining how developers should
  approach the SCSS.

  1. https://review.openstack.org/#/c/238723/

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1514869/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524006] Re: NotImplementedError raised while creating a network using the Nova REST API

2015-12-08 Thread venkatamahesh
** Project changed: openstack-manuals => openstack-api-site

** Tags added: compute

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1524006

Title:
  NotImplementedError raised while creating a network using the Nova REST API

Status in OpenStack Compute (nova):
  New
Status in openstack-api-site:
  New

Bug description:
  Hi,

  I'm stuck with this error:
  {"computeFault":
  {"message": "Unexpected API Error. Please report this at
  http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.\n
  ", "code": 500}}

  It happened when I tried to create a network for a non-admin project using
  the Nova REST API
  Source: http://developer.openstack.org/api-ref-compute-v2.1.html

  POST
  http://10.1.244.10:8774/v2.1/090aee2684a04e8397193a118a6e91b0/os-networks

  POST data:
  {
      "network": {
          "label": "scale-1-net",
          "cidr": "172.1.0.0/24",
          "mtu": 9000,
          "dhcp_server": "172.1.0.2",
          "enable_dhcp": false,
          "share_address": true,
          "allowed_start": "172.1.0.10",
          "allowed_end": "172.1.0.200"
      }
  }

  What's wrong?
  or
  What's the solution?

  Thanks a lot.
  ---
  Release: 2.1.0 on 2015-12-08 06:12
  SHA: 60c5c2798004984738d171055dbc2a6fd37a85fe
  Source: 
http://git.openstack.org/cgit/openstack/nova/tree/api-guide/source/index.rst
  URL: http://developer.openstack.org/api-guide/compute/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1524006/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463800] Re: Confusing errors appears after running netns-cleanup with --force attribute

2015-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/254422
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=cfc9578148985f117c230be263d9faab4c1bb57e
Submitter: Jenkins
Branch:master

commit cfc9578148985f117c230be263d9faab4c1bb57e
Author: Assaf Muller 
Date:   Mon Dec 7 17:36:06 2015 -0500

Don't emit confusing error in netns-cleanup

If we're trying to delete a dhcp/qrouter device with use_veth
= False (which has been the default for some time), we'll first
try to 'ip link del %s', which will fail and emit a confusing
error, then try 'ovs-vsctl del-port'. There's no need to
log an error in such a case.

The patch attempts to future proof by setting the
set_log_fail_as_error(False) to be as tight as possible, so we
do log errors in case the device is somehow used in the future.

Change-Id: I1954bde3ee9a2e43d7615717134b61c5fa7cfbb1
Closes-Bug: #1463800
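A minimal, self-contained sketch (assumed helper code - not the neutron
implementation) of the pattern the fix applies: attempt the veth-style delete
quietly, since it is expected to fail for OVS internal ports, and only then
fall back to the OVS delete, without logging an error for the expected
failure.

    import subprocess

    def delete_device(namespace, device):
        try:
            # Expected to fail when use_veth=False (OVS internal port); keep quiet.
            subprocess.check_call(
                ['ip', 'netns', 'exec', namespace, 'ip', 'link', 'delete', device],
                stderr=subprocess.DEVNULL)
        except subprocess.CalledProcessError:
            # Fall back to removing the OVS internal port instead.
            subprocess.check_call(['ovs-vsctl', '--if-exists', 'del-port', device])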


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463800

Title:
  Confusing errors appears after running netns-cleanup with --force
  attribute

Status in neutron:
  Fix Released

Bug description:
  The setup: Controller, Compute and 2 Network nodes 
  KILO - VRRP on RHEL7.1

  Trying to delete all "alive" namespaces (alive - a router with an attached
  interface to the network).
  The command succeeded but a lot of error messages appear.

   neutron-netns-cleanup --config-file=/etc/neutron/neutron.conf 
--config-file=/etc/neutron/dhcp_agent.ini --force
  2015-06-07 11:41:14.760 2623 INFO neutron.common.config [-] Logging enabled!
  2015-06-07 11:41:14.761 2623 INFO neutron.common.config [-] 
/usr/bin/neutron-netns-cleanup version 2015.1.0
  2015-06-07 11:41:16.777 2623 WARNING oslo_config.cfg [-] Option 
"use_namespaces" from group "DEFAULT" is deprecated for removal.  Its value may 
be silently ignored in the future.
  2015-06-07 11:41:17.193 2623 ERROR neutron.agent.linux.utils [-] 
  Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'qrouter-0239082e-0817-430f-a183-581cc995da28', 'ip', 'link', 
'delete', 'qr-c3e98790-f6']
  Exit code: 2
  Stdin: 
  Stdout: 
  Stderr: RTNETLINK answers: Operation not supported

  2015-06-07 11:41:17.592 2623 ERROR neutron.agent.linux.utils [-] 
  Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'qrouter-0239082e-0817-430f-a183-581cc995da28', 'ip', 'link', 
'delete', 'ha-1a34b88b-13']
  Exit code: 2
  Stdin: 
  Stdout: 
  Stderr: RTNETLINK answers: Operation not supported

  2015-06-07 11:41:18.655 2623 ERROR neutron.agent.linux.utils [-] 
  Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'qrouter-c3fb97a3-8547-4008-9360-daa940906da3', 'ip', 'link', 
'delete', 'qr-ffdaf269-a2']
  Exit code: 2
  Stdin: 
  Stdout: 
  Stderr: RTNETLINK answers: Operation not supported

  2015-06-07 11:41:19.052 2623 ERROR neutron.agent.linux.utils [-] 
  Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'qrouter-c3fb97a3-8547-4008-9360-daa940906da3', 'ip', 'link', 
'delete', 'ha-e4a43b4c-79']
  Exit code: 2
  Stdin: 
  Stdout: 
  Stderr: RTNETLINK answers: Operation not supported

  2015-06-07 11:41:22.065 2623 ERROR neutron.agent.linux.utils [-] 
  Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'qrouter-082e1275-c4d5-445b-a9c3-6bb0b7fe0b6a', 'ip', 'link', 
'delete', 'qr-14f1c00c-6a']
  Exit code: 2
  Stdin: 
  Stdout: 
  Stderr: RTNETLINK answers: Operation not supported

  2015-06-07 11:41:22.490 2623 ERROR neutron.agent.linux.utils [-] 
  Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'qrouter-082e1275-c4d5-445b-a9c3-6bb0b7fe0b6a', 'ip', 'link', 
'delete', 'qg-72ded7f8-ec']
  Exit code: 2
  Stdin: 
  Stdout: 
  Stderr: RTNETLINK answers: Operation not supported

  2015-06-07 11:41:22.881 2623 ERROR neutron.agent.linux.utils [-] 
  Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'qrouter-082e1275-c4d5-445b-a9c3-6bb0b7fe0b6a', 'ip', 'link', 
'delete', 'ha-9630f5a6-73']
  Exit code: 2
  Stdin: 
  Stdout: 
  Stderr: RTNETLINK answers: Operation not supported

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1463800/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1516684] Re: Some volume related quotas are not translatable on admin->defaults

2015-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/251934
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=6b06195a3f09a9c30edcf279d8343fcd2e661579
Submitter: Jenkins
Branch:master

commit 6b06195a3f09a9c30edcf279d8343fcd2e661579
Author: Itxaka 
Date:   Tue Dec 1 16:55:25 2015 +0100

Make some volume related quotas translatable

One static quota (Per volume size (GiB)) and some dynamic quotas
were not being translated due to their dynamic nature.
This patch tries to address that.

Change-Id: I219d36676e21431f434accfdd99dbfb6191519e1
Closes-Bug: #1516684


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1516684

Title:
  Some volume related quotas are not translatable on admin->defaults

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  After preparing my Horizon environment as described in
  http://docs.openstack.org/developer/horizon/contributing.html#running-
  the-pseudo-translation-tool I observe several non-translatable quotas:

  Per Volume Gigabytes 
  Volumes Lvmdriver-1 
  Gigabytes Lvmdriver-1
  Snapshots Lvmdriver-1

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1516684/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524006] Re: NotImplementedError raised while creating a network using the Nova REST API

2015-12-08 Thread Yuri Obshansky
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1524006

Title:
  NotImplementedError raised while creating a network using the Nova REST API

Status in OpenStack Compute (nova):
  New
Status in openstack-api-site:
  New

Bug description:
  Hi,

  I'm stuck with this error:
  {"computeFault":
  {"message": "Unexpected API Error. Please report this at
  http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.\n
  ", "code": 500}}

  It happened when I tried to create a network for a non-admin project using
  the Nova REST API
  Source: http://developer.openstack.org/api-ref-compute-v2.1.html

  POST
  http://10.1.244.10:8774/v2.1/090aee2684a04e8397193a118a6e91b0/os-networks

  POST data:
  {
      "network": {
          "label": "scale-1-net",
          "cidr": "172.1.0.0/24",
          "mtu": 9000,
          "dhcp_server": "172.1.0.2",
          "enable_dhcp": false,
          "share_address": true,
          "allowed_start": "172.1.0.10",
          "allowed_end": "172.1.0.200"
      }
  }

  What's wrong?
  or
  What's the solution?

  Thanks a lot.
  ---
  Release: 2.1.0 on 2015-12-08 06:12
  SHA: 60c5c2798004984738d171055dbc2a6fd37a85fe
  Source: 
http://git.openstack.org/cgit/openstack/nova/tree/api-guide/source/index.rst
  URL: http://developer.openstack.org/api-guide/compute/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1524006/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524004] [NEW] linuxbridge agent does not wire ports for non-traditional device owners

2015-12-08 Thread Mark McClain
Public bug reported:

A recent change [1] made the wiring super restrictive, limited to the
network: and neutron: device owners; this prevented external systems that
use other device owners from getting wired.

[1] https://review.openstack.org/#/c/193485/

** Affects: neutron
 Importance: Medium
 Assignee: Mark McClain (markmcclain)
 Status: In Progress


** Tags: linuxbridge

** Description changed:

- A recent change made the wiring super restrictive to network: and
+ A recent change [1] made the wiring super restrictive to network: and
  neutron: this resulted in external systems that use other device owners
  from getting wired.
+ 
+ [1] https://review.openstack.org/#/c/193485/

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1524004

Title:
  linuxbridge agent does not wire ports for non-traditional device
  owners

Status in neutron:
  In Progress

Bug description:
  A recent change [1] made the wiring super restrictive, limited to the
  network: and neutron: device owners; this prevented external systems
  that use other device owners from getting wired.

  [1] https://review.openstack.org/#/c/193485/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1524004/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523999] [NEW] Any error in L3 agent after external gateway is configured but before the local cache is updated results in errors in subsequent router updates

2015-12-08 Thread Assaf Muller
Public bug reported:

Reproduction:
* Create a new router
* Attach external interface - Execute external_gateway_added successfully but 
fail some time before self.ex_gw_port = self.get_ex_gw_port() (An example of a 
failure would be an RPC error when trying to update FIP statuses. In such a 
case extra routes would not be configured either, and post-router creation 
events would not be sent, which means that for example the metadata proxy 
wouldn't be started).

Any follow up update to the router (Add/remove interface, add/remove
FIP) will fail non-idempotent operations on the external device. This is
because any update will try to add the gateway again (Because
self.ex_gw_port = None). Even without a specific failure, reconfiguring
the external device is wasteful.

HA routers in particular will fail by throwing
VIPDuplicateAddressException for the external device's VIP. This
behavior was actually changed in a recent Mitaka patch
(https://review.openstack.org/#/c/196893/50/neutron/agent/l3/ha_router.py),
so this affects Juno to Liberty but not master and future releases.

The impact on legacy or distributed routers is less severe as their
process_external and routes_updated seem to be idempotent - Verified
against master via a makeshift functional test, I could not vouch for
previous releases.

Severity: It's severe for HA routers from Juno to Liberty, but not as
much for other router types or HA routers on master.
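A self-contained sketch (invented method names - not the neutron code) of the
non-idempotent path described above: because the local cache (self.ex_gw_port)
is only updated at the very end, a failure after the gateway is plumbed makes
every later router update re-run the "added" path instead of the "updated" one.

    class RouterSketch(object):
        def __init__(self):
            self.ex_gw_port = None

        def _get_ex_gw_port(self):
            return {'id': 'gw-port'}  # stand-in for the agent-side lookup

        def process_external(self):
            ex_gw_port = self._get_ex_gw_port()
            if ex_gw_port and not self.ex_gw_port:
                print('external_gateway_added')    # re-entered after a partial failure
            elif ex_gw_port and self.ex_gw_port:
                print('external_gateway_updated')
            # ... FIP status RPC updates can raise here, skipping the line below ...
            self.ex_gw_port = ex_gw_port

    router = RouterSketch()
    router.process_external()  # external_gateway_added
    router.process_external()  # external_gateway_updated (only because the cache was set)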

** Affects: neutron
 Importance: Medium
 Status: New


** Tags: l3-dvr-backlog l3-ha l3-ipam-dhcp

** Description changed:

  Reproduction:
- Create a new router
- Attach external interface - Execute external_gateway_added successfully but 
fail some time before self.ex_gw_port = self.get_ex_gw_port() (An example of a 
failure would be an RPC error when trying to update FIP statuses. In such a 
case extra routes would not be configured either, and post-router creation 
events would not be sent, which means that for example the metadata proxy 
wouldn't be started).
+ * Create a new router
+ * Attach external interface - Execute external_gateway_added successfully but 
fail some time before self.ex_gw_port = self.get_ex_gw_port() (An example of a 
failure would be an RPC error when trying to update FIP statuses. In such a 
case extra routes would not be configured either, and post-router creation 
events would not be sent, which means that for example the metadata proxy 
wouldn't be started).
  
  Any follow up update to the router (Add/remove interface, add/remove
  FIP) will fail non-idempotent operations on the external device. This is
  because any update will try to add the gateway again (Because
  self.ex_gw_port = None). Even without a specific failure, reconfiguring
  the external device is wasteful.
  
  HA routers in particular will fail by throwing
  VIPDuplicateAddressException for the external device's VIP. This
  behavior was actually changed in a recent Mitaka patch
  (https://review.openstack.org/#/c/196893/50/neutron/agent/l3/ha_router.py),
  so this affects Juno to Liberty.
  
  The impact on legacy or distributed routers is less severe as their
  process_external and routes_updated seem to be idempotent - Verified
  against master via a makeshift functional test, I could not vouch for
  previous releases.

** Description changed:

  Reproduction:
  * Create a new router
  * Attach external interface - Execute external_gateway_added successfully but 
fail some time before self.ex_gw_port = self.get_ex_gw_port() (An example of a 
failure would be an RPC error when trying to update FIP statuses. In such a 
case extra routes would not be configured either, and post-router creation 
events would not be sent, which means that for example the metadata proxy 
wouldn't be started).
  
  Any follow up update to the router (Add/remove interface, add/remove
  FIP) will fail non-idempotent operations on the external device. This is
  because any update will try to add the gateway again (Because
  self.ex_gw_port = None). Even without a specific failure, reconfiguring
  the external device is wasteful.
  
  HA routers in particular will fail by throwing
  VIPDuplicateAddressException for the external device's VIP. This
  behavior was actually changed in a recent Mitaka patch
  (https://review.openstack.org/#/c/196893/50/neutron/agent/l3/ha_router.py),
- so this affects Juno to Liberty.
+ so this affects Juno to Liberty but not master and future releases.
  
  The impact on legacy or distributed routers is less severe as their
  process_external and routes_updated seem to be idempotent - Verified
  against master via a makeshift functional test, I could not vouch for
  previous releases.

** Description changed:

  Reproduction:
  * Create a new router
  * Attach external interface - Execute external_gateway_added successfully but 
fail some time before self.ex_gw_port = self.get_ex_gw_port() (An example of a 
failure would be an RPC error when trying to update FIP statuses. In s

[Yahoo-eng-team] [Bug 1335640] Re: Neutron doesn't support OSprofiler

2015-12-08 Thread Ryan Moats
** Changed in: neutron
   Status: Expired => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1335640

Title:
  RFE: Neutron doesn't support OSprofiler

Status in neutron:
  New

Bug description:
  To be able to improve OpenStack, Neutron should have a cross
  service/project profiler that builds one trace that goes through
  all services/projects and measures the most important parts.

  So I built a library specially for OpenStack that allows us to do this.
  More about it:
  https://github.com/stackforge/osprofiler

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1335640/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523957] Re: Domain field is not pre-filled by OPENSTACK_KEYSTONE_DEFAULT_DOMAIN

2015-12-08 Thread Timur Sufiev
** Project changed: horizon => django-openstack-auth

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1523957

Title:
  Domain field is not pre-filled by OPENSTACK_KEYSTONE_DEFAULT_DOMAIN

Status in django-openstack-auth:
  Invalid

Bug description:
  The Domain field on the login page is not pre-filled with
  OPENSTACK_KEYSTONE_DEFAULT_DOMAIN.
  Horizon uses the OPENSTACK_KEYSTONE_DEFAULT_DOMAIN value for creating
  entities when running in single-domain mode, but in multi-domain mode it
  would be useful to pre-fill the Domain field on the login page with the
  default domain too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1523957/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1510345] Re: [SRU] Cloud Images do not bring up networking w/ certain virtual NICs due to device naming rules

2015-12-08 Thread Scott Moser
** Changed in: cloud-init (Ubuntu Xenial)
   Importance: Undecided => High

** Changed in: cloud-init (Ubuntu Xenial)
 Assignee: (unassigned) => Scott Moser (smoser)

** Also affects: cloud-init
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1510345

Title:
  [SRU] Cloud Images do not bring up networking w/ certain virtual NICs
  due to device naming rules

Status in cloud-init:
  New
Status in Ubuntu on EC2:
  Fix Released
Status in cloud-init package in Ubuntu:
  Triaged
Status in livecd-rootfs package in Ubuntu:
  Fix Released
Status in livecd-rootfs source package in Wily:
  Fix Released
Status in cloud-init source package in Xenial:
  Triaged
Status in livecd-rootfs source package in Xenial:
  Fix Released

Bug description:
  SRU Justification

  [IMPACT] Cloud images produced by livecd-rootfs are not accessible
  when presented with certain NICs such as ixgbevf used on HVM instances
  for AWS.

  [CAUSE] Changes in default device naming in 15.10 cause some devices
  to be named at boot time in ways that are not predictable, i.e. instead
  of "eth0" being the first NIC, "ens3" might be used.

  [FIX] Boot instances with "net.ifnames=0". This change reverts to the
  old device naming conventions. As a fix, this is the most appropriate
  since the cloud images configure the first NIC for DHCP.

  [TEST CASE1]:
  - Build image from -proposed
  - Boot image in KVM, i.e:
$ qemu-system-x86_64 \
 -smp 2 -m 1024 -machine accel=kvm \
 -drive file=build.img,if=virtio,bus=0,cache=unsafe,unit=0,snapshot=on \
 -net nic,model=rtl8139
  - Confirm that image has "eth0"

  [TEST CASE2]:
  - Build image from -proposed
  - Publish image to AWS as HVM w/ SRIOV enabled
  - Confirm that the instance boots and is accessible via SSH

  [ORIGINAL REPORT]

  I've made several attempts to launch c4.xlarge and c4.8xlarge
  instances using Ubuntu 15.10 Wily but am unable to ping the instance
  after it has started running. The console shows that the instance
  reachability check failed.

  I am able to successfully launch c4.xlarge instances using Ubuntu
  14.04 and t2.large instances using Ubuntu 15.10.

  I've tried with both of these instance AMIs:

  ubuntu/images/hvm-ssd/ubuntu-wily-15.10-amd64-server-20151021 - ami-225ebd11
  ubuntu/images-testing/hvm-ssd/ubuntu-wily-daily-amd64-server-20151026 - 
ami-ea20cdd9

  Might there be a problem with the Ubuntu Kernel in 15.10 for the c4
  instances?

  Looking at the system log it seems that the network never comes up:

  [  140.699509] cloud-init[1469]: 2015-10-26 20:45:49,887 -
  url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04
  /meta-data/instance-id' failed [0/120s]: request error [('Connection
  aborted.', OSError(101, 'Network is unreachable'))]

  Thread at AWS forums:
  https://forums.aws.amazon.com/thread.jspa?threadID=218656

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1510345/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523031] Re: Neighbor table entry for router missing with Linux bridge + L3HA + L2 population

2015-12-08 Thread Jesse Pretorius
** Changed in: openstack-ansible
   Status: New => Confirmed

** Changed in: openstack-ansible
   Importance: Undecided => Medium

** Also affects: openstack-ansible/liberty
   Importance: Undecided
   Status: New

** Also affects: openstack-ansible/trunk
   Importance: Medium
   Status: Confirmed

** Changed in: openstack-ansible/liberty
   Status: New => Confirmed

** Changed in: openstack-ansible/liberty
   Importance: Undecided => Medium

** Changed in: openstack-ansible/liberty
Milestone: None => 12.1.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1523031

Title:
  Neighbor table entry for router missing with Linux bridge + L3HA + L2
  population

Status in neutron:
  New
Status in openstack-ansible:
  Confirmed
Status in openstack-ansible liberty series:
  Confirmed
Status in openstack-ansible trunk series:
  Confirmed

Bug description:
  Using Linux bridge, L3HA, and L2 population on Liberty, the neighbor
  table (ip neigh show) on the compute node lacks an entry for the
  router IP address. For example, using a router with 172.16.1.1 and
  instance with 172.16.1.4:

  On the node with the L3 agent containing the router:

  # ip neigh show
  169.254.192.1 dev vxlan-476 lladdr fa:16:3e:9b:d5:6f PERMANENT
  10.4.30.11 dev eth1 lladdr bc:76:4e:04:3c:59 REACHABLE
  10.4.11.11 dev eth0 lladdr bc:76:4e:04:d0:75 REACHABLE
  172.16.1.4 dev vxlan-466 lladdr fa:16:3e:ad:44:df PERMANENT
  10.4.30.31 dev eth1 lladdr bc:76:4e:05:1f:5f STALE
  10.4.11.31 dev eth0 lladdr bc:76:4e:04:38:4c STALE
  10.4.30.1 dev eth1 lladdr bc:76:4e:04:41:62 STALE
  10.4.11.1 dev eth0 lladdr bc:76:4e:04:77:72 DELAY
  172.16.1.2 dev vxlan-466 lladdr fa:16:3e:a0:83:a5 PERMANENT

  # ip netns exec qrouter-1521b4b1-7de9-4ed0-be19-69ac02ccf520 ping 172.16.1.4
  PING 172.16.1.4 (172.16.1.4) 56(84) bytes of data.
  ...

  On the node with the instance:

  # ip neigh show
  172.16.1.2 dev vxlan-466 lladdr fa:16:3e:a0:83:a5 PERMANENT
  10.4.11.1 dev eth0 lladdr bc:76:4e:04:77:72 DELAY
  172.16.1.3 dev vxlan-466 lladdr fa:16:3e:41:3b:de PERMANENT
  10.4.30.1 dev eth1 lladdr bc:76:4e:04:41:62 STALE
  10.4.11.12 dev eth0 lladdr bc:76:4e:05:e2:f8 STALE
  10.4.30.12 dev eth1 lladdr bc:76:4e:05:76:d1 STALE
  10.4.11.41 dev eth0 lladdr bc:76:4e:05:e3:6a STALE
  10.4.11.11 dev eth0 lladdr bc:76:4e:04:d0:75 REACHABLE
  10.4.30.11 dev eth1 lladdr bc:76:4e:04:3c:59 STALE

  172.16.1.2 and 172.16.1.3 belong to DHCP agents. I can access the
  instance from within both DHCP agent namespaces.

  On the node with the instance, I manually add a neighbor entry for the
  router:

  # ip neigh replace 172.16.1.1 lladdr fa:16:3e:0a:d4:39 dev vxlan-466
  nud permanent

  On the node with the L3 agent containing the router:

  # ip netns exec qrouter-1521b4b1-7de9-4ed0-be19-69ac02ccf520 ping 172.16.1.4
  64 bytes from 172.16.1.4: icmp_seq=1 ttl=64 time=2.21 ms
  64 bytes from 172.16.1.4: icmp_seq=2 ttl=64 time=45.9 ms
  64 bytes from 172.16.1.4: icmp_seq=3 ttl=64 time=1.23 ms
  64 bytes from 172.16.1.4: icmp_seq=4 ttl=64 time=0.975 ms

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1523031/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1443421] Re: After VM migration, tunnels not getting removed with L2Pop ON, when using multiple api_workers in controller

2015-12-08 Thread Jesse Pretorius
** Changed in: openstack-ansible
   Status: New => Confirmed

** Changed in: openstack-ansible
   Importance: Undecided => High

** Changed in: openstack-ansible
Milestone: None => mitaka-2

** Also affects: openstack-ansible/liberty
   Importance: Undecided
   Status: New

** Also affects: openstack-ansible/trunk
   Importance: High
   Status: Confirmed

** Changed in: openstack-ansible/liberty
   Importance: Undecided => High

** Changed in: openstack-ansible/liberty
   Status: New => Confirmed

** Changed in: openstack-ansible/liberty
Milestone: None => 12.1.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1443421

Title:
  After VM migration, tunnels not getting removed with L2Pop ON, when
  using multiple api_workers in controller

Status in neutron:
  In Progress
Status in openstack-ansible:
  Confirmed
Status in openstack-ansible liberty series:
  Confirmed
Status in openstack-ansible trunk series:
  Confirmed

Bug description:

  Setup : Neutron server  HA (3 nodes).
  Hypervisor – ESX with OVsvapp
  L2 pop is enabled on the network node and disabled on the OVSvApp.

  Condition:
  Enable L2 pop on the OVS agent, api_workers=10 in the controller.

  On the network node, the VXLAN tunnel to ESX2 is created but the tunnel
  to ESX1 is not removed after migrating the VM from ESX1 to ESX2.

  Attaching the logs of servers and agent logs.

  stack@OSC-NS1:/opt/stack/logs/screen$ sudo ovs-vsctl show
  662d03fb-c784-498e-927c-410aa6788455
  Bridge br-ex
  Port phy-br-ex
  Interface phy-br-ex
  type: patch
  options: {peer=int-br-ex}
  Port "eth2"
  Interface "eth2"
  Port br-ex
  Interface br-ex
  type: internal
  Bridge br-tun
  Port patch-int
  Interface patch-int
  type: patch
  options: {peer=patch-tun}
  Port "vxlan-6447007a"
  Interface "vxlan-6447007a"
  type: vxlan
  options: {df_default="true", in_key=flow, 
local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.122"} 
This should have been deleted after MIGRATION.
  Port "vxlan-64470082"
  Interface "vxlan-64470082"
  type: vxlan
  options: {df_default="true", in_key=flow, 
local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.130"}
  Port br-tun
  Interface br-tun
  type: internal
  Port "vxlan-6447002a"
  Interface "vxlan-6447002a"
  type: vxlan
  options: {df_default="true", in_key=flow, 
local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.42"}
  Bridge "br-eth1"
  Port "br-eth1"
  Interface "br-eth1"
  type: internal
  Port "phy-br-eth1"
  Interface "phy-br-eth1"
  type: patch
  options: {peer="int-br-eth1"}
  Bridge br-int
  fail_mode: secure
  Port patch-tun
  Interface patch-tun
  type: patch
  options: {peer=patch-int}
  Port "int-br-eth1"
  Interface "int-br-eth1"
  type: patch
  options: {peer="phy-br-eth1"}
  Port br-int
  Interface br-int
  type: internal
  Port int-br-ex
  Interface int-br-ex
  type: patch
  options: {peer=phy-br-ex}
  Port "tap9515e5b3-ec"
  tag: 11
  Interface "tap9515e5b3-ec"
  type: internal
  ovs_version: "2.0.2"

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1443421/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523968] [NEW] LBaaS v2 - LB update with admin-state-down fails

2015-12-08 Thread Evgeny Fedoruk
Public bug reported:

Updating loadbalancer instance with admin-state-down fails


CLI Command:
neutron lbaas-loadbalancer-update --admin-state-down NFV_LB

Output:
Unrecognized attribute(s) 'admin_state_down'
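A minimal, self-contained sketch (hypothetical attribute set - not the
neutron-lbaas code) of why the update is rejected: the LBaaS v2 loadbalancer
resource defines 'admin_state_up', so a request body carrying
'admin_state_down' fails attribute validation.

    KNOWN_ATTRIBUTES = {'name', 'description', 'admin_state_up'}

    def validate_update(body):
        unrecognized = set(body) - KNOWN_ATTRIBUTES
        if unrecognized:
            raise ValueError(
                "Unrecognized attribute(s) '%s'" % "', '".join(sorted(unrecognized)))

    try:
        validate_update({'admin_state_down': True})
    except ValueError as exc:
        print(exc)  # Unrecognized attribute(s) 'admin_state_down'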

** Affects: neutron
 Importance: Undecided
 Assignee: Evgeny Fedoruk (evgenyf)
 Status: New


** Tags: lbaas

** Changed in: neutron
 Assignee: (unassigned) => Evgeny Fedoruk (evgenyf)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1523968

Title:
  LBaaS v2 - LB update with admin-state-down fails

Status in neutron:
  New

Bug description:
  Updating loadbalancer instance with admin-state-down fails

  
  CLI Command:
  neutron lbaas-loadbalancer-update --admin-state-down NFV_LB

  Output:
  Unrecognized attribute(s) 'admin_state_down'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1523968/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441400] Re: Move N1kv section from neutron tree's ml2_conf_cisco.ini to stackforge repo

2015-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/246547
Committed: 
https://git.openstack.org/cgit/openstack/networking-cisco/commit/?id=7b1eb2b6d5e55563c084f60e44adc1d32706eb17
Submitter: Jenkins
Branch:master

commit d9b9a6421d7ff92e920ed21b01ebc7bf49e38bd6
Author: Sam Betts 
Date:   Tue Sep 29 09:18:10 2015 +0100

Set default branch for stable/kilo

Change-Id: I31f51ff60f95639f459839f4c7d929d5ec7c458d

commit f08fb31f20c2d8cc1e6b71784cdfd9604895e16d
Author: Rich Curran 
Date:   Thu Sep 3 13:23:52 2015 -0400

ML2 cisco_nexus MD: VLAN not created on switch

As described in DE588,
"With neutron multiworkers configured, there is a potential race condition
issue where some of the VLANs will not be configured on one or more N9k
switches.

/etc/neutron/neutron.conf
-
api_workers=3
rpc_workers=3"

Fix is to allow the vlan create command to be sent down to a switch
under most event conditions. Long term fix will be to introduce a new
column in the port binding DB table that indicates the true state of the
entry/row.

Closes-Bug: #1491940
Change-Id: If1da1fcf16a450c1a4107da9970b18fc64936896
(cherry picked from commit 0e48a16e77fc5ec5fd485a85f97f3650126fb6fe)

commit d400749e43e9d5a1fc92683b40159afce81edc95
Author: Carol Bouchard 
Date:   Thu Sep 3 15:19:48 2015 -0400

Create knob to prevent caching ssh connection

Create a new initialization knob named never_cache_ssh_connection.
This boolean is False by default allowing multiple ssh connections
to the Nexus switch to be cached as it behaves today.  When there
are multiple neutron processes/controllers and/or non-neutron ssh(xml)
connections, this is an issue since processes hold onto a connection
while the Nexus devices supports a maximum of 8 sessions.  As a result,
further ssh connections will fail.  In this case, the boolean should be
set to True causing each connection to be closed when a neutron event
is complete.

Change-Id: I61ec303856b757dd8d9d43110fec8e7844ab7c6d
Closes-bug:  #1491108
(cherry picked from commit 23551a4198c61e2e25a6382f27d47b0665f054b8)
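To illustrate what the knob described in the commit above toggles, here is a
minimal, self-contained sketch (invented class and option names - not the
networking-cisco code): either cache one session per switch for reuse, or
open and close a session around every event so a device with a small session
limit is not exhausted by idle connections.

    class SwitchConnectionManager(object):
        def __init__(self, connect, never_cache_ssh_connection=False):
            self._connect = connect        # callable returning a live session
            self._never_cache = never_cache_ssh_connection
            self._cached = None

        def run(self, command):
            session = self._cached or self._connect()
            try:
                return session.send(command)
            finally:
                if self._never_cache:
                    session.close()         # free one of the switch's 8 slots
                    self._cached = None
                else:
                    self._cached = session  # keep it for the next event

    class _FakeSession(object):
        def send(self, command):
            return 'ok: %s' % command
        def close(self):
            pass

    mgr = SwitchConnectionManager(_FakeSession, never_cache_ssh_connection=True)
    print(mgr.run('show vlan'))  # each event opens and closes its own session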

commit 0050ea7f1fb3c22214d7ca49cfe641da86123e2c
Author: Carol Bouchard 
Date:   Wed Sep 2 11:10:42 2015 -0400

Bubble up exceptions when Nexus replay enabled

There are several changes made surrounding this bug.

1) When replay is enabled, we should bubble exceptions
   for received port create/update/delete post_commit
   transactions.  This was suppressed earlier by
   1422738.

2) When an exception is encountered during a
   post_commit transaction, the driver will no longer
   mark the switch state to inactive to force a replay.
   This is no longer needed since 1481856 was introduced.
   So from this point on, only the replay thread will
   determine the state of the connection to the switch.

3) In addition to accommodating 1 & 2 above, more detail
   data verification was added to the test code.

Change-Id: I97e4de2d5f56522eb0212cef4f804aa4a83957ec
Closes-bug:  #1481859
(cherry picked from commit 6b86a859db5fcf931df263b01e3bf1bd68f0797f)

commit 54fca8a047810304c69990dce03052e45f21cc23
Author: Carol Bouchard 
Date:   Tue Aug 11 13:14:17 2015 -0400

Quick retry connect to resolve stale ncclient handle

When the Nexus switch reboots, OpenStack Nexus Driver will be left
with a stale ncclient handle resulting in an exception.  When an
exception is encountered, another retry is performed within the driver.
The first exception information is saved so if the second exception
fails again, we report on the first exception.
As part of this fix, I've removed code which supports 2 versions
of ncclient since the older version no longer meets the version
configured in our requirements.txt.

Change-Id: I182440c3a19e7c57e4dfe69e4013ea7bf87aa7ab
Closes-bug:  #1481856
(cherry picked from commit 46d364dc9f3b24e654939f822ea048e3cbf227c6)

commit 0c496e1d7425984bf9686b11b5c0c9c8ece23bf3
Author: Carol Bouchard 
Date:   Wed Aug 19 16:56:21 2015 -0400

Update requirements.txt for ML2 Nexus

The version number to the lxml library is not correct.
The ML2 Nexus driver requires version 3.3.3 NOT 0.3.3

Change-Id: I25d21b9b470517caaffce44561b85bdb0c4e8649
Closes-bug:  #1486164
(cherry picked from commit 64aaa417f40dc10e321906764e70c924b6178e64)

commit 393254fcfbe3165e4253801bc3be03e15201c36d
Author: Andrew Boik 
Date:   Mon Jun 29 09:58:51 2015 -0400

Update requirements.txt

Add ncclient and lxml versioned dependencies for ML2 Nexus,
UCS SDK for ML2 UCSM Mech Driver.

Change-Id: Ie7d4c8e173d91c1c3a9e8c77fb333b923094f258
(cherry picked from commit d62b1c8663ea40852056924c21fe7cd79a26b0fd)

commit 75fd522b36f7b67dc4152e461f4e5dfa26b4ff31
Author: bdemers 
Dat

[Yahoo-eng-team] [Bug 1523957] [NEW] Domain field is not pre-filled by OPENSTACK_KEYSTONE_DEFAULT_DOMAIN

2015-12-08 Thread Paul Karikh
Public bug reported:

The Domain field on the login page is not pre-filled with
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN.
Horizon uses the OPENSTACK_KEYSTONE_DEFAULT_DOMAIN value for creating
entities when running in single-domain mode, but in multi-domain mode it
would be useful to pre-fill the Domain field on the login page with the
default domain too.
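A tiny, framework-free sketch (the setting name comes from the report; the
form handling is hypothetical) of the requested behaviour: seed the login
form's initial data with the configured default domain so the field is
pre-filled on multi-domain deployments.

    DEFAULT_DOMAIN = 'Default'  # stands in for OPENSTACK_KEYSTONE_DEFAULT_DOMAIN

    def initial_login_form_data(default_domain=DEFAULT_DOMAIN):
        """Initial values for the login form fields."""
        return {'domain': default_domain, 'username': '', 'password': ''}

    print(initial_login_form_data())  # {'domain': 'Default', ...}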

** Affects: horizon
 Importance: Undecided
 Assignee: Paul Karikh (pkarikh)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Paul Karikh (pkarikh)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1523957

Title:
  Domain field is not pre-filled by OPENSTACK_KEYSTONE_DEFAULT_DOMAIN

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The Domain field on the login page is not pre-filled with
  OPENSTACK_KEYSTONE_DEFAULT_DOMAIN.
  Horizon uses the OPENSTACK_KEYSTONE_DEFAULT_DOMAIN value for creating
  entities when running in single-domain mode, but in multi-domain mode it
  would be useful to pre-fill the Domain field on the login page with the
  default domain too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1523957/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1508997] Re: Reusable firewall rules

2015-12-08 Thread Nate Johnston
Determined that the requirements for this request are a duplicate of the
FWaaS API v2.0 spec: https://review.openstack.org/#/c/243873

** Changed in: neutron
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1508997

Title:
  Reusable firewall rules

Status in neutron:
  Invalid

Bug description:
  At Comcast we provide a very large private cloud. Each tenant uses
  firewall rules to filter traffic in order to accept traffic only from
  a given list of IPs. This can be done with security groups.   However
  there are two shortcomings with that approach.

  First, in my environment the list of IPs on which to manage ingress
  rules is very large due to non-contiguous IP space, so educating all
  tenants about what these IP addresses are is problematic at best.

  Second, notifying all tenants when IPs change is not a sustainable
  model.

  We would like to find a solution whereby rules much like security
  groups (that is, filtering by a combination of IP, protocol, and port)
  can be defined and tenants can apply these rules to a given port or
  network. This would allow an admin to define these rules to encompass
  different IP spaces and the tenants could apply them to their VM or
  network as they see fit.

  We would like to model the authorization of these rules so one role
  (such as admin) could create update or remove.  And then the rule
  could be shared with a Tenant or all Tenants to consume.

  Use Cases:

  - As a tenant, I have a heavy CPU workload for a large report. I want
  to spin up 40 instances and apply the "Reporting Infrastructure" rule
  to them.  This would allow access only to the internal reporting
  infrastructure.

  - As a network admin, when the reporting team needs more IP space and
  I want to add more subnets, I want to update the "Reporting
  Infrastructure" rule so that any VM that is already using that rule
  can access the new IP space.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1508997/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523503] Re: Scheduling does not work with a hybid cloud

2015-12-08 Thread Matt Riedemann
** Also affects: nova/liberty
   Importance: Undecided
   Status: New

** Changed in: nova/liberty
   Status: New => Confirmed

** Changed in: nova/liberty
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1523503

Title:
  Scheduling does not work with a hybid cloud

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) liberty series:
  Confirmed

Bug description:
  The glance image metadata 'hypervisor_type' and 'hypervisor_version_requires'
  are not honored. The reason is that these were replaced by img_hv_type and
  img_hv_requested_version, so the scheduler will not take them into account.
  This breaks scheduling in a hybrid cloud.
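
  A hypothetical translation table (illustration only - not the nova code)
  showing the kind of mapping the scheduler would need so that legacy glance
  property names are still honored by image-properties filtering:

      LEGACY_TO_CANONICAL = {
          'hypervisor_type': 'img_hv_type',
          'hypervisor_version_requires': 'img_hv_requested_version',
      }

      def canonicalize_image_props(props):
          return {LEGACY_TO_CANONICAL.get(key, key): value
                  for key, value in props.items()}

      print(canonicalize_image_props(
          {'hypervisor_type': 'qemu', 'hypervisor_version_requires': '>=2.0'}))
      # {'img_hv_type': 'qemu', 'img_hv_requested_version': '>=2.0'}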

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1523503/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523955] [NEW] (Import Refactor) Implement `import` call

2015-12-08 Thread Flavio Percoco
Public bug reported:

This is a sub-task for the image import process work:
https://review.openstack.org/#/c/232371/

The final `import` call, as it was discussed in the spec so far, will
finalize the image import process and trigger the task engine. Most of
this triggering logic has been implemented already during Kilo but it
will have to be tied to the import process.

This work depends on: https://bugs.launchpad.net/glance/+bug/1523937

** Affects: glance
 Importance: Wishlist
 Status: New


** Tags: mitaka-new-import-process

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1523955

Title:
  (Import Refactor) Implement `import` call

Status in Glance:
  New

Bug description:
  This is a sub-task for the image import process work:
  https://review.openstack.org/#/c/232371/

  The final `import` call, as it was discussed in the spec so far, will
  finalize the image import process and trigger the task engine. Most of
  this triggering logic has been implemented already during Kilo but it
  will have to be tied to the import process.

  This work depends on: https://bugs.launchpad.net/glance/+bug/1523937

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1523955/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523949] [NEW] horizon.tables.actions LinkAction referencing a non-existant template

2015-12-08 Thread Itxaka Serrano
Public bug reported:

on horizon.tables.actions, class LinkAction, there is a reference to a
template that no longer exists:

https://github.com/openstack/horizon/blob/master/horizon/tables/actions.py#L371

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1523949

Title:
  horizon.tables.actions LinkAction referencing a non-existant template

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  on horizon.tables.actions, class LinkAction, there is a reference to a
  template that no longer exists:

  
https://github.com/openstack/horizon/blob/master/horizon/tables/actions.py#L371

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1523949/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523944] [NEW] (Import Refactor) Implement discoverability calls

2015-12-08 Thread Flavio Percoco
Public bug reported:

This is a sub-task for the image import process work:
https://review.openstack.org/#/c/232371/

In order to make this import process discoverable, we need to change
some of our API schemas to reflect the supported upload methods. We also
need to add some new schemas/calls.

Please, refer to the spec for more info, this is a tracking bug and it
likely depends on: https://bugs.launchpad.net/glance/+bug/1523937

** Affects: glance
 Importance: Wishlist
 Status: New


** Tags: mitaka-new-import-process

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1523944

Title:
  (Import Refactor) Implement discoverability calls

Status in Glance:
  New

Bug description:
  This is a sub-task for the image import process work:
  https://review.openstack.org/#/c/232371/

  In order to make this import process discoverable, we need to change
  some of our API schemas to reflect the supported upload methods. We
  also need to add some new schemas/calls.

  Please, refer to the spec for more info, this is a tracking bug and it
  likely depends on: https://bugs.launchpad.net/glance/+bug/1523937

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1523944/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523941] [NEW] (Import Refactor) Header changes

2015-12-08 Thread Flavio Percoco
Public bug reported:

This is a sub-task for the image import process work:
https://review.openstack.org/#/c/232371/

This spec requires new headers to be added to some of our calls. For
example, the image-create call should return the upload path for the
image data as it was discussed in the spec.

This bug likely depends on this one:
https://bugs.launchpad.net/glance/+bug/1523937

** Affects: glance
 Importance: Wishlist
 Status: New


** Tags: mitaka-new-import-process

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1523941

Title:
  (Import Refactor) Header changes

Status in Glance:
  New

Bug description:
  This is a sub-task for the image import process work:
  https://review.openstack.org/#/c/232371/

  This spec requires new headers to be added to some of our calls. For
  example, the image-create call should return the upload path for the
  image data as it was discussed in the spec.

  This bug likely depends on this one:
  https://bugs.launchpad.net/glance/+bug/1523937

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1523941/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523937] [NEW] (Import Refactor) Policy Rules

2015-12-08 Thread Flavio Percoco
Public bug reported:

This is a sub-task for the image import process work:
https://review.openstack.org/#/c/232371/

This spec requires new policy rules to protect the API and
enable/disable it when needed.

** Affects: glance
 Importance: Wishlist
 Status: New


** Tags: mitaka-new-import-process

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1523937

Title:
  (Import Refactor) Policy Rules

Status in Glance:
  New

Bug description:
  This is a sub-task for the image import process work:
  https://review.openstack.org/#/c/232371/

  This spec requires new policy rules to protect the API and
  enable/disable it when needed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1523937/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513102] Re: Useless deprecation message for driver import

2015-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/241403
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=c73907485a3d7c5319378d2e3a5e11593f66a930
Submitter: Jenkins
Branch:master

commit c73907485a3d7c5319378d2e3a5e11593f66a930
Author: Brant Knudson 
Date:   Tue Nov 3 17:03:20 2015 -0600

More useful message when using direct driver import

In the Liberty release, we switched to using entrypoints for
specifying the driver or auth plugin. The deprecation message
didn't mention the driver name that was specified, nor did it
mention where to look for the expected names, so it was not
user friendly.

Closes-Bug: 1513102
Change-Id: I02e265684b26686523da9d648b37675feb052978


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1513102

Title:
  Useless deprecation message for driver import

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  
  When a driver is specified using the full name (as in, the old config file is 
used), for example if I have:

   driver = keystone.contrib.federation.backends.sql.Federation

  I get a deprecation warning:

  31304 WARNING oslo_log.versionutils [-] Deprecated: direct import of
  driver is deprecated as of Liberty in favor of entrypoints and may be
  removed in N.

  The deprecation warning is pretty useless. It should at least include
  the string that was used so that I can figure out what to change.
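
  A minimal sketch of the kind of message being asked for (illustrative only;
  the helper name and the name/namespace variables are placeholders, not the
  exact keystone code):

    import logging

    from oslo_log import versionutils

    LOG = logging.getLogger(__name__)


    def report_direct_driver_import(name, namespace):
        # Include the driver path that was configured and the entrypoint
        # namespace to look under, so the operator knows what to change.
        versionutils.report_deprecated_feature(
            LOG,
            "Direct import of driver %r is deprecated as of Liberty in "
            "favor of its entrypoint from %r and may be removed in N."
            % (name, namespace))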

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1513102/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523934] [NEW] API Version Bump

2015-12-08 Thread Flavio Percoco
Public bug reported:

This is a sub-task for the image import process work:
https://review.openstack.org/#/c/232371/

In order to implement this spec, we need to bump the API version. This
bug will track that work.

** Affects: glance
 Importance: Wishlist
 Status: New


** Tags: mitaka-new-import-process

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1523934

Title:
  API Version Bump

Status in Glance:
  New

Bug description:
  This is a sub-task for the image import process work:
  https://review.openstack.org/#/c/232371/

  In order to implement this spec, we need to bump the API version. This
  bug will track that work.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1523934/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523935] [NEW] API Version Bump

2015-12-08 Thread Flavio Percoco
Public bug reported:

This is a sub-task for the image import process work:
https://review.openstack.org/#/c/232371/

In order to implement this spec, we need to bump the API version. This
bug will track that work.

** Affects: glance
 Importance: Wishlist
 Status: New


** Tags: mitaka-new-import-process

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1523935

Title:
  API Version Bump

Status in Glance:
  New

Bug description:
  This is a sub-task for the image import process work:
  https://review.openstack.org/#/c/232371/

  In order to implement this spec, we need to bump the API version. This
  bug will track that work.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1523935/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523930] [NEW] Javascript message catalogs need to be retrieved from plugins

2015-12-08 Thread Doug Fish
Public bug reported:

It's not possible for plugins to contribute translations to the
javascript message catalog. Right now our files are hardcoded to allow
only contributions from the horizon and openstack_dashboard applications:

openstack_dashboard/templates/horizon/_script_i18n.html

I believe the solution will be to look through all of the applications, 
possibly filtering by a setting in the enabled file in order to dynamically 
build and use this page:
horizon/templates/horizon/_script_i18n.html
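
A minimal sketch of that approach, assuming every installed Django app can
simply be treated as a candidate catalog package (an assumption for this note,
not the merged Horizon change):

    from django.conf import settings


    def get_js_catalog_packages():
        # Dashboard plugins register themselves in INSTALLED_APPS via their
        # enabled files, so listing every app lets Django's javascript_catalog
        # view pick up plugin translations too; apps that ship no javascript
        # catalog simply contribute nothing.
        return [app for app in settings.INSTALLED_APPS
                if not app.startswith('django.')]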

** Affects: horizon
 Importance: Undecided
 Assignee: Doug Fish (drfish)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Doug Fish (drfish)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1523930

Title:
  Javascript message catalogs need to be retrieved from plugins

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  It's not possible for plugins to contribute translations to the
  javascript message catalog. Right now our files are hardcoded to allow
  only contributions from the horizon and openstack_dashboard applications:

  openstack_dashboard/templates/horizon/_script_i18n.html

  I believe the solution will be to look through all of the applications, 
possibly filtering by a setting in the enabled file in order to dynamically 
build and use this page:
  horizon/templates/horizon/_script_i18n.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1523930/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523716] Re: oslo.utils upgrade breaks unit tests

2015-12-08 Thread gordon chung
** Changed in: ceilometer
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1523716

Title:
  oslo.utils upgrade breaks unit tests

Status in Ceilometer:
  Invalid
Status in Magnum:
  New
Status in networking-odl:
  New
Status in neutron:
  New
Status in oslo.utils:
  Fix Released

Bug description:
  Upgraded oslo.utils (3.1.0) is breaking ceilometer unit tests:

  https://jenkins07.openstack.org/job/gate-ceilometer-python34/897/console

  Stack Trace:

  2015-12-07 20:17:19.296 | __import__(module_str)
  2015-12-07 20:17:19.296 |   File "ceilometer/notification.py", line 24, in <module>
  2015-12-07 20:17:19.296 | from ceilometer import coordination
  2015-12-07 20:17:19.296 |   File "ceilometer/coordination.py", line 20, in <module>
  2015-12-07 20:17:19.297 | import tooz.coordination
  2015-12-07 20:17:19.297 |   File "/home/jenkins/workspace/gate-ceilometer-python27/.tox/py27/local/lib/python2.7/site-packages/tooz/coordination.py", line 21, in <module>
  2015-12-07 20:17:19.297 | from oslo_utils import netutils
  2015-12-07 20:17:19.297 |   File "/home/jenkins/workspace/gate-ceilometer-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/netutils.py", line 25, in <module>
  2015-12-07 20:17:19.297 | import netifaces
  2015-12-07 20:17:19.297 | ImportError: No module named netifaces

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1523716/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523860] Re: no return link in network detail page

2015-12-08 Thread Masco
** Changed in: horizon
   Status: In Progress => Invalid

** Changed in: horizon
 Assignee: Masco (masco) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1523860

Title:
  no return link in network detail page

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  From the network detail page, no link is provided to return to the networks list.
  A breadcrumb needs to be added to the network detail page.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1523860/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523889] [NEW] nova image-list fails with glanceclient.exc.HTTPInternalServerError

2015-12-08 Thread dingyu
Public bug reported:

When I run nova image-list,
the shell shows:

ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
<class 'glanceclient.exc.HTTPInternalServerError'> (HTTP 500) (Request-ID: req-770ce041-380d-4dcf-95d2-1c844c70945d)

I get the log, like this:

2015-12-08 06:31:11.501 2112 INFO nova.osapi_compute.wsgi.server 
[req-d2d814a0-d9bc-4aba-a34b-e7fc4bce0363 db6f17743bd342fe9615dce76aff1556 
089850858b4049a589bf00a6cd2373fd - - -] 192.168.60.122 "GET /v2/ HTTP/1.1" 
status: 200 len: 576 time: 3.4160471
2015-12-08 06:31:12.355 2112 INFO nova.osapi_compute.wsgi.server [-] 
192.168.60.122 "OPTIONS / HTTP/1.0" status: 200 len: 505 time: 0.0006258
2015-12-08 06:31:14.358 2112 INFO nova.osapi_compute.wsgi.server [-] 
192.168.60.122 "OPTIONS / HTTP/1.0" status: 200 len: 505 time: 0.0006728
2015-12-08 06:31:16.361 2112 INFO nova.osapi_compute.wsgi.server [-] 
192.168.60.122 "OPTIONS / HTTP/1.0" status: 200 len: 505 time: 0.0005569
2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions 
[req-770ce041-380d-4dcf-95d2-1c844c70945d db6f17743bd342fe9615dce76aff1556 
089850858b4049a589bf00a6cd2373fd - - -] Unexpected exception in API method
2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/extensions.py", line 478, 
in wrapped
2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/compute/images.py", line 
145, in detail
2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions 
**page_params)
2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/image/api.py", line 68, in get_all
2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions return 
session.detail(context, **kwargs)
2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/image/glance.py", line 284, in detail
2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions for image 
in images:
2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/glanceclient/v1/images.py", line 254, in list
2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions for image 
in paginate(params, return_request_id):
2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/glanceclient/v1/images.py", line 238, in 
paginate
2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions images, 
resp = self._list(url, "images")
2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/glanceclient/v1/images.py", line 63, in _list
2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions resp, body 
= self.client.get(url)
2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 280, in get
2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions return 
self._request('GET', url, **kwargs)
2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 272, in 
_request
2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions resp, 
body_iter = self._handle_response(resp)
2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 93, in 
_handle_response
2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions raise 
exc.from_response(resp, resp.content)
2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions 
HTTPInternalServerError: 500 Internal Server Error: The server has either erred 
or is incapable of performing the requested operation. (HTTP 500)
2015-12-08 06:31:18.141 2112 ERROR nova.api.openstack.extensions 
2015-12-08 06:31:18.161 2112 INFO nova.api.openstack.wsgi 
[req-770ce041-380d-4dcf-95d2-1c844c70945d db6f17743bd342fe9615dce76aff1556 
089850858b4049a589bf00a6cd2373fd - - -] HTTP exception thrown: Unexpected API 
Error. Please report this at http://bugs.launchpad.net/nova/ and attach the 
Nova API log if possible.

2015-12-08 06:31:18.161 2112 INFO nova.osapi_compute.wsgi.server 
[req-770ce041-380d-4dcf-95d2-1c844c70945d db6f17743bd342fe9615dce76aff1556 
089850858b4049a589bf00a6cd2373fd - - -] 192.168.60.122 "GET 
/v2/089850858b4049a589bf00a6cd2373fd/images/detail HTTP/1.1" status: 500 len: 
445 time: 6.5481448

When I run it with --debug, like this:

[root@controller2 ~]# nova --debug image-list
DEBUG (session:198) REQ: curl -g -i -X GET http://192.168.60.120:35357/v3 -H 
"Accept: app

[Yahoo-eng-team] [Bug 1523863] [NEW] Tutorial for customising horizon

2015-12-08 Thread Karthik
Public bug reported:

The "Step 1" present in the title of the Branding Horizon section may read
better if we rename the title to a foundation step.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1523863

Title:
  Tutorial for customising horizon

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The "Step 1" present in the title of the Branding Horizon section may read
  better if we rename the title to a foundation step.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1523863/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523859] [NEW] Failing router interface add changes port device_id/device_owner attributes

2015-12-08 Thread Bob Melander
Public bug reported:

If a router is attached to a neutron port and that operation fails at
a late stage in the processing then the 'device_id' and 'device_owner'
attributes of the port are still changed. The failed operation should
not change them.

The reason is that, as of commit
0947458018725b241603139f4ec6f92e84b2f29b, the update of those attributes
happens outside of the DB transaction.
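
A hedged sketch of the shape of a fix (the helper name is made up and the model
API is only approximated; this is not the actual neutron patch): perform the
attribute update inside the ongoing DB transaction, so a later failure in the
interface-add flow rolls it back as well:

    from neutron.common import constants as l3_constants


    def _bind_port_to_router(context, port_db, router_id):
        # Joining the ongoing transaction ties this change to the rest of
        # add_router_interface, so a failure later in that transaction also
        # restores device_id/device_owner.
        with context.session.begin(subtransactions=True):
            port_db.update({
                'device_id': router_id,
                'device_owner': l3_constants.DEVICE_OWNER_ROUTER_INTF,
            })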

** Affects: neutron
 Importance: Undecided
 Assignee: Bob Melander (bob-melander)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Bob Melander (bob-melander)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1523859

Title:
  Failing router interface add changes port device_id/device_owner
  attributes

Status in neutron:
  New

Bug description:
  If a router is attached to a neutron port and that operation fails
  at a late stage in the processing then the 'device_id' and
  'device_owner' attributes of the port are still changed. The failed
  operation should not change them.

  The reason is that, as of commit
  0947458018725b241603139f4ec6f92e84b2f29b, the update of those
  attributes happens outside of the DB transaction.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1523859/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523860] [NEW] no return link in network detail page

2015-12-08 Thread Masco
Public bug reported:

From the network detail page, no link is provided to return to the networks list.
A breadcrumb needs to be added to the network detail page.

** Affects: horizon
 Importance: Undecided
 Assignee: Masco (masco)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Masco (masco)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1523860

Title:
  no return link in network detail page

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  From the network detail page, no link is provided to return to the networks list.
  A breadcrumb needs to be added to the network detail page.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1523860/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1493350] Re: Attach Volume fail : Cinder - ISCSI device symlinks under /dev/disk/by-path in hex.

2015-12-08 Thread Vinay Prasad M
** Also affects: os-brick
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1493350

Title:
  Attach Volume fail : Cinder - ISCSI device symlinks under /dev/disk
  /by-path in hex.

Status in Cinder:
  Confirmed
Status in OpenStack Compute (nova):
  Opinion
Status in os-brick:
  New

Bug description:
  As part of a POC for an enterprise storage backend we have implemented an ISCSIDriver.
  Volume operations work as expected.

  We are facing an issue with attach volume and would appreciate your help.

  1. If the volume lun id in the backend storage is less than 255, attach volume 
works fine. The symlinks in /dev/disk/by-path are as below:
  lrwxrwxrwx 1 root root   9 Jun 18 15:18 
ip-192.168.5.100:3260-iscsi-iqn.1989-10.jp.co.xyz:storage.ttsp.2ec7fdda.ff5c4a16.0-lun-169
 -> ../../sde
  lrwxrwxrwx 1 root root   9 Aug 26 14:47 
ip-192.168.5.100:3260-iscsi-iqn.1989-10.jp.co.xyz:storage.ttsp.2ec7fdda.ff5c4a16.0-lun-172
 -> ../../sdi

  2. If the volume lun id is more than 255, the lun id in the /dev/disk/by-path 
symlink is a hexadecimal number, and hence attach volume fails with the message 
"Volume path not found". The symlinks in /dev/disk/by-path are as below 
[hexadecimal lun id according to the SCSI standard (REPORT LUNS)]:
  lrwxrwxrwx 1 root root   9 Aug 26 14:47 
ip-192.168.5.100:3260-iscsi-iqn.1989-10.jp.co.xxx:storage.ttsp.2ec7fdda.ff5c4a16.0-lun-0x016b
 -> ../../sdh
  lrwxrwxrwx 1 root root   9 Jun 18 15:18 
ip-192.168.5.100:3260-iscsi-iqn.1989-10.jp.co.xxx:storage.ttsp.2ec7fdda.ff5c4a16.0-lun-0x0200
 -> ../../sdc

  Please provide your suggestion.

  I would suggest that the cinder utility check /dev/disk/by-path for both the
  decimal lun-id and the hexadecimal lun-id.
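
  A minimal sketch of that check (illustrative only; the function and argument
  names are assumed, this is not the actual cinder/os-brick code):

    import os


    def find_iscsi_device(portal, iqn, lun):
        # Try both the decimal and the hexadecimal LUN spellings that show up
        # under /dev/disk/by-path and return whichever symlink exists.
        prefix = "/dev/disk/by-path/ip-%s-iscsi-%s-lun-" % (portal, iqn)
        for candidate in (prefix + "%d" % lun, prefix + "0x%04x" % lun):
            if os.path.exists(candidate):
                return candidate
        return None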

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1493350/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523849] [NEW] Incorrect ID for theme preview navigation

2015-12-08 Thread Rob Cresswell
Public bug reported:

The "Dialogs" section has an ID of 'dialos', meaning the nav doesn't
work correctly for that section.

** Affects: horizon
 Importance: Low
 Assignee: Rob Cresswell (robcresswell)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Rob Cresswell (robcresswell)

** Changed in: horizon
Milestone: None => mitaka-2

** Changed in: horizon
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1523849

Title:
  Incorrect ID for theme preview navigation

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The "Dialogs" section has an ID of 'dialos', meaning the nav doesn't
  work correctly for that section.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1523849/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368661] Re: Unit tests sometimes fail because of stale pyc files

2015-12-08 Thread sonu
** Also affects: oslo.service
   Importance: Undecided
   Status: New

** Changed in: oslo.service
 Assignee: (unassigned) => sonu (sonu-bhumca11)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368661

Title:
  Unit tests sometimes fail because of stale pyc files

Status in congress:
  Fix Released
Status in Ironic:
  Fix Released
Status in Magnum:
  Fix Released
Status in Mistral:
  Fix Released
Status in Monasca:
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in oslo.service:
  New
Status in python-cinderclient:
  Fix Committed
Status in python-congressclient:
  In Progress
Status in python-glanceclient:
  In Progress
Status in python-heatclient:
  Fix Committed
Status in python-keystoneclient:
  Fix Committed
Status in python-magnumclient:
  Fix Released
Status in python-mistralclient:
  New
Status in python-neutronclient:
  In Progress
Status in Python client library for Sahara:
  Fix Committed
Status in python-swiftclient:
  In Progress
Status in python-troveclient:
  Fix Committed
Status in Python client library for Zaqar:
  Fix Committed
Status in Trove:
  Fix Released
Status in zaqar:
  In Progress

Bug description:
  Because python creates pyc files during tox runs, certain changes in
  the tree, like deletes of files, or switching branches, can create
  spurious errors. This can be suppressed by PYTHONDONTWRITEBYTECODE=1
  in the tox.ini.

To manage notifications about this bug go to:
https://bugs.launchpad.net/congress/+bug/1368661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523845] [NEW] Pip package 'ovs' needed but not present in requirements.txt

2015-12-08 Thread John Schwarz
Public bug reported:

As the title mentions, the 'ovs' pip package is needed for [1], but is
not present in the requirements.txt [2] and it should be changed to
reflect this dependency.

[1]: 
https://github.com/openstack/neutron/blob/7a5ebc171f9ff342d7526808b1063b58cc631fec/neutron/agent/ovsdb/impl_idl.py#L21
[2]: 
https://github.com/openstack/neutron/blob/7a5ebc171f9ff342d7526808b1063b58cc631fec/requirements.txt
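
A minimal illustration of the requirements.txt addition being proposed (no
version bound is shown because the pin should come from global-requirements,
which is not checked here):

    # Python bindings for Open vSwitch, used by neutron/agent/ovsdb/impl_idl.py
    ovs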

** Affects: neutron
 Importance: Undecided
 Assignee: John Schwarz (jschwarz)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => John Schwarz (jschwarz)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1523845

Title:
  Pip package 'ovs' needed but not present in requirements.txt

Status in neutron:
  In Progress

Bug description:
  As the title mentions, the 'ovs' pip package is needed for [1], but is
  not present in the requirements.txt [2] and it should be changed to
  reflect this dependency.

  [1]: 
https://github.com/openstack/neutron/blob/7a5ebc171f9ff342d7526808b1063b58cc631fec/neutron/agent/ovsdb/impl_idl.py#L21
  [2]: 
https://github.com/openstack/neutron/blob/7a5ebc171f9ff342d7526808b1063b58cc631fec/requirements.txt

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1523845/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523716] Re: oslo.utils upgrade breaks unit tests

2015-12-08 Thread Kai Qiang Wu(Kennan)
** Also affects: magnum
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1523716

Title:
  oslo.utils upgrade breaks unit tests

Status in Ceilometer:
  New
Status in Magnum:
  New
Status in networking-odl:
  New
Status in neutron:
  New
Status in oslo.utils:
  Fix Released

Bug description:
  Upgraded oslo.utils (3.1.0) is breaking ceilometer unit tests:

  https://jenkins07.openstack.org/job/gate-ceilometer-python34/897/console

  Stack Trace:

  2015-12-07 20:17:19.296 | __import__(module_str)
  2015-12-07 20:17:19.296 |   File "ceilometer/notification.py", line 24, in <module>
  2015-12-07 20:17:19.296 | from ceilometer import coordination
  2015-12-07 20:17:19.296 |   File "ceilometer/coordination.py", line 20, in <module>
  2015-12-07 20:17:19.297 | import tooz.coordination
  2015-12-07 20:17:19.297 |   File "/home/jenkins/workspace/gate-ceilometer-python27/.tox/py27/local/lib/python2.7/site-packages/tooz/coordination.py", line 21, in <module>
  2015-12-07 20:17:19.297 | from oslo_utils import netutils
  2015-12-07 20:17:19.297 |   File "/home/jenkins/workspace/gate-ceilometer-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/netutils.py", line 25, in <module>
  2015-12-07 20:17:19.297 | import netifaces
  2015-12-07 20:17:19.297 | ImportError: No module named netifaces

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1523716/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1519591] Re: manage security groups missing back button

2015-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/251407
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=f51eab1a7ac78d71eab28fd61e689a0d0bacbdf4
Submitter: Jenkins
Branch:master

commit f51eab1a7ac78d71eab28fd61e689a0d0bacbdf4
Author: Itxaka 
Date:   Mon Nov 30 15:49:28 2015 +0100

Make breadcrumb appear on sec groups and keypair details

when editing security groups rules there is no button to
go back to the original page.
Same when looking at a keypair details.
This adds the breadcrumb to both pages so you can go
back to the original page.
Also modifies the test KeyPairViewTests.test_keypair_detail_get
to take into account the breadcrumbs

Change-Id: I2e804c46c744f2f6a30dc02026aa4801c69b06ac
Closes-Bug: #1519591


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1519591

Title:
  manage security groups missing back button

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  As a demo user, navigate to Project - Compute - Access & Security, pick
  any row with an existing security group and click 'Manage rules'.
  At this point, there is no direct 'Back' link, unlike, for example, the
  Admin - System - Volumes, View Extra Specs page
  (...volume_types/uuid-of-vt/extras).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1519591/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523835] [NEW] _ip_prefix_arg is called twice but only one call takes effect

2015-12-08 Thread yujie
Public bug reported:

In neutron/agent/linux/iptables_firewall.py, when
_generate_plain_rule_args(self, sg_rule) runs for an sg_rule that uses a
remote IP, _ip_prefix_arg is called twice, once for 'source_ip_prefix' and
once for 'dest_ip_prefix', but only one of the two calls has any effect.

A sg_rule has a direction, so an ingress rule only needs:
    _ip_prefix_arg('s', sg_rule.get('source_ip_prefix'))
and an egress rule only needs:
    _ip_prefix_arg('d', sg_rule.get('dest_ip_prefix'))
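
A hedged sketch (reusing the names already shown above, but not the actual
patch) of making the lookup direction-aware inside _generate_plain_rule_args,
so only the call that can match is made:

    if sg_rule.get('direction') == 'ingress':
        args += self._ip_prefix_arg('s', sg_rule.get('source_ip_prefix'))
    else:  # egress
        args += self._ip_prefix_arg('d', sg_rule.get('dest_ip_prefix'))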

** Affects: neutron
 Importance: Undecided
 Assignee: yujie (16189455-d)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => yujie (16189455-d)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1523835

Title:
  _ip_prefix_arg is called twice but only one call takes effect

Status in neutron:
  New

Bug description:
  In neutron/agent/linux/iptables_firewall.py, when
  _generate_plain_rule_args(self, sg_rule) runs for an sg_rule that uses a
  remote IP, _ip_prefix_arg is called twice, once for 'source_ip_prefix' and
  once for 'dest_ip_prefix', but only one of the two calls has any effect.

  A sg_rule has a direction, so an ingress rule only needs:
      _ip_prefix_arg('s', sg_rule.get('source_ip_prefix'))
  and an egress rule only needs:
      _ip_prefix_arg('d', sg_rule.get('dest_ip_prefix'))

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1523835/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1418702] Re: Project admin fails to list role assignments for his project with Project Scoped Token

2015-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/248892
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=7f3158a6d4b5df78dfde9f281cf82dd6e4fe02f4
Submitter: Jenkins
Branch:master

commit 7f3158a6d4b5df78dfde9f281cf82dd6e4fe02f4
Author: Priti Desai 
Date:   Mon Nov 23 11:59:07 2015 -0800

Fix for GET project by project admin

The issue is project admin in default policy file
(policy.v3cloudsample.json) does not have access to get details
of his project.

This change updates the default policy file to let project
administrators to retrieve their own project details.

Change-Id: I60995db12a90c8ce6090099dee79ed1e5ee5caed
Closes-Bug: 1418702


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1418702

Title:
  Project admin fails to list role assignments for his project with
  Project Scoped Token

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  I am facing issues listing role assignments as a project administrator
  with a project-scoped token.

  OS_AUTH_URL=http://10.0.2.15:35357/v3 
  OS_USERNAME=user-a 
  OS_PASSWORD=password 
  OS_USER_DOMAIN_NAME=domain-a 
  OS_PROJECT_NAME=project-a 
  OS_PROJECT_DOMAIN_NAME=domain-a 
  OS_IDENTITY_API_VERSION=3

  $ openstack role assignment list  --project=7c305333795944e48b54874c911c1c2b
  ERROR: openstack You are not authorized to perform the requested action: 
identity:list_projects (Disable debug mode to suppress these details.) (HTTP 
403)

  
  Log messages from Keystone log file:

  [Thu Feb 05 19:16:00 2015] [error] Rule Method
  [Thu Feb 05 19:16:00 2015] [error] (rule:cloud_admin or 
rule:admin_and_matching_target_project_domain_id)
  [Thu Feb 05 19:16:00 2015] [error] Rule
  [Thu Feb 05 19:16:00 2015] [error] identity:get_project
  [Thu Feb 05 19:16:00 2015] [error] Target
  [Thu Feb 05 19:16:00 2015] [error] {'target.project.name': u'project-a', 
'target.project.description': u'', 'target.project.enabled': True, 
'project_id': u'7c305333795944e48b54874c911c1c2b', 'target.project.domain_id': 
u'b5da5584e14148f7a305e0f22a9b3a2c', 'target.project.id': 
u'7c305333795944e48b54874c911c1c2b'}
  [Thu Feb 05 19:16:00 2015] [error] Creds
  [Thu Feb 05 19:16:00 2015] [error] {'is_delegated_auth': False, 
'access_token_id': None, 'user_id': u'77194b22fb6e4ac2839c1d93c46e82fd', 
'roles': [u'admin'], 'trustee_id': None, 'trustor_id': None, 'consumer_id': 
None, 'token': , 'project_id': 
u'7c305333795944e48b54874c911c1c2b', 'trust_id': None}
  [Thu Feb 05 19:16:00 2015] [error] self
  [Thu Feb 05 19:16:00 2015] [error] 
  [Thu Feb 05 19:16:00 2015] [error] 19584 WARNING keystone.common.wsgi [-] You 
are not authorized to perform the requested action: identity:get_project 
(Disable debug mode to suppress these details.)

  

  [Thu Feb 05 19:16:00 2015] [error] ***Rule Method
  [Thu Feb 05 19:16:00 2015] [error] ((rule:admin_required and 
domain_id:%(domain_id)s) or rule:cloud_admin)
  [Thu Feb 05 19:16:00 2015] [error] ***Rule
  [Thu Feb 05 19:16:00 2015] [error] identity:list_projects
  [Thu Feb 05 19:16:00 2015] [error] ***Target
  [Thu Feb 05 19:16:00 2015] [error] {'name': 
u'7c305333795944e48b54874c911c1c2b'}
  [Thu Feb 05 19:16:00 2015] [error] ***Creds
  [Thu Feb 05 19:16:00 2015] [error] {'is_delegated_auth': False, 
'access_token_id': None, 'user_id': u'77194b22fb6e4ac2839c1d93c46e82fd', 
'roles': [u'admin'], 'trustee_id': None, 'trustor_id': None, 'consumer_id': 
None, 'token': , 'project_id': 
u'7c305333795944e48b54874c911c1c2b', 'trust_id': None}
  [Thu Feb 05 19:16:00 2015] [error] self
  [Thu Feb 05 19:16:00 2015] [error] 
  [Thu Feb 05 19:16:00 2015] [error] 19586 WARNING keystone.common.wsgi [-] You 
are not authorized to perform the requested action: identity:list_projects 
(Disable debug mode to suppress these details.)

  
  The issue is that the project admin in the default policy file (policy.v3cloudsample.json) 
does not have access to get the details of his project. Due to this, keystone 
assumes that the project does not exist and tries to get the project listing, 
which again fails.

  
  I updated the default policy file to let project administrators get the 
project details.

  Updating:

  "identity:get_project": "rule:cloud_admin or
  rule:admin_and_matching_target_project_domain_id",

  To:

  "identity:get_project": "rule:cloud_admin or 
rule:admin_and_matching_target_project_domain_id or 
rule:admin_and_matching_target_project_id",
  "admin_and_matching_target_project_id": "rule:admin_required and 
project_id:%(target.project.id)s",

  With this change:

  $ openstack role assignment list --project=7c305333795944e48b54874c911c1c2b
  
+--+--+---+--++
  | Role  

[Yahoo-eng-team] [Bug 1479966] Re: Text About Compressed Images

2015-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/253421
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=e8a2e7ebb489ef5a12f82e2150bb9896604bc0a3
Submitter: Jenkins
Branch:master

commit e8a2e7ebb489ef5a12f82e2150bb9896604bc0a3
Author: Paul Karikh 
Date:   Fri Dec 4 13:17:53 2015 +0300

Fix decription for Image create modal

Current image create modal description/helpbox
contains inaccurate description.
Glance can store compressed images, but
Nova can't boot from compressed images.
This patch deletes mentioning compressed
images as supported type.

Change-Id: Ie896cceae0b20f7d7cbe4af1e6bca67f2389c43f
Closes-Bug: #1479966


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1479966

Title:
  Text About Compressed Images

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Hello,

  In the Create an Image page of Horizon, it says the following:

  "Currently only images available via an HTTP URL are supported. The
  image location must be accessible to the Image Service. Compressed
  image binaries are supported (.zip and .tar.gz.)"

  If I upload a QCOW2 image that has been zipped, gzip'd, or tar'd and
  gzip'd, the image is saved but instances fail to boot because of "no
  bootable device".

  Either this text is too vague, something additional needs to be configured
  in Glance, or Glance no longer supports this and so the text should be
  removed. Unfortunately I have not been able to determine which is the
  correct answer...

  I'm happy to help test or verify.

  Thanks,
  Joe

  -- edit --

  Related Glance bug: https://bugs.launchpad.net/glance/+bug/1482436

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1479966/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368661] Re: Unit tests sometimes fail because of stale pyc files

2015-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/254058
Committed: 
https://git.openstack.org/cgit/openstack/congress/commit/?id=022950d9d96b1e9ea1a8801ef54755f33769ba98
Submitter: Jenkins
Branch:master

commit 022950d9d96b1e9ea1a8801ef54755f33769ba98
Author: Anusha Ramineni 
Date:   Mon Dec 7 13:36:56 2015 +0530

tox: rm all pyc before doing unit test

Delete python bytecode before every test run.
Because python creates pyc files during tox runs, certain
changes in the tree, like deletes of files, or switching
branches, can create spurious errors.

Closes-Bug: #1368661
Change-Id: If31ffc245ade9f62b61f99246e59208208ed4fb1


** Changed in: congress
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368661

Title:
  Unit tests sometimes fail because of stale pyc files

Status in congress:
  Fix Released
Status in Ironic:
  Fix Released
Status in Magnum:
  Fix Released
Status in Mistral:
  Fix Released
Status in Monasca:
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in python-cinderclient:
  Fix Committed
Status in python-congressclient:
  In Progress
Status in python-glanceclient:
  In Progress
Status in python-heatclient:
  Fix Committed
Status in python-keystoneclient:
  Fix Committed
Status in python-magnumclient:
  Fix Released
Status in python-mistralclient:
  New
Status in python-neutronclient:
  In Progress
Status in Python client library for Sahara:
  Fix Committed
Status in python-swiftclient:
  In Progress
Status in python-troveclient:
  Fix Committed
Status in Python client library for Zaqar:
  Fix Committed
Status in Trove:
  Fix Released
Status in zaqar:
  In Progress

Bug description:
  Because python creates pyc files during tox runs, certain changes in
  the tree, like deletes of files, or switching branches, can create
  spurious errors. This can be suppressed by PYTHONDONTWRITEBYTECODE=1
  in the tox.ini.

To manage notifications about this bug go to:
https://bugs.launchpad.net/congress/+bug/1368661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1519449] Re: Remove Python 2.6 Support

2015-12-08 Thread Steve Martinelli
** Changed in: python-keystoneclient
   Status: Fix Committed => Fix Released

** Changed in: python-keystoneclient-kerberos
   Status: Fix Committed => Fix Released

** Changed in: keystonemiddleware
   Status: Fix Committed => Fix Released

** Changed in: keystone
   Status: In Progress => Fix Released

** Changed in: keystone
Milestone: mitaka-2 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1519449

Title:
  Remove Python 2.6 Support

Status in OpenStack Identity (keystone):
  Fix Released
Status in keystoneauth:
  Fix Released
Status in keystonemiddleware:
  Fix Released
Status in pycadf:
  Invalid
Status in python-keystoneclient:
  Fix Released
Status in python-keystoneclient-kerberos:
  Fix Released

Bug description:
  Remove the support for python 2.6

  Tasks:

  Remove Gate Job
  Remove Tox.ini cruft
  Remove python 2.6 specific code.
  Remove the trove classifiers.

  This is a tracking bug so that anyone can tag work towards this on to
  a known point for the Mitaka release.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1519449/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523793] [NEW] Nova: Quota limit for meta can be set to smaller than already used

2015-12-08 Thread ibm-cloud-qa
Public bug reported:

[Summary]
The quota limit for metadata items can be set to a value smaller than what is already in use

[Topo]
devstack all-in-one node

[Description and expected result]
The quota limit for metadata items should not be settable to a value smaller than what is already in use
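
A hedged illustration of the operation that exposes this (the tenant id is a
placeholder and the --metadata-items flag is assumed from the nova CLI of this
release): with 2 metadata items already set on an instance, a request such as

    nova quota-update --metadata-items 1 <tenant-id>

is accepted instead of being rejected.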

[Reproducible or not]
reproducible

[Recreate Steps]
1) The current metadata_items quota limit is 10:
root@45-59:/opt/stack/devstack# nova quota-show
+-----------------------------+-------+
| Quota                       | Limit |
+-----------------------------+-------+
| instances                   | 11    |
| cores                       | 20    |
| ram                         | 51200 |
| floating_ips                | 10    |
| fixed_ips                   | -1    |
| metadata_items              | 10    |
| injected_files              | 5     |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes    | 255   |
| key_pairs                   | 100   |
| security_groups             | 10    |
| security_group_rules        | 20    |
| server_groups               | 10    |
| server_group_members        | 10    |
+-----------------------------+-------+

2) Create an instance, and set 2 metadata items for the instance:
root@45-59:/opt/stack/devstack# nova meta instance-docker-test set a=1 b=2
root@45-59:/opt/stack/devstack# nova show instance-docker-test 
+--++
| Property | Value  
|
+--++
| OS-DCF:diskConfig| MANUAL 
|
| OS-EXT-AZ:availability_zone  | nova   
|
| OS-EXT-SRV-ATTR:host | 45-59  
|
| OS-EXT-SRV-ATTR:hostname | instance-docker-test   
|
| OS-EXT-SRV-ATTR:hypervisor_hostname  | 45-59  
|
| OS-EXT-SRV-ATTR:instance_name| instance-0002  
|
| OS-EXT-SRV-ATTR:kernel_id| 9400ca84-438d-404d-a454-f1764c105a38   
|
| OS-EXT-SRV-ATTR:launch_index | 0  
|
| OS-EXT-SRV-ATTR:ramdisk_id   | 50aadc88-c755-418b-912f-a1ce4c91ddce   
|
| OS-EXT-SRV-ATTR:reservation_id   | r-10jntt4r 
|
| OS-EXT-SRV-ATTR:root_device_name | /dev/vda   
|
| OS-EXT-SRV-ATTR:user_data| -  
|
| OS-EXT-STS:power_state   | 1  
|
| OS-EXT-STS:task_state| -  
|
| OS-EXT-STS:vm_state  | active 
|
| OS-SRV-USG:launched_at   | 2015-12-08T15:19:30.00 
|
| OS-SRV-USG:terminated_at | -  
|
| accessIPv4   |
|
| accessIPv6   |
|
| config_drive | True   
|
| created  | 2015-12-08T15:19:23Z   
|
| flavor   | m1.tiny (1)
|
| hostId   | 
f8d5a16aa9a8b55e358ed62cc40ae216c1e025bc991c6b71f7898830   |
| id   | 2e332fba-7c31-4271-9443-b36ef7cc76c2   
|
| image| cirros-0.3.4-x86_64-uec 
(6ec79afd-b705-44c1-926c-7e29c2521318) |
| key_name | -  
|
| metadata | {"a": "1", "b": "2"}   
|
| name | instance-docker-test   
|
| net1 network | 1.0.0.4
|
| os-extended-volumes:volumes_attached | [] 
|
| progress | 0  
|
| security_groups  | default