[Yahoo-eng-team] [Bug 1453350] [NEW] race between neutron port create and nova

2015-05-08 Thread jason bishop
Public bug reported:


I am doing load testing with tempest scenario tests and see what I think is a
race condition between neutron DHCP standup and nova boot.  I believe the
scenario I am seeing is a more general case of
https://bugs.launchpad.net/neutron/+bug/1334447.

Test environment: 5 compute nodes, 1 controller node running all API and
neutron services.  Ubuntu Juno, hand patched with 1382064 and 1385257 plus my
workaround from 1451492.  Standard neutron setup otherwise.

If I run the tempest scenario test test_server_basic_ops 30 times in
parallel, things consistently work fine.  If I increase to 60 in parallel, I
get lots of failures (see below).  Upon investigation, it looks to me that
neutron's standup of the netns and its dnsmasq process is too slow: it loses
the race with nova boot, and the VM comes up without a (DHCP-provided) IP
address, causing SSH to time out and fail.
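
One way to observe where the race shows up (a hedged diagnostic sketch, not
part of tempest or neutron; it assumes the default DHCP agent state_path):
the DHCP agent writes each port's MAC into a per-network dnsmasq host file,
and a port whose entry is not there yet will not get an offer.

import os


def dhcp_entry_present(network_id, mac, state_path='/var/lib/neutron/dhcp'):
    # Returns True once the DHCP agent has written this port's MAC into the
    # dnsmasq host file for the network; False means dnsmasq either is not
    # stood up yet or does not know about the port.
    host_file = os.path.join(state_path, network_id, 'host')
    if not os.path.exists(host_file):
        return False
    with open(host_file) as f:
        return mac.lower() in f.read().lower()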


Traceback (most recent call last):
  File "/home/aqua/tempest/tempest/test.py", line 125, in wrapper
    return f(self, *func_args, **func_kwargs)
  File "/home/aqua/tempest/tempest/scenario/test_server_basic_ops_38.py", line 105, in test_server_basicops
    self.verify_ssh()
  File "/home/aqua/tempest/tempest/scenario/test_server_basic_ops_38.py", line 95, in verify_ssh
    private_key=self.keypair['private_key'])
  File "/home/aqua/tempest/tempest/scenario/manager.py", line 310, in get_remote_client
    linux_client.validate_authentication()
  File "/home/aqua/tempest/tempest/common/utils/linux/remote_client.py", line 55, in validate_authentication
    self.ssh_client.test_connection_auth()
  File "/home/aqua/tempest/tempest/common/ssh.py", line 150, in test_connection_auth
    connection = self._get_ssh_connection()
  File "/home/aqua/tempest/tempest/common/ssh.py", line 87, in _get_ssh_connection
    password=self.password)
tempest.exceptions.SSHTimeout: Connection to the 172.17.205.21 via SSH timed out.
User: cirros, Password: None


Ran 60 tests in 742.931s

FAILED (failures=47)


To reproduce test environment:
1) Check out tempest and remove all tempest scenario tests except
test_server_basic_ops.
2) Run this command to make 59 copies of the test (a Python equivalent is
sketched below):
for i in {1..59}; do
  cp -p test_server_basic_ops.py test_server_basic_ops_$i.py
  sed --in-place \
    -e "s/class TestServerBasicOps(manager.ScenarioTest):/class TestServerBasicOps$i(manager.ScenarioTest):/" \
    -e "s/super(TestServerBasicOps, self).setUp()/super(TestServerBasicOps$i, self).setUp()/" \
    -e "s/@test.idempotent_id('7fff3fb3-91d8-4fd0-bd7d-0204f1f180ba')/@test.idempotent_id(\'$(uuidgen)\')/" \
    test_server_basic_ops_$i.py
done
3) Run 30 tests and observe a successful run: OS_TEST_TIMEOUT=1200
./run_tempest.sh tempest.scenario -- --concurrency=30
4) Run 60 tests and observe failures: OS_TEST_TIMEOUT=1200 ./run_tempest.sh
tempest.scenario -- --concurrency=60
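
For convenience, here is a rough Python equivalent of the copy/sed step in 2)
above (a sketch only; the file and class names come from the steps, the
helper itself is hypothetical):

import re
import uuid

SRC = 'test_server_basic_ops.py'

with open(SRC) as f:
    src_text = f.read()

for i in range(1, 60):
    text = src_text.replace(
        'class TestServerBasicOps(manager.ScenarioTest):',
        'class TestServerBasicOps%d(manager.ScenarioTest):' % i)
    text = text.replace(
        'super(TestServerBasicOps, self)',
        'super(TestServerBasicOps%d, self)' % i)
    # give each copy a fresh idempotent_id (the file has a single test)
    text = re.sub(r"idempotent_id\('[0-9a-f-]+'\)",
                  "idempotent_id('%s')" % uuid.uuid4(), text)
    with open('test_server_basic_ops_%d.py' % i, 'w') as f:
        f.write(text)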

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453350

Title:
  race between neutron port create and nova

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  
  I am doing load testing with tempest scenario tests and see what I think is a 
race condition between neutron dhcp standup and nova boot.  I believe the 
scenario I am seeing to be a more general case of 
https://bugs.launchpad.net/neutron/+bug/1334447.

  test environment: 5 compute nodes, 1 controller node running all api
  and neutron services.  ubuntu juno hand patched 1382064 and 1385257
  and my workaround in 1451492.  standard neutron setup otherwise.

  If I run tempest scenario test test_server_basic_ops 30 times in
  parallel things consistently work fine.  If I increase to 60 in
  parallel I get lots of failures (see below).  Upon investigation, it
  looks to me that neutron standup of netns and its dnsmasq process is
  too slow and loses the race with nova boot and the VM comes up without
  a (dhcp provided) IP address (causing ssh to timeout and fail).

  
  Traceback (most recent call last):
File "/home/aqua/tempest/tempest/test.py", line 125, in wrapper
  return f(self, *func_args, **func_kwargs)
File "/home/aqua/tempest/tempest/scenario/test_server_basic_ops_38.py", 
line 105, in test_server_basicops
  self.verify_ssh()
File "/home/aqua/tempest/tempest/scenario/test_server_basic_ops_38.py", 
line 95, in verify_ssh
  private_key=self.keypair['private_key'])
File "/home/aqua/tempest/tempest/scenario/manager.py", line 310, in 
get_remote_client
  linux_client.validate_authentication()
File "/home/aqua/tempest/tempest/common/utils/linux/remote_client.py", line 
55, in validate_authentication
  self.ssh_client.test_connection_auth()
File "/home/aqua/tempest/tempest/common/ssh.py", line 150, in 
test_connection_auth
  connection = self._get_ssh_connection()
File "/home/aqua/te

[Yahoo-eng-team] [Bug 1405533] Re: After the instance cold migration, the injected contents in the file "disk.config" is missing.

2015-05-08 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1405533

Title:
  After the instance cold migration, the injected contents in the file
  "disk.config" is missing.

Status in OpenStack Compute (Nova):
  Expired

Bug description:
  1. Boot an instance with config-drive set to true and an injected file.

  [root@lxlconductor1 ~(keystone_admin)]# nova boot --image fb6d667d-79d3-4aa7-9fce-ffe23d3259fc --flavor f2e4ce16-4f00-4d9a-968b-54eb778c39d7 --nic net-id=7e63938c-ba9d-448a-be92-c868669c52b4 --config-drive true --file /home/hanrong/ip.txt=/home/hanrong/ip.txt hanrong9

  [root@lxlconductor1 ~(keystone_admin)]# nova list
  +--------------------------------------+----------+--------+------------+-------------+-------------------+
  | ID                                   | Name     | Status | Task State | Power State | Networks          |
  +--------------------------------------+----------+--------+------------+-------------+-------------------+
  | 122e21e1-6f9a-46ce-835a-a05a4e5f5c95 | hanrong9 | ACTIVE | -          | Running     | test=192.168.0.37 |
  +--------------------------------------+----------+--------+------------+-------------+-------------------+

  2.  Check the cd-rom file.
  [root@lxlconductor1 ~(keystone_admin)]# cd /var/lib/nova/instances/122e21e1-6f9a-46ce-835a-a05a4e5f5c95

  [root@lxlconductor1 122e21e1-6f9a-46ce-835a-a05a4e5f5c95(keystone_admin)]# ll
  total 1808
  -rw-rw 1 root root   25335 Feb 23 10:45 console.log
  -rw-r--r-- 1 root root 1441792 Feb 23 10:45 disk
  -rw-r--r-- 1 root root  428032 Feb 23 10:44 disk.config
  -rw-r--r-- 1 root root 162 Feb 23 10:44 disk.info
  -rw-r--r-- 1 root root1928 Feb 23 10:44 libvirt.xml

  3. Mount this cd-rom file to a path, and view its contents.
  [root@lxlconductor1 122e21e1-6f9a-46ce-835a-a05a4e5f5c95(keystone_admin)]# mount -o loop disk.config /home
  mount: /dev/loop1 is write-protected, mounting read-only
  [root@lxlconductor1 122e21e1-6f9a-46ce-835a-a05a4e5f5c95(keystone_admin)]# cd /home/openstack/
  [root@lxlconductor1 openstack(keystone_admin)]# ll
  total 10
  dr-xr-xr-x 2 root root 2048 Feb 23 10:44 2012-08-10
  dr-xr-xr-x 2 root root 2048 Feb 23 10:44 2013-04-04
  dr-xr-xr-x 2 root root 2048 Feb 23 10:44 2013-10-17
  dr-xr-xr-x 2 root root 2048 Feb 23 10:44 content
  dr-xr-xr-x 2 root root 2048 Feb 23 10:44 latest

  4. Migrate this instance.
  [root@lxlconductor1 ~(keystone_admin)]# nova migrate hanrong9
  [root@lxlconductor1 ~(keystone_admin)]# nova list
  
+--+--+---++-+---+
  | ID   | Name | Status| Task 
State | Power State | Networks  |
  
+--+--+---++-+---+
  | 122e21e1-6f9a-46ce-835a-a05a4e5f5c95 | hanrong9 | VERIFY_RESIZE | - 
 | Running | test=192.168.0.37 |
  
+--+--+---++-+---+

  [root@lxlconductor1 ~(keystone_admin)]# nova resize-confirm hanrong9
  [root@lxlconductor1 ~(keystone_admin)]# nova list
  
+--+--+---++-+---+
  | ID   | Name | Status| Task 
State | Power State | Networks  |
  
+--+--+---++-+---+
  | 122e21e1-6f9a-46ce-835a-a05a4e5f5c95 | hanrong9 | ACTIVE | -  | 
Running | test=192.168.0.37 |
  
+--+--+---++-+---+

  5. Check the cd-rom file at the destination host.
  [root@lxlcompute1]# cd /var/lib/nova/instances/122e21e1-6f9a-46ce-835a-a05a4e5f5c95/
  [root@lxlcompute1 122e21e1-6f9a-46ce-835a-a05a4e5f5c95]# ll
  total 18716
  -rw-rw 1 root root    13188 Feb 23 10:28 console.log
  -rw-r--r-- 1 root root 19398656 Feb 23 10:17 disk
  -rw-r--r-- 1 root root   419840 Feb 23 10:27 disk.config
  -rw-r--r-- 1 root root      162 Feb 23 10:27 disk.info
  -rw-r--r-- 1 root root     1928 Feb 23 10:28 libvirt.xml

  6. Mount this cd-rom file to a path, and view its contents at the destination host.
  [root@lxlcompute1 122e21e1-6f9a-46ce-835a-a05a4e5f5c95]# mount -o loop disk.config /home
  mount: /dev/loop0 is write-protected, mounting read-only
  [root@lxlcompute1 122e21e1-6f9a-46ce-835a-a05a4e5f5c95]# cd /home/openstack/
  [root@lxlcompute1 openstack]# ll
  total 10
  dr-xr-xr-x 2 root root 2048 Feb 23 10:15 2012-08-10
  dr-

[Yahoo-eng-team] [Bug 1453323] [NEW] Remove router interface on Arista Plugin fails

2015-05-08 Thread Sukhdev Kapur
Public bug reported:

Removing a router interface on Arista L3 Plugin causes the following
exception:

2015-05-07 18:43:35.329 24639 INFO neutron.wsgi [-] (24639) accepted ('172.28.129.192', 59983)
2015-05-07 18:43:35.428 DEBUG neutron.services.l3_router.l3_arista [req-268e85d5-f013-44a0-bbf0-d8ec83ea2155 admin demo] neutron.services.l3_router.l3_arista.AristaL3ServicePlugin method remove_router_interface called with arguments (, u'e24f4f29-83b1-4aed-afcb-3adfebad22d5', {u'port_id': u'867118df-949f-4b8f-82d5-734a82c2ec77'}) {} wrapper /opt/stack/neutron/neutron/common/log.py:33
2015-05-07 18:43:35.429 ERROR neutron.api.v2.resource [req-268e85d5-f013-44a0-bbf0-d8ec83ea2155 admin demo] remove_router_interface failed
2015-05-07 18:43:35.429 24639 TRACE neutron.api.v2.resource Traceback (most recent call last):
2015-05-07 18:43:35.429 24639 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/api/v2/resource.py", line 83, in resource
2015-05-07 18:43:35.429 24639 TRACE neutron.api.v2.resource     result = method(request=request, **args)
2015-05-07 18:43:35.429 24639 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/api/v2/base.py", line 207, in _handle_action
2015-05-07 18:43:35.429 24639 TRACE neutron.api.v2.resource     return getattr(self._plugin, name)(*arg_list, **kwargs)
2015-05-07 18:43:35.429 24639 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/common/log.py", line 34, in wrapper
2015-05-07 18:43:35.429 24639 TRACE neutron.api.v2.resource     return method(*args, **kwargs)
2015-05-07 18:43:35.429 24639 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/services/l3_router/l3_arista.py", line 212, in remove_router_interface
2015-05-07 18:43:35.429 24639 TRACE neutron.api.v2.resource     context, router_id, interface_info))
2015-05-07 18:43:35.429 24639 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/db/l3_db.py", line 1322, in remove_router_interface
2015-05-07 18:43:35.429 24639 TRACE neutron.api.v2.resource     context, router_id, interface_info)
2015-05-07 18:43:35.429 24639 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/db/l3_db.py", line 724, in remove_router_interface
2015-05-07 18:43:35.429 24639 TRACE neutron.api.v2.resource     self._validate_interface_info(interface_info, for_removal=True)
2015-05-07 18:43:35.429 24639 TRACE neutron.api.v2.resource TypeError: _validate_interface_info() got an unexpected keyword argument 'for_removal'
2015-05-07 18:43:35.429 24639 TRACE neutron.api.v2.resource
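
A minimal standalone illustration of the failure mode (a simplified stand-in,
not the actual l3_db code): the caller passes for_removal=True to a helper
whose signature does not accept that keyword, which produces exactly this
TypeError.

def _validate_interface_info(interface_info):
    # simplified stand-in for the db helper; note there is no for_removal arg
    return 'port_id' in interface_info or 'subnet_id' in interface_info


try:
    _validate_interface_info({'port_id': '867118df'}, for_removal=True)
except TypeError as exc:
    print(exc)  # ... got an unexpected keyword argument 'for_removal'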

** Affects: neutron
 Importance: Undecided
 Assignee: Sukhdev Kapur (sukhdev-8)
 Status: New

** Description changed:

  Removing a router interface on Arista L3 Plugin causes the following
  exception:
- 
  
  2015-05-07 18:43:35.329 24639 INFO neutron.wsgi [-] (24639) accepted 
('172.28.129.192', 59983)
  2015-05-07 18:43:35.428 DEBUG neutron.services.l3_router.l3_arista 
[req-268e85d5-f013-44a0-bbf0-d8ec83ea2155 admin demo] 
neutron.services.l3_router.l3_arista.AristaL3ServicePlugin method 
remove_router_interface called with arguments (, u'e24f4f29-83b1-4aed-afcb-3adfebad22d5', {u'port_id': 
u'867118df-949f-4b8f-82d5-734a82c2ec77'}) {} wrapper 
/opt/stack/neutron/neutron/common/log.py:33
  2015-05-07 18:43:35.429 ERROR neutron.api.v2.resource 
[req-268e85d5-f013-44a0-bbf0-d8ec83ea2155 admin demo] remove_router_interface 
failed
  2015-05-07 18:43:35.429 24639 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2015-05-07 18:43:35.429 24639 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 83, in resource
  2015-05-07 18:43:35.429 24639 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2015-05-07 18:43:35.429 24639 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 207, in _handle_action
  2015-05-07 18:43:35.429 24639 TRACE neutron.api.v2.resource return 
getattr(self._plugin, name)(*arg_list, **kwargs)
  2015-05-07 18:43:35.429 24639 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/common/log.py", line 34, in wrapper
  2015-05-07 18:43:35.429 24639 TRACE neutron.api.v2.resource return 
method(*args, **kwargs)
  2015-05-07 18:43:35.429 24639 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/services/l3_router/l3_arista.py", line 212, in 
remove_router_interface
  2015-05-07 18:43:35.429 24639 TRACE neutron.api.v2.resource context, 
router_id, interface_info))
  2015-05-07 18:43:35.429 24639 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/l3_db.py", line 1322, in remove_router_interface
  2015-05-07 18:43:35.429 24639 TRACE neutron.api.v2.resource context, 
router_id, interface_info)
  2015-05-07 18:43:35.429 24639 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/l3_db.py", line 724, in remove_router_interface
  2015-05-07 18:43:35.429 24639 TRACE neutron.api.v2.

[Yahoo-eng-team] [Bug 1453322] [NEW] The strange case of the quota_items config option

2015-05-08 Thread Salvatore Orlando
Public bug reported:

The quota_items option [1] declares which resources will be subject to quota
limiting.
For each value listed there, the code looks for an option named
quota_<resource> when registering resources with the quota engine.  This
happens at module load time.

This is pretty much how networks, ports, and subnets have been registering
themselves with the quota engine so far.
All the other resources for which neutron does quota limiting are a bit
smarter, and register themselves when the respective API extensions are
loaded.

Indeed there are config options for routers, floating IPs, and other resources
which are not listed in quota_items.  While this is not an error, it is surely
confusing for operators.
In order to avoid making the configuration schema dependent on the value of a
conf option (e.g. requiring a quota_meh option if 'meh' is in quota_items),
the system picks a 'default limit' for all resources for which there is no
corresponding limit option.  This avoids failures but is, again, fairly
confusing.
Registration happens at module load time.  This is probably not great from a
practical perspective; it is also bad for maintainability, and it is a bit
ugly from a coding style perspective.
And finally, it is unclear why resource registration is done one way for core
resources and another way for all other resources.  Consistency is somewhat
important.

The quota_items option should probably just be deprecated.

[1]
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/quota.py#n35
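
To make the pattern concrete, here is an illustrative sketch (a plain dict
standing in for the real oslo.config machinery; quota_network, quota_port and
DEFAULT_LIMIT are hypothetical values, not the neutron source) of the
quota_<resource> lookup and the silent fallback described above:

conf = {
    'quota_items': ['network', 'subnet', 'port'],
    'quota_network': 10,
    'quota_port': 50,
    # no 'quota_subnet' option registered -> the fallback applies silently
}
DEFAULT_LIMIT = -1  # hypothetical "default limit" fallback

limits = {name: conf.get('quota_%s' % name, DEFAULT_LIMIT)
          for name in conf['quota_items']}
print(limits)  # {'network': 10, 'subnet': -1, 'port': 50}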

** Affects: neutron
 Importance: Undecided
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453322

Title:
  The strange case of the quota_items config option

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The quota_items option [1] declares which resources will be subject to quota 
limiting.
  Then the code, for each value, there, will look for an option named 
quota_, when registering resources with the quota engine. This 
happens at module load time.

  This is pretty much how networks, ports, and subnets have been registering 
themselves with the quota engine so far.
  All the other resources for which neutron does quota limiting, are a bit 
smarter, and register themselves when the respective API extensions are loaded.

  Indeed there are config options for routers, floating ips, and other 
resources, which are not listed in quota_items. While this is not an error, it 
is surely confusing for operators. 
  In order to avoid making the configuration schema dependent on the value for 
a conf option (eg: requiring a quota_meh option if 'meh' is in quota_items), 
the system picks a 'default limit' for all resources for which there is no 
corresponding limit. This avoid failures but is, again, fairly confusing.
  Registration happens at module load. This is probably not great from a 
practical perspective. It's also bad for maintainability, and it is a bit ugly 
from a coding style perspective.
  And finally it is unclear why resource registration is done in a way for core 
resources and in another way for all other resources. Consistency is somewhat 
important.

  the quota_items options should probably just be deprecated

  [1]
  http://git.openstack.org/cgit/openstack/neutron/tree/neutron/quota.py#n35

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1453322/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453320] [NEW] Create periodic task that logs agent's state

2015-05-08 Thread Eugene Nikanorov
Public bug reported:

When analyzing logs, it would be convenient to have information about the
liveness of agents, which would provide a short-term picture of cluster
health.

That could then also be used by automated log-analysis tools.
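
A hedged sketch of the idea (the helper name and the agent-dict shape are
assumptions, not an actual neutron patch): a periodic task that logs every
agent's liveness so log-analysis tools can reconstruct cluster health around
a given timestamp.

import logging
import threading
import time

LOG = logging.getLogger(__name__)


def start_agent_state_logger(list_agents, interval=60):
    # list_agents is any callable returning dicts with host/binary/alive keys
    def _loop():
        while True:
            for agent in list_agents():
                LOG.info('agent %s on host %s alive=%s',
                         agent.get('binary'), agent.get('host'),
                         agent.get('alive'))
            time.sleep(interval)

    thread = threading.Thread(target=_loop)
    thread.daemon = True
    thread.start()
    return thread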

** Affects: neutron
 Importance: Wishlist
 Assignee: Eugene Nikanorov (enikanorov)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453320

Title:
  Create periodic task that logs agent's state

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When analyzing logs it would be convenient to have an information
  about liveliness of agents that would create a short-time context of
  cluster health.

  That also could be than used for some automated log-analysis tools.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1453320/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1422049] Re: Security group checking action permissions raise error

2015-05-08 Thread Lin Hua Cheng
Code is not in Juno, no need to backport

** Tags removed: juno-backport-potential

** Changed in: horizon/juno
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1422049

Title:
  Security group   checking action permissions raise error

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) juno series:
  Invalid

Bug description:
  When using nova-network, I got the following output from horizon:

  [Sun Feb 15 02:48:41.965163 2015] [:error] [pid 21259:tid 140656137611008] Error while checking action permissions.
  [Sun Feb 15 02:48:41.965184 2015] [:error] [pid 21259:tid 140656137611008] Traceback (most recent call last):
  [Sun Feb 15 02:48:41.965193 2015] [:error] [pid 21259:tid 140656137611008]   File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tables/base.py", line 1260, in _filter_action
  [Sun Feb 15 02:48:41.965199 2015] [:error] [pid 21259:tid 140656137611008]     return action._allowed(request, datum) and row_matched
  [Sun Feb 15 02:48:41.965205 2015] [:error] [pid 21259:tid 140656137611008]   File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tables/actions.py", line 137, in _allowed
  [Sun Feb 15 02:48:41.965211 2015] [:error] [pid 21259:tid 140656137611008]     return self.allowed(request, datum)
  [Sun Feb 15 02:48:41.965440 2015] [:error] [pid 21259:tid 140656137611008]   File "/opt/stack/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/access_and_security/security_groups/tables.py", line 83, in allowed
  [Sun Feb 15 02:48:41.965457 2015] [:error] [pid 21259:tid 140656137611008]     if usages['security_groups']['available'] <= 0:
  [Sun Feb 15 02:48:41.965466 2015] [:error] [pid 21259:tid 140656137611008] KeyError: 'available'
  [Sun Feb 15 02:48:41.986480 2015] [:error] [pid 21259:tid 140656137611008] Error while checking action permissions.
  [Sun Feb 15 02:48:41.986533 2015] [:error] [pid 21259:tid 140656137611008] Traceback (most recent call last):
  [Sun Feb 15 02:48:41.986569 2015] [:error] [pid 21259:tid 140656137611008]   File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tables/base.py", line 1260, in _filter_action
  [Sun Feb 15 02:48:41.986765 2015] [:error] [pid 21259:tid 140656137611008]     return action._allowed(request, datum) and row_matched
  [Sun Feb 15 02:48:41.986806 2015] [:error] [pid 21259:tid 140656137611008]   File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tables/actions.py", line 137, in _allowed
  [Sun Feb 15 02:48:41.986841 2015] [:error] [pid 21259:tid 140656137611008]     return self.allowed(request, datum)
  [Sun Feb 15 02:48:41.987010 2015] [:error] [pid 21259:tid 140656137611008]   File "/opt/stack/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/access_and_security/security_groups/tables.py", line 83, in allowed
  [Sun Feb 15 02:48:41.987051 2015] [:error] [pid 21259:tid 140656137611008]     if usages['security_groups']['available'] <= 0:
  [Sun Feb 15 02:48:41.987088 2015] [:error] [pid 21259:tid 140656137611008] KeyError: 'available'
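
  A hedged sketch of the kind of defensive check that avoids this KeyError (an
  assumption about the shape of the fix, not the patch that was actually
  merged): only compare against the quota when nova-network actually reports
  an 'available' figure.

  def allowed(usages):
      # usages comes from quotas.tenant_quota_usages(); under nova-network the
      # 'available' key may be absent for security_groups
      available = usages.get('security_groups', {}).get('available')
      if available is not None and available <= 0:
          return False
      return True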

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1422049/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453304] [NEW] Exception ConnectionFailed on network topology page

2015-05-08 Thread Jingjing Ren
Public bug reported:

Horizon returns an exception on the network_topology tab:

Sequence of events:
  Was viewing the network topology page
  Time passes
  Refresh page
  Get booted to login page
  Authenticate
  Exception and stack trace.

ConnectionFailed at /project/network_topology/
Connection to neutron failed: ('Connection aborted.', gaierror(-2, 'Name or service not known'))
Request Method: GET
Request URL: xxx/horizon/project/network_topology/
Django Version: 1.6.11
Exception Type: ConnectionFailed
Exception Value:
Connection to neutron failed: ('Connection aborted.', gaierror(-2, 'Name or service not known'))
Exception Location: /opt/stack/horizon/venv/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../neutronclient/client.py in _cs_request, line 106
Python Executable: /usr/bin/python
Python Version: 2.7.5
Python Path:
['/opt/stack/horizon/venv/lib/python2.7/site-packages/openstack_dashboard/wsgi/../..',
'/opt/stack/horizon/venv/lib64/python27.zip',
'/opt/stack/horizon/venv/lib64/python2.7',
'/opt/stack/horizon/venv/lib64/python2.7/plat-linux2',
'/opt/stack/horizon/venv/lib64/python2.7/lib-tk',
'/opt/stack/horizon/venv/lib64/python2.7/lib-old',
'/opt/stack/horizon/venv/lib64/python2.7/lib-dynload',
'/usr/lib64/python2.7',
'/usr/lib/python2.7',
'/opt/stack/horizon/venv/lib/python2.7/site-packages',
'/opt/stack/horizon/venv/lib/python2.7/site-packages/openstack_dashboard']
Environment:
Request Method: GET
Request URL: xxx/horizon/project/network_topology/
Django Version: 1.6.11
Python Version: 2.7.5
Installed Applications:
['openstack_dashboard.dashboards.project',
'openstack_dashboard.dashboards.admin',
'openstack_dashboard.dashboards.identity',
'openstack_dashboard.dashboards.settings',
'openstack_dashboard',
'django.contrib.contenttypes',
'django.contrib.auth',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.humanize',
'django_pyscss',
'openstack_dashboard.django_pyscss_fix',
'compressor',
'horizon',
'openstack_auth']
Installed Middleware:
('django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'horizon.middleware.HorizonMiddleware',
'django.middleware.doc.XViewMiddleware',
'django.middleware.locale.LocaleMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware')
Traceback:
File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/django/core/handlers/base.py"
 in get_response
112. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../horizon/decorators.py"
 in dec
36. return view_func(request, *args, **kwargs)
File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../horizon/decorators.py"
 in dec
52. return view_func(request, *args, **kwargs)
File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../horizon/decorators.py"
 in dec
36. return view_func(request, *args, **kwargs)
File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../horizon/decorators.py"
 in dec
84. return view_func(request, *args, **kwargs)
File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/django/views/generic/base.py"
 in view
69. return self.dispatch(request, *args, **kwargs)
File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/django/views/generic/base.py"
 in dispatch
87. return handler(request, *args, **kwargs)
File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/django/views/generic/base.py"
 in get
154. context = self.get_context_data(**kwargs)
File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/network_topology/views.py"
 in get_context_data
123. context['instance_quota_exceeded'] = self._quota_exceeded('instances')
File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/network_topology/views.py"
 in _quota_exceeded
113. usages = quotas.tenant_quota_usages(self.request)
File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../horizon/utils/memoized.py"
 in wrapped
90. value = cache[key] = func(*args, **kwargs)
File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../openstack_dashboard/usage/quotas.py"
 in tenant_quota_usages
354. _get_tenant_network_usages(request, usages, disabled_quotas, tenant_id)
File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../openstack_dashboard/usage/quotas.py"
 in _get_tenant_network_usages
303. networks = neutron.network_list(request, shared=False)
File 
"/opt/stack/horizon/venv/lib/python2.7/site-packages/openstack_das

[Yahoo-eng-team] [Bug 1422049] Re: Security group checking action permissions raise error

2015-05-08 Thread Lin Hua Cheng
** Also affects: horizon/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1422049

Title:
  Security group   checking action permissions raise error

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) juno series:
  New

Bug description:
  When using nova-network, I got the output on horizon:

  [Sun Feb 15 02:48:41.965163 2015] [:error] [pid 21259:tid 140656137611008] 
Error while checking action permissions.
  [Sun Feb 15 02:48:41.965184 2015] [:error] [pid 21259:tid 140656137611008] 
Traceback (most recent call last):
  [Sun Feb 15 02:48:41.965193 2015] [:error] [pid 21259:tid 140656137611008]   
File 
"/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tables/base.py", 
line 1260, in _filter_action
  [Sun Feb 15 02:48:41.965199 2015] [:error] [pid 21259:tid 140656137611008]
 return action._allowed(request, datum) and row_matched
  [Sun Feb 15 02:48:41.965205 2015] [:error] [pid 21259:tid 140656137611008]   
File 
"/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tables/actions.py", 
line 137, in _allowed
  [Sun Feb 15 02:48:41.965211 2015] [:error] [pid 21259:tid 140656137611008]
 return self.allowed(request, datum)
  [Sun Feb 15 02:48:41.965440 2015] [:error] [pid 21259:tid 140656137611008]   
File 
"/opt/stack/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/access_and_security/security_groups/tables.py",
 line 83, in allowed
  [Sun Feb 15 02:48:41.965457 2015] [:error] [pid 21259:tid 140656137611008]
 if usages['security_groups']['available'] <= 0:
  [Sun Feb 15 02:48:41.965466 2015] [:error] [pid 21259:tid 140656137611008] 
KeyError: 'available'
  [Sun Feb 15 02:48:41.986480 2015] [:error] [pid 21259:tid 140656137611008] 
Error while checking action permissions.
  [Sun Feb 15 02:48:41.986533 2015] [:error] [pid 21259:tid 140656137611008] 
Traceback (most recent call last):
  [Sun Feb 15 02:48:41.986569 2015] [:error] [pid 21259:tid 140656137611008]   
File 
"/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tables/base.py", 
line 1260, in _filter_action
  [Sun Feb 15 02:48:41.986765 2015] [:error] [pid 21259:tid 140656137611008]
 return action._allowed(request, datum) and row_matched
  [Sun Feb 15 02:48:41.986806 2015] [:error] [pid 21259:tid 140656137611008]   
File 
"/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tables/actions.py", 
line 137, in _allowed
  [Sun Feb 15 02:48:41.986841 2015] [:error] [pid 21259:tid 140656137611008]
 return self.allowed(request, datum)
  [Sun Feb 15 02:48:41.987010 2015] [:error] [pid 21259:tid 140656137611008]   
File 
"/opt/stack/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/access_and_security/security_groups/tables.py",
 line 83, in allowed
  [Sun Feb 15 02:48:41.987051 2015] [:error] [pid 21259:tid 140656137611008]
 if usages['security_groups']['available'] <= 0:
  [Sun Feb 15 02:48:41.987088 2015] [:error] [pid 21259:tid 140656137611008] 
KeyError: 'available'

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1422049/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1403291] Re: VM's fail to receive DHCPOFFER messages

2015-05-08 Thread Joe Gordon
Top gate bug, this is not fixed.

** Changed in: neutron
   Status: Fix Released => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1403291

Title:
  VM's fail to receive DHCPOFFER messages

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  http://logs.openstack.org/46/142246/1/check//check-tempest-dsvm-neutron-full/ff04c3e/console.html#_2014-12-16_23_45_56_966

  message:"check_public_network_connectivity" AND
  message:"AssertionError: False is not true : Timed out waiting for"
  AND message:"to become reachable" AND tags:"tempest.txt"

  420 hits in 7 days, check and gate, all failures.  Seems like this is
  probably a known issue already so could be a duplicate of another bug,
  but given elastic-recheck didn't comment on my patch when this failed
  I'm reporting a new bug and a new e-r query:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiY2hlY2tfcHVibGljX25ldHdvcmtfY29ubmVjdGl2aXR5XCIgQU5EIG1lc3NhZ2U6XCJBc3NlcnRpb25FcnJvcjogRmFsc2UgaXMgbm90IHRydWUgOiBUaW1lZCBvdXQgd2FpdGluZyBmb3JcIiBBTkQgbWVzc2FnZTpcInRvIGJlY29tZSByZWFjaGFibGVcIiBBTkQgdGFnczpcInRlbXBlc3QudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MTg3ODcwOTM1OTIsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1403291/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453274] [NEW] libvirt: resume instance with utf-8 name results in UnicodeDecodeError

2015-05-08 Thread Taylor Peoples
Public bug reported:

This bug is very similar to
https://bugs.launchpad.net/nova/+bug/1388386.

Resuming a server that has a unicode name after suspending it results
in:

2015-05-08 15:22:30.148 4370 INFO nova.compute.manager 
[req-ac919325-aa2d-422c-b679-5f05ecca5d42 
0688b01e6439ca32d698d20789d52169126fb41fb1a4ddafcebb97d854e836c9 
6dfced8dd0df4d4d98e4a0db60526c8d - - -] [instance: 
12371aa8-889d-4333-8fab-61a13f87a547] Resuming
2015-05-08 15:22:31.651 4370 ERROR nova.compute.manager 
[req-ac919325-aa2d-422c-b679-5f05ecca5d42 
0688b01e6439ca32d698d20789d52169126fb41fb1a4ddafcebb97d854e836c9 
6dfced8dd0df4d4d98e4a0db60526c8d - - -] [instance: 
12371aa8-889d-4333-8fab-61a13f87a547] Setting instance vm_state to ERROR
2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547] Traceback (most recent call last):
2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6427, in 
_error_out_instance_on_exception
2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547] yield
2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 4371, in 
resume_instance
2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547] block_device_info)
2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2234, in 
resume
2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547] vifs_already_plugged=True)
2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547]   File 
"/usr/lib/python2.7/site-packages/powervc_nova/virt/powerkvm/driver.py", line 
2061, in _create_domain_and_network
2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547] disk_info=disk_info)
2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4391, in 
_create_domain_and_network
2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547] power_on=power_on)
2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4322, in 
_create_domain
2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547] LOG.error(err)
2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547]   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547] six.reraise(self.type_, self.value, 
self.tb)
2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4305, in 
_create_domain
2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547] err = _LE('Error defining a domain 
with XML: %s') % xml
2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547] UnicodeDecodeError: 'ascii' codec can't 
decode byte 0xc3 in position 297: ordinal not in range(128)
2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547]

The _create_domain() method has the following line:

err = _LE('Error defining a domain with XML: %s') % xml

which fails with the UnicodeDecodeError because the xml object is a UTF-8
encoded byte string.  The fix is to wrap the xml object in
oslo.utils.encodeutils.safe_decode when building the error message.
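
A minimal, self-contained sketch of that fix (the byte string below is only a
stand-in for the real domain XML, and the _LE translation marker is omitted;
the safe_decode call is the point):

from oslo_utils import encodeutils

# stand-in for the UTF-8 encoded domain XML produced for a unicode name
xml = u'<name>caf\u00e9</name>'.encode('utf-8')

# 'Error defining a domain with XML: %s' % xml would trip the implicit ascii
# decode; decoding explicitly first avoids the UnicodeDecodeError.
msg = 'Error defining a domain with XML: %s' % encodeutils.safe_decode(
    xml, incoming='utf-8')
print(msg)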

I'm seeing the issue on Kilo, but it is also likely an issue on Juno as
well.

** Affects: nova
 Importance: Undecided
 Assignee: Taylor Peoples (tpeoples)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Taylor Peoples (tpeoples)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1453274

Title:
  libvirt: resume instance with utf-8 name results in UnicodeDecodeError

Status in OpenStack Compute (Nova):
  New

Bug description:
  This bug is very similar to
  https://bugs.launchpad.net/nova/+bug/1388386.

  Resuming a server that has a unicode name after suspending it results
  in:

  2015

[Yahoo-eng-team] [Bug 1453268] Re: dynamic nova.conf generation does not work without python-barbicanclient installed

2015-05-08 Thread Matt Riedemann
python-barbicanclient is in test-requirements.txt and the README says to use
tox -e genconfig, so this isn't nova's problem.

If you're not using tox, then install the packages into site-packages via
pip, rpm, apt-get, or whatever your distro builds require.

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1453268

Title:
  dynamic nova.conf generation does not work without python-
  barbicanclient installed

Status in OpenStack Compute (Nova):
  Invalid
Status in OpenStack Compute (nova) kilo series:
  New
Status in Oslo configuration management library:
  New

Bug description:
  In gentoo we dynamically generate the docs (we also package stable
  branch releases that install from git).  This doesn't work unless
  python-barbicanclient is installed.

  What I would like to see is either a silent failure with that section
  of nova.conf removed or just add python-barbicanclient to requirements
  from test-requirements.  I understand that it is expected that example
  conf generation is expected to need test-requirements, but I don't
  agree :P

  Let me know if you need more info.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1453268/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453268] Re: dynamic nova.conf generation does not work without python-barbicanclient installed

2015-05-08 Thread Joe Gordon
IMHO, we should be able to fail gracefully if some part of the sample config
generation fails.  Barbican isn't needed by default, so I don't think we
should move it into requirements.  This whole thing is yet another
manifestation of the problem of optional dependencies and how to represent
them.
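
A hedged sketch of what "fail gracefully" could look like (hypothetical shape,
not the oslo.config generator code): skip a list_opts entry point whose import
fails instead of aborting the whole sample-config run.

import logging

LOG = logging.getLogger(__name__)


def load_list_opts(entry_points):
    # entry_points: iterable of setuptools entry points whose targets are
    # list_opts callables returning [(group, opts), ...]
    loaded = []
    for ep in entry_points:
        try:
            loaded.append((ep.name, ep.load()()))
        except ImportError as exc:
            LOG.warning('skipping opts from %s: %s', ep.name, exc)
    return loaded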

** Changed in: nova
   Status: New => Confirmed

** Also affects: oslo.config
   Importance: Undecided
   Status: New

** Changed in: nova
   Importance: Undecided => Low

** Tags added: low-hanging-fruit

** Also affects: nova/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1453268

Title:
  dynamic nova.conf generation does not work without python-
  barbicanclient installed

Status in OpenStack Compute (Nova):
  Invalid
Status in OpenStack Compute (nova) kilo series:
  New
Status in Oslo configuration management library:
  New

Bug description:
  In gentoo we dynamically generate the docs (we also package stable
  branch releases that install from git).  This doesn't work unless
  python-barbicanclient is installed.

  What I would like to see is either a silent failure with that section
  of nova.conf removed or just add python-barbicanclient to requirements
  from test-requirements.  I understand that it is expected that example
  conf generation is expected to need test-requirements, but I don't
  agree :P

  Let me know if you need more info.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1453268/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453268] [NEW] dynamic nova.conf generation does not work without python-barbicanclient installed

2015-05-08 Thread Matthew Thode
Public bug reported:

In gentoo we dynamically generate the docs (we also package stable
branch releases that install from git).  This doesn't work unless
python-barbicanclient is installed.

What I would like to see is either a silent failure, with that section of
nova.conf removed, or python-barbicanclient moved from test-requirements into
requirements.  I understand that example conf generation is expected to need
test-requirements, but I don't agree :P

Let me know if you need more info.

** Affects: nova
 Importance: Low
 Status: Confirmed

** Affects: nova/kilo
 Importance: Undecided
 Status: New

** Affects: oslo.config
 Importance: Undecided
 Status: New


** Tags: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1453268

Title:
  dynamic nova.conf generation does not work without python-
  barbicanclient installed

Status in OpenStack Compute (Nova):
  Confirmed
Status in OpenStack Compute (nova) kilo series:
  New
Status in Oslo configuration management library:
  New

Bug description:
  In gentoo we dynamically generate the docs (we also package stable
  branch releases that install from git).  This doesn't work unless
  python-barbicanclient is installed.

  What I would like to see is either a silent failure with that section
  of nova.conf removed or just add python-barbicanclient to requirements
  from test-requirements.  I understand that it is expected that example
  conf generation is expected to need test-requirements, but I don't
  agree :P

  Let me know if you need more info.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1453268/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453264] [NEW] iptables_manager can run very slowly when a large number of security group rules are present

2015-05-08 Thread Brian Haley
Public bug reported:

We have customers that typically add a few hundred security group rules
or more.  We also typically run 30+ VMs per compute node.  When about
10+ VMs with a large SG set all get scheduled to the same node, the L2
agent (OVS) can spend many minutes in the iptables_manager.apply() code,
so much so that by the time all the rules are updated, the VM has
already tried DHCP and failed, leaving it in an unusable state.

While there have been some patches that tried to address this in Juno
and Kilo, they've either not helped as much as necessary, or broken SGs
completely due to re-ordering of the iptables rules.

I've been able to show some pretty bad scaling with just a handful of
VMs running in devstack based on today's code (May 8th, 2015) from
upstream Openstack.

Here's what I tested:

1. I created a security group with 1000 TCP port rules (you could
alternately have a smaller number of rules and more VMs, but it's
quicker this way)

2. I booted VMs, specifying both the default and "large" SGs, and timed
from the moment Neutron "learned" about the port until it completed its
work

3. I got a :( pretty quickly

And here's some data:

1-3 VM - didn't time, less than 20 seconds
4th VM - 0:36
5th VM - 0:53
6th VM - 1:11
7th VM - 1:25
8th VM - 1:48
9th VM - 2:14

While it's busy adding the rules, the OVS agent is consuming pretty
close to 100% of a CPU for most of this time (from top):

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
  25767 stack     20   0  157936  76572   4416 R  89.2  0.5  50:14.28 python

And this is with only ~10K rules at this point!  When we start crossing
the 20K point VM boot failures start to happen.

I'm filing this bug since we need to take a closer look at this in
Liberty and fix it, it's been this way since Havana and needs some TLC.

I've attached a simple script I've used to recreate this, and will start
taking a look at options here.
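
For reference, the attached script boils down to something like the following
(a rough Python rendering with an assumed group name; it shells out to the
neutron CLI of this era rather than reproducing the attached script verbatim):

import subprocess

SG = 'big-sec-group'  # hypothetical security group name
subprocess.check_call(['neutron', 'security-group-create', SG])

# add 1000 single-port TCP ingress rules to the group
for port in range(1000, 2000):
    subprocess.check_call([
        'neutron', 'security-group-rule-create',
        '--direction', 'ingress', '--protocol', 'tcp',
        '--port-range-min', str(port), '--port-range-max', str(port),
        SG])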

** Affects: neutron
 Importance: Undecided
 Assignee: Brian Haley (brian-haley)
 Status: New

** Attachment added: "Script to add 1000 security group rules"
   
https://bugs.launchpad.net/bugs/1453264/+attachment/4393817/+files/big-sec-rules.sh

** Changed in: neutron
 Assignee: (unassigned) => Brian Haley (brian-haley)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453264

Title:
  iptables_manager can run very slowly when a large number of security
  group rules are present

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  We have customers that typically add a few hundred security group
  rules or more.  We also typically run 30+ VMs per compute node.  When
  about 10+ VMs with a large SG set all get scheduled to the same node,
  the L2 agent (OVS) can spend many minutes in the
  iptables_manager.apply() code, so much so that by the time all the
  rules are updated, the VM has already tried DHCP and failed, leaving
  it in an unusable state.

  While there have been some patches that tried to address this in Juno
  and Kilo, they've either not helped as much as necessary, or broken
  SGs completely due to re-ordering the of the iptables rules.

  I've been able to show some pretty bad scaling with just a handful of
  VMs running in devstack based on today's code (May 8th, 2015) from
  upstream Openstack.

  Here's what I tested:

  1. I created a security group with 1000 TCP port rules (you could
  alternately have a smaller number of rules and more VMs, but it's
  quicker this way)

  2. I booted VMs, specifying both the default and "large" SGs, and
  timed from the second it took Neutron to "learn" about the port until
  it completed it's work

  3. I got a :( pretty quickly

  And here's some data:

  1-3 VM - didn't time, less than 20 seconds
  4th VM - 0:36
  5th VM - 0:53
  6th VM - 1:11
  7th VM - 1:25
  8th VM - 1:48
  9th VM - 2:14

  While it's busy adding the rules, the OVS agent is consuming pretty
  close to 100% of a CPU for most of this time (from top):

PID USER  PR  NIVIRTRESSHR S  %CPU %MEM TIME+ COMMAND   
  
  25767 stack 20   0  157936  76572   4416 R  89.2  0.5  50:14.28 python

  And this is with only ~10K rules at this point!  When we start
  crossing the 20K point VM boot failures start to happen.

  I'm filing this bug since we need to take a closer look at this in
  Liberty and fix it, it's been this way since Havana and needs some
  TLC.

  I've attached a simple script I've used to recreate this, and will
  start taking a look at options here.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1453264/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453260] [NEW] Should use generated unique ID for Launch Instance wizard form field

2015-05-08 Thread Shaoquan Chen
Public bug reported:

Currently, in Launch Instance wizard, form field uses hard-coded ID
value which cannot be guaranteed to be unique in the page.

** Affects: horizon
 Importance: Undecided
 Assignee: Shaoquan Chen (sean-chen2)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1453260

Title:
  Should use generated unique ID for Launch Instance wizard form field

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Currently, in Launch Instance wizard, form field uses hard-coded ID
  value which cannot be guaranteed to be unique in the page.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1453260/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1400966] Re: [OSSA-2014-041] Glance allows users to download and delete any file in glance-api server (CVE-2014-9493)

2015-05-08 Thread Kevin Carter
** Changed in: openstack-ansible
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1400966

Title:
  [OSSA-2014-041] Glance allows users to download and delete any file in
  glance-api server (CVE-2014-9493)

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance icehouse series:
  Fix Released
Status in Glance juno series:
  Fix Released
Status in Ansible playbooks for deploying OpenStack:
  Fix Released
Status in openstack-ansible icehouse series:
  Fix Released
Status in openstack-ansible juno series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  By updating an image's locations through the image update API, users can
  download any file for which glance-api has read permission.
  A file for which glance-api has write permission will be deleted when users
  delete the image.

  
  For example:
  When users specify '/etc/passwd' as the locations value of an image, they
  can get the file by downloading the image.

  When the locations of an image is set to 'file:///path/to/glance-api.conf',
  the conf file will be deleted when users delete the image.

  How to recreate the bug:
  download files:
   - set show_multiple_locations True in glance-api.conf
   - create a new image
   - set locations of the image's property a path you want to get such as 
file:///etc/passwd.
   - download the image

  delete files:
   - set show_multiple_locations True in glance-api.conf
   - create a new image
   - set locations of the image's property a path you want to delete such as 
file:///path/to/glance-api.conf
   - delete the image

  I found this bug in 2014.2 (742c898956d655affa7351505c8a3a5c72881eae).

  What a big A RE RE!!

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1400966/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453245] [NEW] Keystone REST handles default role incorrectly

2015-05-08 Thread Thai Tran
Public bug reported:

The keystone REST currently assigns a role to the user even if the role
is the default role (_member_). This causes keystone client v2 to return
an error.

** Affects: horizon
 Importance: Medium
 Assignee: Thai Tran (tqtran)
 Status: In Progress

** Summary changed:

- KeystoneREST handles default role incorrectly
+ Keystone REST handles default role incorrectly

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1453245

Title:
  Keystone REST handles default role incorrectly

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The keystone REST currently assigns a role to the user even if the
  role is the default role (_member_). This causes keystone client v2 to
  return an error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1453245/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449260] Re: Sanitation of metadata label

2015-05-08 Thread sunil
** Changed in: horizon
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1449260

Title:
  Sanitation of metadata label

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Security Advisories:
  Triaged

Bug description:
  1) Start up Horizon
  2) Go to Images
  3) Next to an image, pick "Update Metadata"
  4) From the dropdown button, select "Update Metadata"
  5) In the Custom box, enter a value with some HTML like 
'alert(1)//', click +
  6) On the right-hand side, give it a value, like "ee"
  7) Click "Save"
  8) Pick "Update Metadata" for the image again, the page will fail to load, 
and the JavaScript console says:

  SyntaxError: invalid property id
  var existing_metadata = {"

  An alternative is if you change the URL to update_metadata for the
  image (for example,
  
http://192.168.122.239/admin/images/fa62ba27-e731-4ab9-8487-f31bac355b4c/update_metadata/),
  it will actually display the alert box and a bunch of junk.

  I'm not sure if update_metadata is actually a page, though... can't
  figure out how to get to it other than typing it in.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1449260/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1452840] Re: libvirt: nova's detach_volume silently fails sometimes

2015-05-08 Thread Nicolas Simonds
** Also affects: libvirt-python
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1452840

Title:
  libvirt: nova's detach_volume silently fails sometimes

Status in libvirt-python:
  New
Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  This behavior has been observed on the following platforms:

  * Nova Icehouse, Debian 12.04, QEMU 1.5.3, libvirt 1.1.3.5, with the Cinder Icehouse NFS driver, CirrOS 0.3.2 guest
  * Nova Icehouse, Debian 12.04, QEMU 1.5.3, libvirt 1.1.3.5, with the Cinder Icehouse RBD (Ceph) driver, CirrOS 0.3.2 guest
  * Nova master, Debian 14.04, QEMU 2.0.0, libvirt 1.2.2, with the Cinder master iSCSI driver, CirrOS 0.3.2 guest

  Nova's "detach_volume" fires the detach method into libvirt, which
  claims success, but the device is still attached according to "virsh
  domblklist".  Nova then finishes the teardown, releasing the
  resources, which then causes I/O errors in the guest, and subsequent
  volume_attach requests from Nova to fail spectacularly due to it
  trying to use an in-use resource.

  This appears to be a race condition, in that it does occasionally work
  fine.
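
  A small diagnostic sketch (an assumption for illustration, not Nova's fix;
  "dom" is a libvirt.virDomain and "target" a device name such as "vdb"):
  since detachDeviceFlags() can return before the guest releases the disk,
  polling the live domain XML shows whether the detach actually completed.

  import time
  from xml.etree import ElementTree

  def wait_for_detach(dom, target, timeout=30, interval=1):
      # Return True once the device no longer appears in the domain XML.
      deadline = time.time() + timeout
      while time.time() < deadline:
          root = ElementTree.fromstring(dom.XMLDesc(0))
          devs = [t.get('dev') for t in root.findall('./devices/disk/target')]
          if target not in devs:
              return True
          time.sleep(interval)
      return False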

  Steps to Reproduce:

  This script will usually trigger the error condition:

  #!/bin/bash -vx

  : Setup
  img=$(glance image-list --disk-format ami | awk '/cirros-0.3.2-x86_64-uec/ {print $2}')
  vol1_id=$(cinder create 1 | awk '($2=="id"){print $4}')
  sleep 5

  : Launch
  nova boot --flavor m1.tiny --image "$img" --block-device source=volume,id="$vol1_id",dest=volume,shutdown=preserve --poll test

  : Measure
  nova show test | grep "volumes_attached.*$vol1_id"

  : Poke the bear
  nova volume-detach test "$vol1_id"
  sudo virsh list --all --uuid | xargs -r -n 1 sudo virsh domblklist
  sleep 10
  sudo virsh list --all --uuid | xargs -r -n 1 sudo virsh domblklist
  vol2_id=$(cinder create 1 | awk '($2=="id"){print $4}')
  nova volume-attach test "$vol2_id"
  sleep 1

  : Measure again
  nova show test | grep "volumes_attached.*$vol2_id"

  Expected behavior:

  The volumes attach/detach/attach properly

  Actual behavior:

  The second attachment fails, and n-cpu throws the following exception:

  Failed to attach volume at mountpoint: /dev/vdb
  Traceback (most recent call last):
    File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 1057, in attach_volume
      virt_dom.attachDeviceFlags(conf.to_xml(), flags)
    File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 183, in doit
      result = proxy_call(self._autowrap, f, *args, **kwargs)
    File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 141, in proxy_call
      rv = execute(f, *args, **kwargs)
    File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 122, in execute
      six.reraise(c, e, tb)
    File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 80, in tworker
      rv = meth(*args, **kwargs)
    File "/usr/local/lib/python2.7/dist-packages/libvirt.py", line 517, in attachDeviceFlags
      if ret == -1: raise libvirtError('virDomainAttachDeviceFlags() failed', dom=self)
  libvirtError: operation failed: target vdb already exists

  Workaround:

  "sudo virsh detach-disk $SOME_UUID $SOME_DISK_ID" appears to cause the
  guest to properly detach the device, and also seems to ward off
  whatever gremlins caused the problem in the first place; i.e., the
  problem gets much less likely to present itself after firing a virsh
  command.

To manage notifications about this bug go to:
https://bugs.launchpad.net/libvirt-python/+bug/1452840/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453208] [NEW] security_group_rules are lack of schema check

2015-05-08 Thread jichenjc
Public bug reported:

security_group_default_rules.py in plugins/v3 lacks a schema check.
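
A minimal sketch of the kind of request-body schema the extension could
validate against (an assumption about shape and fields, not the merged Nova
schema):

create = {
    'type': 'object',
    'properties': {
        'security_group_default_rule': {
            'type': 'object',
            'properties': {
                'ip_protocol': {'enum': ['tcp', 'udp', 'icmp']},
                'from_port': {'type': ['integer', 'string'], 'pattern': '^[0-9]+$'},
                'to_port': {'type': ['integer', 'string'], 'pattern': '^[0-9]+$'},
                'cidr': {'type': 'string'},
            },
            'required': ['ip_protocol', 'from_port', 'to_port', 'cidr'],
            'additionalProperties': False,
        },
    },
    'required': ['security_group_default_rule'],
    'additionalProperties': False,
}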

** Affects: nova
 Importance: Undecided
 Assignee: jichenjc (jichenjc)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => jichenjc (jichenjc)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1453208

Title:
  security_group_rules are lack of schema check

Status in OpenStack Compute (Nova):
  New

Bug description:
  security_group_default_rules.py in plugins/v3 lacks a schema check.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1453208/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450318] Re: Create security group button gone

2015-05-08 Thread Lin Hua Cheng
*** This bug is a duplicate of bug 1422049 ***
https://bugs.launchpad.net/bugs/1422049

I think this is a duplicate; let me know if you think otherwise.

I'll propose a backport to Juno as well.

** This bug has been marked a duplicate of bug 1422049
   Security group   checking action permissions raise error

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1450318

Title:
  Create security group button gone

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  We upgraded our dashboard to juno and now the "Create Security Group"
  button has disappeared.

  I've tracked this down to a KeyError in the allowed() method of class
  CreateGroup(tables.LinkAction):

  if usages['security_groups']['available'] <= 0:

  KeyError: ('available',)

  pp usages['security_groups']
  {'quota': 10}
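
  A minimal defensive sketch (an assumption, not the Horizon fix): treat a
  missing 'available' key as "quota not exhausted" so the button still
  renders when the quota data is shaped like the pp output above.

  def create_allowed(usages):
      sg = usages.get('security_groups', {})
      return sg.get('available', 1) > 0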

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1450318/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453202] [NEW] test_token_revoked_once_group_role_grant_revoked fails in keystone

2015-05-08 Thread Boris Bobrov
Public bug reported:

Traceback (most recent call last):
  File "keystone/tests/test_v3_identity.py", line 1223, in test_token_revoked_once_group_role_grant_revoked
    expected_status=404)
  File "keystone/tests/test_v3.py", line 476, in head
    r = self.v3_request(method='HEAD', path=path, **kwargs)
  File "keystone/tests/test_v3.py", line 467, in v3_request
    return self.admin_request(path=path, token=token, **kwargs)
  File "keystone/tests/test_v3.py", line 401, in admin_request
    r = super(RestfulTestCase, self).admin_request(*args, **kwargs)
  File "keystone/tests/rest.py", line 220, in admin_request
    return self._request(app=self.admin_app, **kwargs)
  File "keystone/tests/rest.py", line 209, in _request
    response = self.restful_request(**kwargs)
  File "keystone/tests/rest.py", line 195, in restful_request
    **kwargs)
  File "keystone/tests/rest.py", line 96, in request
    status=expected_status, **kwargs)
  File "/home/breton/src/mos/keystone/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py", line 566, in request
    expect_errors=expect_errors,
  File "/home/breton/src/mos/keystone/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py", line 631, in do_request
    self._check_status(status, res)
  File "/home/breton/src/mos/keystone/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py", line 666, in _check_status
    "Bad response: %s (not %s)", res_status, status)
AppError: Bad response: 200 OK (not 404)

6.1

** Affects: mos
 Importance: High
 Assignee: MOS Keystone (mos-keystone)
 Status: Confirmed


** Tags: keystone

** Changed in: keystone
   Status: New => Invalid

** Project changed: keystone => mos

** Changed in: mos
   Status: Invalid => Confirmed

** Changed in: mos
 Assignee: (unassigned) => MOS Keystone (mos-keystone)

** Changed in: mos
Milestone: None => 6.1

** Changed in: mos
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1453202

Title:
  test_token_revoked_once_group_role_grant_revoked fails in keystone

Status in Mirantis OpenStack:
  Confirmed

Bug description:
  Traceback (most recent call last):
    File "keystone/tests/test_v3_identity.py", line 1223, in test_token_revoked_once_group_role_grant_revoked
      expected_status=404)
    File "keystone/tests/test_v3.py", line 476, in head
      r = self.v3_request(method='HEAD', path=path, **kwargs)
    File "keystone/tests/test_v3.py", line 467, in v3_request
      return self.admin_request(path=path, token=token, **kwargs)
    File "keystone/tests/test_v3.py", line 401, in admin_request
      r = super(RestfulTestCase, self).admin_request(*args, **kwargs)
    File "keystone/tests/rest.py", line 220, in admin_request
      return self._request(app=self.admin_app, **kwargs)
    File "keystone/tests/rest.py", line 209, in _request
      response = self.restful_request(**kwargs)
    File "keystone/tests/rest.py", line 195, in restful_request
      **kwargs)
    File "keystone/tests/rest.py", line 96, in request
      status=expected_status, **kwargs)
    File "/home/breton/src/mos/keystone/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py", line 566, in request
      expect_errors=expect_errors,
    File "/home/breton/src/mos/keystone/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py", line 631, in do_request
      self._check_status(status, res)
    File "/home/breton/src/mos/keystone/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py", line 666, in _check_status
      "Bad response: %s (not %s)", res_status, status)
  AppError: Bad response: 200 OK (not 404)

  6.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/mos/+bug/1453202/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362676] Re: Hyper-V agent doesn't create stateful security group rules

2015-05-08 Thread Claudiu Belu
** Changed in: networking-hyperv
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1362676

Title:
  Hyper-V agent doesn't create stateful security group rules

Status in Hyper-V Networking Agent:
  Fix Released
Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Hyper-V agent does not create stateful security group rules (ACLs),
  meaning it doesn't allow any response traffic to pass through.

  For example, the following security group rule:
  {"direction": "ingress", "remote_ip_prefix": null, "protocol": "tcp", "port_range_max": 22, "port_range_min": 22, "ethertype": "IPv4"}
  allows inbound TCP traffic on port 22, but since the Hyper-V agent does not
  add this rule as stateful, the reply traffic is never received unless a
  matching egress security group rule is also added explicitly.
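
  A minimal conceptual sketch of the missing behaviour (an assumption, not
  the networking-hyperv code): when a Neutron rule is translated into a
  Hyper-V port ACL, allow rules should be marked stateful so reply traffic
  is permitted without a mirrored egress rule.

  def to_hyperv_acl(rule):
      return {
          'direction': rule['direction'],          # 'ingress' or 'egress'
          'protocol': rule.get('protocol', 'any'),
          'local_port': rule.get('port_range_max'),
          'remote_prefix': rule.get('remote_ip_prefix') or '0.0.0.0/0',
          'action': 'allow',
          'stateful': True,  # the piece this bug reports as missing
      }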

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-hyperv/+bug/1362676/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453170] [NEW] compute update do unused update of RT when set instance to ERROR

2015-05-08 Thread jichenjc
Public bug reported:

The compute manager has a function _set_instance_error_state that updates
the resource tracker (RT) when an instance is set to ERROR, even though
nothing has changed except the instance state.
We don't need to do that; using _set_instance_obj_error_state instead is fine.
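
A minimal sketch of the suggested approach (assumptions: Nova's vm_states
module and an Instance object with save(); this is the idea, not the merged
patch):

from nova.compute import vm_states


def set_instance_obj_error_state(instance, clean_task_state=False):
    # Update only the instance record; no resource tracker refresh is
    # needed, since no resources changed.
    instance.vm_state = vm_states.ERROR
    if clean_task_state:
        instance.task_state = None
    instance.save()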

** Affects: nova
 Importance: Undecided
 Assignee: jichenjc (jichenjc)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => jichenjc (jichenjc)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1453170

Title:
  compute update do unused update of RT when set instance to ERROR

Status in OpenStack Compute (Nova):
  New

Bug description:
  The compute manager has a function _set_instance_error_state that updates
  the resource tracker (RT) when an instance is set to ERROR, even though
  nothing has changed except the instance state.
  We don't need to do that; using _set_instance_obj_error_state instead is fine.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1453170/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1452983] Re: can't add/del parameter in nested stack during stack-update

2015-05-08 Thread jichenjc
I didn't see anything related to nova; assuming heat.

** Project changed: nova => heat

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1452983

Title:
  can't add/del parameter in nested stack during stack-update

Status in Orchestration API (Heat):
  New

Bug description:
  Looks like a nested stack doesn't allow parameter changes. It fails at
  after_props.validate().

  Is it the intention to disable that, since update_allowed_properties
  defaults to none?

  But it should be possible to allow that for a nested stack, since it is a
  special resource constructed by the base resource.

  2015-05-08 04:07:08.164 18023 TRACE heat.engine.resource   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/heat/engine/resource.py", line 439, in _action_recorder
  2015-05-08 04:07:08.164 18023 TRACE heat.engine.resource yield
  2015-05-08 04:07:08.164 18023 TRACE heat.engine.resource   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/heat/engine/resource.py", line 688, in update
  2015-05-08 04:07:08.164 18023 TRACE heat.engine.resource after_props.validate()
  2015-05-08 04:07:08.164 18023 TRACE heat.engine.resource   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/heat/engine/properties.py", line 375, in validate
  2015-05-08 04:07:08.164 18023 TRACE heat.engine.resource raise exception.StackValidationFailed(message=msg)
  2015-05-08 04:07:08.164 18023 TRACE heat.engine.resource StackValidationFailed: Unknown Property str_len
  2015-05-08 04:07:08.164 18023 TRACE heat.engine.resource

  Regards,
  Neil

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1452983/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453068] [NEW] task: Image's locations empty after importing to store

2015-05-08 Thread Flavio Percoco
Public bug reported:

The ImportToStore task is setting the image data correctly but not
saving it after it has been imported. This causes the image's location to
be lost, and therefore the image is completely useless because there are
no locations associated with it.
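
A minimal sketch of the kind of fix implied (assumptions: a task holding an
image_repo and an image, plus a file-like data source; the helper name is
hypothetical and this is not the merged patch):

def execute_import(image_repo, image, data):
    image.set_data(data)    # uploads to the store and appends a location
    image_repo.save(image)  # without this save, the new location is lost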

** Affects: glance
 Importance: High
 Assignee: Flavio Percoco (flaper87)
 Status: In Progress

** Changed in: glance
   Importance: Undecided => High

** Changed in: glance
 Assignee: (unassigned) => Flavio Percoco (flaper87)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1453068

Title:
  task: Image's locations empty after importing to store

Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress

Bug description:
  The ImportToStore task is setting the image data correctly but not
  saving it after it has been imported. This causes the image's location
  to be lost, and therefore the image is completely useless because there
  are no locations associated with it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1453068/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1448822] Re: let modal center

2015-05-08 Thread tinytmy
It's not vertically centered.

** Changed in: horizon
   Status: Invalid => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1448822

Title:
  let modal center

Status in OpenStack Dashboard (Horizon):
  Incomplete

Bug description:
  The pop-up modal is currently not centered; we can center it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1448822/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453048] [NEW] ajust the buttom position in disassociate floating ip modal

2015-05-08 Thread tinytmy
Public bug reported:

In the disassociate floating IP modal, the Cancel button and the Confirm
button are misaligned.
Change them to match the other modals.

** Affects: horizon
 Importance: Undecided
 Assignee: tinytmy (tangmeiyan77)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => tinytmy (tangmeiyan77)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1453048

Title:
  ajust the buttom position in disassociate floating ip modal

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the disassociate floating IP modal, the Cancel button and the Confirm
  button are misaligned.
  Change them to match the other modals.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1453048/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453047] [NEW] ValueError: Tables "artifact_blob_locations, artifact_blobs, artifact_dependencies, artifact_properties, artifact_tags, artifacts, image_locations, image_members,

2015-05-08 Thread Flavio Percoco
Public bug reported:

A new sanity_check has been enabled in oslo.db, which verifies the table
charset. We need to make the switch to utf8 explicit in our models
definition. The current error in the gate is:


Traceback (most recent call last):
  File "/home/jenkins/workspace/gate-glance-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/test_migrations.py", line 572, in test_models_sync
    self.db_sync(self.get_engine())
  File "glance/tests/unit/test_migrations.py", line 1686, in db_sync
    migration.db_sync(engine=engine)
  File "glance/db/migration.py", line 65, in db_sync
    init_version=init_version)
  File "/home/jenkins/workspace/gate-glance-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/migration.py", line 84, in db_sync
    _db_schema_sanity_check(engine)
  File "/home/jenkins/workspace/gate-glance-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/migration.py", line 114, in _db_schema_sanity_check
    ) % ','.join(table_names))
ValueError: Tables "artifact_blob_locations,artifact_blobs,artifact_dependencies,artifact_properties,artifact_tags,artifacts,image_locations,image_members,image_properties,image_tags,images,metadef_namespace_resource_types,metadef_namespaces,metadef_objects,metadef_properties,metadef_resource_types,metadef_tags,task_info,tasks" have non utf8 collation, please make sure all tables are CHARSET=utf8


The required fix should consist of adding `'charset': 'utf-8'` to our
GlanceBase model.
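
A minimal sketch of one way to express this in SQLAlchemy (an assumption
about the shape of the fix, not the merged patch; note that MySQL table
arguments expect the charset name 'utf8'):

from sqlalchemy.ext.declarative import declarative_base


class _TableArgsMixin(object):
    # Ensure newly created tables use a utf8 charset, which is what the
    # oslo.db sanity check verifies.
    __table_args__ = {'mysql_engine': 'InnoDB', 'mysql_charset': 'utf8'}


Base = declarative_base(cls=_TableArgsMixin)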

** Affects: glance
 Importance: High
 Assignee: Flavio Percoco (flaper87)
 Status: New

** Changed in: glance
   Importance: Undecided => High

** Changed in: glance
 Assignee: (unassigned) => Flavio Percoco (flaper87)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1453047

Title:
  ValueError: Tables
  
"artifact_blob_locations,artifact_blobs,artifact_dependencies,artifact_properties,artifact_tags,artifacts,image_locations,image_members,image_properties,image_tags,images,metadef_namespace_resource_types,metadef_namespaces,metadef_objects,metadef_properties,metadef_resource_types,metadef_tags,task_info,tasks"
  have non utf8 collation, please make sure all tables are CHARSET=utf8

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  A new sanity_check has been enabled in oslo.db, which verifies the
  table charset. We need to make the switch to utf8 explicit in our
  models definition. The current error in the gate is:

  
  Traceback (most recent call last):
    File "/home/jenkins/workspace/gate-glance-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/test_migrations.py", line 572, in test_models_sync
      self.db_sync(self.get_engine())
    File "glance/tests/unit/test_migrations.py", line 1686, in db_sync
      migration.db_sync(engine=engine)
    File "glance/db/migration.py", line 65, in db_sync
      init_version=init_version)
    File "/home/jenkins/workspace/gate-glance-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/migration.py", line 84, in db_sync
      _db_schema_sanity_check(engine)
    File "/home/jenkins/workspace/gate-glance-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/migration.py", line 114, in _db_schema_sanity_check
      ) % ','.join(table_names))
  ValueError: Tables "artifact_blob_locations,artifact_blobs,artifact_dependencies,artifact_properties,artifact_tags,artifacts,image_locations,image_members,image_properties,image_tags,images,metadef_namespace_resource_types,metadef_namespaces,metadef_objects,metadef_properties,metadef_resource_types,metadef_tags,task_info,tasks" have non utf8 collation, please make sure all tables are CHARSET=utf8

  
  The required fix should consist of adding `'charset': 'utf-8'` to our
  GlanceBase model.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1453047/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453008] [NEW] Deadlock on update_port_status

2015-05-08 Thread Eugene Nikanorov
Public bug reported:

Deadlock trace found in neutron-server logs when updating a router gateway
port to BUILD status while the port binding process is in progress. The
trace:

http://logs.openstack.org/32/181132/5/check/gate-rally-dsvm-neutron-neutron/c90deb5/logs/screen-q-svc.txt.gz?#_2015-05-08_00_54_04_657
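
A minimal sketch of a common mitigation (an assumption, not the actual
Neutron patch): retry the status update when the database reports a
deadlock instead of letting the request fail.

from oslo_db import api as oslo_db_api

from neutron.db import models_v2


@oslo_db_api.wrap_db_retry(max_retries=3, retry_on_deadlock=True)
def update_port_status(context, port_id, status):
    # Re-read and update the port inside a fresh transaction on each retry.
    with context.session.begin(subtransactions=True):
        port = context.session.query(models_v2.Port).filter_by(
            id=port_id).one()
        port.status = status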

** Affects: neutron
 Importance: High
 Assignee: Eugene Nikanorov (enikanorov)
 Status: Confirmed


** Tags: db

** Changed in: neutron
 Assignee: (unassigned) => Eugene Nikanorov (enikanorov)

** Tags added: db

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453008

Title:
  Deadlock on update_port_status

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  Deadlock trace found in neutron-server logs when updating a router
  gateway port to BUILD status while the port binding process is in
  progress. The trace:

  http://logs.openstack.org/32/181132/5/check/gate-rally-dsvm-neutron-neutron/c90deb5/logs/screen-q-svc.txt.gz?#_2015-05-08_00_54_04_657

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1453008/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp