[Yahoo-eng-team] [Bug 1338447] [NEW] Netwrap missing allowed command

2014-07-07 Thread Martins Jakubovics
Public bug reported:

Hello,

I successfully installed neutron with XenServer 6.2, but I got an error message
in /var/log/neutron/openvswitch-agent.log on the domU compute node:

2014-06-17 11:26:52.431 1346 ERROR neutron.agent.linux.ovsdb_monitor [-] Error 
received from ovsdb monitor: Traceback (most recent call last):
2014-06-17 11:27:22.795 1346 ERROR neutron.agent.linux.ovsdb_monitor [-] Error 
received from ovsdb monitor: Traceback (most recent call last):
2014-06-17 11:27:53.150 1346 ERROR neutron.agent.linux.ovsdb_monitor [-] Error 
received from ovsdb monitor: Traceback (most recent call last):
2014-06-17 11:28:23.600 1346 ERROR neutron.agent.linux.ovsdb_monitor [-] Error 
received from ovsdb monitor: Traceback (most recent call last):

The XenServer netwrap plugin does not allow the ovsdb-client command to be
run, which is why this error appears. If ovsdb-client is added to the netwrap
plugin's allowed commands, the error disappears.
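
For reference, the fix amounts to adding ovsdb-client to the allow-list in the
dom0 netwrap XAPI plugin. A minimal sketch, assuming the plugin keeps its
allow-list in a Python list named ALLOWED_CMDS (the actual variable name may
differ):

# etc/xapi.d/plugins/netwrap (sketch): extend the command allow-list
ALLOWED_CMDS = [
    'ip',
    'ovs-ofctl',
    'ovs-vsctl',
    'ovsdb-client',  # required by the ovsdb monitor used by the OVS agent
]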

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: netwrap xenapi

** Tags added: xenapi

** Tags added: netwrap

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1338447

Title:
  Netwrap missing allowed command

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Hello,

  I successfully installed neutron with XenServer 6.2, but I got an error message
in /var/log/neutron/openvswitch-agent.log on the domU compute node:

  2014-06-17 11:26:52.431 1346 ERROR neutron.agent.linux.ovsdb_monitor [-] 
Error received from ovsdb monitor: Traceback (most recent call last):
  2014-06-17 11:27:22.795 1346 ERROR neutron.agent.linux.ovsdb_monitor [-] 
Error received from ovsdb monitor: Traceback (most recent call last):
  2014-06-17 11:27:53.150 1346 ERROR neutron.agent.linux.ovsdb_monitor [-] 
Error received from ovsdb monitor: Traceback (most recent call last):
  2014-06-17 11:28:23.600 1346 ERROR neutron.agent.linux.ovsdb_monitor [-] 
Error received from ovsdb monitor: Traceback (most recent call last):

  The XenServer netwrap plugin does not allow the ovsdb-client command to be
  run, which is why this error appears. If ovsdb-client is added to the netwrap
  plugin's allowed commands, the error disappears.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1338447/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338450] [NEW] ValueError at /project/stacks/ while deleting stacks with failed status

2014-07-07 Thread Amit Prakash Pandey
Public bug reported:

When I tried to Launch Stack, it showed the status as Failed. Now if I try
to delete that stack, it gives a ValueError at /project/stacks/.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: ValueError.png
   
https://bugs.launchpad.net/bugs/1338450/+attachment/4146964/+files/ValueError.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1338450

Title:
  ValueError at /project/stacks/ while deleting stacks with failed
  status

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When I tried to Launch Stack, it showed the status as Failed. Now if I try
  to delete that stack, it gives a ValueError at /project/stacks/.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1338450/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338451] [NEW] shelve api does not work in the nova-cell environment

2014-07-07 Thread Abhijeet Malawade
Public bug reported:

If you run the nova shelve API in a nova-cells environment, it throws the
following error:

Nova cell (n-cell-child) Logs:

2014-07-06 23:57:13.445 ERROR nova.cells.messaging 
[req-a689a1a1-4634-4634-974a-7343b5554f46 admin admin] Error processing message 
locally: save() got an unexpected keyword argument 'expected_task_state'
2014-07-06 23:57:13.445 TRACE nova.cells.messaging Traceback (most recent call 
last):
2014-07-06 23:57:13.445 TRACE nova.cells.messaging   File 
/opt/stack/nova/nova/cells/messaging.py, line 200, in _process_locally
2014-07-06 23:57:13.445 TRACE nova.cells.messaging resp_value = 
self.msg_runner._process_message_locally(self)
2014-07-06 23:57:13.445 TRACE nova.cells.messaging   File 
/opt/stack/nova/nova/cells/messaging.py, line 1287, in 
_process_message_locally
2014-07-06 23:57:13.445 TRACE nova.cells.messaging return fn(message, 
**message.method_kwargs)
2014-07-06 23:57:13.445 TRACE nova.cells.messaging   File 
/opt/stack/nova/nova/cells/messaging.py, line 700, in run_compute_api_method
2014-07-06 23:57:13.445 TRACE nova.cells.messaging return fn(message.ctxt, 
*args, **method_info['method_kwargs'])
2014-07-06 23:57:13.445 TRACE nova.cells.messaging   File 
/opt/stack/nova/nova/compute/api.py, line 192, in wrapped
2014-07-06 23:57:13.445 TRACE nova.cells.messaging return func(self, 
context, target, *args, **kwargs)
2014-07-06 23:57:13.445 TRACE nova.cells.messaging   File 
/opt/stack/nova/nova/compute/api.py, line 182, in inner
2014-07-06 23:57:13.445 TRACE nova.cells.messaging return function(self, 
context, instance, *args, **kwargs)
2014-07-06 23:57:13.445 TRACE nova.cells.messaging   File 
/opt/stack/nova/nova/compute/api.py, line 163, in inner
2014-07-06 23:57:13.445 TRACE nova.cells.messaging return f(self, context, 
instance, *args, **kw)
2014-07-06 23:57:13.445 TRACE nova.cells.messaging   File 
/opt/stack/nova/nova/compute/api.py, line 2458, in shelve
2014-07-06 23:57:13.445 TRACE nova.cells.messaging 
instance.save(expected_task_state=[None])
2014-07-06 23:57:13.445 TRACE nova.cells.messaging TypeError: save() got an 
unexpected keyword argument 'expected_task_state'
2014-07-06 23:57:13.445 TRACE nova.cells.messaging

Nova compute log:

2014-07-07 00:05:19.084 ERROR oslo.messaging.rpc.dispatcher 
[req-9539189d-239b-4e74-8aea-8076740
31c2f admin admin] Exception during message handling: 'NoneType' object is not 
iterable
Traceback (most recent call last):

  File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
134, in _
dispatch_and_reply
incoming.message))

  File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
177, in _
dispatch
return self._do_dispatch(endpoint, method, ctxt, args)

  File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
123, in _
do_dispatch
result = getattr(endpoint, method)(ctxt, **new_args)

  File /opt/stack/nova/nova/conductor/manager.py, line 351, in 
notify_usage_exists
system_metadata, extra_usage_info)

  File /opt/stack/nova/nova/compute/utils.py, line 250, in notify_usage_exists
ignore_missing_network_data)

  File /opt/stack/nova/nova/notifications.py, line 285, in bandwidth_usage
macs = [vif['address'] for vif in nw_info]

TypeError: 'NoneType' object is not iterable

2014-07-07 00:05:19.084 TRACE oslo.messaging.rpc.dispatcher Traceback (most 
recent call last):
2014-07-07 00:05:19.084 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dis
t-packages/oslo/messaging/rpc/dispatcher.py, line 134, in _dispatch_and_reply
2014-07-07 00:05:19.084 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2014-07-07 00:05:19.084 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
177, in _dispatch
2014-07-07 00:05:19.084 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2014-07-07 00:05:19.084 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
123, in _do_dispatch
2014-07-07 00:05:19.084 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
2014-07-07 00:05:19.084 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/exception.py, line 88, in wrapped
2014-07-07 00:05:19.084 TRACE oslo.messaging.rpc.dispatcher payload)
2014-07-07 00:05:19.084 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/openstack/common/excutils.py, line 82, in __exit__
2014-07-07 00:05:19.084 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2014-07-07 00:05:19.084 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/exception.py, line 71, in wrapped
2014-07-07 00:05:19.084 TRACE oslo.messaging.rpc.dispatcher return f(self, 
context, *args, **kw)
2014-07-07 00:05:19.084 TRACE 

[Yahoo-eng-team] [Bug 1338470] [NEW] LBaaS Round Robin does not work as expected

2014-07-07 Thread Nir Magnezi
Public bug reported:

Description of problem:
===
I configured a load-balancing pool with 2 members using the round robin mechanism.
My expectation was that each request would be directed to the next available 
pool member.
Meaning, the expected result was:
Req #1 - Member #1
Req #2 - Member #2
Req #3 - Member #1
Req #4 - Member #2

etc..

I configured the instances' guest image to reply to each request with the 
private IP address of the instance, so I can easily see which member handled 
the request.
This is the result I witnessed:

# for i in {1..10} ; do curl -s 192.168.170.9 ; echo ; done
192.168.208.4
192.168.208.4
192.168.208.2
192.168.208.2
192.168.208.4
192.168.208.4
192.168.208.2
192.168.208.4
192.168.208.2
192.168.208.4

Details about the pool: http://pastebin.com/index/MwRX7HCR

Version-Release number of selected component (if applicable):
=
Icehouse:
python-neutronclient-2.3.4-2
python-neutron-2014.1-35
openstack-neutron-2014.1-35
openstack-neutron-openvswitch-2014.1-35
haproxy-1.5-0.3.dev22.el7

How reproducible:
=
100%

Steps to Reproduce:
===
1. As detailed above, configure a LB pool with round robin and two members.
2.

Additional info:

Tested with RHEL7
haproxy.cfg: http://pastebin.com/vuNe1p7H

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1338470

Title:
  LBaaS Round Robin does not work as expected

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Description of problem:
  ===
  I configured a load-balancing pool with 2 members using the round robin mechanism.
  My expectation was that each request would be directed to the next available 
pool member.
  Meaning, the expected result was:
  Req #1 - Member #1
  Req #2 - Member #2
  Req #3 - Member #1
  Req #4 - Member #2

  etc..

  I configured the instances' guest image to reply to each request with the 
private IP address of the instance, so I can easily see which member handled 
the request.
  This is the result I witnessed:

  # for i in {1..10} ; do curl -s 192.168.170.9 ; echo ; done
  192.168.208.4
  192.168.208.4
  192.168.208.2
  192.168.208.2
  192.168.208.4
  192.168.208.4
  192.168.208.2
  192.168.208.4
  192.168.208.2
  192.168.208.4

  Details about the pool: http://pastebin.com/index/MwRX7HCR

  Version-Release number of selected component (if applicable):
  =
  Icehouse:
  python-neutronclient-2.3.4-2
  python-neutron-2014.1-35
  openstack-neutron-2014.1-35
  openstack-neutron-openvswitch-2014.1-35
  haproxy-1.5-0.3.dev22.el7

  How reproducible:
  =
  100%

  Steps to Reproduce:
  ===
  1. As detailed above, configure a LB pool with round robin and two members.
  2.

  Additional info:
  
  Tested with RHEL7
  haproxy.cfg: http://pastebin.com/vuNe1p7H

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1338470/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338473] [NEW] catch InstanceUserDataTooLarge when create instance at api layer

2014-07-07 Thread jichenjc
Public bug reported:

We should catch InstanceUserDataTooLarge when we create an instance,
because compute/api.py might raise this exception.
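
A minimal sketch of the kind of handling meant here, assuming the servers
controller calls compute_api.create() and translates compute-layer exceptions
into HTTP errors (the names around the try block are illustrative, not the
actual patch):

# Sketch (illustrative): map the compute-layer exception to a 400 response
# in the servers create handler instead of letting it surface as a 500.
from webob import exc

from nova import exception

try:
    instances = self.compute_api.create(context, flavor, image_uuid,
                                        **create_kwargs)
except exception.InstanceUserDataTooLarge as error:
    raise exc.HTTPBadRequest(explanation=error.format_message())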

** Affects: nova
 Importance: Undecided
 Assignee: jichenjc (jichenjc)
 Status: New


** Tags: api

** Tags added: api

** Changed in: nova
 Assignee: (unassigned) => jichenjc (jichenjc)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1338473

Title:
  catch InstanceUserDataTooLarge when create instance at api layer

Status in OpenStack Compute (Nova):
  New

Bug description:
  We should catch InstanceUserDataTooLarge when we create an instance,
  because compute/api.py might raise this exception.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1338473/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338480] [NEW] VM's allowed address pair IP is not updated in the remote group VMs allowed IP list.

2014-07-07 Thread Alok Kumar Maurya
Public bug reported:

1. Create a new tenant.
2. Create a network and add a subnet (10.10.10.0/24) to the network.
3. Create two VMs (VM1 and VM2) in the network, in the tenant's default security group.
4. Now update the VM1 port with an allowed address pair IP (20.20.20.2):

neutron port-update 079804ae-d941-4ec2-b36a-8b1d60b0cda8 --allowed-
address-pairs type=dict list=true ip_address=20.20.20.2

Update the VM2 port with an allowed address pair IP 20.20.20.3:

neutron port-update f538604a-3437-447b-a4ea-7d37b07a88c6 --allowed-
address-pairs type=dict list=true ip_address=20.20.20.3

5. In VM1, add one more IP address, 20.20.20.2:

sudo ip addr add 20.20.20.2/24 dev eth0

In VM2, add IP address 20.20.20.3:

sudo ip addr add 20.20.20.3/24 dev eth0

Now, from VM1, try to ping 20.20.20.3.

It fails to ping.

Then restart neutron-plugin-openvswitch-agent on the compute node,
then try to ping 20.20.20.3 from VM1; it starts pinging.

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

  1. Create a new tenant
  2. Create a network - Add subnet  (10.10.10.0/24)  in the network
  3. Create  two VMs(VM1  and VM2)   in network in s  default security  
group.
  4.  Now   updated  VM1 port   with  an  allowed address pair IP  (20.20.20.2)
  
+  neutron port-update  079804ae-d941-4ec2-b36a-8b1d60b0cda8 --allowed-
+ address-pairs type=dict list=true ip_address=20.20.20.2
+ 
+ 
  Update VM port2 IP with  and allowed address pair  IP   20.20.20.3
  
+ neutron port-update f538604a-3437-447b-a4ea-7d37b07a88c6 --allowed-
+ address-pairs type=dict list=true ip_address=20.20.20.3
  
  5.  In VM1 , add  one more IP address  20.20.20.2
  
  sudo ip addr add 20.20.20.2/24 dev eth0
  
- 
  In VM2   , add IP address 20.20.20.2
  
  sudo ip addr add 20.20.20.3/24 dev eth0
- 
  
  now  from VM1  , try  to ping  20.20.20.3
  
  It  fails  to ping
  
+ Then try to restart  neutron-plugin-openvswitch-agent on compute  ndoe
+ ,
  
- Then try to restart  neutron-plugin-openvswitch-agent on compute  ndoe  ,
- 
- 
-  then  try to ping 20.20.20.3  from VM1   , it starts  pinging.
+  then  try to ping 20.20.20.3  from VM1   , it starts  pinging.

** Summary changed:

- Updated IP  of allowed address pair  is not reflected in the remote  security 
groups  (allowed IP list)
+ VM's  allowed address pair IP  is not  updated in the remote  group  VMs  
allowed IP list.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1338480

Title:
  VM's  allowed address pair IP  is not  updated in the remote  group
  VMs  allowed IP list.

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  1. Create a new tenant.
  2. Create a network and add a subnet (10.10.10.0/24) to the network.
  3. Create two VMs (VM1 and VM2) in the network, in the tenant's default security group.
  4. Now update the VM1 port with an allowed address pair IP (20.20.20.2):

  neutron port-update 079804ae-d941-4ec2-b36a-8b1d60b0cda8 --allowed-
  address-pairs type=dict list=true ip_address=20.20.20.2

  Update the VM2 port with an allowed address pair IP 20.20.20.3:

  neutron port-update f538604a-3437-447b-a4ea-7d37b07a88c6 --allowed-
  address-pairs type=dict list=true ip_address=20.20.20.3

  5. In VM1, add one more IP address, 20.20.20.2:

  sudo ip addr add 20.20.20.2/24 dev eth0

  In VM2, add IP address 20.20.20.3:

  sudo ip addr add 20.20.20.3/24 dev eth0

  Now, from VM1, try to ping 20.20.20.3.

  It fails to ping.

  Then restart neutron-plugin-openvswitch-agent on the compute node,
  then try to ping 20.20.20.3 from VM1; it starts pinging.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1338480/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338479] [NEW] Unhelpful error message when updating quota

2014-07-07 Thread Xurong Yang
Public bug reported:

When updating network quota using the following command:

neutron quota-update --network 100

the client outputs:

Request Failed: internal server error while processing your request.

This request fails because the parameter exceeds the integer range. An
error message like "Request Failed: quota limit exceeds integer range"
would be friendlier to users than just raising a bare internal server
error.
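
A hedged sketch of the kind of up-front check that would allow a clearer
error, assuming the quota column is a signed 32-bit integer (the constant
and function names are illustrative):

# Sketch (illustrative): reject out-of-range quota limits before hitting the DB.
DB_MAX_INT = 2 ** 31 - 1  # assumes a signed 32-bit INTEGER column

def validate_quota_limit(limit):
    # -1 conventionally means "unlimited"; anything else must fit the column.
    if limit < -1 or limit > DB_MAX_INT:
        raise ValueError("quota limit %d is outside the allowed range" % limit)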

** Affects: neutron
 Importance: Undecided
 Assignee: Xurong Yang (idopra)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Xurong Yang (idopra)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1338479

Title:
  Unhelpful error message when updating quota

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When updating network quota using the following command:

  neutron quota-update --network 100

  the client outputs:

  Request Failed: internal server error while processing your request.

  This request fails because the parameter exceeds the integer range. An
  error message like "Request Failed: quota limit exceeds integer range"
  would be friendlier to users than just raising a bare internal server
  error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1338479/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338485] [NEW] Glance fail to alert when rados packages are not installed

2014-07-07 Thread Yogev Rabl
Public bug reported:

Description of problem:
When Glance is configured to work with the rbd backend (Ceph) and the Rados 
packages (python-ceph) are not installed, the error that Glance's logs show 
is: 

2014-07-07 11:28:27.982  TRACE glance.api.v1.upload_utils Traceback (most 
recent call last):
2014-07-07 11:28:27.982  TRACE glance.api.v1.upload_utils   File 
/usr/lib/python2.7/site-packages/glance/api/v1/upload_utils.py, line 99, in 
upload_data_to_store
2014-07-07 11:28:27.982  TRACE glance.api.v1.upload_utils store)
2014-07-07 11:28:27.982  TRACE glance.api.v1.upload_utils   File 
/usr/lib/python2.7/site-packages/glance/store/__init__.py, line 380, in 
store_add_to_backend
2014-07-07 11:28:27.982  TRACE glance.api.v1.upload_utils (location, 
size, checksum, metadata) = store.add(image_id, data, size)
2014-07-07 11:28:27.982  TRACE glance.api.v1.upload_utils   File 
/usr/lib/python2.7/site-packages/glance/store/rbd.py, line 319, in add
2014-07-07 11:28:27.982  TRACE glance.api.v1.upload_utils with 
rados.Rados(conffile=self.conf_file, rados_id=self.user) as conn:
2014-07-07 11:28:27.982  TRACE glance.api.v1.upload_utils AttributeError: 
'NoneType' object has no attribute 'Rados'
2014-07-07 11:28:27.982  TRACE glance.api.v1.upload_utils 

Instead of catching an import error, Glance should fail explicitly because
the Rados packages are missing.
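
A minimal sketch of the suggested behaviour, assuming the rbd store falls
back to `rados = None` when the import fails (the exception type and wording
are illustrative, not the actual store code):

# glance/store/rbd.py (sketch): fail loudly when the store is configured,
# not later when an upload first touches the missing bindings.
try:
    import rados
    import rbd
except ImportError:
    rados = None
    rbd = None

def check_rados_available():
    if rados is None or rbd is None:
        raise RuntimeError("python-ceph (rados/rbd bindings) is not installed; "
                           "the rbd store backend cannot be used")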


Version-Release number of selected component (if applicable):
python-glance-2014.1-4.el7ost.noarch
python-glanceclient-0.12.0-1.el7ost.noarch
openstack-glance-2014.1-4.el7ost.noarch


How reproducible:
100%

Steps to Reproduce:
1. Configure Glance to work with the rbd backend (see 
http://ceph.com/docs/master/rbd/rbd-openstack/?highlight=openstack) without 
installing the python-ceph packages.
2. Try to create a new image.


Actual results:
Glance catches an import error.

Expected results:
Glance should alert that the Rados packages are missing.

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1338485

Title:
  Glance fail to alert when rados packages are not installed

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Description of problem:
  When Glance is configured to work with the rbd backend (Ceph) and the Rados 
packages (python-ceph) are not installed, the error that Glance's logs show 
is: 

  2014-07-07 11:28:27.982  TRACE glance.api.v1.upload_utils Traceback (most 
recent call last):
  2014-07-07 11:28:27.982  TRACE glance.api.v1.upload_utils   File 
/usr/lib/python2.7/site-packages/glance/api/v1/upload_utils.py, line 99, in 
upload_data_to_store
  2014-07-07 11:28:27.982  TRACE glance.api.v1.upload_utils store)
  2014-07-07 11:28:27.982  TRACE glance.api.v1.upload_utils   File 
/usr/lib/python2.7/site-packages/glance/store/__init__.py, line 380, in 
store_add_to_backend
  2014-07-07 11:28:27.982  TRACE glance.api.v1.upload_utils (location, 
size, checksum, metadata) = store.add(image_id, data, size)
  2014-07-07 11:28:27.982  TRACE glance.api.v1.upload_utils   File 
/usr/lib/python2.7/site-packages/glance/store/rbd.py, line 319, in add
  2014-07-07 11:28:27.982  TRACE glance.api.v1.upload_utils with 
rados.Rados(conffile=self.conf_file, rados_id=self.user) as conn:
  2014-07-07 11:28:27.982  TRACE glance.api.v1.upload_utils AttributeError: 
'NoneType' object has no attribute 'Rados'
  2014-07-07 11:28:27.982  TRACE glance.api.v1.upload_utils 

  Instead of catching an import error, Glance should fail explicitly because
  the Rados packages are missing.

  
  Version-Release number of selected component (if applicable):
  python-glance-2014.1-4.el7ost.noarch
  python-glanceclient-0.12.0-1.el7ost.noarch
  openstack-glance-2014.1-4.el7ost.noarch

  
  How reproducible:
  100%

  Steps to Reproduce:
  1. Configure Glance to work with the rbd backend (see 
http://ceph.com/docs/master/rbd/rbd-openstack/?highlight=openstack) without 
installing the python-ceph packages.
  2. Try to create a new image.

  
  Actual results:
  Glance catches an import error.

  Expected results:
  Glance should alert that the Rados packages are missing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1338485/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1289627] Re: VMware NoPermission faults do not log what permission was missing

2014-07-07 Thread Vipin Balachandran
Released in oslo.vmware 0.3.

** Changed in: oslo.vmware
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1289627

Title:
  VMware NoPermission faults do not log what permission was missing

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in Oslo VMware library for OpenStack projects:
  Fix Released

Bug description:
  NoPermission object has a privilegeId that tells us which permission
  the user did not have. Presently the VMware nova driver does not log
  this data. This is very useful for debugging user permissions problems
  on vCenter or ESX.

  
http://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.wssdk.apiref.doc/vim.fault.NoPermission.html
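
  A hedged sketch of what logging the missing privilege could look like where
  the driver handles the fault; the helper name below is illustrative, and the
  fault attributes are taken from the vSphere API reference linked above, not
  from the actual driver code:

# Sketch (illustrative): surface privilegeId when a NoPermission fault is caught.
import logging

LOG = logging.getLogger(__name__)

def log_no_permission(fault):
    # 'privilegeId' and 'object' are the fields documented for vim.fault.NoPermission.
    LOG.error("Operation denied by vCenter/ESX: missing privilege %s on object %s",
              getattr(fault, 'privilegeId', 'unknown'),
              getattr(fault, 'object', 'unknown'))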

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1289627/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338450] Re: Unable to delete Stacks created with Failed status

2014-07-07 Thread Amit Prakash Pandey
This is not a problem any more, so I am taking this bug off the radar.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1338450

Title:
  Unable to delete Stacks created with Failed status

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When I tried to Launch Stack, it showed the status as Failed. Now if I try
  to delete that stack, it gives a ValueError at /project/stacks/.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1338450/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338522] [NEW] 'Add' button at Admin->Identity->Groups->'Group management' doesn't work

2014-07-07 Thread Timur Sufiev
Public bug reported:

Upon pressing that button, a modal spinner appears, then disappears, and
here we are again with an empty 'Group Members' table.

Having investigated it a bit, I found that the `data` string that is
appended here
https://github.com/openstack/horizon/blob/2014.2.b1/horizon/static/horizon/js/horizon.modals.js#L43
is not appended as it is supposed to be: instead of adding <div class="modal
hide">..</div> + <script>...</script> into <div id='modal_wrapper'></div>,
only <script>...</script> is appended. Also, after pressing the 'Add' button
several times there will be multiple identical <script> nodes inside the
modal_wrapper div. I do not understand the root cause of this bug, but
moving <script>...</script> into <div class="modal hide">...</div> solved
the problem.

** Affects: horizon
 Importance: Undecided
 Assignee: Timur Sufiev (tsufiev-x)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1338522

Title:
  'Add' button at Admin->Identity->Groups->'Group management' doesn't
  work

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Upon pressing that button, a modal spinner appears, then disappears,
  and here we are again with an empty 'Group Members' table.

  Having investigated it a bit, I found that the `data` string that is
  appended here
  https://github.com/openstack/horizon/blob/2014.2.b1/horizon/static/horizon/js/horizon.modals.js#L43
  is not appended as it is supposed to be: instead of adding <div class="modal
  hide">..</div> + <script>...</script> into <div
  id='modal_wrapper'></div>, only <script>...</script> is appended. Also,
  after pressing the 'Add' button several times there will be multiple
  identical <script> nodes inside the modal_wrapper div. I do not understand
  the root cause of this bug, but moving <script>...</script> into <div
  class="modal hide">...</div> solved the problem.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1338522/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338534] [NEW] Login error after session timeout

2014-07-07 Thread Robin Wang
Public bug reported:

Reproduce Procedure:

1. Login
2. Do nothing and wait till session timeout
3. Log in again. Horizon asks you to log in twice: the first time you log in with 
the correct user/password, it shows a session timeout; you log in again, and it 
enters the dashboard as expected.

The expected behavior is that the user sees the dashboard the first time they
log in after a session timeout, without entering user/password twice.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1338534

Title:
  Login error  after session timeout

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Reproduce Procedure:

  1. Login
  2. Do nothing and wait till session timeout
  3. Log in again. Horizon asks you to log in twice: the first time you log in 
with the correct user/password, it shows a session timeout; you log in again, it 
enters the dashboard as expected.

  The expected behavior is that the user sees the dashboard the first time they
  log in after a session timeout, without entering user/password twice.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1338534/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1310513] Re: Unable to log in from vnc console

2014-07-07 Thread Qiu Yu
Just found out it should be an image issue.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1310513

Title:
  Unable to log in from vnc console

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Versions
  
  Nova: Havana 2013.2.3 release
  Libvirt: 0.9.13-0ubuntu12.2~cloud0
  Guest Image: ubuntu-12.04.2-server-amd64

  Steps to reproduce
  --
  1. boot up guest instance with ubuntu-12.04.2-server-amd64 image
  2. open vnc console from horizon, or from `nova get-vnc-console <server-uuid> 
novnc` link.
  3. server console shows up
  4. however, no login prompt shows up in vnc console

  Expected
  -
  Able to login from vnc console

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1310513/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1337821] [NEW] Volume attach fails while attaching to an instance that is booted from volume

2014-07-07 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

I have booted an instance from a volume; it booted successfully.
Now I try to attach another volume to the same instance, and it fails.
See the stack trace:


2014-07-04 08:56:11.391 TRACE oslo.messaging.rpc.dispatcher raise 
exception.InvalidDevicePath(path=root_device_name)
2014-07-04 08:56:11.391 TRACE oslo.messaging.rpc.dispatcher InvalidDevicePath: 
The supplied device path (vda) is invalid.
2014-07-04 08:56:11.391 TRACE oslo.messaging.rpc.dispatcher
2014-07-04 08:56:11.396 ERROR oslo.messaging._drivers.common 
[req-648122d5-fd39-495b-a3a7-a96bd32091d6 admin admin] Returning exception The 
supplied device path (vda) is invalid. to caller
2014-07-04 08:56:11.396 ERROR oslo.messaging._drivers.common 
[req-648122d5-fd39-495b-a3a7-a96bd32091d6 admin admin] ['Traceback (most recent 
call last):\n', '  File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
134, in _dispatch_and_reply\nincoming.message))\n', '  File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
177, in _dispatch\nreturn self._do_dispatch(endpoint, method, ctxt, 
args)\n', '  File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
123, in _do_dispatch\nresult = getattr(endpoint, method)(ctxt, 
**new_args)\n', '  File /opt/stack/nova/nova/compute/manager.py, line 401, in 
decorated_function\nreturn function(self, context, *args, **kwargs)\n', '  
File /opt/stack/nova/nova/exception.py, line 88, in wrapped\npayload)\n', 
'  File /opt/stack/nova/nova/openstack/common/excutils.py, line 82, in 
__exit__\nsix.reraise(self.type_, self.value, self.tb)\n
 ', '  File /opt/stack/nova/nova/exception.py, line 71, in wrapped\n
return f(self, context, *args, **kw)\n', '  File 
/opt/stack/nova/nova/compute/manager.py, line 286, in decorated_function\n
pass\n', '  File /opt/stack/nova/nova/openstack/common/excutils.py, line 82, 
in __exit__\nsix.reraise(self.type_, self.value, self.tb)\n', '  File 
/opt/stack/nova/nova/compute/manager.py, line 272, in decorated_function\n
return function(self, context, *args, **kwargs)\n', '  File 
/opt/stack/nova/nova/compute/manager.py, line 314, in decorated_function\n
kwargs[\'instance\'], e, sys.exc_info())\n', '  File 
/opt/stack/nova/nova/openstack/common/excutils.py, line 82, in __exit__\n
six.reraise(self.type_, self.value, self.tb)\n', '  File 
/opt/stack/nova/nova/compute/manager.py, line 302, in decorated_function\n
return function(self, context, *args, **kwargs)\n', '  File 
/opt/stack/nova/nova/compute/manager.py, line 4201, in 
reserve_block_device_name\nretur
 n do_reserve()\n', '  File 
/opt/stack/nova/nova/openstack/common/lockutils.py, line 249, in inner\n
return f(*args, **kwargs)\n', '  File 
/opt/stack/nova/nova/compute/manager.py, line 4188, in do_reserve\n
context, instance, bdms, device)\n', '  File 
/opt/stack/nova/nova/compute/utils.py, line 106, in 
get_device_name_for_instance\nmappings[\'root\'], device)\n', '  File 
/opt/stack/nova/nova/compute/utils.py, line 155, in get_next_device_name\n
raise exception.InvalidDevicePath(path=root_device_name)\n', 
'InvalidDevicePath: The supplied device path (vda) is invalid.\n']

** Affects: nova
 Importance: Undecided
 Status: Invalid

-- 
Volume attach fails while attaching to an instance that is booted from volume
https://bugs.launchpad.net/bugs/1337821
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1337821] Re: Volume attach fails while attaching to an instance that is booted from volume

2014-07-07 Thread Ajay Bajaj
** Project changed: cinder => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1337821

Title:
  Volume attach fails while attaching to an instance that is booted from
  volume

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I have booted an instance from a volume; it booted successfully.
  Now I try to attach another volume to the same instance, and it fails.
  See the stack trace:

  
  2014-07-04 08:56:11.391 TRACE oslo.messaging.rpc.dispatcher raise 
exception.InvalidDevicePath(path=root_device_name)
  2014-07-04 08:56:11.391 TRACE oslo.messaging.rpc.dispatcher 
InvalidDevicePath: The supplied device path (vda) is invalid.
  2014-07-04 08:56:11.391 TRACE oslo.messaging.rpc.dispatcher
  2014-07-04 08:56:11.396 ERROR oslo.messaging._drivers.common 
[req-648122d5-fd39-495b-a3a7-a96bd32091d6 admin admin] Returning exception The 
supplied device path (vda) is invalid. to caller
  2014-07-04 08:56:11.396 ERROR oslo.messaging._drivers.common 
[req-648122d5-fd39-495b-a3a7-a96bd32091d6 admin admin] ['Traceback (most recent 
call last):\n', '  File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
134, in _dispatch_and_reply\nincoming.message))\n', '  File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
177, in _dispatch\nreturn self._do_dispatch(endpoint, method, ctxt, 
args)\n', '  File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
123, in _do_dispatch\nresult = getattr(endpoint, method)(ctxt, 
**new_args)\n', '  File /opt/stack/nova/nova/compute/manager.py, line 401, in 
decorated_function\nreturn function(self, context, *args, **kwargs)\n', '  
File /opt/stack/nova/nova/exception.py, line 88, in wrapped\npayload)\n', 
'  File /opt/stack/nova/nova/openstack/common/excutils.py, line 82, in 
__exit__\nsix.reraise(self.type_, self.value, self.tb)
 \n', '  File /opt/stack/nova/nova/exception.py, line 71, in wrapped\n
return f(self, context, *args, **kw)\n', '  File 
/opt/stack/nova/nova/compute/manager.py, line 286, in decorated_function\n
pass\n', '  File /opt/stack/nova/nova/openstack/common/excutils.py, line 82, 
in __exit__\nsix.reraise(self.type_, self.value, self.tb)\n', '  File 
/opt/stack/nova/nova/compute/manager.py, line 272, in decorated_function\n
return function(self, context, *args, **kwargs)\n', '  File 
/opt/stack/nova/nova/compute/manager.py, line 314, in decorated_function\n
kwargs[\'instance\'], e, sys.exc_info())\n', '  File 
/opt/stack/nova/nova/openstack/common/excutils.py, line 82, in __exit__\n
six.reraise(self.type_, self.value, self.tb)\n', '  File 
/opt/stack/nova/nova/compute/manager.py, line 302, in decorated_function\n
return function(self, context, *args, **kwargs)\n', '  File 
/opt/stack/nova/nova/compute/manager.py, line 4201, in 
reserve_block_device_name\nret
 urn do_reserve()\n', '  File 
/opt/stack/nova/nova/openstack/common/lockutils.py, line 249, in inner\n
return f(*args, **kwargs)\n', '  File 
/opt/stack/nova/nova/compute/manager.py, line 4188, in do_reserve\n
context, instance, bdms, device)\n', '  File 
/opt/stack/nova/nova/compute/utils.py, line 106, in 
get_device_name_for_instance\nmappings[\'root\'], device)\n', '  File 
/opt/stack/nova/nova/compute/utils.py, line 155, in get_next_device_name\n
raise exception.InvalidDevicePath(path=root_device_name)\n', 
'InvalidDevicePath: The supplied device path (vda) is invalid.\n']

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1337821/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338550] [NEW] V3 API project/user/group list only work with domain scoped token

2014-07-07 Thread mouadino
Public bug reported:

From the policy.json of the V3 API:

"admin_and_matching_domain_id": "rule:admin_required and domain_id:%(domain_id)s",
"identity:list_projects": "rule:admin_required and domain_id:%(domain_id)s",
...
"identity:list_users": "rule:cloud_admin or rule:admin_and_matching_domain_id",

This specifies that if an admin user of a domain asks for GET /v3/users
for a given domain-id, the request will only work if the token was scoped
to that domain, but not if it was scoped to a project in that domain.

A patch is coming soon that will hopefully clarify this further.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1338550

Title:
  V3 API project/user/group list  only work with domain scoped token

Status in OpenStack Identity (Keystone):
  New

Bug description:
  From the policy.json of the V3 API:

  "admin_and_matching_domain_id": "rule:admin_required and domain_id:%(domain_id)s",
  "identity:list_projects": "rule:admin_required and domain_id:%(domain_id)s",
  ...
  "identity:list_users": "rule:cloud_admin or rule:admin_and_matching_domain_id",

  This specifies that if an admin user of a domain asks for GET /v3/users
  for a given domain-id, the request will only work if the token was scoped
  to that domain, but not if it was scoped to a project in that domain.

  A patch is coming soon that will hopefully clarify this further.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1338550/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338551] [NEW] Failure in interface-attach may leave port around

2014-07-07 Thread Drew Thorstensen
Public bug reported:

When the interface-attach action is run, it may be passed in a network
(but no port identifier).  Therefore, the action allocates a port on
that network.  However, if the attach method fails for some reason, the
port is not cleaned up.

This behavior would be appropriate if the invoker had passed in a port
identifier.  However if nova created the port for the action and that
action failed, the port should be cleaned up as part of the failure.

The allocation of the port occurs in nova/compute/manager.py in the
attach_interface method.  Recommend that we de-allocate the port for the
instance if no port_id was passed in.
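
A hedged sketch of the suggested cleanup in ComputeManager.attach_interface,
assuming the network API exposes allocate/deallocate helpers for instance
ports as described above (the control flow is illustrative, not the actual
patch):

# Sketch (illustrative): only delete ports that nova itself allocated.
nova_allocated_port = port_id is None
network_info = self.network_api.allocate_port_for_instance(
    context, instance, port_id, network_id, requested_ip)
try:
    self.driver.attach_interface(instance, image_meta, network_info[0])
except Exception:
    if nova_allocated_port:
        # The caller never knew about this port, so clean it up on failure.
        self.network_api.deallocate_port_for_instance(
            context, instance, network_info[0]['id'])
    raise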

** Affects: nova
 Importance: Undecided
 Assignee: Drew Thorstensen (thorst)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Drew Thorstensen (thorst)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1338551

Title:
  Failure in interface-attach may leave port around

Status in OpenStack Compute (Nova):
  New

Bug description:
  When the interface-attach action is run, it may be passed in a network
  (but no port identifier).  Therefore, the action allocates a port on
  that network.  However, if the attach method fails for some reason,
  the port is not cleaned up.

  This behavior would be appropriate if the invoker had passed in a port
  identifier.  However if nova created the port for the action and that
  action failed, the port should be cleaned up as part of the failure.

  The allocation of the port occurs in nova/compute/manager.py in the
  attach_interface method.  Recommend that we de-allocate the port for
  the instance if no port_id was passed in.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1338551/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1337821] Re: Volume attach fails while attaching to an instance that is booted from volume

2014-07-07 Thread Ajay Bajaj
This belongs to nova, so I moved it to nova and set the status back to New.

** Changed in: nova
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1337821

Title:
  Volume attach fails while attaching to an instance that is booted from
  volume

Status in OpenStack Compute (Nova):
  New

Bug description:
  I have booted an instance from a volume; it booted successfully.
  Now I try to attach another volume to the same instance, and it fails.
  See the stack trace:

  
  2014-07-04 08:56:11.391 TRACE oslo.messaging.rpc.dispatcher raise 
exception.InvalidDevicePath(path=root_device_name)
  2014-07-04 08:56:11.391 TRACE oslo.messaging.rpc.dispatcher 
InvalidDevicePath: The supplied device path (vda) is invalid.
  2014-07-04 08:56:11.391 TRACE oslo.messaging.rpc.dispatcher
  2014-07-04 08:56:11.396 ERROR oslo.messaging._drivers.common 
[req-648122d5-fd39-495b-a3a7-a96bd32091d6 admin admin] Returning exception The 
supplied device path (vda) is invalid. to caller
  2014-07-04 08:56:11.396 ERROR oslo.messaging._drivers.common 
[req-648122d5-fd39-495b-a3a7-a96bd32091d6 admin admin] ['Traceback (most recent 
call last):\n', '  File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
134, in _dispatch_and_reply\nincoming.message))\n', '  File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
177, in _dispatch\nreturn self._do_dispatch(endpoint, method, ctxt, 
args)\n', '  File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
123, in _do_dispatch\nresult = getattr(endpoint, method)(ctxt, 
**new_args)\n', '  File /opt/stack/nova/nova/compute/manager.py, line 401, in 
decorated_function\nreturn function(self, context, *args, **kwargs)\n', '  
File /opt/stack/nova/nova/exception.py, line 88, in wrapped\npayload)\n', 
'  File /opt/stack/nova/nova/openstack/common/excutils.py, line 82, in 
__exit__\nsix.reraise(self.type_, self.value, self.tb)
 \n', '  File /opt/stack/nova/nova/exception.py, line 71, in wrapped\n
return f(self, context, *args, **kw)\n', '  File 
/opt/stack/nova/nova/compute/manager.py, line 286, in decorated_function\n
pass\n', '  File /opt/stack/nova/nova/openstack/common/excutils.py, line 82, 
in __exit__\nsix.reraise(self.type_, self.value, self.tb)\n', '  File 
/opt/stack/nova/nova/compute/manager.py, line 272, in decorated_function\n
return function(self, context, *args, **kwargs)\n', '  File 
/opt/stack/nova/nova/compute/manager.py, line 314, in decorated_function\n
kwargs[\'instance\'], e, sys.exc_info())\n', '  File 
/opt/stack/nova/nova/openstack/common/excutils.py, line 82, in __exit__\n
six.reraise(self.type_, self.value, self.tb)\n', '  File 
/opt/stack/nova/nova/compute/manager.py, line 302, in decorated_function\n
return function(self, context, *args, **kwargs)\n', '  File 
/opt/stack/nova/nova/compute/manager.py, line 4201, in 
reserve_block_device_name\nret
 urn do_reserve()\n', '  File 
/opt/stack/nova/nova/openstack/common/lockutils.py, line 249, in inner\n
return f(*args, **kwargs)\n', '  File 
/opt/stack/nova/nova/compute/manager.py, line 4188, in do_reserve\n
context, instance, bdms, device)\n', '  File 
/opt/stack/nova/nova/compute/utils.py, line 106, in 
get_device_name_for_instance\nmappings[\'root\'], device)\n', '  File 
/opt/stack/nova/nova/compute/utils.py, line 155, in get_next_device_name\n
raise exception.InvalidDevicePath(path=root_device_name)\n', 
'InvalidDevicePath: The supplied device path (vda) is invalid.\n']

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1337821/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338567] [NEW] delete the image using v2 api when we upload a image using v1 api, glance don't delete the image data after finishing the uploading.

2014-07-07 Thread Hua Wang
Public bug reported:

First, I use the glance CLI to upload an image:
glance image-create --name myimage --disk-format=raw --container-format=bare 
--file /path/to/file.img
At the same time, I use the v2 API to delete the image:
curl -i -X DELETE -H 'X-Auth-Token: $TOKEN_ID' -H 'Content-Type: 
application/json' http://localhost:9292/v2/images/$IMAGE_ID
After the upload is finished, the response shows that the image status is 
active and the image is deleted, but the image data that has been uploaded has 
not been removed from the glance store backend.
The right response should be "Image  could not be found after upload. The 
image may have been deleted during the upload.", as we see when we upload an 
image using the v1 API and delete using the v1 API, or when we upload using 
the v2 API and delete using the v2 API.

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1338567

Title:
  delete the image using v2 api when we upload a image using v1 api,
  glance don't delete the image data after finishing the uploading.

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  First, I use the glance CLI to upload an image:
  glance image-create --name myimage --disk-format=raw --container-format=bare 
--file /path/to/file.img
  At the same time, I use the v2 API to delete the image:
  curl -i -X DELETE -H 'X-Auth-Token: $TOKEN_ID' -H 'Content-Type: 
application/json' http://localhost:9292/v2/images/$IMAGE_ID
  After the upload is finished, the response shows that the image status is 
active and the image is deleted, but the image data that has been uploaded has 
not been removed from the glance store backend.
  The right response should be "Image  could not be found after upload. The 
image may have been deleted during the upload.", as we see when we upload an 
image using the v1 API and delete using the v1 API, or when we upload using 
the v2 API and delete using the v2 API.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1338567/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1297962] Re: [sru] Nova-compute doesnt start

2014-07-07 Thread Launchpad Bug Tracker
This bug was fixed in the package nova - 1:2014.1.1-0ubuntu1

---
nova (1:2014.1.1-0ubuntu1) trusty; urgency=medium

  * Resynchronize with stable/icehouse (867341f) (LP: #1328134):
- [867341f] Fix security group race condition while listing and deleting 
rules
- [ffcb176] VMware: ensure rescue instance is deleted when instance is 
deleted
- [fe4fe70] VMware: Log additional details of suds faults
- [43f0437] Add info_cache as expected attribute when evacuate instance
- [a2da9ce] VMware: uncaught exception during snapshot deletion
- [1a45944] Catch InstanceNotFound exception if migration fails
- [ee374f1] Do not wait for neutron event if not powering on libvirt domain
- [705ad64] Reap child processes gracefully if greenlet thread gets killed
- [f769bf8] Fixes arguments parsing when executing command
- [bedb66f] Use one query instead of two for quota_usages
- [422decd] VMWare - Check for compute node before triggering destroy
- [6629116] Use debug level logging in unit tests, but don't save them.
- [088b718] support local debug logging
- [080f785] Revert Use debug level logging during unit tests
- [fb03028] VMWare: add power off vm before detach disk during unrescue
- [d93427a] Check for None or timestamp in availability zone api sample
- [f5c3330f] Pass configured auth strategy to neutronclient
- [74d1043] remove unneeded call to network_api on rebuild_instance
- [f1fdb3c] Remove unnecessary call to fetch info_cache
- [395ec82] Remove metadata's network-api dependence on the database
- [a48d268] InvalidCPUInfo exception added to except block
- [77392a9] Moved the registration of lifecycle event handler in init_host()
- [40ae1ee] Fix display of server group members
- [66c7ca1] Change errors_out_migration decorator to work with RPC
- [e1e140b] Don't explode if we fail to unplug VIFs after a failed boot
- [c816488] Remove unneeded call to fetch network info on shutdown
- [7f9f3ef] Don't overwrite instance object with dict in _init_instance()
- [2728f1e] Fix bug detach volume fails with KeyError in EC2
   * debian/patches/libvirt-Handle-unsupported-host-capabilities.patch: Fix 
exception
 when starting LXC containers. (LP: #1297962)
 -- Chuck Short zul...@ubuntu.com   Tue, 24 Jun 2014 10:47:47 -0400

** Changed in: nova (Ubuntu Trusty)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1297962

Title:
  [sru] Nova-compute doesnt start

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  In Progress
Status in “nova” package in Ubuntu:
  Confirmed
Status in “nova” source package in Trusty:
  Fix Released

Bug description:
  2014-03-26 13:08:21.268 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/tpool.py, line 77, in tworker
  2014-03-26 13:08:21.268 TRACE nova.openstack.common.threadgroup rv = 
meth(*args,**kwargs)
  2014-03-26 13:08:21.268 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/libvirt.py, line 3127, in baselineCPU
  2014-03-26 13:08:21.268 TRACE nova.openstack.common.threadgroup if ret is 
None: raise libvirtError ('virConnectBaselineCPU() failed', conn=self)
  2014-03-26 13:08:21.268 TRACE nova.openstack.common.threadgroup libvirtError: 
this function is not supported by the connection driver: virConnectBaselineCPU

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1297962/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1331912] Re: [OSSA 2014-022] V2 Trusts allow trustee to emulate trustor in other projects (CVE-2014-3520)

2014-07-07 Thread Thierry Carrez
** Changed in: ossa
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1331912

Title:
  [OSSA 2014-022] V2 Trusts allow trustee to emulate trustor in other
  projects (CVE-2014-3520)

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone havana series:
  Fix Committed
Status in Keystone icehouse series:
  Fix Committed
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  When you consume a trust in a v2 token you must provide the project id
  as part of your auth. This is a bug and should be reported after this.

  If the trustee requests a trust scoped token to a project different to
  the one the trust is created for AND the trustor has the required
  roles in the other project then the token will be provided with those
  roles on the other project.

  Attaching a script to show the problem.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1331912/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1335208] Re: Shell injection possibility in cmd/control.py

2014-07-07 Thread Thierry Carrez
** Information type changed from Private Security to Public

** Changed in: ossa
   Status: Incomplete => Invalid

** Tags added: security

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1335208

Title:
  Shell injection possibility in cmd/control.py

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in OpenStack Security Advisories:
  Invalid

Bug description:
  The glance/cmd/control.py file contains a possible shell injection
  vulnerability:
  https://github.com/openstack/glance/blob/master/glance/cmd/control.py#L134
  .  Setting 'shell=True' here opens the possibility of shell injection
  by setting server to something like '; rm -rf /'.  This will cause the
  command 'rm -rf /' to be run with the privileges of the user that ran
  Glance.

  This may not be a major security concern at this time because the only
  place that I found for 'server' to come from is a Glance configuration
  file, which should be locked down.  Only privileged users should have
  write access to the config file, and if they want to do bad things on
  the system there are easier ways.

  Still, 'shell=True' appears to be completely unnecessary for this
  call.  Simply omitting the shell parameter here will cause it to
  revert to the default behavior, which requires that the command to be
run be specified in a separate parameter from the arguments to the
  command.  This effectively prevents shell injection vulnerabilities.
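
  For illustration, a minimal sketch of the two call styles (the command name
  and the injected value below are made up for the example; this is not the
  actual Glance code):

    import subprocess

    # Hypothetical config-supplied value carrying an injection attempt.
    server = "api; touch /tmp/injected"

    # Unsafe: the whole string is handed to /bin/sh, so the ';' starts a
    # second command that runs with Glance's privileges.
    # subprocess.call("some-start-script %s" % server, shell=True)

    # Safe: pass an argument vector and leave shell at its default of
    # False; the malicious value becomes a single literal argument.
    subprocess.call(["echo", "starting", server])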

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1335208/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338606] [NEW] if a deadlock exception is got a retry logic should be in place to retry the operation

2014-07-07 Thread Rossella Sblendido
Public bug reported:

In Neutron there is no retry logic in case a DB deadlock is raised.
If a deadlock occurs the operation should be retried.

** Affects: neutron
 Importance: Undecided
 Assignee: Rossella Sblendido (rossella-o)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1338606

Title:
  if a deadlock exception is got a retry logic should be in place to
  retry the operation

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  In Neutron there is no retry logic in case a DB deadlock is raised.
  If a deadlock occurs the operation should be retried.
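
  As a rough illustration (not the eventual Neutron patch), a generic retry
  wrapper around a DB operation could look like the sketch below; the
  exception class here is a stand-in for whatever deadlock error the DB
  layer actually raises:

    import functools
    import time

    def retry_on_deadlock(deadlock_exc, retries=3, delay=0.5):
        # Retry the wrapped DB operation when the given deadlock
        # exception is raised, pausing briefly between attempts.
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                for attempt in range(1, retries + 1):
                    try:
                        return func(*args, **kwargs)
                    except deadlock_exc:
                        if attempt == retries:
                            raise
                        time.sleep(delay)
            return wrapper
        return decorator

    class FakeDeadlock(Exception):
        # Stand-in used only to keep the sketch self-contained.
        pass

    @retry_on_deadlock(FakeDeadlock, retries=5)
    def create_port(context, port):
        pass  # DB work that may deadlock under concurrent load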

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1338606/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338614] [NEW] Backgrounded resizing does not work

2014-07-07 Thread Mohammed Naser
Public bug reported:

When setting resize_rootfs to 'noblock', cloud-init should fork a new
process and continue with its own initialization process.  However, this
appears to be broken at the moment; as the logs below show, cloud-init
still blocks on the resize:

Jul  7 12:34:20 localhost [CLOUDINIT] cc_resizefs.py[DEBUG]: Resizing (via 
forking) root filesystem (type=ext4, val=noblock)
Jul  7 12:34:20 localhost [CLOUDINIT] util.py[WARNING]: Failed forking and 
calling callback NoneType
Jul  7 12:34:20 localhost [CLOUDINIT] util.py[DEBUG]: Failed forking and 
calling callback NoneType#012Traceback (most recent call last):#012  File 
/usr/lib/python2.6/site-packages/cloudinit/util.py, line 220, in fork_cb#012  
  child_cb(*args)#012TypeError: 'NoneType' object is not callable

Also, when looking at timings, you can see that it was blocked on it for
the whole time

Jul  7 12:33:38 localhost [CLOUDINIT] util.py[DEBUG]: Cloud-init v. 0.7.4 
running 'init' at Mon, 07 Jul 2014 12:33:38 +. Up 5.67 seconds.
Jul  7 12:34:20 localhost [CLOUDINIT] util.py[DEBUG]: backgrounded Resizing 
took 41.487 seconds
Jul  7 12:34:20 localhost [CLOUDINIT] util.py[DEBUG]: cloud-init mode 'init' 
took 41.799 seconds (41.80)

** Affects: cloud-init
 Importance: Undecided
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1338614

Title:
  Backgrounded resizing does not work

Status in Init scripts for use on cloud images:
  Confirmed

Bug description:
  When setting resize_rootfs to 'noblock', cloud-init should fork a new
  process and continue with its own initialization process.  However,
  this appears to be broken at the moment; as the logs below show,
  cloud-init still blocks on the resize:

  Jul  7 12:34:20 localhost [CLOUDINIT] cc_resizefs.py[DEBUG]: Resizing (via 
forking) root filesystem (type=ext4, val=noblock)
  Jul  7 12:34:20 localhost [CLOUDINIT] util.py[WARNING]: Failed forking and 
calling callback NoneType
  Jul  7 12:34:20 localhost [CLOUDINIT] util.py[DEBUG]: Failed forking and 
calling callback NoneType#012Traceback (most recent call last):#012  File 
/usr/lib/python2.6/site-packages/cloudinit/util.py, line 220, in fork_cb#012  
  child_cb(*args)#012TypeError: 'NoneType' object is not callable

  Also, when looking at timings, you can see that it was blocked on it
  for the whole time

  Jul  7 12:33:38 localhost [CLOUDINIT] util.py[DEBUG]: Cloud-init v. 0.7.4 
running 'init' at Mon, 07 Jul 2014 12:33:38 +. Up 5.67 seconds.
  Jul  7 12:34:20 localhost [CLOUDINIT] util.py[DEBUG]: backgrounded Resizing 
took 41.487 seconds
  Jul  7 12:34:20 localhost [CLOUDINIT] util.py[DEBUG]: cloud-init mode 'init' 
took 41.799 seconds (41.80)
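
  For reference, a minimal sketch of what the noblock path is meant to do,
  with a guard for the failure shown in the traceback (this is a simplified
  model, not the actual cloud-init util.fork_cb implementation):

    import os

    def fork_cb(child_cb, *args):
        # Run the callback in a forked child so the parent can carry on.
        if not callable(child_cb):
            # Guard against the failure above, where the callback arrives
            # as None and the child crashes instead of doing the resize.
            raise TypeError("fork_cb requires a callable, got %r" % (child_cb,))
        if os.fork() == 0:
            try:
                child_cb(*args)
            finally:
                os._exit(0)

    # Example: background a (pretend) filesystem resize.
    fork_cb(lambda dev: None, "/dev/root")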

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1338614/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338630] [NEW] Integration test should run full screen

2014-07-07 Thread Daniel Korn
Public bug reported:

Integration tests using Selenium WebDriver currently run in a medium-sized 
window (Selenium's default size for the Firefox browser).
Maximizing the Firefox window requires a simple change and will improve how 
the tests are displayed at run time.

TODO:
-
Add maximize_window() method to the driver in 
openstack_dashboard/test/integration_tests/helpers.py -- BaseTestCase -- 
setUp.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: integration-tests selenium

** Description changed:

- Integration tests using Selenium Webdriver are currently running in a medium 
size window (Selenium's defualt size for Firefox browser). 
+ Integration tests using Selenium Webdriver are currently running in a medium 
size window (Selenium's defualt size for Firefox browser).
  Maximizing the Firefox's window size requires a simple change and will 
improve the tests display on run time.
  
  TODO:
  -
  Add maximize_window() method to the driver in 
openstack_dashboard/test/integration_tests/helpers.py -- BaseTestCase -- 
setUp.

** Tags added: selenium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1338630

Title:
  Integration test should run full screen

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Integration tests using Selenium WebDriver currently run in a medium-sized 
window (Selenium's default size for the Firefox browser).
  Maximizing the Firefox window requires a simple change and will improve how 
the tests are displayed at run time.

  TODO:
  -
  Add maximize_window() method to the driver in 
openstack_dashboard/test/integration_tests/helpers.py -- BaseTestCase -- 
setUp.
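
  A rough sketch of the change (the base-class details are simplified here,
  assuming a testtools-style TestCase and a Firefox driver created in setUp):

    import testtools
    from selenium import webdriver

    class BaseTestCase(testtools.TestCase):

        def setUp(self):
            super(BaseTestCase, self).setUp()
            self.driver = webdriver.Firefox()
            # Run the suite in a maximized browser window instead of
            # Selenium's default medium-sized one.
            self.driver.maximize_window()
            self.addCleanup(self.driver.quit)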

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1338630/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338672] [NEW] Nova might spawn without waiting for network-vif-plugged event

2014-07-07 Thread Salvatore Orlando
Public bug reported:

This applies only when the nova/neutron event reporting mechanism is
enabled.

It has been observed that in some cases Nova spawns an instance without
waiting for network-vif-plugged event, even if the vif was unplugged and
then plugged again.

This happens because the status of the VIF in the network info cache is not 
updated when such events are received.
Therefore the cache contains an out-of-date value and the VIF might already be 
in status ACTIVE when the instance is being spawned. However there is no 
guarantee that this would be the actual status of the VIF.

For instance in this case there are only two instances in which nova
starts waiting for 'network-vif-plugged' on f800d4a8-0a01-475f-
bd34-8d975ce6f1ab. However this instance is used in
tempest.api.compute.servers.test_server_actions, and the tests in this
suite should trigger more than 2 events requiring a respawn of an
instance after unplugging vifs.

From what can be gathered by logs, this issue, if confirmed, should
occur only when actions such as stop, resize, reboot_hard are executed
on an instance.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1338672

Title:
  Nova might spawn without waiting for network-vif-plugged event

Status in OpenStack Compute (Nova):
  New

Bug description:
  This applies only when the nova/neutron event reporting mechanism is
  enabled.

  It has been observed that in some cases Nova spawns an instance
  without waiting for network-vif-plugged event, even if the vif was
  unplugged and then plugged again.

  This happens because the status of the VIF in the network info cache is not 
updated when such events are received.
  Therefore the cache contains an out-of-date value and the VIF might already 
be in status ACTIVE when the instance is being spawned. However there is no 
guarantee that this would be the actual status of the VIF.

  For instance in this case there are only two instances in which nova
  starts waiting for 'network-vif-plugged' on f800d4a8-0a01-475f-
  bd34-8d975ce6f1ab. However this instance is used in
  tempest.api.compute.servers.test_server_actions, and the tests in this
  suite should trigger more than 2 events requiring a respawn of an
  instance after unplugging vifs.

  From what can be gathered by logs, this issue, if confirmed, should
  occur only when actions such as stop, resize, reboot_hard are executed
  on an instance.
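
  As a toy illustration of why the stale cache matters (this is not the
  actual nova code), the set of events nova decides to wait for can be
  thought of as:

    def events_to_wait_for(network_info):
        # Nova only registers a wait for "network-vif-plugged" on VIFs it
        # believes are not yet active.  If the cached 'active' flag is
        # stale (still True from before the unplug), nothing is registered
        # and the spawn proceeds without waiting.
        return [("network-vif-plugged", vif["id"])
                for vif in network_info
                if not vif.get("active", True)]

    # With a stale cache entry the list is empty, so nothing is awaited:
    print(events_to_wait_for([{"id": "vif-1", "active": True}]))  # []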

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1338672/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1329546] Re: Upon rebuild instances might never get to Active state

2014-07-07 Thread Salvatore Orlando
Contrary to what is claimed in the bug description, the actual root cause
is a different one, and it lies in neutron.

For events like rebuilding or rebooting an instance, a VIF disappears and 
reappears rather quickly.
In this case the OVS agent loop starts processing the VIF, and then skips the 
processing when it realizes the VIF is no longer on the integration bridge.

However, the agent keeps the VIF in its set of 'current' VIFs. This means that
when the VIF is plugged again it is not processed, and hence the problem.

Removing nova from affected projects. Patch will follow up soon.
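
As a toy model of the bookkeeping described above (not the real OVS agent
code), the loop and the kind of adjustment implied by this analysis look
roughly like this:

    def rpc_loop(scan_bridge, configure_port):
        previous = set()
        while True:
            current = set(scan_bridge())      # ports on br-int right now
            for port_id in current - previous:
                if port_id not in scan_bridge():
                    # The port vanished while being processed (e.g. a
                    # rebuild unplugged it).  Leaving it in 'current'
                    # means the next plug of the same port never shows up
                    # in the "added" set and is silently skipped; dropping
                    # it lets a later iteration treat it as newly added.
                    current.discard(port_id)
                    continue
                configure_port(port_id)
            previous = current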



** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1329546

Title:
  Upon rebuild instances might never get to Active state

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  VMware mine sweeper for Neutron (*) recently showed a 100% failure
  rate on tempest.api.compute.v3.servers.test_server_actions

  Logs for two instances of these failures are available at [1] and [2]
  The failure manifested as an instance unable to go active after a rebuild.
  A bit of instrumentation and log analysis revealed no obvious error on the 
neutron side - and also that the instance was actually in running state even 
though its task state was rebuilding/spawning.

  N-API logs [3] revealed that the instance spawn was timing out on a
  missed notification from neutron regarding VIF plug - however the same
  log showed such notification was received [4]

  It turns out that, after rebuild, the instance network cache still had
  'active': False for the instance's VIF, even though the status of the
  corresponding port was 'ACTIVE'. This happened because after the
  network-vif-plugged event was received, nothing triggered a refresh of
  the instance network info. For this reason, the VM, after a rebuild,
  kept waiting for an event which obviously was never sent from neutron.

  While this manifested only on mine sweeper, this appears to be a nova bug - 
it shows up in VMware minesweeper only because of the way the plugin 
synchronizes with the backend when reporting the operational status of a port.
  A simple solution for this problem would be to reload the instance network 
info cache when network-vif-plugged events are received by nova. (But as the 
reporter knows nothing about nova this might be a very bad idea as well.)

  [1] http://208.91.1.172/logs/neutron/98278/2/413209/testr_results.html
  [2] http://208.91.1.172/logs/neutron/73234/34/413213/testr_results.html
  [3] 
http://208.91.1.172/logs/neutron/73234/34/413213/logs/screen-n-cpu.txt.gz?level=WARNING#_2014-06-06_01_46_36_219
  [4] 
http://208.91.1.172/logs/neutron/73234/34/413213/logs/screen-n-cpu.txt.gz?level=DEBUG#_2014-06-06_01_41_31_767

  (*) runs libvirt/KVM + NSX

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1329546/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338736] [NEW] compute.instance.create.end no longer has launched_at populated

2014-07-07 Thread Andrew Laski
Public bug reported:

The launched_at instance field should be populated with the launch time
in the compute.instance.create.end notification.  Since the move to
build_and_run_instance this field is no longer populated when the
notification is sent.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1338736

Title:
  compute.instance.create.end no longer has launched_at populated

Status in OpenStack Compute (Nova):
  New

Bug description:
  The launched_at instance field should be populated with the launch
  time in the compute.instance.create.end notification.  Since the move
  to build_and_run_instance this field is no longer populated when the
  notification is sent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1338736/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1316822] Re: soft reboot of instance does not ensure iptables rules are present

2014-07-07 Thread Jeremy Stanley
After discussing with Andrew and Thierry, I'm convinced that the
potential behavior change introduced by a backport of that mitigating
commit, when weighed against the amount of social engineering needed to
exploit this in Havana, means this bug is probably better just
documented as a known behavior.

Removed the advisory task and tagged security in case the OSSG has any
interest in documenting this.

** Tags added: security

** Information type changed from Public Security to Public

** No longer affects: ossa

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1316822

Title:
  soft reboot of instance does not ensure iptables rules are present

Status in OpenStack Compute (Nova):
  New

Bug description:
  The iptables rules needed to implement instance security group rules
  get inserted by the _create_domain_and_network function in
  nova/virt/libvirt/driver.py

  This function is called by the following functions: _hard_reboot,
  resume and spawn (also in a couple of migration related functions).

  Doing nova reboot instance_id only does a soft reboot
  (_soft_reboot) and assumes that the rules are already present and
  therefore does not check or try to add them.

  If the instance is stopped (nova stop instance_id) and nova-compute
  is restarted (for example for maintenance or a problem), the iptables
  rules are removed, as observed in the output of iptables -S.

  If the instance is started via nova reboot instance_id the rules are
  NOT reapplied until a service nova-compute restart is issued. I have
  reports that this may affect nova start instance_id as well.

  Depending on if the Cloud is public facing, this opens up a
  potentially huge security vulnerability as an instance can be powered
  on without being protected by any security group rules (not even the
  sg-fallback rule). This is unbeknownst to the instance owner or Cloud
  operators unless they specifically monitor for this situation.

  The code should not do a soft reboot/start; it should error out or fall
  back to a resume (start) or hard reboot if it detects that the domain is
  not running.
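
  A rough sketch of that fallback (the helper names are illustrative; this
  is not the actual nova libvirt driver code):

    def reboot(self, instance, reboot_type="SOFT"):
        dom = self._lookup_domain(instance)   # hypothetical helper
        if reboot_type == "SOFT" and dom is not None and dom.isActive():
            if self._soft_reboot(instance):
                return
        # Domain is not running (or the soft reboot failed): do a hard
        # reboot instead, which goes through _create_domain_and_network()
        # and therefore re-applies the security group iptables rules.
        self._hard_reboot(instance)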

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1316822/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338735] [NEW] Live-migration with volumes creates orphan access records

2014-07-07 Thread Rajini Ram
Public bug reported:

When live migration is performed on instances with volume attached, nova
sends two initiator commands and one terminate connection. This causes
orphan access records in some storage arrays ( tested with Dell
EqualLogic Driver).

Steps to reproduce:
1. Have one controller and two compute nodes set up. Set up a cinder volume on 
iscsi or a storage array.
2. Create an instance
3. Create a volume and attach it to the instance
4. Check the location of the instance (computenode1 or 2):
nova instance1 show
5. Perform live migration of the instance and move it to the second compute node:
nova live-migration instance1 computenode2
6. Check the cinder api log (c-api).
There will be two os-initialize_connection and one os-terminate-connection. 
There should only be one set of initialize and terminate connections.

** Affects: cinder
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: cinder live-migration nova

** Description changed:

  When live migration is performed on instances with volume attached, nova
- sends two initiator commands and one ter5.minate connection. This causes
+ sends two initiator commands and one terminate connection. This causes
  orphan access records in some storage arrays ( tested with Dell
  EqualLogic Driver).
  
  Steps to reproduce:
- 1. Have one controller and two compute node setup. Setup cinder volume on 
iscsi or a storage array. 
+ 1. Have one controller and two compute node setup. Setup cinder volume on 
iscsi or a storage array.
  2. Create an instance
- 3. Create a volume and attach it to the instance 
+ 3. Create a volume and attach it to the instance
  4. Check the location of the instance ( computenode1 or 2)
  nova instance1 show
  4. Perform live migration of the instance and move it to the second compute 
node
  nova live-migration instance1 computenode2
  5. Check the cinder api log. c-api
  There will be two os-initialize_connection and one os-terminate-connection. 
There should only be one set of initialize and terminate connection

** Also affects: cinder
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1338735

Title:
  Live-migration with volumes creates orphan access records

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  When live migration is performed on instances with volume attached,
  nova sends two initiator commands and one terminate connection. This
  causes orphan access records in some storage arrays ( tested with Dell
  EqualLogic Driver).

  Steps to reproduce:
  1. Have one controller and two compute nodes set up. Set up a cinder volume on 
iscsi or a storage array.
  2. Create an instance
  3. Create a volume and attach it to the instance
  4. Check the location of the instance (computenode1 or 2):
  nova instance1 show
  5. Perform live migration of the instance and move it to the second compute 
node:
  nova live-migration instance1 computenode2
  6. Check the cinder api log (c-api).
  There will be two os-initialize_connection and one os-terminate-connection. 
There should only be one set of initialize and terminate connections.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1338735/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338745] [NEW] Add healthcheck middleware

2014-07-07 Thread John Dewey
Public bug reported:

Would be useful for keystone to support a healthcheck URL for
consumption by load balancers.

This middleware should provide the ability to manually disable the
service via the existence of a file on the system's local disk.  This
middleware can also be extended [1] to perform basic application
functionality checks prior to reporting OK.

Having this middleware would give us some flexibility around LB health
checks we do not have today, and IMO would be beneficial.  This is
fairly similar to what swift [2] is doing as well.


[1] 
https://github.com/CiscoSystems/puppet-monit/blob/a459a7314ac4f0250ad8b9c6956a872b949840f1/files/healthcheck.py#L37
[2] 
https://github.com/openstack/swift/blob/0b594bc3afbffe942edebe9cdf02f60c06e627ab/swift/common/middleware/healthcheck.p

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: ops

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1338745

Title:
  Add healthcheck middleware

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Would be useful for keystone to support a healthcheck URL for
  consumption by load balancers.

  This middleware should provide the ability to manually disable the
  service via the existence of a file on the system's local disk.  This
  middleware can also be extended [1] to perform basic application
  functionality checks prior to reporting OK.

  Having this middleware would give us some flexibility around LB health
  checks we do not have today, and IMO would be beneficial.  This is
  fairly similar to what swift [2] is doing as well.

  
  [1] 
https://github.com/CiscoSystems/puppet-monit/blob/a459a7314ac4f0250ad8b9c6956a872b949840f1/files/healthcheck.py#L37
  [2] 
https://github.com/openstack/swift/blob/0b594bc3afbffe942edebe9cdf02f60c06e627ab/swift/common/middleware/healthcheck.p
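
  A minimal sketch of such a middleware, loosely modeled on the swift one
  (the route and disable-file path below are illustrative, not an agreed
  interface):

    import os

    class Healthcheck(object):
        # Answer GET /healthcheck without touching the rest of the stack.

        def __init__(self, app, disable_path="/etc/keystone/healthcheck_disable"):
            self.app = app
            self.disable_path = disable_path

        def __call__(self, environ, start_response):
            if environ.get("PATH_INFO") != "/healthcheck":
                return self.app(environ, start_response)
            if os.path.exists(self.disable_path):
                # Operator dropped the disable file: tell the load
                # balancer to take this node out of rotation.
                start_response("503 Service Unavailable",
                               [("Content-Type", "text/plain")])
                return [b"DISABLED BY FILE"]
            start_response("200 OK", [("Content-Type", "text/plain")])
            return [b"OK"]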

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1338745/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1337717] Re: L2-population fanout-cast leads to performance and scalability issue

2014-07-07 Thread Eugene Nikanorov
The problem described in the bug seems to be a new feature needed to increase 
performance at scale.
It can't really be considered a bug because the described behavior is as 
designed.

I suggest working on this problem in the scope of an appropriate blueprint.

** Changed in: neutron
   Importance: Undecided = Medium

** Changed in: neutron
   Status: New = Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1337717

Title:
  L2-population fanout-cast leads to performance and scalability issue

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  
https://github.com/osrg/quantum/blob/master/neutron/plugins/ml2/drivers/l2pop/rpc.py

  def _notification_fanout(self, context, method, fdb_entries):
  
  self.fanout_cast(context,
   self.make_msg(method, fdb_entries=fdb_entries),
   topic=self.topic_l2pop_update)

  the fanout_cast will publish the message to all L2 agents listening
  l2population topic.

  If there are 1000 agents (which is a small cloud), and all of them are
  listening to the l2population topic, adding one new port will lead to
  1000 sub-messages. Generally RabbitMQ can handle 10k messages per
  second, and the fanout_cast method will lead to serious performance
  issues and make the neutron service hard to scale; the concurrency of
  VM port requests will be very, very small.

  No matter how many ports are in the subnet, the performance depends on
  the number of L2 agents listening to the topic.

  The way to solve the performance and scalability issue is to make the
  L2 agent listen on a topic related to the network, for example, using
  the network uuid as the topic. If one port is activated in the subnet,
  only those agents hosting VMs on the same network should receive the
  L2-pop message.  This is the partial mesh, the original design intent,
  but it is not implemented yet.
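
  A rough sketch of that idea (this is not the actual l2pop driver code;
  the per-network topic naming is only illustrative):

    def _notification_network(self, context, method, network_id, fdb_entries):
        # Publish to a topic scoped to one network instead of fanning out
        # to every L2 agent in the deployment.
        topic = '%s.%s' % (self.topic_l2pop_update, network_id)
        self.fanout_cast(context,
                         self.make_msg(method, fdb_entries=fdb_entries),
                         topic=topic)

  Agents would then subscribe only to the topics of the networks they host,
  so an fdb update reaches a handful of hosts rather than every agent.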

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1337717/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338737] Re: nova needs to require oslotest in test-requirements

2014-07-07 Thread Matt Riedemann
** Changed in: nova
   Status: In Progress = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1338737

Title:
  nova needs to require oslotest in test-requirements

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Python 2.7.3 (default, Sep 26 2013, 20:08:41) 
  [GCC 4.6.3] on linux2
  Type help, copyright, credits or license for more information.
   from nova.tests.db import test_db_api
  Traceback (most recent call last):
File stdin, line 1, in module
File nova/tests/db/test_db_api.py, line 54, in module
  from nova.openstack.common.db.sqlalchemy import test_base
File nova/openstack/common/db/sqlalchemy/test_base.py, line 21, in 
module
  from oslotest import base as test_base
  ImportError: No module named oslotest

  
  Looks like this showed up with nova commit 
0f07f8546fda9732a7e3597a2de78156f1fb5a34.

  This is the corresponding oslo-incubator change:
  https://review.openstack.org/#/c/87536/

  Note that requirements.txt was updated there but not in the nova sync.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1338737/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338745] Re: Add healthcheck middleware

2014-07-07 Thread Dolph Mathews
It looks like swift's middleware could be moved to oslo, as there's
nothing swift-specific about it. There's nothing stopping you from
deploying that middleware in front of Keystone or swift, regardless of
whether it lives in oslo or swift.

** Description changed:

  Would be useful for keystone to support a healthcheck URL for
  consumption by load balancers.
  
  This middleware should provide the ability to manually disable the
  service via the existence of a file on the system's local disk.  This
  middleware should also perform a basic application functionality [1]
  check prior to reporting OK.
  
  Having this middleware would give us some flexibility around LB health
  checks we do not have today, and IMO would be beneficial.  This is
  fairly similar to what swift [2] is doing as well.
  
  [1] 
https://github.com/CiscoSystems/puppet-monit/blob/a459a7314ac4f0250ad8b9c6956a872b949840f1/files/healthcheck.py#L37
- [2] 
https://github.com/openstack/swift/blob/0b594bc3afbffe942edebe9cdf02f60c06e627ab/swift/common/middleware/healthcheck.p
+ [2] 
https://github.com/openstack/swift/blob/0b594bc3afbffe942edebe9cdf02f60c06e627ab/swift/common/middleware/healthcheck.py

** Changed in: keystone
   Status: New = Opinion

** Changed in: keystone
   Importance: Undecided = Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1338745

Title:
  Add healthcheck middleware

Status in OpenStack Identity (Keystone):
  Opinion

Bug description:
  Would be useful for keystone to support a healthcheck URL for
  consumption by load balancers.

  This middleware should provide the ability to manually disable the
  service via the existence of a file on the system's local disk.  This
  middleware should also perform a basic application functionality [1]
  check prior to reporting OK.

  Having this middleware would give us some flexibility around LB health
  checks we do not have today, and IMO would be beneficial.  This is
  fairly similar to what swift [2] is doing as well.

  [1] 
https://github.com/CiscoSystems/puppet-monit/blob/a459a7314ac4f0250ad8b9c6956a872b949840f1/files/healthcheck.py#L37
  [2] 
https://github.com/openstack/swift/blob/0b594bc3afbffe942edebe9cdf02f60c06e627ab/swift/common/middleware/healthcheck.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1338745/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338795] [NEW] VMware store: upload and download performance need to be improved

2014-07-07 Thread Arnaud Legendre
Public bug reported:

It takes too much time to upload to the VMware store. The bits are uploaded to 
Glance, then go through vCenter, then through ESXi to finally land on the 
datastore.
The upload time suffers, and uploading through vCenter also adds unnecessary 
load on the vCenter server.

Since VC 5.5, it is possible to get a ticket from VC to upload to a
specific host directly. This way, we bypass vCenter which makes the
upload much faster.

** Affects: glance
 Importance: Undecided
 Assignee: Arnaud Legendre (arnaudleg)
 Status: In Progress

** Changed in: glance
 Assignee: (unassigned) = Arnaud Legendre (arnaudleg)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1338795

Title:
  VMware store: upload and download performance need to be improved

Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress

Bug description:
  It takes too much time to upload to the VMware store. The bits are uploaded 
to Glance, then go through vCenter, then through ESXi to finally land on the 
datastore.
  The upload time suffers, and uploading through vCenter also adds unnecessary 
load on the vCenter server.

  Since VC 5.5, it is possible to get a ticket from VC to upload to a
  specific host directly. This way, we bypass vCenter which makes the
  upload much faster.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1338795/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338737] Re: nova needs to require oslotest in test-requirements

2014-07-07 Thread Matt Riedemann
I need to bring this back. Right now oslotest is a runtime dependency of
nova which is wrong since oslotest.base is only used for nova unit
tests, so it should be in test-requirements.txt.

This is especially bad for downstream packagers/deployers because the
runtime dependencies for oslotest include things like mock and mox,
which shouldn't be in a production install of openstack.

** Changed in: nova
   Status: Invalid = In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1338737

Title:
  nova needs to require oslotest in test-requirements

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Python 2.7.3 (default, Sep 26 2013, 20:08:41) 
  [GCC 4.6.3] on linux2
  Type help, copyright, credits or license for more information.
   from nova.tests.db import test_db_api
  Traceback (most recent call last):
File stdin, line 1, in module
File nova/tests/db/test_db_api.py, line 54, in module
  from nova.openstack.common.db.sqlalchemy import test_base
File nova/openstack/common/db/sqlalchemy/test_base.py, line 21, in 
module
  from oslotest import base as test_base
  ImportError: No module named oslotest

  
  Looks like this showed up with nova commit 
0f07f8546fda9732a7e3597a2de78156f1fb5a34.

  This is the corresponding oslo-incubator change:
  https://review.openstack.org/#/c/87536/

  Note that requirements.txt was updated there but not in the nova sync.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1338737/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338822] [NEW] CPU limit not used in instance resize

2014-07-07 Thread Daniel Snider
Public bug reported:

If I resize an instance to a flavor with more cpus than should be
possible, even more cpus than cpu_allocation_ratio would allow, then
nova proceeds with the resize, the instance state goes to error, and
it does not exist anymore in the hypervisor.

My environment:
Nova 2014.1 (Icehouse)
Libvirt / KVM hypervisor
Ceph RBD for volume storage
1 nova-manage node, 2 nova-compute nodes (all virtualized on my laptop)
Ubuntu 14.04 for all OSes except for the instance
Cirros 0.3.2 for the instance 
/etc/nova/nova.conf on all nodes contain:
allow_resize_to_same_host=true (although this didn't make a difference!)
ram_allocation_ratio=0.95
cpu_allocation_ratio=5

CPU and RAM allocation_ratio limits are set but only the memory limit took 
effect. This is from the log below:
memory limit: 1900.95 MB, free: 1388.95 MB
CPUs limit not specified, defaulting to unlimited

This is the log from nova-compute.log when I resize an instance to a
flavor with 500 vcpus:

AUDIT nova.compute.resource_tracker [-] Auditing locally available compute 
resources
AUDIT nova.compute.resource_tracker [-] Free ram (MB): 1489
AUDIT nova.compute.resource_tracker [-] Free disk (GB): 34
AUDIT nova.compute.resource_tracker [-] Free VCPUS: 2
INFO nova.compute.resource_tracker [-] Compute_service record updated for 
r-ABCDEF1234:r-ABCDEF1234.vagrant.local
AUDIT nova.compute.claims [req-960d005b-0ca5-434b-96dc-e1e7575aef0f 
49e16f440dcc4dffa485132bf4c1fceb a5a08d831062498995090b1fb6e7fdf2] [instance: 
75204b82-70e9-4e13-b56f-d7b78613cac7] Attempting claim: memory 500 MB, disk 1 
GB, VCPUs 500
AUDIT nova.compute.claims [req-960d005b-0ca5-434b-96dc-e1e7575aef0f 
49e16f440dcc4dffa485132bf4c1fceb a5a08d831062498995090b1fb6e7fdf2] [instance: 
75204b82-70e9-4e13-b56f-d7b78613cac7] Total memory: 2001 MB, used: 512.00 MB
AUDIT nova.compute.claims [req-960d005b-0ca5-434b-96dc-e1e7575aef0f 
49e16f440dcc4dffa485132bf4c1fceb a5a08d831062498995090b1fb6e7fdf2] [instance: 
75204b82-70e9-4e13-b56f-d7b78613cac7] memory limit: 1900.95 MB, free: 1388.95 MB
AUDIT nova.compute.claims [req-960d005b-0ca5-434b-96dc-e1e7575aef0f 
49e16f440dcc4dffa485132bf4c1fceb a5a08d831062498995090b1fb6e7fdf2] [instance: 
75204b82-70e9-4e13-b56f-d7b78613cac7] Total disk: 34 GB, used: 0.00 GB
AUDIT nova.compute.claims [req-960d005b-0ca5-434b-96dc-e1e7575aef0f 
49e16f440dcc4dffa485132bf4c1fceb a5a08d831062498995090b1fb6e7fdf2] [instance: 
75204b82-70e9-4e13-b56f-d7b78613cac7] disk limit not specified, defaulting to 
unlimited
AUDIT nova.compute.claims [req-960d005b-0ca5-434b-96dc-e1e7575aef0f 
49e16f440dcc4dffa485132bf4c1fceb a5a08d831062498995090b1fb6e7fdf2] [instance: 
75204b82-70e9-4e13-b56f-d7b78613cac7] Total CPUs: 2 VCPUs, used: 0.00 VCPUs
AUDIT nova.compute.claims [req-960d005b-0ca5-434b-96dc-e1e7575aef0f 
49e16f440dcc4dffa485132bf4c1fceb a5a08d831062498995090b1fb6e7fdf2] [instance: 
75204b82-70e9-4e13-b56f-d7b78613cac7] CPUs limit not specified, defaulting to 
unlimited
AUDIT nova.compute.claims [req-960d005b-0ca5-434b-96dc-e1e7575aef0f 
49e16f440dcc4dffa485132bf4c1fceb a5a08d831062498995090b1fb6e7fdf2] [instance: 
75204b82-70e9-4e13-b56f-d7b78613cac7] Claim successful
AUDIT nova.compute.resource_tracker [req-960d005b-0ca5-434b-96dc-e1e7575aef0f 
49e16f440dcc4dffa485132bf4c1fceb a5a08d831062498995090b1fb6e7fdf2] Updating 
from migration 75204b82-70e9-4e13-b56f-d7b78613cac7
AUDIT nova.compute.manager [req-960d005b-0ca5-434b-96dc-e1e7575aef0f 
49e16f440dcc4dffa485132bf4c1fceb a5a08d831062498995090b1fb6e7fdf2] [instance: 
75204b82-70e9-4e13-b56f-d7b78613cac7] Migrating
WARNING nova.virt.libvirt.utils [req-960d005b-0ca5-434b-96dc-e1e7575aef0f 
49e16f440dcc4dffa485132bf4c1fceb a5a08d831062498995090b1fb6e7fdf2] systool is 
not installed
WARNING nova.virt.libvirt.utils [req-960d005b-0ca5-434b-96dc-e1e7575aef0f 
49e16f440dcc4dffa485132bf4c1fceb a5a08d831062498995090b1fb6e7fdf2] systool is 
not installed
INFO urllib3.connectionpool [-] Starting new HTTP connection (1): 192.168.55.253
INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on 
192.168.55.11:5672
INFO nova.virt.libvirt.driver [req-960d005b-0ca5-434b-96dc-e1e7575aef0f 
49e16f440dcc4dffa485132bf4c1fceb a5a08d831062498995090b1fb6e7fdf2] [instance: 
75204b82-70e9-4e13-b56f-d7b78613cac7] Creating image
ERROR glanceclient.common.http [-] Request returned failure status.
WARNING nova.compute.utils [req-960d005b-0ca5-434b-96dc-e1e7575aef0f 
49e16f440dcc4dffa485132bf4c1fceb a5a08d831062498995090b1fb6e7fdf2] [instance: 
75204b82-70e9-4e13-b56f-d7b78613cac7] Can't access image : Image  could not be 
found.
INFO nova.virt.libvirt.firewall [req-960d005b-0ca5-434b-96dc-e1e7575aef0f 
49e16f440dcc4dffa485132bf4c1fceb a5a08d831062498995090b1fb6e7fdf2] [instance: 
75204b82-70e9-4e13-b56f-d7b78613cac7] Called setup_basic_filtering in nwfilter
INFO nova.virt.libvirt.firewall [req-960d005b-0ca5-434b-96dc-e1e7575aef0f 
49e16f440dcc4dffa485132bf4c1fceb a5a08d831062498995090b1fb6e7fdf2] [instance: 

[Yahoo-eng-team] [Bug 1338835] [NEW] cisco n1kv plugin: when launching vm fails, the ports not get cleaned up

2014-07-07 Thread AARON ZHANG
Public bug reported:

In the cisco n1kv plugin, a port gets created while launching a VM instance.
But when the launch fails, the ports are not cleaned up in the
except block.

The issue can be easily recreated by creating a network without a subnet
and then using that network for VM creation.

** Affects: horizon
 Importance: Undecided
 Assignee: AARON ZHANG (fenzhang)
 Status: In Progress


** Tags: cisco n1kv

** Changed in: horizon
 Assignee: (unassigned) = AARON ZHANG (fenzhang)

** Changed in: horizon
   Status: New = In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1338835

Title:
  cisco n1kv plugin: when launching vm fails, the ports not get cleaned
  up

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  In the cisco n1kv plugin, a port gets created while launching a VM instance.
  But when the launch fails, the ports are not cleaned up in the
  except block.

  The issue can be easily recreated by creating a network without a subnet
  and then using that network for VM creation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1338835/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338836] [NEW] Old sessionid cookie causes 500 Internal Server Error at login

2014-07-07 Thread Nathan Ward
Public bug reported:

After switching to a different cloud, Django's old sessionid cookie causes 
Horizon to greet you with the 500 Internal Server Error page. Clearing browser 
cookies or deleting just the sessionid cookie (e.g. in Chrome Dev Tools  
Resources  Cookies) and refreshing is a workaround to bring the user back to a 
working login screen. Could this happen because request.user.is_authenticated() 
does not check for a valid session before proceeding?
https://github.com/openstack/horizon/blob/0bd4350cb308d57b6afc69daee4a7823055be5a9/openstack_dashboard/views.py#L40

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: login session

** Attachment added: Screenshot of the Something went wrong! 500 error page
   
https://bugs.launchpad.net/bugs/1338836/+attachment/4147712/+files/sessionid.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1338836

Title:
  Old sessionid cookie causes 500 Internal Server Error at login

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  After switching to a different cloud, Django's old sessionid cookie causes 
Horizon to greet you with the 500 Internal Server Error page. Clearing browser 
cookies or deleting just the sessionid cookie (e.g. in Chrome Dev Tools  
Resources  Cookies) and refreshing is a workaround to bring the user back to a 
working login screen. Could this happen because request.user.is_authenticated() 
does not check for a valid session before proceeding?
  
https://github.com/openstack/horizon/blob/0bd4350cb308d57b6afc69daee4a7823055be5a9/openstack_dashboard/views.py#L40
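
  A rough sketch of such a guard (the redirect targets are illustrative and
  this is not the actual Horizon view):

    from django import shortcuts

    def splash(request):
        try:
            authenticated = request.user.is_authenticated()
        except Exception:
            # A sessionid issued by a different cloud can no longer be
            # validated; drop the stale session and treat the user as
            # logged out instead of surfacing a 500 error page.
            request.session.flush()
            authenticated = False
        if authenticated:
            return shortcuts.redirect("/project/")
        return shortcuts.redirect("/auth/login/")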

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1338836/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338841] [NEW] EC2KeysTest fails in tearDownClass with InvalidKeyPair.Duplicate

2014-07-07 Thread Matt Riedemann
Public bug reported:

The trace for the failure is here:

http://logs.openstack.org/57/105257/4/check/check-tempest-dsvm-postgres-
full/f72b818/logs/tempest.txt.gz?level=TRACE#_2014-07-07_23_43_37_250

This is the console error:

2014-07-07 23:44:59.590 | tearDownClass 
(tempest.thirdparty.boto.test_ec2_keys.EC2KeysTest)
2014-07-07 23:44:59.590 | 
-
2014-07-07 23:44:59.590 | 
2014-07-07 23:44:59.590 | Captured traceback:
2014-07-07 23:44:59.590 | ~~~
2014-07-07 23:44:59.590 | Traceback (most recent call last):
2014-07-07 23:44:59.590 |   File tempest/thirdparty/boto/test.py, line 
272, in tearDownClass
2014-07-07 23:44:59.590 | raise 
exceptions.TearDownException(num=fail_count)
2014-07-07 23:44:59.590 | TearDownException: 1 cleanUp operation failed

There isn't much in the n-api logs, just the 400 response.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: ec2 testing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1338841

Title:
  EC2KeysTest fails in tearDownClass with InvalidKeyPair.Duplicate

Status in OpenStack Compute (Nova):
  New

Bug description:
  The trace for the failure is here:

  http://logs.openstack.org/57/105257/4/check/check-tempest-dsvm-
  postgres-
  full/f72b818/logs/tempest.txt.gz?level=TRACE#_2014-07-07_23_43_37_250

  This is the console error:

  2014-07-07 23:44:59.590 | tearDownClass 
(tempest.thirdparty.boto.test_ec2_keys.EC2KeysTest)
  2014-07-07 23:44:59.590 | 
-
  2014-07-07 23:44:59.590 | 
  2014-07-07 23:44:59.590 | Captured traceback:
  2014-07-07 23:44:59.590 | ~~~
  2014-07-07 23:44:59.590 | Traceback (most recent call last):
  2014-07-07 23:44:59.590 |   File tempest/thirdparty/boto/test.py, line 
272, in tearDownClass
  2014-07-07 23:44:59.590 | raise 
exceptions.TearDownException(num=fail_count)
  2014-07-07 23:44:59.590 | TearDownException: 1 cleanUp operation failed

  There isn't much in the n-api logs, just the 400 response.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1338841/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338745] Re: Add healthcheck middleware

2014-07-07 Thread OpenStack Infra
Fix proposed to branch: master
Review: https://review.openstack.org/105311

** Changed in: keystone
   Status: Opinion = In Progress

** Changed in: keystone
 Assignee: (unassigned) = John Dewey (retr0h)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1338745

Title:
  Add healthcheck middleware

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  Would be useful for keystone to support a healthcheck URL for
  consumption by load balancers.

  This middleware should provide the ability to manually disable the
  service via the existence of a file on the system's local disk.  This
  middleware should also perform a basic application functionality [1]
  check prior to reporting OK.

  Having this middleware would give us some flexibility around LB health
  checks we do not have today, and IMO would be beneficial.  This is
  fairly similar to what swift [2] is doing as well.

  [1] 
https://github.com/CiscoSystems/puppet-monit/blob/a459a7314ac4f0250ad8b9c6956a872b949840f1/files/healthcheck.py#L37
  [2] 
https://github.com/openstack/swift/blob/0b594bc3afbffe942edebe9cdf02f60c06e627ab/swift/common/middleware/healthcheck.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1338745/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338844] [NEW] FixedIpLimitExceeded: Maximum number of fixed ips exceeded in tempest nova-network runs since 7/4

2014-07-07 Thread Matt Riedemann
Public bug reported:

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQnVpbGRBYm9ydEV4Y2VwdGlvbjogQnVpbGQgb2YgaW5zdGFuY2VcIiBBTkQgbWVzc2FnZTpcImFib3J0ZWQ6IEZhaWxlZCB0byBhbGxvY2F0ZSB0aGUgbmV0d29yayhzKSB3aXRoIGVycm9yIE1heGltdW0gbnVtYmVyIG9mIGZpeGVkIGlwcyBleGNlZWRlZCwgbm90IHJlc2NoZWR1bGluZy5cIiBBTkQgdGFnczpcInNjcmVlbi1uLWNwdS50eHRcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwNDc3OTE1MzY1MiwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

Saw it here:

http://logs.openstack.org/63/98563/5/check/check-tempest-dsvm-postgres-
full/1472e7b/logs/screen-n-cpu.txt.gz?level=TRACE

Looks like it's only in jobs using nova-network.

Started on 7/4, 70 failures in 7 days, check and gate, multiple changes.

Maybe related to https://review.openstack.org/104581.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: gate-failure network nova-network testing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1338844

Title:
  FixedIpLimitExceeded: Maximum number of fixed ips exceeded in
  tempest nova-network runs since 7/4

Status in OpenStack Compute (Nova):
  New

Bug description:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQnVpbGRBYm9ydEV4Y2VwdGlvbjogQnVpbGQgb2YgaW5zdGFuY2VcIiBBTkQgbWVzc2FnZTpcImFib3J0ZWQ6IEZhaWxlZCB0byBhbGxvY2F0ZSB0aGUgbmV0d29yayhzKSB3aXRoIGVycm9yIE1heGltdW0gbnVtYmVyIG9mIGZpeGVkIGlwcyBleGNlZWRlZCwgbm90IHJlc2NoZWR1bGluZy5cIiBBTkQgdGFnczpcInNjcmVlbi1uLWNwdS50eHRcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwNDc3OTE1MzY1MiwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

  Saw it here:

  http://logs.openstack.org/63/98563/5/check/check-tempest-dsvm-
  postgres-full/1472e7b/logs/screen-n-cpu.txt.gz?level=TRACE

  Looks like it's only in jobs using nova-network.

  Started on 7/4, 70 failures in 7 days, check and gate, multiple
  changes.

  Maybe related to https://review.openstack.org/104581.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1338844/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338846] [NEW] corner case in nsx api_client code

2014-07-07 Thread Aaron Rosen
Public bug reported:

There is a corner case that the nsx api_client code does not handle
today, where the nsx controller can return a 307 in order to redirect the
request to another controller. At this point neutron-server issues the
request to the redirected controller and usually this works fine. However,
if the session cookie has expired we'll issue the request, get a 401, and
clear the cookie from the request. Then we'll retry the request and get
the same 307 again, which will result in another 401 because the session
cookie was never renewed.

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

  There is a corner case that the nsx api_client code does not handle
  today where the nsx controller can return a 307 in order to redirect the
  request to another controller. At this point neutron-server issues this
  request to the redirected controller and usually this works fine. Though
  in the case that the session cookie has expired we'll issue the request
  and get a 401 and clear the cookie from the request. Then we'll retry
  the request  and get the same 307 again which will result in a 401 as
- the session cookie was never reviewed.
+ the session cookie was never renewed.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1338846

Title:
  corner case in nsx api_client  code

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  There is a corner case that the nsx api_client code does not handle
  today, where the nsx controller can return a 307 in order to redirect
  the request to another controller. At this point neutron-server issues
  the request to the redirected controller and usually this works fine.
  However, if the session cookie has expired we'll issue the request, get
  a 401, and clear the cookie from the request. Then we'll retry the
  request and get the same 307 again, which will result in another 401
  because the session cookie was never renewed.
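
  A rough sketch of the retry behaviour that would cover this case (the
  client and method names are illustrative, not the actual NSX api_client
  classes):

    def issue_request(client, request, max_attempts=3):
        for _ in range(max_attempts):
            response = client.send(request)
            if response.status == 307:
                # Follow the controller redirect and try again.
                request.url = response.headers["Location"]
                continue
            if response.status == 401:
                # The session cookie expired; renew it before retrying,
                # otherwise the redirected request keeps failing with 401.
                client.login()
                continue
            return response
        raise RuntimeError("request failed after %d attempts" % max_attempts)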

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1338846/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338853] [NEW] Remove tables that store the mapping between neutron and Nuage VSD resources

2014-07-07 Thread Sayaji Patil
Public bug reported:

The Nuage plugin stores a mapping of neutron and VSD IDs for every neutron 
resource.
This bug is to remove the mapping, to avoid storing redundant data and also to 
avoid upgrade and out-of-sync issues.

** Affects: neutron
 Importance: Undecided
 Assignee: Sayaji Patil (sayaji15)
 Status: In Progress


** Tags: nuage

** Changed in: horizon
 Assignee: (unassigned) = Sayaji Patil (sayaji15)

** Project changed: horizon = neutron

** Changed in: neutron
   Status: New = In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1338853

Title:
  Remove tables that store the mapping between neutron and Nuage VSD
  resources

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The Nuage plugin stores a mapping of neutron and VSD IDs for every neutron 
resource.
  This bug is to remove the mapping, to avoid storing redundant data and also 
to avoid upgrade and out-of-sync issues.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1338853/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338857] [NEW] help_text for create subnet not transfer

2014-07-07 Thread sh.huang
Public bug reported:

The help_text for Create Subnet is wrong for allocation_pools.

The HTML entities &lt; and &gt; should be converted to '<' and '>', but they
are not converted in the .po translation files.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1338857

Title:
  help_text for create subnet not transfer

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The help_text for Create Subnet is wrong for allocation_pools.

  The HTML entities &lt; and &gt; should be converted to '<' and '>', but
  they are not converted in the .po translation files.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1338857/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1314130] Re: network freezes for some seconds after service neutron-plugin-openvswitch-agent restart

2014-07-07 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete = Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1314130

Title:
  network freezes for some seconds after service neutron-plugin-
  openvswitch-agent restart

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  network freezes for some seconds after service 
neutron-plugin-openvswitch-agent restart
  Ubuntu 14.04 
  latest neutron code from 
http://ppa.launchpad.net/openstack-ubuntu-testing/icehouse/ubuntu
  ovs-vsctl (Open vSwitch) 2.0.1

  neutron-openvswitch-agent log:
  2014-04-29 10:44:12.836 36987 ERROR neutron.agent.linux.ovsdb_monitor 
[req-c5aeb93c-2254-4dc4-af5f-6df3a69995f7 None] Error received from ovsdb 
monitor: ovsdb-client: unix:/var/run/openvswitch/db.sock: receive failed (End 
of file)
  2014-04-29 10:44:12.968 36987 ERROR neutron.agent.linux.ovs_lib 
[req-c5aeb93c-2254-4dc4-af5f-6df3a69995f7 None] Unable to execute ['ovs-vsctl', 
'--timeout=10', 'list-ports', 'br-int']. Exception:
  Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 
'ovs-vsctl', '--timeout=10', 'list-ports', 'br-int']
  Exit code: 1
  Stdout: ''
  Stderr: 
'2014-04-29T09:44:12Z|1|reconnect|WARN|unix:/var/run/openvswitch/db.sock: 
connection attempt failed (No such file or directory)\novs-vsctl: 
unix:/var/run/openvswitch/db.sock: database connection failed (No such file or 
directory)\n'
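
  For illustration only (modern Python, assumed environment), the same
  command the agent runs can be reproduced by hand while ovsdb is down:

  # Sketch: run the same ovs-vsctl command the agent issues and inspect the
  # failure while the ovsdb socket is unavailable; requires Open vSwitch tools.
  import subprocess

  cmd = ["ovs-vsctl", "--timeout=10", "list-ports", "br-int"]
  result = subprocess.run(cmd, capture_output=True, text=True)
  print(result.returncode)  # non-zero while /var/run/openvswitch/db.sock is gone
  print(result.stderr)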

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1314130/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1308713] Re: VPN user change restarts all pluto instances for all users

2014-07-07 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1308713

Title:
  VPN user change restarts all pluto instances for all users

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  In troubleshooting some VPNaaS issues we've noticed that when any user
  makes a change or configures VPNaaS, all existing pluto processes get
  restarted. Not sure if this is by design or not, but I would expect that
  one user should not be able to impact all the configured VPN tunnels.

  Very easy to verify.
  Create a few VPN tunnels.
  Get on the network node and get pluto process timestamps
  Make a VPN change
  Verify that all pluto processes have restarted

  This causes a VPN flap.
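
  A minimal sketch (assumed environment: a Linux network node with pluto
  running) of the verification step above, comparing pluto start times
  before and after a VPN change:

  # Sketch only: record pluto process start times so they can be compared
  # before and after a VPN configuration change.
  import subprocess

  def pluto_start_times():
      out = subprocess.run(["ps", "-C", "pluto", "-o", "pid=,lstart="],
                           capture_output=True, text=True).stdout
      return [line.strip() for line in out.splitlines() if line.strip()]

  before = pluto_start_times()
  # ... make a VPN change via the API or dashboard here ...
  after = pluto_start_times()
  print("restarted" if before != after else "unchanged")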

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1308713/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1312521] Re: ProgrammingError: (ProgrammingError) (1146, Table 'neutron.externalnetworks' doesn't exist

2014-07-07 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1312521

Title:
  ProgrammingError: (ProgrammingError) (1146, Table
  'neutron.externalnetworks' doesn't exist

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  2014-04-24 22:33:48.311 2143 TRACE neutron.api.v2.resource   File 
/usr/lib64/python2.6/site-packages/MySQLdb/connections.py, line 36, in 
defaulterrorhandler
  2014-04-24 22:33:48.311 2143 TRACE neutron.api.v2.resource raise 
errorclass, errorvalue
  2014-04-24 22:33:48.311 2143 TRACE neutron.api.v2.resource ProgrammingError: 
(ProgrammingError) (1146, Table 'neutron.externalnetworks' doesn't exist) 
'SELECT count(*) AS count_1 \nFROM (SELECT networks.tenant_id AS 
networks_tenant_id, networks.id AS networks_id, networks.name AS networks_name, 
networks.status AS networks_status, networks.admin_state_up AS 
networks_admin_state_up, networks.shared AS networks_shared \nFROM networks 
LEFT OUTER JOIN externalnetworks ON networks.id = externalnetworks.network_id 
\nWHERE networks.tenant_id IN (%s)) AS anon_1' 
('a8b242a19a164c97bc022ef37b260ade',)
  2014-04-24 22:33:48.311 2143 TRACE neutron.api.v2.resource 
  2014-04-24 22:33:48.320 2143 INFO neutron.wsgi 
[req-dc9307c3-9249-4b49-af16-0e6d4a8fa2c3 None] 10.0.0.11 - - [24/Apr/2014 
22:33:48] POST /v2.0/networks.json HTTP/1.1 500 296 0.511138

  when trying to run the neutron net-create ext-net --shared
  --router:external=True

  http://docs.openstack.org/icehouse/install-guide/install/yum/content/neutron_initial-external-network.html
  I worked with Sam-I-Am in irc and was not able to figure anything out.
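
  As an illustration only, the presence of the table the failing query joins
  on can be checked directly; the connection URL below is a placeholder, not
  taken from this report:

  # Sketch only: verify whether the 'externalnetworks' table exists in the
  # neutron database the server is querying.
  import sqlalchemy as sa

  engine = sa.create_engine("mysql://neutron:NEUTRON_DBPASS@controller/neutron")
  print("externalnetworks" in sa.inspect(engine).get_table_names())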

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1312521/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338880] [NEW] Any user can set a network as external

2014-07-07 Thread Gabriel Assis Bezerra
Public bug reported:

Even though the default policy.json restricts the creation of external
networks to admin_only, any user can update a network to be external.

I could verify this with the following test (PseudoPython):

project: ProjectA
user: ProjectMemberA has Member role on project ProjectA.

with network(name="UpdateNetworkExternalRouter", tenant_id="ProjectA",
             router_external=False) as test_network:
    self.project_member_a_neutron_client.update_network(
        network=test_network, router_external=True)

project_member_a_neutron_client encapsulates a python-neutronclient, and
here is what the method does.

def update_network(self, network, name=None, shared=None,
                   router_external=None):
    body = {
        'network': {
        }
    }
    if name is not None:
        body['network']['name'] = name
    if shared is not None:
        body['network']['shared'] = shared
    if router_external is not None:
        body['network']['router:external'] = router_external

    return self.python_neutronclient.update_network(network=network.id,
                                                     body=body)['network']


The expected behaviour is that the operation should not be allowed, but the
user without admin privileges is able to perform such a change.

Trying to add an "update_network:router:external": "rule:admin_only"
policy did not work and broke other operations a regular user should be
able to do.
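
A minimal sketch of what the attempted policy addition amounts to; the
pre-existing create rule and the target file are assumptions based on
neutron's default policy.json, not quoted from this report:

# Sketch only: programmatically build the rule set the reporter tried; in
# practice this would be merged into /etc/neutron/policy.json.
import json

policy = {"create_network:router:external": "rule:admin_only"}  # assumed default
policy["update_network:router:external"] = "rule:admin_only"    # the attempted rule
print(json.dumps(policy, indent=4))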

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1338880

Title:
  Any user can set a network as external

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Even though the default policy.json restricts the creation of external
  networks to admin_only, any user can update a network to be external.

  I could verify this with the following test (PseudoPython):

  project: ProjectA
  user: ProjectMemberA has Member role on project ProjectA.

  with network(name="UpdateNetworkExternalRouter", tenant_id="ProjectA",
               router_external=False) as test_network:
      self.project_member_a_neutron_client.update_network(
          network=test_network, router_external=True)

  project_member_a_neutron_client encapsulates a python-neutronclient,
  and here is what the method does.

  def update_network(self, network, name=None, shared=None,
                     router_external=None):
      body = {
          'network': {
          }
      }
      if name is not None:
          body['network']['name'] = name
      if shared is not None:
          body['network']['shared'] = shared
      if router_external is not None:
          body['network']['router:external'] = router_external

      return self.python_neutronclient.update_network(network=network.id,
                                                       body=body)['network']

  
  The expected behaviour is that the operation should not be allowed, but the
  user without admin privileges is able to perform such a change.

  Trying to add an "update_network:router:external": "rule:admin_only"
  policy did not work and broke other operations a regular user should
  be able to do.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1338880/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338881] [NEW] VMware: Unable to validate session when start nova compute service

2014-07-07 Thread David Geng
Public bug reported:

We are using a non-administrator account to connect to vCenter when starting
the compute service. In vCenter we defined a separate role (you can see it in
the attachment) for this account and allow it to access only the cluster that
is used to provision VMs, which is separate from the management cluster.

I can use this user/password to log in to vCenter, but I hit the following error when
starting the compute service.
So I want to know what kind of privileges should be assigned to this account.
2014-07-08 05:26:55.485 30556 WARNING nova.virt.vmwareapi.driver 
[req-35ad4408-f0d3-423a-a211-c7200ae8da3c None None] Session 
527362cd-b3d2-0ba9-0be8-b7dd3200e9f1 is inactive!
2014-07-08 05:27:06.479 30556 ERROR suds.client [-] <?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope xmlns:ns0="urn:vim25" xmlns:ns1="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
   <ns1:Body>
      <ns0:TerminateSession>
         <ns0:_this type="SessionManager">SessionManager</ns0:_this>
         <ns0:sessionId>527362cd-b3d2-0ba9-0be8-b7dd3200e9f1</ns0:sessionId>
      </ns0:TerminateSession>
   </ns1:Body>
</SOAP-ENV:Envelope>
2014-07-08 05:27:06.483 30556 DEBUG nova.virt.vmwareapi.driver 
[req-35ad4408-f0d3-423a-a211-c7200ae8da3c None None] Server raised fault: 
'Permission to perform this operation was denied.'


2014-07-08 05:27:44.310 30556 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/error_util.py, line 123, 
in retrievepropertiesex_fault_checker
2014-07-08 05:27:44.310 30556 TRACE nova.openstack.common.threadgroup 
exc_msg_list))
2014-07-08 05:27:44.310 30556 TRACE nova.openstack.common.threadgroup 
VimFaultException: Error(s) NotAuthenticated occurred in the call to 
RetrievePropertiesEx
2014-07-08 05:27:44.310 30556 TRACE nova.openstack.common.threadgroup

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1338881

Title:
  VMware: Unable to validate session when start nova compute service

Status in OpenStack Compute (Nova):
  New

Bug description:
  We are using a non-administrator account to connect to vCenter when
  starting the compute service. In vCenter we defined a separate role (you
  can see it in the attachment) for this account and allow it to access only
  the cluster that is used to provision VMs, which is separate from the
  management cluster.

  I can use this user/password to log in to vCenter, but I hit the following
  error when starting the compute service.
  So I want to know what kind of privileges should be assigned to this account.
  2014-07-08 05:26:55.485 30556 WARNING nova.virt.vmwareapi.driver 
[req-35ad4408-f0d3-423a-a211-c7200ae8da3c None None] Session 
527362cd-b3d2-0ba9-0be8-b7dd3200e9f1 is inactive!
  2014-07-08 05:27:06.479 30556 ERROR suds.client [-] <?xml version="1.0" encoding="UTF-8"?>
  <SOAP-ENV:Envelope xmlns:ns0="urn:vim25" xmlns:ns1="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
    <ns1:Body>
      <ns0:TerminateSession>
        <ns0:_this type="SessionManager">SessionManager</ns0:_this>
        <ns0:sessionId>527362cd-b3d2-0ba9-0be8-b7dd3200e9f1</ns0:sessionId>
      </ns0:TerminateSession>
    </ns1:Body>
  </SOAP-ENV:Envelope>
  2014-07-08 05:27:06.483 30556 DEBUG nova.virt.vmwareapi.driver 
[req-35ad4408-f0d3-423a-a211-c7200ae8da3c None None] Server raised fault: 
'Permission to perform this operation was denied.'

  
  2014-07-08 05:27:44.310 30556 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/error_util.py, line 123, 
in retrievepropertiesex_fault_checker
  2014-07-08 05:27:44.310 30556 TRACE nova.openstack.common.threadgroup 
exc_msg_list))
  2014-07-08 05:27:44.310 30556 TRACE nova.openstack.common.threadgroup 
VimFaultException: Error(s) NotAuthenticated occurred in the call to 
RetrievePropertiesEx
  2014-07-08 05:27:44.310 30556 TRACE nova.openstack.common.threadgroup

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1338881/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1316822] Re: soft reboot of instance does not ensure iptables rules are present

2014-07-07 Thread Nathan Kinder
** Also affects: ossn
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1316822

Title:
  soft reboot of instance does not ensure iptables rules are present

Status in OpenStack Compute (Nova):
  New
Status in OpenStack Security Notes:
  New

Bug description:
  The iptables rules needed to implement instance security group rules
  get inserted by the _create_domain_and_network function in
  nova/virt/libvirt/driver.py

  This function is called by the following functions: _hard_reboot,
  resume and spawn (also in a couple of migration related functions).

  Doing nova reboot instance_id only does a soft reboot
  (_soft_reboot) and assumes that the rules are already present and
  therefore does not check or try to add them.
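
  A minimal sketch of the control flow described above; the function names
  mirror the report, but the bodies are placeholders rather than nova's
  actual implementation:

  # Sketch only: illustrates which reboot path re-applies the security group
  # iptables rules according to this report; not nova's real code.
  def _create_domain_and_network(instance):
      print("re-applying security group iptables rules for", instance)

  def _soft_reboot(instance):
      # assumes the rules are already in place; does not re-check them
      print("soft rebooting", instance)

  def _hard_reboot(instance):
      _create_domain_and_network(instance)
      print("hard rebooting", instance)

  def reboot(instance, reboot_type="SOFT"):
      if reboot_type == "SOFT":
          _soft_reboot(instance)   # rules are NOT re-applied here
      else:
          _hard_reboot(instance)   # rules are re-applied here

  reboot("instance-0001", "SOFT")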

  If the instance is stopped (nova stop instance_id) and nova-compute
  is restarted (for example, for maintenance or a problem), the iptables
  rules are removed, as observed in the output of iptables -S.

  If the instance is started via nova reboot instance_id, the rules are
  NOT reapplied until a service nova-compute restart is issued. I have
  reports that this may affect nova start instance_id as well.

  Depending on whether the Cloud is public facing, this opens up a
  potentially huge security vulnerability, as an instance can be powered
  on without being protected by any security group rules (not even the
  sg-fallback rule). This happens unbeknownst to the instance owner or
  Cloud operators unless they specifically monitor for this situation.

  The code should not do a soft reboot/start; it should error out or fall
  back to a resume (start) or hard reboot if it detects that the domain is
  not running.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1316822/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp