[Yahoo-eng-team] [Bug 1452983] [NEW] can't add/del parameter in nested stack during stack-update

2015-05-07 Thread neil nie
Public bug reported:

It looks like a nested stack doesn't allow parameter changes; the update fails
at after_props.validate().

Is it the intention to disable this, since update_allowed_properties
defaults to None?

But it should be possible to allow this for a nested stack, since it is a
special resource constructed from a base resource.

2015-05-08 04:07:08.164 18023 TRACE heat.engine.resource   File 
"/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/heat/engine/resource.py",
 line 439, in _action_recorder
2015-05-08 04:07:08.164 18023 TRACE heat.engine.resource yield
2015-05-08 04:07:08.164 18023 TRACE heat.engine.resource   File 
"/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/heat/engine/resource.py",
 line 688, in update
2015-05-08 04:07:08.164 18023 TRACE heat.engine.resource 
after_props.validate()
2015-05-08 04:07:08.164 18023 TRACE heat.engine.resource   File 
"/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/heat/engine/properties.py",
 line 375, in validate
2015-05-08 04:07:08.164 18023 TRACE heat.engine.resource raise 
exception.StackValidationFailed(message=msg)
2015-05-08 04:07:08.164 18023 TRACE heat.engine.resource StackValidationFailed: 
Unknown Property str_len
2015-05-08 04:07:08.164 18023 TRACE heat.engine.resource 
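
A minimal sketch of the failing check, assuming a simplified model of what
heat/engine/properties.py does (illustrative, not the actual Heat code): any
supplied property that is missing from the resource's schema raises, which is
why a parameter newly added to the nested stack fails the update.

class StackValidationFailed(Exception):
    pass

class Properties(object):
    def __init__(self, schema, data):
        self.schema = schema  # properties the resource type declares
        self.data = data      # properties supplied by the updated template

    def validate(self):
        # Any property not declared in the schema is rejected outright,
        # mirroring the "Unknown Property str_len" failure above.
        for key in self.data:
            if key not in self.schema:
                raise StackValidationFailed("Unknown Property %s" % key)

# Properties({'existing': {}}, {'existing': 1, 'str_len': 5}).validate()
# raises: StackValidationFailed: Unknown Property str_len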

Regards,
Neil

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1452983

Title:
  can't add/del parameter in nested stack during stack-update

Status in OpenStack Compute (Nova):
  New

Bug description:
  It looks like a nested stack doesn't allow parameter changes; the update
  fails at after_props.validate().

  Is it the intention to disable this, since update_allowed_properties
  defaults to None?

  But it should be possible to allow this for a nested stack, since it is a
  special resource constructed from a base resource.

  2015-05-08 04:07:08.164 18023 TRACE heat.engine.resource   File 
"/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/heat/engine/resource.py",
 line 439, in _action_recorder
  2015-05-08 04:07:08.164 18023 TRACE heat.engine.resource yield
  2015-05-08 04:07:08.164 18023 TRACE heat.engine.resource   File 
"/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/heat/engine/resource.py",
 line 688, in update
  2015-05-08 04:07:08.164 18023 TRACE heat.engine.resource 
after_props.validate()
  2015-05-08 04:07:08.164 18023 TRACE heat.engine.resource   File 
"/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/heat/engine/properties.py",
 line 375, in validate
  2015-05-08 04:07:08.164 18023 TRACE heat.engine.resource raise 
exception.StackValidationFailed(message=msg)
  2015-05-08 04:07:08.164 18023 TRACE heat.engine.resource 
StackValidationFailed: Unknown Property str_len
  2015-05-08 04:07:08.164 18023 TRACE heat.engine.resource 

  Regards,
  Neil

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1452983/+subscriptions



[Yahoo-eng-team] [Bug 1452955] Re: Client does not catch exceptions when making a token authentication request

2015-05-07 Thread Morgan Fainberg
** Also affects: python-keystoneclient
   Importance: Undecided
   Status: New

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1452955

Title:
  Client does not catch exceptions when making a token authentication
  request

Status in OpenStack Identity (Keystone):
  Invalid
Status in Python client library for Keystone:
  New

Bug description:
  keystoneclient.auth.identity.v3.token.TokenMethod does a
  session.post() without catching exceptions.

  In my case, I had a misconfigured DNS, which meant that this post()
  never succeeded; however, the error that ends up going back to Horizon
  is a simplified:

  "Login failed: An error occurred authenticating. Please try again
  later."

  which makes no mention of the underlying cause, nor do the keystone
  logs. This caused me an enormous amount of wasted time debugging; the
  error could certainly be improved here!

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1452955/+subscriptions



[Yahoo-eng-team] [Bug 1452976] Re: CentOS7 kilo keystone cannot create endpoint with ImportError: No module named oslo_utils

2015-05-07 Thread Morgan Fainberg
This is an issue with RDO and not Keystone itself based on the comment
here and the mailing list thread. I am marking this as invalid.

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1452976

Title:
  CentOS7 kilo keystone cannot create endpoint with ImportError: No
  module named oslo_utils

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  I just followed the guide
  http://docs.openstack.org/kilo/install-guide/install/yum/content/ and, when
  I create an endpoint with:

  openstack endpoint create \
    --publicurl http://controller:5000/v2.0 \
    --internalurl http://controller:5000/v2.0 \
    --adminurl http://controller:35357/v2.0 \
    --region RegionOne \
    identity

  I get the error: ImportError: No module named oslo_utils.
  And it now works after replacing the repository from
  http://rdoproject.org/repos/openstack-kilo/rdo-testing-kilo.rpm to
  http://rdoproject.org/repos/openstack-kilo/rdo-testing-kilo.rpm, as advised
  by Christian Berendt.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1452976/+subscriptions



[Yahoo-eng-team] [Bug 1452976] [NEW] CentOS7 kilo keystone cannot create endpoint with ImportError: No module named oslo_utils

2015-05-07 Thread Walter
Public bug reported:

I just followed the guide
http://docs.openstack.org/kilo/install-guide/install/yum/content/ and, when I
create an endpoint with:

openstack endpoint create \
  --publicurl http://controller:5000/v2.0 \
  --internalurl http://controller:5000/v2.0 \
  --adminurl http://controller:35357/v2.0 \
  --region RegionOne \
  identity

I get the error: ImportError: No module named oslo_utils.
And it now works after replacing the repository from
http://rdoproject.org/repos/openstack-kilo/rdo-testing-kilo.rpm to
http://rdoproject.org/repos/openstack-kilo/rdo-testing-kilo.rpm, as advised
by Christian Berendt.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1452976

Title:
  CentOS7 kilo keystone cannot create endpoint with ImportError: No
  module named oslo_utils

Status in OpenStack Identity (Keystone):
  New

Bug description:
  I just followed the guide
  http://docs.openstack.org/kilo/install-guide/install/yum/content/ and, when
  I create an endpoint with:

  openstack endpoint create \
    --publicurl http://controller:5000/v2.0 \
    --internalurl http://controller:5000/v2.0 \
    --adminurl http://controller:35357/v2.0 \
    --region RegionOne \
    identity

  I get the error: ImportError: No module named oslo_utils.
  And it now works after replacing the repository from
  http://rdoproject.org/repos/openstack-kilo/rdo-testing-kilo.rpm to
  http://rdoproject.org/repos/openstack-kilo/rdo-testing-kilo.rpm, as advised
  by Christian Berendt.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1452976/+subscriptions



[Yahoo-eng-team] [Bug 1452955] [NEW] Client does not catch exceptions when making a token authentication request

2015-05-07 Thread Julian Edwards
Public bug reported:

keystoneclient.auth.identity.v3.token.TokenMethod does a session.post()
without catching exceptions.

In my case, I had a misconfigured DNS, which meant that this post() never
succeeded; however, the error that ends up going back to Horizon is a
simplified:

"Login failed: An error occurred authenticating. Please try again
later."

which makes no mention of the underlying cause, nor do the keystone
logs. This caused me an enormous amount of wasted time debugging; the
error could certainly be improved here!
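
A hedged sketch of the requested improvement (the function name and call
shape are illustrative assumptions, not the actual keystoneclient API):
catch transport-level errors around the token POST and re-raise with the
underlying cause attached.

import requests

def post_token_request(url, body):
    # Illustrative only: keystoneclient uses its own Session rather than
    # requests directly. The point is to surface the root cause (e.g. a
    # DNS failure) instead of a generic "An error occurred authenticating".
    try:
        return requests.post(url, json=body, timeout=10)
    except requests.exceptions.RequestException as exc:
        raise RuntimeError("Token request to %s failed: %s" % (url, exc))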

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1452955

Title:
  Client does not catch exceptions when making a token authentication
  request

Status in OpenStack Identity (Keystone):
  New

Bug description:
  keystoneclient.auth.identity.v3.token.TokenMethod does a
  session.post() without catching exceptions.

  In my case, I had a misconfigured DNS, which meant that this post()
  never succeeded; however, the error that ends up going back to Horizon
  is a simplified:

  "Login failed: An error occurred authenticating. Please try again
  later."

  which makes no mention of the underlying cause, nor do the keystone
  logs. This caused me an enormous amount of wasted time debugging; the
  error could certainly be improved here!

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1452955/+subscriptions



[Yahoo-eng-team] [Bug 1452886] Re: Port stuck in BUILD state results in limited instance connectivity

2015-05-07 Thread Kevin Benton
** Project changed: neutron => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1452886

Title:
  Port stuck in BUILD state results in limited instance connectivity

Status in OpenStack Compute (Nova):
  New

Bug description:
  I am currently experiencing (random) cases of instances that are spun
  up having limited connectivity. There are about 650 instances in the
  environment and 45 networks.

  Network Info:
  - ML2/LinuxBridge/l2pop
  - VXLAN networks

  Symptoms:
  - On the local compute node, the instance tap is in the bridge. Everything 
looks good.
  - Instance is reachable from some, but not all, instances/devices in the same 
subnet across all compute and network nodes
  - On some compute nodes and network nodes, the ARP and FDB entries for the 
instance do not exist. Instances/devices on these nodes cannot communicate with 
the new instance.
  - No errors are logged

  Here are some observations for the non-working instances:
  - The corresponding Neutron port is stuck in a BUILD state
  - The binding:host_id value of the port (i.e. compute-xxx) does not match
  the OS-EXT-SRV-ATTR:host value of the instance (i.e. compute-zzz). For
  working instances, these values match.

  I am unable to replicate this consistently at this time, nor am I sure
  where to begin pinpointing the issue. Any help is appreciated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1452886/+subscriptions



[Yahoo-eng-team] [Bug 1420192] Re: Nova interface-attach command has optional arguments to add network details. It should be positional arguments otherwise command fails.

2015-05-07 Thread melanie witt
After chatting with Park in IRC, we determined there isn't a bug in
nova. I was led astray because the environment where I saw the issue has
cells enabled, which might be behaving differently than non-cells. If
that is indeed the case, it will be a different bug than this one.

** No longer affects: nova

** Tags removed: api

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1420192

Title:
  Nova interface-attach command has optional arguments to add network
  details. It should be positional arguments otherwise command fails.

Status in Python client library for Nova:
  Confirmed

Bug description:
  On execution of the nova interface-attach command without any optional
  arguments, the command fails.

  root@ubuntu:~# nova interface-attach vm1
  ERROR (ClientException): Failed to attach interface (HTTP 500) (Request-ID:
  req-ebca9af6-8d2f-4f68-8a80-ad002b03c2fc)
  root@ubuntu:~#

  To add a network interface, at least one of the optional arguments must be
  provided. Thus, the help message needs to be modified.

  root@ubuntu:~# nova help interface-attach
  usage: nova interface-attach [--port-id <port_id>] [--net-id <net_id>]
                               [--fixed-ip <fixed_ip>]
                               <server>

  Attach a network interface to a server.

  Positional arguments:
    <server>  Name or ID of server.

  Optional arguments:
    --port-id <port_id>      Port ID.
    --net-id <net_id>        Network ID.
    --fixed-ip <fixed_ip>    Requested fixed IP.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-novaclient/+bug/1420192/+subscriptions



[Yahoo-eng-team] [Bug 1452935] [NEW] Full stack tests leave agent resources behind

2015-05-07 Thread Assaf Muller
Public bug reported:

Full stack tests use AsyncProcess to manage agents. When tearing down
the test environment they issue kill -9, stopping the agent dead,
sometimes before it has had time to clean up resources. Specifically in
fullstack/test_l3_agent.TestLegacyL3Agent.test_namespace_exists, the
router is deleted at the end of the test and the test is concluded.
Sometimes the router deletion RPC message is read by the agent in time,
sometimes it isn't. When it is, the agent might not finish cleaning up
the router in time, leaving behind resources such as the namespace,
metadata proxy process, etc.

For example, here's the last line in one run of the l3 test agent during the 
test:
2015-05-07 18:42:03.444 12337 DEBUG neutron.agent.l3.agent 
[req-e22bbdee-96b2-46b1-a995-ac74f4cc32d3 - - - - -] Got router deleted 
notification for 17cf2d41-a7e2-4c16-a50e-a2c917b2a61f router_deleted 
/opt/openstack/neutron/neutron/agent/l3/agent.py:358
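
One possible mitigation, as a sketch (assuming the agent handle behaves like
a subprocess.Popen object; this is not the existing AsyncProcess API): send
SIGTERM first and fall back to SIGKILL only after a grace period.

import signal
import time

def stop_gracefully(proc, timeout=10):
    # Ask the agent to shut down cleanly so it can remove namespaces,
    # metadata proxy processes, etc.
    proc.send_signal(signal.SIGTERM)
    deadline = time.time() + timeout
    while proc.poll() is None and time.time() < deadline:
        time.sleep(0.5)
    if proc.poll() is None:
        proc.kill()  # last resort: the old "kill -9" behavior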

** Affects: neutron
 Importance: Medium
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1452935

Title:
  Full stack tests leave agent resources behind

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Full stack tests use AsyncProcess to manage agents. When tearing down
  the test environment they issue kill -9, stopping the agent dead,
  sometimes before it has had time to clean up resources. Specifically in
  fullstack/test_l3_agent.TestLegacyL3Agent.test_namespace_exists, the
  router is deleted at the end of the test and the test is concluded.
  Sometimes the router deletion RPC message is read by the agent in
  time, sometimes it isn't. When it is, the agent might not finish
  cleaning up the router in time, leaving behind resources such as the
  namespace, metadata proxy process, etc.

  For example, here's the last line in one run of the l3 test agent during the 
test:
  2015-05-07 18:42:03.444 12337 DEBUG neutron.agent.l3.agent 
[req-e22bbdee-96b2-46b1-a995-ac74f4cc32d3 - - - - -] Got router deleted 
notification for 17cf2d41-a7e2-4c16-a50e-a2c917b2a61f router_deleted 
/opt/openstack/neutron/neutron/agent/l3/agent.py:358

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1452935/+subscriptions



[Yahoo-eng-team] [Bug 1452903] [NEW] KeyError in ovs_neutron_agent._bind_devices

2015-05-07 Thread Matt Riedemann
Public bug reported:

Seeing this all over the gate lately:

http://logs.openstack.org/95/176395/1/gate/gate-tempest-dsvm-neutron-
full/37e1139/logs/screen-q-agt.txt.gz?level=TRACE#_2015-05-07_19_27_38_157

2015-05-07 19:27:38.157 ERROR 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-0266e813-4a74-47c6-a1ed-6acc81317cb9 None None] Error while processing VIF 
ports
2015-05-07 19:27:38.157 21868 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Traceback (most recent call 
last):
2015-05-07 19:27:38.157 21868 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/new/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py",
 line 1637, in rpc_loop
2015-05-07 19:27:38.157 21868 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent ovs_restarted)
2015-05-07 19:27:38.157 21868 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/new/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py",
 line 1422, in process_network_ports
2015-05-07 19:27:38.157 21868 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
self._bind_devices(need_binding_devices)
2015-05-07 19:27:38.157 21868 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/new/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py",
 line 736, in _bind_devices
2015-05-07 19:27:38.157 21868 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent lvm = 
self.local_vlan_map[port_detail['network_id']]
2015-05-07 19:27:38.157 21868 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent KeyError: 
u'323b3bcb-1530-4c76-83b1-49a35f255179'
2015-05-07 19:27:38.157 21868 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRXJyb3Igd2hpbGUgcHJvY2Vzc2luZyBWSUYgcG9ydHNcIiBBTkQgbWVzc2FnZTpcIktleUVycm9yXCIgQU5EIHRhZ3M6XCJzY3JlZW4tcS1hZ3QudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MzEwMzIyNzE5MTR9

2321 hits in 7 days. It's not directly related to failures (the jobs
are still 93% successful), but it has been spiking in the last few days.
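
A defensive sketch of one possible fix (illustrative, not the merged patch):
look the network up with .get() and skip ports whose network has no local
VLAN mapping yet, instead of raising KeyError.

def bind_devices_safely(local_vlan_map, need_binding):
    # Tolerate networks that have not been mapped to a local VLAN yet,
    # e.g. because the network was removed while the binding was in flight.
    for port_detail in need_binding:
        lvm = local_vlan_map.get(port_detail['network_id'])
        if lvm is None:
            print("no local VLAN for network %s, skipping port %s"
                  % (port_detail['network_id'], port_detail.get('vif_port')))
            continue
        # ... proceed with binding the port using lvm ...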

** Affects: neutron
 Importance: Undecided
 Status: Confirmed

** Changed in: neutron
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1452903

Title:
  KeyError in ovs_neutron_agent._bind_devices

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  Seeing this all over the gate lately:

  http://logs.openstack.org/95/176395/1/gate/gate-tempest-dsvm-neutron-
  full/37e1139/logs/screen-q-agt.txt.gz?level=TRACE#_2015-05-07_19_27_38_157

  2015-05-07 19:27:38.157 ERROR 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-0266e813-4a74-47c6-a1ed-6acc81317cb9 None None] Error while processing VIF 
ports
  2015-05-07 19:27:38.157 21868 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Traceback (most recent call 
last):
  2015-05-07 19:27:38.157 21868 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/new/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py",
 line 1637, in rpc_loop
  2015-05-07 19:27:38.157 21868 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent ovs_restarted)
  2015-05-07 19:27:38.157 21868 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/new/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py",
 line 1422, in process_network_ports
  2015-05-07 19:27:38.157 21868 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
self._bind_devices(need_binding_devices)
  2015-05-07 19:27:38.157 21868 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/new/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py",
 line 736, in _bind_devices
  2015-05-07 19:27:38.157 21868 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent lvm = 
self.local_vlan_map[port_detail['network_id']]
  2015-05-07 19:27:38.157 21868 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent KeyError: 
u'323b3bcb-1530-4c76-83b1-49a35f255179'
  2015-05-07 19:27:38.157 21868 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRXJyb3Igd2hpbGUgcHJvY2Vzc2luZyBWSUYgcG9ydHNcIiBBTkQgbWVzc2FnZTpcIktleUVycm9yXCIgQU5EIHRhZ3M6XCJzY3JlZW4tcS1hZ3QudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MzEwMzIyNzE5MTR9

  2321 hits in 7 days. It's not directly related to failures (the jobs
  are still 93% successful), but it has been spiking in the last few days.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1452903/+subscriptions


[Yahoo-eng-team] [Bug 1452886] [NEW] Port stuck in BUILD state results in limited instance connectivity

2015-05-07 Thread James Denton
Public bug reported:

I am currently experiencing (random) cases of instances that are spun up
having limited connectivity. There are about 650 instances in the
environment and 45 networks.

Network Info:
- ML2/LinuxBridge/l2pop
- VXLAN networks

Symptoms:
- On the local compute node, the instance tap is in the bridge. Everything 
looks good.
- Instance is reachable from some, but not all, instances/devices in the same 
subnet across all compute and network nodes
- On some compute nodes and network nodes, the ARP and FDB entries for the 
instance do not exist. Instances/devices on these nodes cannot communicate with 
the new instance.
- No errors are logged

Here are some observations for the non-working instances:
- The corresponding Neutron port is stuck in a BUILD state
- The binding:host_id value of the port (i.e. compute-xxx) does not match the
OS-EXT-SRV-ATTR:host value of the instance (i.e. compute-zzz). For working
instances, these values match.

I am unable to replicate this consistently at this time, nor am I sure
where to begin pinpointing the issue. Any help is appreciated.
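
To spot affected instances, a small helper can compare the two values
described above. This is a sketch assuming admin-authenticated
python-novaclient and python-neutronclient handles are already available.

def find_mismatched_ports(nova, neutron):
    # Flag ports stuck in BUILD, or bound to a different host than the
    # one the instance actually runs on.
    for server in nova.servers.list():
        nova_host = getattr(server, 'OS-EXT-SRV-ATTR:host', None)
        for port in neutron.list_ports(device_id=server.id)['ports']:
            if port['status'] == 'BUILD' or port['binding:host_id'] != nova_host:
                print(server.id, port['id'], port['status'],
                      port['binding:host_id'], nova_host)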

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1452886

Title:
  Port stuck in BUILD state results in limited instance connectivity

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I am currently experiencing (random) cases of instances that are spun
  up having limited connectivity. There are about 650 instances in the
  environment and 45 networks.

  Network Info:
  - ML2/LinuxBridge/l2pop
  - VXLAN networks

  Symptoms:
  - On the local compute node, the instance tap is in the bridge. Everything 
looks good.
  - Instance is reachable from some, but not all, instances/devices in the same 
subnet across all compute and network nodes
  - On some compute nodes and network nodes, the ARP and FDB entries for the 
instance do not exist. Instances/devices on these nodes cannot communicate with 
the new instance.
  - No errors are logged

  Here are some observations for the non-working instances:
  - The corresponding Neutron port is stuck in a BUILD state
  - The binding:host_id value of the port (i.e. compute-xxx) does not match
  the OS-EXT-SRV-ATTR:host value of the instance (i.e. compute-zzz). For
  working instances, these values match.

  I am unable to replicate this consistently at this time, nor am I sure
  where to begin pinpointing the issue. Any help is appreciated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1452886/+subscriptions



[Yahoo-eng-team] [Bug 1441922] Re: Keystone V3 authentication return BadRequest: Malformed request url

2015-05-07 Thread Jin Liu
No code change is needed on the Cinder/Nova server, just some configuration
to use keystone v3 authentication.

** Changed in: cinder
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1441922

Title:
  Keystone V3 authentication return BadRequest: Malformed request url

Status in Cinder:
  Invalid
Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  When using keystone V3 authentication for cinder and nova (see comment #3),
  I got the error "BadRequest: Malformed request url (HTTP 400)".
  I am testing on the Juno release; my keystone v3 env is like this:

  export OS_USERNAME="admin"
  export OS_PASSWORD="password"
  export OS_DOMAIN_NAME=default
  export OS_AUTH_URL="http://$MY_HOST:35357/v3";
  export OS_IDENTITY_API_VERSION=3

  My cinder public endpoint URL is like
  http://**.**.**.**:8776/v1/cbe4b1d87fbb4318be379a79a570b7ec (I hid the real
  IP).
  When running "openstack --debug volume list" or "openstack --debug volume
  create --size 1 jin", I got this BadRequest error. From the debug info, the
  error comes from the cinder server. I added logging in the
  cinder/api/openstack/wsgi.py function _process_stack() and found that
  context.project_id is None while project_id has a value, so this check
  returns the error:

  if (context and project_id and (project_id != context.project_id)):
      msg = _("Malformed request url")
      return Fault(webob.exc.HTTPBadRequest(explanation=msg))

  I compared with another server using keystone V2 authentication, where
  context.project_id is the same as project_id. Maybe this is the difference:
  with the v2 server the request carries one more project id header, as in
  curl -i -H "X-Auth-Project-Id: admin".
  The cinder context seems to come from cinder/api/middleware/auth.py; its
  project_id may not be assigned a value in the keystone v3 authentication
  scenario.

  ERROR log is as below:

  REQ: curl -i
  http://**.**.**.**:8776/v1/cbe4b1d87fbb4318be379a79a570b7ec/volumes/detail
  -X GET -H "User-Agent: python-cinderclient" -H "Accept:
  application/json" -H "X-Auth-Token: e883e05a887144d4ae70151c976ce666"

  INFO: requests.packages.urllib3.connectionpool Starting new HTTP connection 
(1): **.**.**.**
  DEBUG: requests.packages.urllib3.connectionpool "GET 
/v1/cbe4b1d87fbb4318be379a79a570b7ec/volumes/detail HTTP/1.1" 400 65
  DEBUG: cinderclient.client RESP: [400] {'date': 'Thu, 09 Apr 2015 00:35:30 
GMT', 'content-length': '65', 'content-type': 'application/json; 
charset=UTF-8', 'x-compute-request-id': 
'req-39a96150-b9ab-4753-8b02-d5730492b288', 'x-openstack-request-id': 
'req-39a96150-b9ab-4753-8b02-d5730492b288'}
  RESP BODY: {"badRequest": {"message": "Malformed request url", "code": 400}}

  ERROR: openstack Malformed request url (HTTP 400) (Request-ID: 
req-39a96150-b9ab-4753-8b02-d5730492b288)
  Traceback (most recent call last):
    File "/usr/lib/python2.7/site-packages/cliff/app.py", line 280, in 
run_subcommand
  result = cmd.run(parsed_args)
    File "/usr/lib/python2.7/site-packages/cliff/display.py", line 91, in run
  column_names, data = self.take_action(parsed_args)
    File 
"/usr/lib/python2.7/site-packages/openstackclient/volume/v1/volume.py", line 
255, in take_action
  data = volume_client.volumes.list(search_opts=search_opts)
    File "/usr/lib/python2.7/site-packages/cinderclient/v1/volumes.py", line 
220, in list
  "volumes")
    File "/usr/lib/python2.7/site-packages/cinderclient/base.py", line 70, in 
_list
  resp, body = self.api.client.get(url)
    File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 302, 
in get
  return self._cs_request(url, 'GET', **kwargs)
    File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 269, 
in _cs_request
  **kwargs)
    File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 252, 
in request
  raise exceptions.from_response(resp, body)
  BadRequest: Malformed request url (HTTP 400) (Request-ID: 
req-39a96150-b9ab-4753-8b02-d5730492b288)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1441922/+subscriptions



[Yahoo-eng-team] [Bug 1450625] Re: common service chaining driver API

2015-05-07 Thread cathy Hong Zhang
** Changed in: neutron
   Status: Opinion => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1450625

Title:
  common service chaining driver API

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  This feature/bug is related to bug #1450617 (Neutron extension to
  support service chaining)

  Bug #1450617 is to add a neutron port chaining API and an associated
  "neutron port chain manager" to support service chaining functionality.
  Between the "neutron port chain manager" and the underlying service chain
  drivers, a common service chain driver API shim layer is needed to allow
  different types of service chain drivers (e.g. an OVS driver, different
  SDN controller drivers) to be integrated into Neutron. Different service
  chain drivers may have different ways of constructing the service chain
  path and use different data path encapsulation and transport to steer the
  flow through the chain path. With one common interface between the Neutron
  service chain manager and various vendor-specific drivers, the driver
  design/implementation can be changed without changing the Neutron Service
  Chain Manager and the interface APIs.

  This interface should include the following entities (a sketch follows the
  list):

   * An ordered list of service function instance clusters. Each service
     instance cluster represents a group of like service function instances
     which can be used for load distribution.
   * Traffic flow classification rules: a set of flow descriptors.
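
  As a sketch of what such a shim could look like (purely illustrative; this
  is not merged Neutron code), the driver interface might expose the two
  entities above directly:

  import abc

  class ServiceChainDriver(object):
      """Hypothetical common driver API between the port chain manager
      and vendor drivers (OVS, SDN controllers, ...)."""

      __metaclass__ = abc.ABCMeta  # Python 2 style, matching the era

      @abc.abstractmethod
      def create_port_chain(self, context, instance_clusters, flow_classifiers):
          """instance_clusters: ordered list of service-instance groups,
          each usable for load distribution; flow_classifiers: flow
          descriptors selecting the traffic steered into the chain."""

      @abc.abstractmethod
      def delete_port_chain(self, context, chain_id):
          """Tear down the datapath steering for a chain."""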

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1450625/+subscriptions



[Yahoo-eng-team] [Bug 1452840] [NEW] libvirt: nova's detach_volume silently fails sometimes

2015-05-07 Thread Nicolas Simonds
Public bug reported:

This behavior has been observed on the following platforms:

* Nova Icehouse, Ubuntu 12.04, QEMU 1.5.3, libvirt 1.1.3.5, with the Cinder
Icehouse NFS driver, CirrOS 0.3.2 guest
* Nova Icehouse, Ubuntu 12.04, QEMU 1.5.3, libvirt 1.1.3.5, with the Cinder
Icehouse RBD (Ceph) driver, CirrOS 0.3.2 guest
* Nova master, Ubuntu 14.04, QEMU 2.0.0, libvirt 1.2.2, with the Cinder master
iSCSI driver, CirrOS 0.3.2 guest

Nova's "detach_volume" fires the detach method into libvirt, which
claims success, but the device is still attached according to "virsh
domblklist".  Nova then finishes the teardown, releasing the resources,
which then causes the subsequent attach to fail.

This appears to be a race condition, in that it does occasionally work
fine.

Steps to Reproduce:

This script will usually trigger the error condition:

#!/bin/bash -vx

: Setup
img=$(glance image-list --disk-format ami | awk '/cirros-0.3.2-x86_64-uec/ 
{print $2}')
vol1_id=$(cinder create 1 | awk '($2=="id"){print $4}')
sleep 5

: Launch
nova boot --flavor m1.tiny --image "$img" --block-device 
source=volume,id="$vol1_id",dest=volume,shutdown=preserve --poll test

: Measure
nova show test | grep "volumes_attached.*$vol1_id"

: Poke the bear
nova volume-detach test "$vol1_id"
sudo virsh list --all --uuid | xargs -r -n 1 sudo virsh domblklist
sleep 10
sudo virsh list --all --uuid | xargs -r -n 1 sudo virsh domblklist
vol2_id=$(cinder create 1 | awk '($2=="id"){print $4}')
nova volume-attach test "$vol2_id"
sleep 1

: Measure again
nova show test | grep "volumes_attached.*$vol2_id"

Expected behavior:

The volumes attach/detach/attach properly

Actual behavior:

The second attachment fails, and n-cpu throws the following exception:

Failed to attach volume at mountpoint: /dev/vdb
Traceback (most recent call last):
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 1057, in attach_volume
    virt_dom.attachDeviceFlags(conf.to_xml(), flags)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 183, in doit
    result = proxy_call(self._autowrap, f, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 141, in proxy_call
    rv = execute(f, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 122, in execute
    six.reraise(c, e, tb)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 80, in tworker
    rv = meth(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/libvirt.py", line 517, in attachDeviceFlags
    if ret == -1: raise libvirtError ('virDomainAttachDeviceFlags() failed', dom=self)
libvirtError: operation failed: target vdb already exists

Workaround:

"sudo virsh detach-disk $SOME_UUID $SOME_DISK_ID" appears to cause the
guest to properly detach the device, and also seems to ward off whatever
gremlins caused the problem in the first place; i.e., the problem gets
much less likely to present itself after firing a virsh command.
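
A sketch of a possible verification step (an assumption, not Nova's actual
code; dom is a libvirt virDomain handle): poll the domain XML until the
target device disappears before releasing the volume's resources.

import time
from xml.etree import ElementTree

def wait_for_detach(dom, target_dev, timeout=30):
    # Returns True once the device is really gone from the guest,
    # mirroring what "virsh domblklist" reports.
    deadline = time.time() + timeout
    while time.time() < deadline:
        targets = [t.get('dev') for t in
                   ElementTree.fromstring(dom.XMLDesc(0)).iter('target')]
        if target_dev not in targets:
            return True
        time.sleep(1)
    return False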

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: libvirt volumes

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1452840

Title:
  libvirt: nova's detach_volume silently fails sometimes

Status in OpenStack Compute (Nova):
  New

Bug description:
  This behavior has been observed on the following platforms:

  * Nova Icehouse, Ubuntu 12.04, QEMU 1.5.3, libvirt 1.1.3.5, with the Cinder
  Icehouse NFS driver, CirrOS 0.3.2 guest
  * Nova Icehouse, Ubuntu 12.04, QEMU 1.5.3, libvirt 1.1.3.5, with the Cinder
  Icehouse RBD (Ceph) driver, CirrOS 0.3.2 guest
  * Nova master, Ubuntu 14.04, QEMU 2.0.0, libvirt 1.2.2, with the Cinder
  master iSCSI driver, CirrOS 0.3.2 guest

  Nova's "detach_volume" fires the detach method into libvirt, which
  claims success, but the device is still attached according to "virsh
  domblklist".  Nova then finishes the teardown, releasing the
  resources, which then causes the subsequent attach to fail.

  This appears to be a race condition, in that it does occasionally work
  fine.

  Steps to Reproduce:

  This script will usually trigger the error condition:

  #!/bin/bash -vx
  
  : Setup
  img=$(glance image-list --disk-format ami | awk 
'/cirros-0.3.2-x86_64-uec/ {print $2}')
  vol1_id=$(cinder create 1 | awk '($2=="id"){print $4}')
  sleep 5
  
  : Launch
  nova boot --flavor m1.tiny --image "$img" --block-device 
source=volume,id="$vol1_id",dest=volume,shutdown=preserve --poll test
  
  : Measure
  nova show test | grep "volumes_attached.*$vol1_id"
  
  : Poke the bear
  nova volume-detach test "$vol1_id"
  sudo virsh list --all --uuid | xargs -r -n 1 sudo virsh domblklist
  sleep 10
  sudo virsh list --al

[Yahoo-eng-team] [Bug 1452804] [NEW] _validate_mac_address does not check if its input is None

2015-05-07 Thread QthCN
Public bug reported:

In neutron/api/v2/attributes.py I found a TODO near line 170:

def _validate_no_whitespace(data):
    """Validates that input has no whitespace."""
    if re.search(r'\s', data):
        msg = _("'%s' contains whitespace") % data
        LOG.debug(msg)
        raise n_exc.InvalidInput(error_message=msg)
    return data


def _validate_mac_address(data, valid_values=None):
    try:
        valid_mac = netaddr.valid_mac(_validate_no_whitespace(data))
    except Exception:
        valid_mac = False
    # TODO(arosen): The code in this file should be refactored
    # so it catches the correct exceptions. _validate_no_whitespace
    # raises AttributeError if data is None.
    if not valid_mac:
        msg = _("'%s' is not a valid MAC address") % data
        LOG.debug(msg)
        return msg

_validate_mac_address will be called in neutron/api/v2/attributes.py :

'type:mac_address': _validate_mac_address


If data is None in _validate_no_whitespace, re.search(r'\s', data) will
raise an exception, and we get a TypeError instead of n_exc.InvalidInput:

>>> import re
>>> re.search(r'\s', None)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib64/python2.7/re.py", line 142, in search
    return _compile(pattern, flags).search(string)
TypeError: expected string or buffer

The traceback message can confuse the caller, so I think it is better to
wrap the TypeError into n_exc.InvalidInput with a clear error message for
the caller.
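
A sketch of the suggested wrapping, with simplified stand-ins for the
neutron helpers (this is not the actual patch):

import re

class InvalidInput(Exception):
    pass

def _validate_no_whitespace(data):
    """Validates that input is a string with no whitespace."""
    if not isinstance(data, str):  # also rejects None, avoiding TypeError
        raise InvalidInput("'%s' is not a valid string" % data)
    if re.search(r'\s', data):
        raise InvalidInput("'%s' contains whitespace" % data)
    return data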

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1452804

Title:
  _validate_mac_address does not check if its input is None

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In neutron/api/v2/attributes.py I found a TODO near line 170:

  def _validate_no_whitespace(data):
      """Validates that input has no whitespace."""
      if re.search(r'\s', data):
          msg = _("'%s' contains whitespace") % data
          LOG.debug(msg)
          raise n_exc.InvalidInput(error_message=msg)
      return data


  def _validate_mac_address(data, valid_values=None):
      try:
          valid_mac = netaddr.valid_mac(_validate_no_whitespace(data))
      except Exception:
          valid_mac = False
      # TODO(arosen): The code in this file should be refactored
      # so it catches the correct exceptions. _validate_no_whitespace
      # raises AttributeError if data is None.
      if not valid_mac:
          msg = _("'%s' is not a valid MAC address") % data
          LOG.debug(msg)
          return msg

  _validate_mac_address will be called in neutron/api/v2/attributes.py :

  'type:mac_address': _validate_mac_address

  
  If data is None in _validate_no_whitespace, re.search(r'\s', data) will
  raise an exception, and we get a TypeError instead of n_exc.InvalidInput:

  >>> import re
  >>> re.search(r'\s', None)
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
    File "/usr/lib64/python2.7/re.py", line 142, in search
      return _compile(pattern, flags).search(string)
  TypeError: expected string or buffer

  The traceback message can confuse the caller, so I think it is better to
  wrap the TypeError into n_exc.InvalidInput with a clear error message for
  the caller.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1452804/+subscriptions



[Yahoo-eng-team] [Bug 1241587] Re: Can not delete deleted tenant's default security group

2015-05-07 Thread Chris St. Pierre
** Changed in: nova
   Status: Invalid => Confirmed

** Changed in: nova
 Assignee: Jay Lau (jay-lau-513) => Chris St. Pierre (stpierre)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1241587

Title:
  Can not delete deleted tenant's default security group

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  $ keystone tenant-create --name foo
  +-------------+----------------------------------+
  |   Property  |              Value               |
  +-------------+----------------------------------+
  | description |                                  |
  |   enabled   |               True               |
  |      id     | 7149cdf591364e17a15e30229f2e023e |
  |     name    |               foo                |
  +-------------+----------------------------------+

  $ keystone user-create --name foo --pass foo --tenant foo
  +----------+----------------------------------+
  | Property |              Value               |
  +----------+----------------------------------+
  |  email   |                                  |
  | enabled  |               True               |
  |    id    | e5a5cd548ab446d5b787e6b37415707d |
  |   name   |               foo                |
  | tenantId | 7149cdf591364e17a15e30229f2e023e |
  +----------+----------------------------------+

  $ nova --os-username foo --os-password foo --os-tenant-id
  7149cdf591364e17a15e30229f2e023e secgroup-list
  +-----+---------+-------------+
  | Id  | Name    | Description |
  +-----+---------+-------------+
  | 111 | default | default     |
  +-----+---------+-------------+

  
  ### AS ADMIN ###
  $ keystone user-delete foo
  $ keystone tenant-delete foo
  $ nova secgroup-delete 111
  ERROR: Unable to delete system group 'default' (HTTP 400) (Request-ID: 
req-9f62f3fe-1cd7-46dc-801c-335900b6f903)

  As admin, when the tenant no longer exists, I should be able to delete
  the security group (maybe with an additional force argument).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1241587/+subscriptions



[Yahoo-eng-team] [Bug 1452788] [NEW] cannot delete image with status='deleted' and deleted=0

2015-05-07 Thread Salvo Davide Rapisarda
Public bug reported:

Hi,

in my OpenStack Juno I created a snapshot that went into status='deleted'
after creation.
When I try to delete this image I get a 404 "Image not found", and I cannot
delete the image in any way.

In the "images" table the record fields are:

*** 1. row ***
              id: 5c79d539-f256-422c-b376-4c27c1d06b96
            name: my_image
            size: 901840896
          status: deleted
       is_public: 0
      created_at: 2015-05-07 15:24:45
      updated_at: 2015-05-07 15:25:18
      deleted_at: NULL
         deleted: 0
     disk_format: qcow2
container_format: bare
        checksum: 4483088a8840fa463d518085ae515b7b
           owner: fdd21cc5a7b144b8920cc815ab8ea2a9
        min_disk: 1
         min_ram: 0
       protected: 0
    virtual_size: NULL


To replicate this issue, I created an image and changed its status (via a
MySQL UPDATE) from "active" to "deleted":

UPDATE images SET status='deleted' WHERE id =
'5c79d539-f256-422c-b376-4c27c1d06b96';

and if I run:

# glance image-delete 5c79d539-f256-422c-b376-4c27c1d06b96

the result is:

404 Not Found
Image 5c79d539-f256-422c-b376-4c27c1d06b96 not found.
(HTTP 404): Unable to delete image 5c79d539-f256-422c-b376-4c27c1d06b96

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: delete juno

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1452788

Title:
  cannot delete image with status='deleted' and deleted=0

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Hi,

  in my OpenStack Juno I created a snapshot that went into status='deleted'
  after creation.
  When I try to delete this image I get a 404 "Image not found", and I
  cannot delete the image in any way.

  In the "images" table the record fields are:

  *** 1. row ***
                id: 5c79d539-f256-422c-b376-4c27c1d06b96
              name: my_image
              size: 901840896
            status: deleted
         is_public: 0
        created_at: 2015-05-07 15:24:45
        updated_at: 2015-05-07 15:25:18
        deleted_at: NULL
           deleted: 0
       disk_format: qcow2
  container_format: bare
          checksum: 4483088a8840fa463d518085ae515b7b
             owner: fdd21cc5a7b144b8920cc815ab8ea2a9
          min_disk: 1
           min_ram: 0
         protected: 0
      virtual_size: NULL

  
  To replicate this issue, I created an image and changed its status (via a
  MySQL UPDATE) from "active" to "deleted":

  UPDATE images SET status='deleted' WHERE id =
  '5c79d539-f256-422c-b376-4c27c1d06b96';

  and if I run:

  # glance image-delete 5c79d539-f256-422c-b376-4c27c1d06b96

  the result is:

  404 Not Found
  Image 5c79d539-f256-422c-b376-4c27c1d06b96 not found.
  (HTTP 404): Unable to delete image 5c79d539-f256-422c-b376-4c27c1d06b96

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1452788/+subscriptions



[Yahoo-eng-team] [Bug 1357586] Re: volume type allows name with only white spaces

2015-05-07 Thread OpenStack Infra
Fix proposed to branch: master
Review: https://review.openstack.org/181027

** Changed in: horizon
   Status: Won't Fix => In Progress

** Changed in: horizon
 Assignee: Gloria Gu (gloria-gu) => Zhenguo Niu (niu-zglinux)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1357586

Title:
  volume type allows name with only white spaces

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  When creating a volume type, the name field is allowed to contain only
  white space.

  How to reproduce:

  Go to Admin -> Volumes and create a volume type with only white space as
  the name; the volume type shows up in the volume table with an empty name.

  Expected:

  The form should not allow an empty name when creating a volume type.
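
  A sketch of the kind of server-side check the form could apply (an
  illustrative Django form, not the actual Horizon class):

  from django import forms

  class VolumeTypeForm(forms.Form):
      name = forms.CharField(max_length=255)

      def clean_name(self):
          # Reject names that are empty once surrounding whitespace
          # is stripped.
          name = self.cleaned_data['name'].strip()
          if not name:
              raise forms.ValidationError(
                  "Name cannot be empty or whitespace.")
          return name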

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1357586/+subscriptions



[Yahoo-eng-team] [Bug 1452750] [NEW] dest_file in task convert is wrong

2015-05-07 Thread Flavio Percoco
Public bug reported:

https://github.com/openstack/glance/commit/1b144f4c12fd6da58d7c48348bf7bab5873388e9
#diff-f02c20aafcce326b4d31c938376f6c2aR78 -> head -> desk

The dest_path is not formatted correctly, which ends up converting the
image to a path that is completely ignored by other tasks.

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1452750

Title:
  dest_file in task convert is wrong

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  
https://github.com/openstack/glance/commit/1b144f4c12fd6da58d7c48348bf7bab5873388e9
  #diff-f02c20aafcce326b4d31c938376f6c2aR78 -> head -> desk

  The dest_path is not formatted correctly, which ends up converting
  the image to a path that is completely ignored by other tasks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1452750/+subscriptions



[Yahoo-eng-team] [Bug 1452737] [NEW] Full stack tests reuse devstack rabbit user

2015-05-07 Thread Assaf Muller
Public bug reported:

Currently the fullstack tests are hardcoded to reuse the stackrabbit user,
and assume that the password is 127.0.0.1. However, the password is chosen
by the user of devstack. If the same machine is expected to be usable for
devstack as well as for running the fullstack tests (not at the same time,
but still), then they should not use the same rabbit credentials. The
current approach makes it very hard to run the fullstack tests, as
developers tend to run stack.sh to configure their system, then run the
fullstack tests and expect them to work.
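
One way out, sketched under the assumption that rabbitmqctl is available to
the test harness (the naming scheme is illustrative): create a throwaway
user and vhost per run instead of reusing devstack's credentials.

import subprocess
import uuid

def make_rabbit_env():
    # A unique user/vhost per fullstack run avoids clashing with the
    # credentials devstack configured.
    name = 'fullstack-%s' % uuid.uuid4().hex[:8]
    subprocess.check_call(['rabbitmqctl', 'add_user', name, name])
    subprocess.check_call(['rabbitmqctl', 'add_vhost', name])
    subprocess.check_call(['rabbitmqctl', 'set_permissions', '-p', name,
                           name, '.*', '.*', '.*'])
    return name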

** Affects: neutron
 Importance: Medium
 Assignee: John Schwarz (jschwarz)
 Status: In Progress

** Changed in: neutron
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1452737

Title:
  Full stack tests reuse devstack rabbit user

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Currently the fullstack tests are hardcoded to reuse the stackrabbit
  user, and assume that the password is 127.0.0.1. However, the password is
  chosen by the user of devstack. If the same machine is expected to be
  usable for devstack as well as for running the fullstack tests (not at
  the same time, but still), then they should not use the same rabbit
  credentials. The current approach makes it very hard to run the fullstack
  tests, as developers tend to run stack.sh to configure their system, then
  run the fullstack tests and expect them to work.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1452737/+subscriptions



[Yahoo-eng-team] [Bug 1452731] [NEW] [data processing] job templates panel contains too little info

2015-05-07 Thread Ken Chen
Public bug reported:

Currently Job templates panel only contains two columns: Name and
Description. We at least should show Type to users.

** Affects: horizon
 Importance: Undecided
 Assignee: Ken Chen (ken-chen-i)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Ken Chen (ken-chen-i)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1452731

Title:
  [data processing] job templates panel contains too little info

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Currently the Job templates panel only contains two columns: Name and
  Description. We should at least show the Type to users.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1452731/+subscriptions



[Yahoo-eng-team] [Bug 1452718] [NEW] Create sg rule or delete sg rule, iptables can't be reloaded

2015-05-07 Thread shihanzhang
Public bug reported:

When we create a new sg rule or delete an sg rule, iptables can't be
reloaded on the compute node. This bug was introduced by the patch
https://review.openstack.org/118274.
I have found the reason; I will fix it tomorrow morning!

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1452718

Title:
  Create sg rule or delete sg rule, iptables can't be reloaded

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When we create a new sg rule or delete an sg rule, iptables can't be
  reloaded on the compute node. This bug was introduced by the patch
  https://review.openstack.org/118274.
  I have found the reason; I will fix it tomorrow morning!

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1452718/+subscriptions



[Yahoo-eng-team] [Bug 1268680] Re: Creating an image without container format queues image and fails with 400

2015-05-07 Thread Mike Fedosin
** Changed in: glance
 Assignee: (unassigned) => Mike Fedosin (mfedosin)

** Changed in: glance
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1268680

Title:
  Creating an image without container format queues image and fails with
  400

Status in OpenStack Image Registry and Delivery Service (Glance):
  Confirmed

Bug description:
  Description of problem:

  Creating an image from the CLI without --container-format queues the
  image and fails with 400.

  Request returned failure status.
  400 Bad Request
  Invalid container format 'None' for image.
  (HTTP 400)

  
  How reproducible:
  # glance --debug image-create --name cirros --disk-format qcow2 --file 
/tmp/cirros-image.qcow2 --progress
  
  [=>] 100%

  HTTP/1.1 400 Bad Request
  date: Tue, 07 Jan 2014 14:13:54 GMT
  content-length: 64
  content-type: text/plain; charset=UTF-8
  x-openstack-request-id: req-11b4ecad-3a8d-4e44-9c37-a4d843805889

  400 Bad Request

  Invalid container format 'None' for image.


  # glance image-list
  
  +--------------------------------------+--------+-------------+------------------+-----------+--------+
  | ID                                   | Name   | Disk Format | Container Format | Size      | Status |
  +--------------------------------------+--------+-------------+------------------+-----------+--------+
  | b2490dd2-b535-4b98-8647-cca428a63e01 | cirros | qcow2       |                  | 307962880 | queued |
  +--------------------------------------+--------+-------------+------------------+-----------+--------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1268680/+subscriptions



[Yahoo-eng-team] [Bug 1452697] [NEW] Error Instance can't be deleted when its host is NULL

2015-05-07 Thread yjx
Public bug reported:

When I created an instance, it failed and the VM state is ERROR. Querying
the nova.instances table shows that host is NULL, and as a result I cannot
delete this instance.
The log is:
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack Traceback (most recent 
call last):
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/__init__.py", line 119, in 
__call__
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack return 
req.get_response(self.application)
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/request.py", line 1296, in send
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack application, 
catch_exc_info=False)
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/request.py", line 1260, in 
call_application
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack return resp(environ, 
start_response)
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/keystoneclient/middleware/auth_token.py",
 line 690, in __call__
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack return self.app(env, 
start_response)
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack return resp(environ, 
start_response)
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack return resp(environ, 
start_response)
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/routes/middleware.py", line 131, in __call__
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack response = 
self.app(environ, start_response)
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack return resp(environ, 
start_response)
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack content_type, body, 
accept)
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py", line 997, in 
_process_stack
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py", line 1078, in 
dispatch
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack return 
method(req=request, **action_args)
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/servers.py", line 
989, in _delete
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack 
self.compute_api.delete(context, instance)
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 202, in wrapped
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack return func(self, 
context, target, *args, **kwargs)
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 192, in inner
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack return 
function(self, context, instance, *args, **kwargs)
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 219, in _wrapped
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack return fn(self, 
context, instance, *args, **kwargs)
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 173, in inner
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack return f(self, 
context, instance, *args, **kw)
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 1586, in delete
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack 
self._delete_instance(context, instance)
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 1576, in 
_delete_instance
2015-05-07 16:55:38.240 23178 TRACE nova.api.openstack 
task_state=task_states.DELETING)
2015-

[Yahoo-eng-team] [Bug 1452646] [NEW] nova quota for instance numbers may be incorrect

2015-05-07 Thread victorye81
Public bug reported:

I have a project with quotas set to the defaults. Today the instances
reached 10 and I terminated 7. Then there were only 3 instances shown via
'Instances', but the overview on the dashboard still showed 10 instances.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1452646

Title:
  nova quota for instance numbers may be incorrect

Status in OpenStack Compute (Nova):
  New

Bug description:
  I have a project with quotas set to the defaults. Today the instances
  reached 10 and I terminated 7. Then there were only 3 instances shown via
  'Instances', but the overview on the dashboard still showed 10 instances.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1452646/+subscriptions



[Yahoo-eng-team] [Bug 1452580] [NEW] [sahara] job type is not shown in job details

2015-05-07 Thread Ken Chen
Public bug reported:

Currently the job detail information page only shows Name, ID,
Description, Mains, Libs, and Create time. We need to show the type as
well, for easier checking.

** Affects: horizon
 Importance: Undecided
 Assignee: Ken Chen (ken-chen-i)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Ken Chen (ken-chen-i)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1452580

Title:
  [sahara] job type is not shown in job details

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Currently the job detail information page only shows Name, ID,
  Description, Mains, Libs, and Create time. We need to show the type as
  well, for easier checking.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1452580/+subscriptions



[Yahoo-eng-team] [Bug 1452582] [NEW] PluginReportStateAPI.report_state should provide searchable identifier

2015-05-07 Thread Eugene Nikanorov
Public bug reported:

When troubleshooting problems with a cluster it would be very convenient
to have information about agent heartbeats logged with some searchable
identifier which could create a 1-to-1 mapping between events in the
agent's logs and the server's logs.

Currently the agent's heartbeats are not logged at all on the server side.
Since on a large cluster that could create too much logging (even for
troubleshooting cases), it might make sense to make this configurable both
on the neutron-server side and on the agent side.
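
As a sketch of the idea (illustrative, not the actual PluginReportStateAPI):
have the agent attach a per-heartbeat UUID that both sides log, so the two
log streams can be joined on it.

import logging
import uuid

LOG = logging.getLogger(__name__)

def report_state(agent_state):
    # The agent generates (and logs) the same identifier before sending,
    # giving a 1-to-1 mapping between agent and server log lines.
    heartbeat_id = agent_state.get('heartbeat_id') or str(uuid.uuid4())
    LOG.debug("Heartbeat %s received from agent on host %s",
              heartbeat_id, agent_state.get('host'))
    return heartbeat_id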

** Affects: neutron
 Importance: Wishlist
 Assignee: Eugene Nikanorov (enikanorov)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1452582

Title:
  PluginReportStateAPI.report_state should provide searchable identifier

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When troubleshooting problems with a cluster it would be very convenient
  to have information about agent heartbeats logged with some searchable
  identifier which could create a 1-to-1 mapping between events in the
  agent's logs and the server's logs.

  Currently the agent's heartbeats are not logged at all on the server side.
  Since on a large cluster that could create too much logging (even for
  troubleshooting cases), it might make sense to make this configurable both
  on the neutron-server side and on the agent side.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1452582/+subscriptions
