[Yahoo-eng-team] [Bug 1416713] Re: Decompose the NCS ML2 driver

2015-01-31 Thread Henry Gessau
** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => Henry Gessau (gessau)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1416713

Title:
  Decompose the NCS ML2 driver

Status in Cisco Vendor Code for OpenStack Neutron:
  New
Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Decompose the NCS mechanism driver in Neutron.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-cisco/+bug/1416713/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1402354] Re: Broken Pipe in vsphere store due to inactive session

2015-01-31 Thread nikhil komawar
** Also affects: glance-store
   Importance: Undecided
   Status: New

** Changed in: glance-store
   Status: New => In Progress

** Changed in: glance-store
   Importance: Undecided => High

** Changed in: glance-store
 Assignee: (unassigned) => Sabari Murugesan (smurugesan)

** Changed in: glance-store
Milestone: None => v0.1.11

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1402354

Title:
  Broken Pipe in vsphere store due to inactive session

Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in OpenStack Glance backend store-drivers library (glance_store):
  In Progress

Bug description:
  We are seeing the following error in glance-api configured with
  vsphere store. This happens after letting glance-api run for a while.

  
  2014-12-03 15:06:08.641 3005 DEBUG glance.api.v1.images [-] Uploading image data for image 10908257-eb9a-47cf-aaba-afa203c3e9f0 to vsphere store _upload /opt/stack/glance/glance/api/v1/images.py:630
  send: 'PUT /folder/openstack_glance/10908257-eb9a-47cf-aaba-afa203c3e9f0%3FdsName%3Dstore2%26dcPath%3DDatacenter1 HTTP/1.1\r\nHost: 10.20.116.124\r\nAccept-Encoding: identity\r\nContent-Length: 41126400\r\nCookie: vmware_soap_session=523fa07a-301c-5312-a35b-7fb99d822720\r\n\r\n'
  send: <glance_store._drivers.vmware_datastore._Reader object at 0x7f238c1d74d0>
  sendIng a read()able
  2014-12-03 15:06:08.677 3005 ERROR glance_store._drivers.vmware_datastore [-] Failed to upload content of image 10908257-eb9a-47cf-aaba-afa203c3e9f0
  2014-12-03 15:06:08.677 3005 TRACE glance_store._drivers.vmware_datastore Traceback (most recent call last):
  2014-12-03 15:06:08.677 3005 TRACE glance_store._drivers.vmware_datastore   File "/opt/stack/glance_store/glance_store/_drivers/vmware_datastore.py", line 351, in add
  2014-12-03 15:06:08.677 3005 TRACE glance_store._drivers.vmware_datastore     content=image_file)
  2014-12-03 15:06:08.677 3005 TRACE glance_store._drivers.vmware_datastore   File "/opt/stack/glance_store/glance_store/_drivers/vmware_datastore.py", line 502, in _get_http_conn
  2014-12-03 15:06:08.677 3005 TRACE glance_store._drivers.vmware_datastore     conn.request(method, url, content, headers)
  2014-12-03 15:06:08.677 3005 TRACE glance_store._drivers.vmware_datastore   File "/usr/lib/python2.7/httplib.py", line 973, in request
  2014-12-03 15:06:08.677 3005 TRACE glance_store._drivers.vmware_datastore     self._send_request(method, url, body, headers)
  2014-12-03 15:06:08.677 3005 TRACE glance_store._drivers.vmware_datastore   File "/usr/lib/python2.7/httplib.py", line 1007, in _send_request
  2014-12-03 15:06:08.677 3005 TRACE glance_store._drivers.vmware_datastore     self.endheaders(body)
  2014-12-03 15:06:08.677 3005 TRACE glance_store._drivers.vmware_datastore   File "/usr/lib/python2.7/httplib.py", line 969, in endheaders
  2014-12-03 15:06:08.677 3005 TRACE glance_store._drivers.vmware_datastore     self._send_output(message_body)
  2014-12-03 15:06:08.677 3005 TRACE glance_store._drivers.vmware_datastore   File "/usr/lib/python2.7/httplib.py", line 833, in _send_output
  2014-12-03 15:06:08.677 3005 TRACE glance_store._drivers.vmware_datastore     self.send(message_body)
  2014-12-03 15:06:08.677 3005 TRACE glance_store._drivers.vmware_datastore   File "/usr/lib/python2.7/httplib.py", line 802, in send
  2014-12-03 15:06:08.677 3005 TRACE glance_store._drivers.vmware_datastore     self.sock.sendall(datablock)
  2014-12-03 15:06:08.677 3005 TRACE glance_store._drivers.vmware_datastore   File "/usr/local/lib/python2.7/dist-packages/eventlet/green/ssl.py", line 137, in sendall
  2014-12-03 15:06:08.677 3005 TRACE glance_store._drivers.vmware_datastore     v = self.send(data[count:])
  2014-12-03 15:06:08.677 3005 TRACE glance_store._drivers.vmware_datastore   File "/usr/local/lib/python2.7/dist-packages/eventlet/green/ssl.py", line 113, in send
  2014-12-03 15:06:08.677 3005 TRACE glance_store._drivers.vmware_datastore     super(GreenSSLSocket, self).send, data, flags)
  2014-12-03 15:06:08.677 3005 TRACE glance_store._drivers.vmware_datastore   File "/usr/local/lib/python2.7/dist-packages/eventlet/green/ssl.py", line 80, in _call_trampolining
  2014-12-03 15:06:08.677 3005 TRACE glance_store._drivers.vmware_datastore     return func(*a, **kw)
  2014-12-03 15:06:08.677 3005 TRACE glance_store._drivers.vmware_datastore   File "/usr/lib/python2.7/ssl.py", line 298, in send
  2014-12-03 15:06:08.677 3005 TRACE glance_store._drivers.vmware_datastore     v = self._sslobj.write(data)
  2014-12-03 15:06:08.677 3005 TRACE glance_store._drivers.vmware_datastore error: [Errno 32] Broken pipe
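
  A minimal sketch of one way to guard against the stale session
  (illustrative only, not the actual glance_store fix; open_conn() and
  renew_session() are hypothetical callables):

    import httplib
    import socket

    def upload_with_retry(open_conn, method, url, body, headers, renew_session):
        # Try once; if the idle vSphere SOAP session was dropped server-side
        # (surfacing here as a broken pipe), renew the session cookie and
        # retry a single time. Note: `body` must be re-seekable or
        # re-creatable for the retry to be safe with a streaming reader.
        try:
            conn = open_conn()
            conn.request(method, url, body, headers)
            return conn.getresponse()
        except (socket.error, httplib.HTTPException):
            headers['Cookie'] = renew_session()  # re-login, fresh cookie
            conn = open_conn()
            conn.request(method, url, body, headers)
            return conn.getresponse()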

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1402354/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net

[Yahoo-eng-team] [Bug 1416783] [NEW] ML2 DVR port binding missing mechanism driver calls

2015-01-31 Thread Robert Kukura
Public bug reported:

When a binding is established for a DVR port on a node, the
update_port_precommit() and update_port_postcommit() methods are not
called on the registered mechanism drivers, as they are when bindings
are established for non-DVR ports. This prevents DVR's VLAN support from
working with mechanism drivers that depend on these calls, for instance
to enable trunking of the required VLAN to the node.

** Affects: neutron
 Importance: High
 Assignee: Robert Kukura (rkukura)
 Status: New

** Changed in: neutron
Milestone: None => kilo-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1416783

Title:
  ML2 DVR port binding missing mechanism driver calls

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When a binding is established for a DVR port on a node, the
  update_port_precommit() and update_port_postcommit() methods are not
  called on the registered mechanism drivers, as they are when bindings
  are established for non-DVR ports. This prevents DVR's VLAN support
  from working with mechanism drivers that depend on these calls, for
  instance to enable trunking of the required VLAN to the node.
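
  A hedged sketch of the kind of mechanism driver this breaks (illustrative
  only; it assumes the Kilo-era PortContext API, and trunk_vlan_to_host() is
  a hypothetical helper):

    from neutron.plugins.ml2 import driver_api as api

    class VlanTrunkingDriver(api.MechanismDriver):
        """Trunks the bound VLAN to the host on each port binding update."""

        def initialize(self):
            pass

        def update_port_postcommit(self, context):
            segment = context.bottom_bound_segment
            if segment and segment[api.NETWORK_TYPE] == 'vlan':
                # Never reached for DVR port bindings until this bug is fixed.
                trunk_vlan_to_host(context.host, segment[api.SEGMENTATION_ID])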

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1416783/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416807] [NEW] wrong version of oslo.rootwrap in requirements.txt

2015-01-31 Thread Moshe Levi
Public bug reported:

oslo.rootwrap 1.5.0 changed its namespace from oslo.rootwrap to
oslo_rootwrap, but requirements.txt still allows installing oslo.rootwrap
1.3.0.

This causes openvswitch to fail to start in the Mellanox CI.

2015-02-01 09:07:31.259 28417 TRACE neutron Stderr: Traceback (most recent call last):
2015-02-01 09:07:31.259 28417 TRACE neutron   File "/usr/local/bin/neutron-rootwrap", line 9, in <module>
2015-02-01 09:07:31.259 28417 TRACE neutron     load_entry_point('neutron==2015.1.dev507', 'console_scripts', 'neutron-rootwrap')()
2015-02-01 09:07:31.259 28417 TRACE neutron   File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 519, in load_entry_point
2015-02-01 09:07:31.259 28417 TRACE neutron     return get_distribution(dist).load_entry_point(group, name)
2015-02-01 09:07:31.259 28417 TRACE neutron   File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2630, in load_entry_point
2015-02-01 09:07:31.259 28417 TRACE neutron     return ep.load()
2015-02-01 09:07:31.259 28417 TRACE neutron   File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2310, in load
2015-02-01 09:07:31.259 28417 TRACE neutron     return self.resolve()
2015-02-01 09:07:31.259 28417 TRACE neutron   File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2316, in resolve
2015-02-01 09:07:31.259 28417 TRACE neutron     module = __import__(self.module_name, fromlist=['__name__'], level=0)
2015-02-01 09:07:31.259 28417 TRACE neutron ImportError: No module named oslo_rootwrap.cmd
2015-02-01 09:07:31.259 28417 TRACE neutron 
2015-02-01 09:07:31.259 28417 TRACE neutron

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1416807

Title:
  wrong version of oslo.rootwrap in requirements.txt

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  oslo.rootwrap 1.5.0 changed its namespace from oslo.rootwrap to
  oslo_rootwrap, but requirements.txt still allows installing
  oslo.rootwrap 1.3.0.

  This causes openvswitch to fail to start in the Mellanox CI.

  2015-02-01 09:07:31.259 28417 TRACE neutron Stderr: Traceback (most recent call last):
  2015-02-01 09:07:31.259 28417 TRACE neutron   File "/usr/local/bin/neutron-rootwrap", line 9, in <module>
  2015-02-01 09:07:31.259 28417 TRACE neutron     load_entry_point('neutron==2015.1.dev507', 'console_scripts', 'neutron-rootwrap')()
  2015-02-01 09:07:31.259 28417 TRACE neutron   File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 519, in load_entry_point
  2015-02-01 09:07:31.259 28417 TRACE neutron     return get_distribution(dist).load_entry_point(group, name)
  2015-02-01 09:07:31.259 28417 TRACE neutron   File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2630, in load_entry_point
  2015-02-01 09:07:31.259 28417 TRACE neutron     return ep.load()
  2015-02-01 09:07:31.259 28417 TRACE neutron   File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2310, in load
  2015-02-01 09:07:31.259 28417 TRACE neutron     return self.resolve()
  2015-02-01 09:07:31.259 28417 TRACE neutron   File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2316, in resolve
  2015-02-01 09:07:31.259 28417 TRACE neutron     module = __import__(self.module_name, fromlist=['__name__'], level=0)
  2015-02-01 09:07:31.259 28417 TRACE neutron ImportError: No module named oslo_rootwrap.cmd
  2015-02-01 09:07:31.259 28417 TRACE neutron 
  2015-02-01 09:07:31.259 28417 TRACE neutron
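
  An illustrative requirements.txt adjustment (the exact minimum version
  chosen upstream may differ):

    # require the release that ships the new oslo_rootwrap namespace,
    # instead of permitting the old oslo.rootwrap 1.3.0 line:
    oslo.rootwrap>=1.5.0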

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1416807/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1414450] Re: nova reset-state server does not work using server name.

2015-01-31 Thread Davanum Srinivas (DIMS)
** Project changed: nova => python-novaclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1414450

Title:
  nova reset-state server does not work using server name.

Status in Python client library for Nova:
  New

Bug description:
  According to the help description, either the name or ID of the server
  can be used in "nova reset-state <server>".

  Using the server name actually fails.

  localadmin@qa4:~/devstack$ nova help reset-state
  usage: nova reset-state [--active] <server> [<server> ...]

  Reset the state of a server.

  Positional arguments:
    <server>  Name or ID of server(s).

  Optional arguments:
    --active  Request the server be reset to "active" state instead of "error"
              state (the default).


  localadmin@qa4:~/devstack$ nova list
  +--------------------------------------+------+--------+------------+-------------+------------------+
  | ID                                   | Name | Status | Task State | Power State | Networks         |
  +--------------------------------------+------+--------+------------+-------------+------------------+
  | fd2d1590-9993-43cc-b92d-48a4fdbc8d1c | vm-1 | ACTIVE | -          | Running     | private=10.0.0.2 |
  +--------------------------------------+------+--------+------------+-------------+------------------+
  localadmin@qa4:~/devstack$ 
  localadmin@qa4:~/devstack$ 
  localadmin@qa4:~/devstack$ nova --debug --os-tenant-name admin --os-username admin reset-state vm-1
  REQ: curl -i 'http://172.29.172.161:5000/v2.0/tokens' -X POST -H "Accept: application/json" -H "Content-Type: application/json" -H "User-Agent: python-novaclient" -d '{"auth": {"tenantName": "admin", "passwordCredentials": {"username": "admin", "password": "{SHA1}5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8"}}}'
  INFO (connectionpool:258) Starting new HTTP connection (1): 172.29.172.161
  DEBUG (connectionpool:375) Setting read timeout to 600.0
  DEBUG (connectionpool:415) POST /v2.0/tokens HTTP/1.1 200 4288
  RESP: [200] CaseInsensitiveDict({'date': 'Sun, 25 Jan 2015 01:40:52 GMT', 'vary': 'X-Auth-Token', 'content-length': '4288', 'content-type': 'application/json', 'server': 'Apache/2.4.7 (Ubuntu)'})
  RESP BODY: {"access": {"token": {"issued_at": "2015-01-25T01:40:52.884988", "expires": "2015-01-25T02:40:52Z", "id": "{SHA1}b791da8356ec9fba2b86f8dbac9717bb35397cf3", "tenant": {"enabled": true, "description": null, "name": "admin", "id": "fd72b5ea8d5942ea8927238a9da9f90f"}, "audit_ids": ["fbeMQKJ1RRy1ACHDg9oFiw"]}, "serviceCatalog": [{"endpoints_links": [], "endpoints": [{"adminURL": "http://172.29.172.161:8774/v2.1/fd72b5ea8d5942ea8927238a9da9f90f", "region": "RegionOne", "publicURL": "http://172.29.172.161:8774/v2.1/fd72b5ea8d5942ea8927238a9da9f90f", "internalURL": "http://172.29.172.161:8774/v2.1/fd72b5ea8d5942ea8927238a9da9f90f", "id": "0e005e1d2cf6444190e777b2c9fb6f14"}], "type": "computev21", "name": "novav21"}, {"endpoints_links": [], "endpoints": [{"adminURL": "http://172.29.172.161:8774/v2/fd72b5ea8d5942ea8927238a9da9f90f", "region": "RegionOne", "publicURL": "http://172.29.172.161:8774/v2/fd72b5ea8d5942ea8927238a9da9f90f", "internalURL": "http://172.29.172.161:8774/v2/fd72b5ea8d5942ea8927238a9da9f90f", "id": "59afb4d03d5d4956956e4a46b4a37862"}], "type": "compute", "name": "nova"}, {"endpoints_links": [], "endpoints": [{"adminURL": "http://172.29.172.161:9696/", "region": "RegionOne", "publicURL": "http://172.29.172.161:9696/", "internalURL": "http://172.29.172.161:9696/", "id": "394bbcb86ccb4145912c9d1b2fad0251"}], "type": "network", "name": "neutron"}, {"endpoints_links": [], "endpoints": [{"adminURL": "http://172.29.172.161:8776/v2/fd72b5ea8d5942ea8927238a9da9f90f", "region": "RegionOne", "publicURL": "http://172.29.172.161:8776/v2/fd72b5ea8d5942ea8927238a9da9f90f", "internalURL": "http://172.29.172.161:8776/v2/fd72b5ea8d5942ea8927238a9da9f90f", "id": "6843182678a341f3b4514fa4f8d4d292"}], "type": "volumev2", "name": "cinderv2"}, {"endpoints_links": [], "endpoints": [{"adminURL": "http://172.29.172.161:", "region": "RegionOne", "publicURL": "http://172.29.172.161:", "internalURL": "http://172.29.172.161:", "id": "69fb582ce2fe45a8aa10daa6477d465d"}], "type": "s3", "name": "s3"}, {"endpoints_links": [], "endpoints": [{"adminURL": "http://172.29.172.161:9292", "region": "RegionOne", "publicURL": "http://172.29.172.161:9292", "internalURL": "http://172.29.172.161:9292", "id": "6b35833a4c1f8b95b3d98006fa14"}], "type": "image", "name": "glance"}, {"endpoints_links": [], "endpoints": [{"adminURL": "http://172.29.172.161:8000/v1", "region": "RegionOne", "publicURL": "http://172.29.172.161:8000/v1", "internalURL": "http://172.29.172.161:8000/v1", "id": "03c808dabc484ad08bf1a70116b04db7"}], "type": "cloudformation", "name": "heat-cfn"}, {"endpoints_links": [], "endpoints": [{"adminURL": "http://172.29.172.161:8776/v1/fd72b5ea8d5942ea8927238a9da9f90f", "region": "RegionOne", "publicURL": "http://172.29.172.161:8776/v1/fd72b5ea8d5942ea8927238a9da9f90f", "internalURL": "http://172.29.172.161:8776/v1/fd72b5ea8d5942ea8927238a9da9f90f", "id":

[Yahoo-eng-team] [Bug 1416326] Re: Add l3 agent to HA router failed

2015-01-31 Thread shihanzhang
OK, thanks for the reminder!

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1416326

Title:
  Add l3 agent to HA router failed

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  
  Steps to reproduce:
  1. create an HA router (min_l3_agents_per_router=2)
  2. use 'l3-agent-router-remove' to remove an agent from the router
  3. use 'l3-agent-router-add' to add the removed agent back to the router

  you will find this error in neutron-server:
  2015-01-30 09:38:47.126 26402 INFO neutron.api.v2.resource [req-23ade291-165e-4c24-899f-40062005b216 None] create failed (client error): The router a081ae1d-ad5b-41b5-a60a-6c129ba3efab has been already hosted by the L3 Agent 3b61ea90-8373-4609-adda-c10118401f4a.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1416326/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1414530] Re: cwd might be set incorrectly when exceptions are thrown

2015-01-31 Thread Davanum Srinivas (DIMS)
** Also affects: oslo.utils
   Importance: Undecided
   Status: New

** Project changed: oslo.utils => oslo.concurrency

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1414530

Title:
  cwd might be set incorrectly when exceptions are thrown

Status in OpenStack Compute (Nova):
  New
Status in Oslo Concurrency Library:
  New

Bug description:
  CWD might be set incorrectly when exceptions are thrown

  The call to utils.execute ends up in /opt/stack/nova/nova/utils.py, which
  ultimately calls processutils.execute() in the oslo_concurrency module.
  If there is an error when executing the command, which calls a bash
  script, then a ProcessExecutionError exception is raised at #1. This
  means that the code at #2 is never reached: the exception propagates up
  the call stack, but the process is left with the wrong working directory,
  which can lead to problems. One should catch the exception and make sure
  that in all cases the working directory is reset to the original one.

  /opt/stack/nova/nova/crypto.py

  def ensure_ca_filesystem():
      """Ensure the CA filesystem exists."""
      ca_dir = ca_folder()
      if not os.path.exists(ca_path()):
          genrootca_sh_path = os.path.abspath(
              os.path.join(os.path.dirname(__file__), 'CA',
                           'genrootca.sh'))

          start = os.getcwd()
          fileutils.ensure_tree(ca_dir)
          os.chdir(ca_dir)
          utils.execute('sh', genrootca_sh_path)  # <--- #1
          os.chdir(start)                         # <--- #2

  
  One can see in
  
https://github.com/openstack/oslo.concurrency/blob/master/oslo_concurrency/processutils.py
  that this Exception can indeed be thrown.

  Analogously there's a similar issue also in the aforementioned file in
  _ensure_project_folder.

  def _ensure_project_folder(project_id):
      if not os.path.exists(ca_path(project_id)):
          geninter_sh_path = os.path.abspath(
              os.path.join(os.path.dirname(__file__), 'CA',
                           'geninter.sh'))
          start = os.getcwd()
          os.chdir(ca_folder())
          utils.execute('sh', geninter_sh_path, project_id,
                        _project_cert_subject(project_id))
          os.chdir(start)
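
  A minimal sketch of the suggested fix: restore the directory in a finally
  block so it is reset even when utils.execute() raises (the helper name is
  illustrative):

    import os

    def run_in_dir(target_dir, fn, *args, **kwargs):
        # Always return to the original cwd, even if fn() raises
        # (e.g. ProcessExecutionError from utils.execute).
        start = os.getcwd()
        os.chdir(target_dir)
        try:
            return fn(*args, **kwargs)
        finally:
            os.chdir(start)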

  
  I'm not sure whether this has a potential security vulnerability impact
  or not. The potential risk is definitely there, but it remains to be seen
  whether an attacker can reliably trigger this and then gain something by
  having the process run with a different working directory. That's why I
  didn't tag it as a security bug.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1414530/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416286] Re: In launch instance page, the select network page's layout does not match the others
2015-01-31 Thread Davanum Srinivas (DIMS)
** Project changed: nova = horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1416286

Title:
  In launch instance page, the select network page's layout does not match
  the others

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the launch instance page, every other page uses the layout: left is
  col-sm-6, right is col-sm-6. But the select network page does not use
  that layout. Looking at the page, it has <td class="actions"> and
  <td class="help_text"> elements, but there is no CSS for .actions and
  .help_text in
  https://github.com/openstack/horizon/blob/master/openstack_dashboard/static/dashboard/scss/components/_workflow.scss.
  In Juno those rules existed
  (https://github.com/openstack/horizon/blob/stable/juno/openstack_dashboard/static/dashboard/scss/horizon_workflow.scss#L9).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1416286/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1406460] Re: anti-affinity property broken when instance unshelve

2015-01-31 Thread Davanum Srinivas (DIMS)
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1406460

Title:
  anti-affinity property broken when instance unshelve

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  The anti-affinity policy is not applied when an instance is scheduled
  during unshelve.

  Steps to reproduce:
  1. create one server group, using the anti-affinity policy:
  nova server-group-create --policy anti-affinity server-group-test-anti
  2. boot two instances with the server group:
  nova boot --flavor 1 --image f83026e6-86a3-4eaf-a24c-d0281217aba6 --nic net-id=3a68a059-3493-41d5-9063-773250e570b0 --hint group=36bc7998-ce69-42fc-a45b-e9130bd36f1e vm-anti-affinity-shelve-1
  nova boot --flavor 1 --image f83026e6-86a3-4eaf-a24c-d0281217aba6 --nic net-id=3a68a059-3493-41d5-9063-773250e570b0 --hint group=36bc7998-ce69-42fc-a45b-e9130bd36f1e vm-anti-affinity-shelve-2
  They were located at:
  vm-anti-affinity-shelve-1   hpc7000-slot10
  vm-anti-affinity-shelve-2   hpc7000-slot4
  3. shelve vm-anti-affinity-shelve-2, then shelve-offload, then unshelve;
  4. check vm-anti-affinity-shelve-2's location: hpc7000-slot10, the same
  host as vm-anti-affinity-shelve-1, which violates the anti-affinity
  policy.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1406460/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416767] [NEW] event_type for role assignment notifications is incorrect

2015-01-31 Thread Steve Martinelli
Public bug reported:

The event_type for role_notification events is currently set as:

  identity.created.role_assignment (or on a delete operation,
identity.deleted.role_assignment)

To keep in sync with other OpenStack projects, it should be:

  identity.role_assignment.created (or on a delete operation,
identity.role_assignment.deleted)

** Affects: keystone
 Importance: Undecided
 Assignee: Steve Martinelli (stevemar)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1416767

Title:
  event_type for role assignment notifications is incorrect

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  The event_type for role_notification events is currently set as:

identity.created.role_assignment (or on a delete operation,
  identity.deleted.role_assignment)

  To keep in sync with other OpenStack projects, it should be:

identity.role_assignment.created (or on a delete operation,
  identity.role_assignment.deleted)
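
  A minimal sketch of the proposed ordering (not Keystone's actual notifier
  code; names are illustrative):

    SERVICE = 'identity'

    def build_event_type(resource_type, operation):
        # build_event_type('role_assignment', 'created')
        # -> 'identity.role_assignment.created'
        return '%s.%s.%s' % (SERVICE, resource_type, operation)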

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1416767/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1405743] Re: Cannot ensure cell id of guest numa in cpu/numa element starting from 0

2015-01-31 Thread Davanum Srinivas (DIMS)
*** This bug is a duplicate of bug 1397381 ***
https://bugs.launchpad.net/bugs/1397381

** This bug has been marked a duplicate of bug 1397381
   numa cell ids need to be normalized before creating  xml

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1405743

Title:
  Cannot ensure cell id of guest numa in cpu/numa element starting
  from 0

Status in OpenStack Compute (Nova):
  New

Bug description:
  Currently, when the guest NUMA topology of an instance is fitted to the
  chosen host's NUMA topology, the cell ids in the instance's NUMA
  topology may not start from 0. For example, the XML is like this:

    <cpu mode='host-passthrough'>
      <topology sockets='1' cores='1' threads='2'/>
      <numa>
        <cell id='1' cpus='0-1' memory='1048576'/>
        <cell id='1' cpus='0-1' memory='1048576'/>
      </numa>
    </cpu>

   But currently this issue does not surface in OpenStack, because the id
   attribute in the numa element is only supported since libvirt 1.2.7,
   see: https://libvirt.org/formatdomain.html#elementsCPU.

   If we use libvirt 1.2.7, the following error is raised with the above
   XML:
   
   2014-12-26T00:24:44+00:00 localhost nova-compute ERROR [pid:55672] [GreenThread-5] [manager.py:2247 _build_resources] Instance failed to spawn
   Traceback (most recent call last):
     File "/usr/lib64/python2.6/site-packages/nova/compute/manager.py", line 2241, in _build_resources
       yield resources
     File "/usr/lib64/python2.6/site-packages/nova/compute/manager.py", line 2111, in _build_and_run_instance
       block_device_info=block_device_info)
     File "/usr/lib64/python2.6/site-packages/nova/virt/libvirt/driver.py", line 2627, in spawn
       block_device_info, disk_info=disk_info)
     File "/usr/lib64/python2.6/site-packages/nova/virt/libvirt/driver.py", line 4425, in _create_domain_and_network
       power_on=power_on)
     File "/usr/lib64/python2.6/site-packages/nova/virt/libvirt/driver.py", line 4349, in _create_domain
       LOG.error(err)
     File "/usr/lib64/python2.6/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__
       six.reraise(self.type_, self.value, self.tb)
     File "/usr/lib64/python2.6/site-packages/nova/virt/libvirt/driver.py", line 4335, in _create_domain
       domain = self._conn.defineXML(xml)
     File "/usr/lib64/python2.6/site-packages/eventlet/tpool.py", line 183, in doit
       result = proxy_call(self._autowrap, f, *args, **kwargs)
     File "/usr/lib64/python2.6/site-packages/eventlet/tpool.py", line 141, in proxy_call
       rv = execute(f, *args, **kwargs)
     File "/usr/lib64/python2.6/site-packages/eventlet/tpool.py", line 122, in execute
       six.reraise(c, e, tb)
     File "/usr/lib64/python2.6/site-packages/eventlet/tpool.py", line 80, in tworker
       rv = meth(*args, **kwargs)
     File "/usr/lib64/python2.6/site-packages/libvirt.py", line 3405, in defineXML
       if ret is None:raise libvirtError('virDomainDefineXML() failed', conn=self)
   libvirtError: XML error: Exactly one 'cell' element per guest NUMA cell allowed, non-contiguous ranges or ranges not starting from 0 are not allowed
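
   A minimal sketch of the normalization the duplicate bug (1397381) calls
   for (illustrative, not the actual Nova patch; cells are assumed to be
   objects with an `id` attribute):

     def normalize_cell_ids(cells):
         # Renumber guest NUMA cell ids contiguously from 0, preserving
         # their relative order, before the libvirt XML is generated.
         for new_id, cell in enumerate(sorted(cells, key=lambda c: c.id)):
             cell.id = new_id
         return cells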

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1405743/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1406598] Re: nova-cells doesn't url decode transport_url

2015-01-31 Thread Davanum Srinivas (DIMS)
** Also affects: oslo.messaging
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1406598

Title:
  nova-cells doesn't url decode transport_url

Status in OpenStack Compute (Nova):
  New
Status in Messaging API for OpenStack:
  New

Bug description:
  When creating a cell using the nova-manage cell create command, the
  transport_url generated in the database is url-encoded (i.e. '=' is
  changed to '%3D', etc.). That is probably the correct behavior, since
  the connection string is stored as a URL.

  However, nova-cells doesn't properly decode that string.  So for
  transport_url credentials that contain url-encodable characters, nova-
  cells uses the url encoded string, rather than the actual correct
  credentials.

  Steps to reproduce:

  - Create a cell using nova-manage with credentials containing url-
  encodable characters:

  nova-manage cell create  --name=cell_02 --cell_type=child
  --username='the=user' --password='the=password' --hostname='hostname'
  --port=5672 --virtual_host=/ --woffset=1 --wscale=1

  - nova.cells table now contains a url-encoded transport_url:

  mysql> select * from cells \G
  *************************** 1. row ***************************
     created_at: 2014-12-30 17:30:41
     updated_at: NULL
     deleted_at: NULL
             id: 3
        api_url: NULL
  weight_offset: 1
   weight_scale: 1
           name: cell_02
      is_parent: 0
        deleted: 0
  transport_url: rabbit://the%3Duser:the%3Dpassword@hostname:5672//
  1 row in set (0.00 sec)

  - nova-cells uses the literal credentials 'the%3Duser' and
  'the%3Dpassword' to connect to RMQ, rather than the correct 'the=user'
  and 'the=password' credentials.
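
  A minimal sketch of the missing decode step (Python 2 stdlib only;
  illustrative, not the actual nova-cells/oslo.messaging fix):

    import urllib
    import urlparse

    def decode_transport_credentials(transport_url):
        # urlparse exposes the userinfo; unquote it before connecting.
        parsed = urlparse.urlparse(transport_url)
        return (urllib.unquote(parsed.username or ''),
                urllib.unquote(parsed.password or ''))

    # decode_transport_credentials(
    #     'rabbit://the%3Duser:the%3Dpassword@hostname:5672//')
    # -> ('the=user', 'the=password')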

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1406598/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416798] [NEW] Not using correct error class in neutron/plugins/ml2/plugin.py

2015-01-31 Thread Matthew Thode
Public bug reported:

Specifically, the MultipleResultsFound exception should be referenced via
sa_exc, not exc :D

** Affects: neutron
 Importance: Undecided
 Assignee: Matthew Thode (prometheanfire)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Matthew Thode (prometheanfire)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1416798

Title:
  Not using correct error class in neutron/plugins/ml2/plugin.py

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Specifically, the MultipleResultsFound exception should be referenced
  via sa_exc, not exc :D
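
  A minimal sketch of the intended fix, assuming the plugin imports the
  SQLAlchemy ORM exceptions under the sa_exc alias (the surrounding code is
  illustrative):

    from sqlalchemy.orm import exc as sa_exc

    try:
        binding = query.one()
    except sa_exc.MultipleResultsFound:  # was wrongly referenced via `exc`
        raise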

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1416798/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373183] Re: Enable GBP service plugin with Juno

2015-01-31 Thread Robert Kukura
The GBP service plugin is using monkeypatching instead.


** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1373183

Title:
  Enable GBP service plugin with Juno

Status in OpenStack Neutron (virtual network service):
  Won't Fix

Bug description:
  Due to the implementation of the Group-Based Policy blueprint
  (https://review.openstack.org/#/c/89469) that was approved for Juno
  not being merged, the Neutron Group Policy sub-team plans to deliver a
  GBP service plugin as an add-on to the Juno version of Neutron via a
  separate  StackForge repository, for easy consumption by early GBP
  adopters.

  Since the proposed patch to enable addition of service plugins without
  modifying Neutron code (https://review.openstack.org/#/c/116996/) was
  also not merged, it is not possible to use this GBP service plugin
  with Neutron without modifying Neutron code. Therefore, a couple of
  constants need to be added now to neutron.plugins.common.constants, as
  shown in
  https://review.openstack.org/#/c/95900/31/neutron/plugins/common/constants.py,
  for inclusion in the Juno Neutron release.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1373183/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1396854] Re: fail to create an instance with specific ip

2015-01-31 Thread Jerry Zhao
This bug is breaking tripleo-cd prepare-ci-overcloud.

** Also affects: tripleo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1396854

Title:
  fail to create an instance with specific ip

Status in OpenStack Compute (Nova):
  In Progress
Status in tripleo - openstack on openstack:
  New

Bug description:
  When I use the command below to create an instance with a specific IP,
  it fails.

  nova boot --image cirros-0.3.2-x86_64-uec --flavor m1.nano --nic net-id=5b7930ae-ff24-4dcf-a429-e039cb7502dd,v4-fixed-ip=10.0.0.5 test

  My env is latest devstack on fedora20.



  Here is the trace log from nova-compute.
  2014-11-27 11:15:09.565 ERROR nova.compute.manager [-] [instance: 1a8295a2-80b5-4f5c-81bb-414aa832f6b9] Instance failed to spawn
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 1a8295a2-80b5-4f5c-81bb-414aa832f6b9] Traceback (most recent call last):
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File "/opt/stack/nova/nova/compute/manager.py", line 2247, in _build_resources
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 1a8295a2-80b5-4f5c-81bb-414aa832f6b9]     yield resources
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File "/opt/stack/nova/nova/compute/manager.py", line 2117, in _build_and_run_instance
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 1a8295a2-80b5-4f5c-81bb-414aa832f6b9]     instance_type=instance_type)
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2634, in spawn
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 1a8295a2-80b5-4f5c-81bb-414aa832f6b9]     admin_pass=admin_password)
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 3095, in _create_image
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 1a8295a2-80b5-4f5c-81bb-414aa832f6b9]     content=files, extra_md=extra_md, network_info=network_info)
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File "/opt/stack/nova/nova/api/metadata/base.py", line 167, in __init__
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 1a8295a2-80b5-4f5c-81bb-414aa832f6b9]     ec2utils.get_ip_info_for_instance_from_nw_info(network_info)
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File "/opt/stack/nova/nova/api/ec2/ec2utils.py", line 152, in get_ip_info_for_instance_from_nw_info
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 1a8295a2-80b5-4f5c-81bb-414aa832f6b9]     fixed_ips = nw_info.fixed_ips()
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File "/opt/stack/nova/nova/network/model.py", line 450, in _sync_wrapper
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 1a8295a2-80b5-4f5c-81bb-414aa832f6b9]     self.wait()
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File "/opt/stack/nova/nova/network/model.py", line 482, in wait
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 1a8295a2-80b5-4f5c-81bb-414aa832f6b9]     self[:] = self._gt.wait()
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 173, in wait
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 1a8295a2-80b5-4f5c-81bb-414aa832f6b9]     return self._exit_event.wait()
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File "/usr/lib/python2.7/site-packages/eventlet/event.py", line 125, in wait
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 1a8295a2-80b5-4f5c-81bb-414aa832f6b9]     current.throw(*self._exc)
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 212, in main
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 1a8295a2-80b5-4f5c-81bb-414aa832f6b9]     result = function(*args, **kwargs)
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 1a8295a2-80b5-4f5c-81bb-414aa832f6b9]   File "/opt/stack/nova/nova/compute/manager.py", line 1647, in _allocate_network_async
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: 1a8295a2-80b5-4f5c-81bb-414aa832f6b9]     dhcp_options=dhcp_options)
  2014-11-27 11:15:09.565 TRACE nova.compute.manager [instance: