[Yahoo-eng-team] [Bug 1289696] [NEW] request_id middleware uses wrong request ID value

2014-03-07 Thread Chris Buccella
Public bug reported:

The request_id middleware is designed to generate a request ID during
process_request() and attach this value to the response as an HTTP header during
process_response(). Unfortunately, it stores the request ID value in a
variable within the RequestIdMiddleware class. This violates the "shared
nothing" rule, and can cause problems when several requests are run
concurrently. For example, if requests A and B come in back-to-back, and
A completes first, A will have B's request ID value in the HTTP
response.
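
A minimal sketch of the kind of fix, for illustration only (the environ key
and header name here are placeholders, not necessarily what the oslo
middleware uses): keep the generated ID in per-request state such as the
WSGI environ, never on the middleware instance.

import uuid


class RequestIdMiddleware(object):
    """Sketch: the request ID lives in the environ, not on self."""

    def __init__(self, application):
        self.application = application

    def __call__(self, environ, start_response):
        # Generate the ID here and keep it in this request's environ so a
        # concurrent request cannot overwrite it before the response is sent.
        req_id = 'req-' + str(uuid.uuid4())
        environ['openstack.request_id'] = req_id

        def replacement_start_response(status, headers, exc_info=None):
            headers.append(('x-openstack-request-id', req_id))
            return start_response(status, headers, exc_info)

        return self.application(environ, replacement_start_response)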

This problem was discovered when running nova's
api.compute.servers.test_instance_actions test in parallel while working
on  https://review.openstack.org/#/c/66903/

** Affects: cinder
 Importance: Undecided
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: oslo
 Importance: Undecided
 Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

** Also affects: cinder
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1289696

Title:
  request_id middleware uses wrong request ID value

Status in Cinder:
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in Oslo - a Library of Common OpenStack Code:
  New

Bug description:
  The request_id middleware is designed to generate a request ID during
  process_request() and attach this value to the response as an HTTP header
  during process_response(). Unfortunately, it stores the request ID
  value in a variable within the RequestIdMiddleware class. This
  violates the "shared nothing" rule, and can cause problems when
  several requests are run concurrently. For example, if requests A and
  B come in back-to-back, and A completes first, A will have B's request
  ID value in the HTTP response.

  This problem was discovered when running nova's
  api.compute.servers.test_instance_actions test in parallel while
  working on  https://review.openstack.org/#/c/66903/

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1289696/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1289660] [NEW] Unable to update quota with DbQuotaDriver

2014-03-07 Thread wgjohnson
Public bug reported:

I am unable to use `quantum quota-update` even while using
`quantum.db.quota_db.DbQuotaDriver`

[root@node-52 ~]# quantum --version
quantum 2.0

[root@node-52 ~]# cat /etc/issue
CentOS release 6.4 (Final)
Kernel \r on an \m

[root@node-52 ~]# grep quota_driver /etc/quantum/quantum.conf 
# quota_driver = quantum.quota.ConfDriver
quota_driver=quantum.db.quota_db.DbQuotaDriver

[root@node-52 ~]# quantum  quota-update --tenant-id 
22063cbee0304d579b45c79ad646ef4a --floatingip 100
{"QuantumError": "Access was denied to this resource."}

[root@node-52 ~]# quantum  quota-show --tenant-id 
22063cbee0304d579b45c79ad646ef4a
+---------------------+-------+
| Field               | Value |
+---------------------+-------+
| floatingip          | 50    |
| network             | 10    |
| port                | 50    |
| router              | 10    |
| security_group      | 10    |
| security_group_rule | 100   |
| subnet              | 10    |
+---------------------+-------+

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1289660

Title:
  Unable to update quota with DbQuotaDriver

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I am unable to use `quantum quota-update` even while using
  `quantum.db.quota_db.DbQuotaDriver`

  [root@node-52 ~]# quantum --version
  quantum 2.0

  [root@node-52 ~]# cat /etc/issue
  CentOS release 6.4 (Final)
  Kernel \r on an \m

  [root@node-52 ~]# grep quota_driver /etc/quantum/quantum.conf 
  # quota_driver = quantum.quota.ConfDriver
  quota_driver=quantum.db.quota_db.DbQuotaDriver

  [root@node-52 ~]# quantum  quota-update --tenant-id 
22063cbee0304d579b45c79ad646ef4a --floatingip 100
  {"QuantumError": "Access was denied to this resource."}

  [root@node-52 ~]# quantum  quota-show --tenant-id 
22063cbee0304d579b45c79ad646ef4a
  +---------------------+-------+
  | Field               | Value |
  +---------------------+-------+
  | floatingip          | 50    |
  | network             | 10    |
  | port                | 50    |
  | router              | 10    |
  | security_group      | 10    |
  | security_group_rule | 100   |
  | subnet              | 10    |
  +---------------------+-------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1289660/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1289643] [NEW] FWaaS Agent KeyError exception with Router that has no i/f

2014-03-07 Thread Sridar Kandaswamy
Public bug reported:

When a fw create is done for a tenant which has a router with no i/f
(interface) added to it, the fw gets stuck in the PENDING_CREATE state. On
debugging this, when the agent gets the routers, the router_info for this
specific router is not available and we take a KeyError exception.
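
A small sketch of the kind of defensive lookup that avoids the KeyError
(function and variable names are illustrative, not the actual agent code):

import logging

LOG = logging.getLogger(__name__)


def routers_with_info(router_info_map, routers):
    """Yield (router, info) pairs, skipping routers the agent has no cached
    router_info for (e.g. a router with no interface added yet)."""
    for router in routers:
        info = router_info_map.get(router['id'])
        if info is None:
            LOG.warning("No router_info for router %s; skipping it instead "
                        "of raising KeyError", router['id'])
            continue
        yield router, info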

Details:

http://paste.openstack.org/show/72867/

** Affects: neutron
 Importance: Undecided
 Assignee: Sridar Kandaswamy (skandasw)
 Status: New


** Tags: fwaas

** Changed in: neutron
 Assignee: (unassigned) => Sridar Kandaswamy (skandasw)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1289643

Title:
  FWaaS  Agent KeyError exception with Router that has no i/f

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When a fw create is done for a tenant which has a router with no i/f
  (interface) added to it, the fw gets stuck in the PENDING_CREATE state. On
  debugging this, when the agent gets the routers, the router_info for this
  specific router is not available and we take a KeyError exception.

  Details:

  http://paste.openstack.org/show/72867/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1289643/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1289627] [NEW] VMware NoPermission faults do not log what permission was missing

2014-03-07 Thread Shawn Hartsock
Public bug reported:

The NoPermission fault object has a privilegeId that tells us which
permission the user did not have. Presently the VMware nova driver does not
log this data, which would be very useful for debugging user permission
problems on vCenter or ESX.

http://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.wssdk.apiref.doc/vim.fault.NoPermission.html
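
A sketch of the kind of logging that would help (the helper name is made up;
privilegeId and object are fields of the NoPermission fault per the VMware
docs linked above):

import logging

LOG = logging.getLogger(__name__)


def log_no_permission(fault):
    # vim.fault.NoPermission carries the privilege the session user lacked
    # and the managed object it was checked against.
    LOG.error("NoPermission fault: missing privilege %(priv)s on object "
              "%(obj)s",
              {'priv': getattr(fault, 'privilegeId', None),
               'obj': getattr(fault, 'object', None)})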

** Affects: nova
 Importance: Low
 Status: Triaged


** Tags: vmware

** Changed in: nova
   Status: New => Triaged

** Changed in: nova
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1289627

Title:
  VMware NoPermission faults do not log what permission was missing

Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  The NoPermission fault object has a privilegeId that tells us which
  permission the user did not have. Presently the VMware nova driver does
  not log this data, which would be very useful for debugging user
  permission problems on vCenter or ESX.

  
http://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.wssdk.apiref.doc/vim.fault.NoPermission.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1289627/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1280055] Re: Replace nova.db.sqlalchemy.utils with openstack.common.db.sqlalchemy.utils

2014-03-07 Thread Matt Riedemann
** Also affects: oslo
   Importance: Undecided
   Status: New

** Changed in: oslo
 Assignee: (unassigned) => Matt Riedemann (mriedem)

** Changed in: oslo
   Importance: Undecided => Low

** Changed in: oslo
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1280055

Title:
  Replace nova.db.sqlalchemy.utils with
  openstack.common.db.sqlalchemy.utils

Status in OpenStack Compute (Nova):
  In Progress
Status in Oslo - a Library of Common OpenStack Code:
  In Progress

Bug description:
  Most of the code in nova.db.sqlalchemy.utils is also in oslo-
  incubator.openstack.common.db.sqlalchemy.utils, except for the
  modify_indexes method which is not actually even used in the nova db
  migration code anymore now that it's been compacted in icehouse.

  Also, the oslo.db code has been getting synced over to nova more
  frequently lately so rather than keep all of this duplicate code
  around we should move nova to using the oslo utils code and drop the
  internal nova one, with maybe moving the modify_indexes method to oslo
  first, then sync back to nova and then drop nova.db.sqlalchemy.utils
  from nova.

  We will have to make sure that there are no behavior differences in
  the oslo code such that it would change the nova db schema, but we
  should be able to use Dan Prince's nova/tools/db/schema_diff.py script
  to validate that.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1280055/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1198171] Re: not able to authenticate with user from non-default domain, v3

2014-03-07 Thread Dean Troyer
** Changed in: python-openstackclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1198171

Title:
  not able to authenticate with user from non-default domain, v3

Status in OpenStack Identity (Keystone):
  Invalid
Status in OpenStack Command Line Client:
  Fix Released

Bug description:
  Here's steps to reproduce
  1) Set up keystone endpoints to v3
  2) openstack --os-identity-api-version 3 domain create mydomain
  3) openstack --os-identity-api-version 3 project create myproject --domain 
mydomain
  4) openstack --os-identity-api-version 3 user create myuser --password test 
--domain mydomain
  5) openstack --os-identity-api-version 3 role add Member --user myuser 
--project myproject
  6) openstack --os-identity-api-version 3 --os-username myuser 
--os-tenant-name myproject --os-password test user list
  ERROR: cliff.app Could not find project: myproject (HTTP 401)

  If I add user to tenant from default domain and try to authenticate again
  openstack --os-identity-api-version 3 --os-username myuser --os-tenant-name 
demo --os-password test user list
  ERROR: cliff.app Could not find user: myuser (HTTP 401)

  Well, looking at the code I see that the user is searched for within the
  default domain, not mydomain.
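
  A hedged sketch of how the domain has to be supplied explicitly when
  authenticating against v3 (auth_url is illustrative; this uses
  python-keystoneclient rather than the CLI):

  from keystoneclient.v3 import client

  # Without user_domain_name/project_domain_name the client falls back to
  # the default domain, which is why myuser and myproject are not found.
  ks = client.Client(
      auth_url='http://keystone.example.com:5000/v3',
      username='myuser',
      password='test',
      user_domain_name='mydomain',
      project_name='myproject',
      project_domain_name='mydomain')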

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1198171/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1280941] Re: metadata agent throwing AttributeError: 'HTTPClient' object has no attribute 'auth_tenant_id' with latest release

2014-03-07 Thread James Page
CA update for havana

--
 python-neutronclient (1:2.3.0-0ubuntu1.1~cloud0) precise-havana; urgency=low
 
   * New update for the Ubuntu Cloud Archive.
 
 python-neutronclient (1:2.3.0-0ubuntu1.1) saucy-proposed; urgency=medium
 
   * debian/patches/fix-get-auth-info.patch: Fix regression introduced by
 stable/havanna update. (LP: #1280941)


** Changed in: cloud-archive
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1280941

Title:
  metadata agent throwing AttributeError: 'HTTPClient' object has no
  attribute 'auth_tenant_id' with latest release

Status in Ubuntu Cloud Archive:
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  In Progress
Status in Python client library for Neutron:
  Fix Released
Status in tripleo - openstack on openstack:
  In Progress
Status in “python-neutronclient” package in Ubuntu:
  Fix Released
Status in “python-neutronclient” source package in Saucy:
  Fix Released

Bug description:
  So we need a new release - this is fixed in:
  commit 02baef46968b816ac544b037297273ff6a4e8e1b

  but until a new release is done, anyone running trunk Neutron will
  have the metadata agent fail.

  And neutron itself is missing a versioned dep on the fixed client (but
  obviously that has to wait for the client release to be done)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1280941/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1289566] [NEW] Horizon requires mox for non-dev server

2014-03-07 Thread Doug Fish
Public bug reported:

The file
https://github.com/openstack/horizon/blob/master/horizon/site_urls.py is
statically importing horizon/test/jasmine/jasmine which ultimately
causes mox to be statically imported.  This would mean that mox has to
be installed on a non-dev server, which probably isn't what we want.

I've been given the suggestion that putting the import inside of the if
settings.DEBUG: block would probably fix the issue.
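
A rough sketch of the suggested change (simplified; the real site_urls.py
also has to register the jasmine URL patterns inside the same block):

from django.conf import settings

if settings.DEBUG:
    # Only pull in the Jasmine test views (and, transitively, mox) when
    # DEBUG is enabled, so production servers do not need mox installed.
    from horizon.test.jasmine import jasmine  # noqa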

** Affects: horizon
 Importance: Undecided
 Assignee: Doug Fish (drfish)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Doug Fish (drfish)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1289566

Title:
  Horizon requires mox for non-dev server

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The file
  https://github.com/openstack/horizon/blob/master/horizon/site_urls.py
  is statically importing horizon/test/jasmine/jasmine which ultimately
  causes mox to be statically imported.  This would mean that mox has to
  be installed on a non-dev server, which probably isn't what we want.

  I've been given the suggestion that putting the import inside of the
  if settings.DEBUG: block would probably fix the issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1289566/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1289565] [NEW] required field asterisk missing

2014-03-07 Thread Cindy Lu
Public bug reported:

Admin > Flavors > View Extra Specs > Create

"Key" field is missing the asterisk.  It will give the "This field is
required" error if you try to 'Create'.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1289565

Title:
  required field asterisk missing

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Admin > Flavors > View Extra Specs > Create

  "Key" field is missing the asterisk.  It will give the "This field is
  required" error if you try to 'Create'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1289565/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1289562] [NEW] Document REST API for Trust Extensions - Create Trust

2014-03-07 Thread Priti Desai
Public bug reported:

The OpenStack API reference does not yet document the APIs for the Trust
Extensions, which were introduced in Havana.

http://api.openstack.org/api-ref-identity.html#identity-v3

http://api.openstack.org/api-ref-identity.html#identity-v3-ext

Document Trust Extensions starting with Create Trust. This ticket can be
used to include documentation for all api calls.
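
For reference, a hedged sketch of what a Create Trust call looks like (the
endpoint, token and IDs are placeholders; field names follow the Havana-era
OS-TRUST extension):

import json

import requests

body = {"trust": {
    "trustor_user_id": "TRUSTOR_USER_ID",
    "trustee_user_id": "TRUSTEE_USER_ID",
    "project_id": "PROJECT_ID",
    "impersonation": True,
    "roles": [{"name": "Member"}],
}}
resp = requests.post(
    "http://keystone.example.com:5000/v3/OS-TRUST/trusts",
    headers={"X-Auth-Token": "ADMIN_TOKEN",
             "Content-Type": "application/json"},
    data=json.dumps(body))
print(resp.status_code, resp.json())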

** Affects: keystone
 Importance: Undecided
 Assignee: Priti Desai (priti-desai)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1289562

Title:
  Document REST API for Trust Extensions - Create Trust

Status in OpenStack Identity (Keystone):
  New

Bug description:
  The OpenStack API reference does not yet document the APIs for the Trust
  Extensions, which were introduced in Havana.

  http://api.openstack.org/api-ref-identity.html#identity-v3

  http://api.openstack.org/api-ref-identity.html#identity-v3-ext

  Document Trust Extensions starting with Create Trust. This ticket can
  be used to include documentation for all api calls.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1289562/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1186177] Re: the PKI token generated by v3 api is too long

2014-03-07 Thread Dolph Mathews
** Changed in: keystone
 Assignee: Rui Chen (kiwik-chenrui) => Adam Young (ayoung)

** Project changed: keystone => python-keystoneclient

** Changed in: python-keystoneclient
   Status: Fix Released => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1186177

Title:
  the PKI token generated by v3 api is too long

Status in Python client library for Keystone:
  In Progress

Bug description:
  with keystone v3 api only

  I generated a PKI token with the v3 api; the token length is 17160 chars.
  When I then describe a server from nova with the long token in the HTTP
  header, the nova response is "400 Header Line Too Long". Checking the nova
  eventlet module source code, eventlet's wsgi.py checks the HTTP header
  length, the default value is MAX_HEADER_LINE = 8192, and eventlet raises
  an HTTP 400 when the header line is too long.

  A token generated by the v2 api is fine in the same case; the v2 token
  length is 4108 chars.
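
  One possible workaround, sketched for illustration (the exact value is
  arbitrary): raise eventlet's header-line limit before the WSGI server
  starts, since the 8192-byte default is what triggers the 400.

  import eventlet.wsgi

  # Default is 8192; a ~17k-char v3 PKI token in X-Auth-Token exceeds it.
  eventlet.wsgi.MAX_HEADER_LINE = 32768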

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-keystoneclient/+bug/1186177/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1289343] Re: enabling SSL config in keystone not working with ssl_setup certs

2014-03-07 Thread Dolph Mathews
Agree with Haneef - this is expected behavior for the self-signed certs
produced by ssl_setup / pki_setup.

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1289343

Title:
  enabling SSL config in keystone not working with ssl_setup certs

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  I've been trying to get SSL enabled in keystone with default certs
  genereated from ssl_setup command, but not having much luck.

  Here is my setup:

  1. Updated the endpoint urls to https:

  public_endpoint = https://192.168.255.208:5000/v2.0/
  # admin_endpoint = http://localhost:%(admin_port)s/
  admin_endpoint = https://192.168.255.208:35357/v2.0/

  2. Updated SSL section :

  [ssl]
  enable = True
  certfile = /etc/keystone/ssl/certs/keystone.pem
  keyfile = /etc/keystone/ssl/private/keystonekey.pem
  ca_certs = /etc/keystone/ssl/certs/ca.pem
  ca_key = /etc/keystone/ssl/certs/cakey.pem
  key_size = 1024
  valid_days = 3650
  cert_required = False
  cert_subject= /C=US/ST=Unset/L=Unset/O=Unset/CN=192.168.255.208

  3. restart keystone

  4. keystone-manage ssl_setup --keystone-user keystone --keystone-group
  keystone

  5. # ls -lart /etc/keystone/ssl/*
  /etc/keystone/ssl/private:
  total 12
  drwxr-xr-x 4 keystone keystone 4096 Mar  6 15:34 ..
  drwxr-x--- 2 keystone keystone 4096 Mar  6 15:34 .
  -rw-r----- 1 keystone keystone  891 Mar  6 15:34 keystonekey.pem

  /etc/keystone/ssl/certs:
  total 48
  -rw-r----- 1 keystone keystone2 Mar  6 15:34 serial.old
  -rw-r----- 1 keystone keystone 1920 Mar  6 15:34 openssl.conf
  -rw-r----- 1 keystone keystone0 Mar  6 15:34 index.txt.old
  -rw-r----- 1 keystone keystone  887 Mar  6 15:34 cakey.pem
  -rw-r--r-- 1 keystone keystone  908 Mar  6 15:34 ca.pem
  drwxr-xr-x 4 keystone keystone 4096 Mar  6 15:34 ..
  -rw-r--r-- 1 keystone keystone  676 Mar  6 15:34 req.pem
  -rw-r--r-- 1 keystone keystone3 Mar  6 15:34 serial
  -rw-r--r-- 1 keystone keystone 2842 Mar  6 15:34 keystone.pem
  -rw-r--r-- 1 keystone keystone   20 Mar  6 15:34 index.txt.attr
  -rw-r--r-- 1 keystone keystone   64 Mar  6 15:34 index.txt
  -rw-r--r-- 1 keystone keystone 2842 Mar  6 15:34 01.pem
  drwxr-xr-x 2 keystone keystone 4096 Mar  6 18:05 .

  6. My openrc has the following:

  #!/bin/sh
  export OS_NO_CACHE='true'
  export OS_TENANT_NAME='openstack'
  export OS_USERNAME='admin'
  export OS_PASSWORD='secret'
  #export OS_AUTH_URL='https://192.168.255.208:5000/v2.0/'
  #export OS_AUTH_TOKEN='keystone_admin_token'
  export OS_SERVICE_ENDPOINT='https://192.168.255.208:35357/v2.0/'
  export OS_SERVICE_TOKEN='keystone_admin_token'
  export OS_AUTH_STRATEGY='keystone'
  export OS_REGION_NAME='RegionOne'

  7.# keystone --debug role-list
  WARNING: Bypassing authentication using a token & endpoint (authentication 
credentials are being ignored).
  REQ: curl -i -X GET https://192.168.255.208:35357/v2.0/OS-KSADM/roles -H 
"User-Agent: python-keystoneclient" -H "X-Auth-Token: keystone_admin_token"
   (HTTP Unable to 
establish connection to https://192.168.255.208:35357/v2.0/OS-KSADM/roles)

  the same command with --insecure flag works.
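
  For illustration, the alternative to --insecure with these self-signed
  certs is to point the client at the CA bundle that ssl_setup generated,
  e.g. with python-keystoneclient (a sketch, reusing the endpoint and paths
  from this report):

  from keystoneclient.v2_0 import client

  # Validate the self-signed server certificate against the generated CA
  # instead of disabling verification.
  ks = client.Client(
      token='keystone_admin_token',
      endpoint='https://192.168.255.208:35357/v2.0/',
      cacert='/etc/keystone/ssl/certs/ca.pem')
  print([r.name for r in ks.roles.list()])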

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1289343/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1289546] [NEW] test_create_backup times out while waiting for the image to be active

2014-03-07 Thread Matt Riedemann
Public bug reported:

Seems that this is unreported but there are other bugs for the same test
case having problems.

http://logs.openstack.org/73/76373/9/check/check-tempest-dsvm-
full/748db9b/console.html

2014-03-07 17:03:15.693 | Traceback (most recent call last):
2014-03-07 17:03:15.693 |   File 
"tempest/api/compute/servers/test_server_actions.py", line 265, in 
test_create_backup
2014-03-07 17:03:15.693 | 
self.os.image_client.wait_for_image_status(image2_id, 'active')
2014-03-07 17:03:15.693 |   File 
"tempest/services/image/v1/json/image_client.py", line 295, in 
wait_for_image_status
2014-03-07 17:03:15.693 | raise exceptions.TimeoutException(message)
2014-03-07 17:03:15.694 | TimeoutException: Request timed out
2014-03-07 17:03:15.694 | Details: Time Limit Exceeded! (196s)while waiting for 
active, but we got queued.

Looks like the error message started showing up around 3/2:

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVGltZSBMaW1pdCBFeGNlZWRlZCFcIiBBTkQgbWVzc2FnZTpcIndoaWxlIHdhaXRpbmcgZm9yIGFjdGl2ZSwgYnV0IHdlIGdvdCBxdWV1ZWQuXCIgQU5EIGZpbGVuYW1lOlwiY29uc29sZS5odG1sXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTQyMjA1ODY0NzZ9

Possibly related bugs:

bug 1288038 - Invalid backup: Backup status must be available or error
bug 1280937 - test_create_backup times out on waiting for resource delete
bug 1267326 - test_create_backup fails due to unexpected image number

** Affects: glance
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: testing

** Tags removed: volumes

** Also affects: glance
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1289546

Title:
  test_create_backup times out while waiting for the image to be active

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  Seems that this is unreported but there are other bugs for the same
  test case having problems.

  http://logs.openstack.org/73/76373/9/check/check-tempest-dsvm-
  full/748db9b/console.html

  2014-03-07 17:03:15.693 | Traceback (most recent call last):
  2014-03-07 17:03:15.693 |   File 
"tempest/api/compute/servers/test_server_actions.py", line 265, in 
test_create_backup
  2014-03-07 17:03:15.693 | 
self.os.image_client.wait_for_image_status(image2_id, 'active')
  2014-03-07 17:03:15.693 |   File 
"tempest/services/image/v1/json/image_client.py", line 295, in 
wait_for_image_status
  2014-03-07 17:03:15.693 | raise exceptions.TimeoutException(message)
  2014-03-07 17:03:15.694 | TimeoutException: Request timed out
  2014-03-07 17:03:15.694 | Details: Time Limit Exceeded! (196s)while waiting 
for active, but we got queued.

  Looks like the error message started showing up around 3/2:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVGltZSBMaW1pdCBFeGNlZWRlZCFcIiBBTkQgbWVzc2FnZTpcIndoaWxlIHdhaXRpbmcgZm9yIGFjdGl2ZSwgYnV0IHdlIGdvdCBxdWV1ZWQuXCIgQU5EIGZpbGVuYW1lOlwiY29uc29sZS5odG1sXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTQyMjA1ODY0NzZ9

  Possibly related bugs:

  bug 1288038 - Invalid backup: Backup status must be available or error
  bug 1280937 - test_create_backup times out on waiting for resource delete
  bug 1267326 - test_create_backup fails due to unexpected image number

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1289546/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1280941] Re: metadata agent throwing AttributeError: 'HTTPClient' object has no attribute 'auth_tenant_id' with latest release

2014-03-07 Thread Launchpad Bug Tracker
This bug was fixed in the package python-neutronclient -
1:2.3.0-0ubuntu1.1

---
python-neutronclient (1:2.3.0-0ubuntu1.1) saucy-proposed; urgency=medium

  * debian/patches/fix-get-auth-info.patch: Fix regression introduced by
stable/havanna update. (LP: #1280941)
 -- Chuck Short  Fri, 07 Mar 2014 12:04:11 -0500

** Changed in: python-neutronclient (Ubuntu Saucy)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1280941

Title:
  metadata agent throwing AttributeError: 'HTTPClient' object has no
  attribute 'auth_tenant_id' with latest release

Status in Ubuntu Cloud Archive:
  Fix Committed
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  In Progress
Status in Python client library for Neutron:
  Fix Released
Status in tripleo - openstack on openstack:
  In Progress
Status in “python-neutronclient” package in Ubuntu:
  Fix Released
Status in “python-neutronclient” source package in Saucy:
  Fix Released

Bug description:
  So we need a new release - this is fixed in:
  commit 02baef46968b816ac544b037297273ff6a4e8e1b

  but until a new release is done, anyone running trunk Neutron will
  have the metadata agent fail.

  And neutron itself is missing a versioned dep on the fixed client (but
  obviously that has to wait for the client release to be done)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1280941/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1241175] Re: make sample config generation work with oslo messaging

2014-03-07 Thread Tracy Jones
** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1241175

Title:
  make sample config generation work with oslo messaging

Status in OpenStack Compute (Nova):
  Fix Released
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released

Bug description:
  From: https://review.openstack.org/#/c/39929

  "All  of the messaging options are now removed from nova.conf.sample
  because  the sample config file generator doesn't yet know how to pick
  up these."

  We need this fixed before we release Icehouse

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1241175/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1289538] [NEW] Titles in the left-side menu look inconsistent

2014-03-07 Thread Akihiro Motoki
Public bug reported:

Panel titles in the left-side menu look inconsistent.
Some panels are titled as "Manage " and some are not.

To me, "Manage" looks unnecessary because OpenStack Dashboard  is used
to manage compute, network, object storage,  and so on.

Thoughts?

** Affects: horizon
 Importance: Medium
 Status: New

** Changed in: horizon
Milestone: None => icehouse-rc1

** Changed in: horizon
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1289538

Title:
  Titles in the left-side menu look inconsistent

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Panel titles in the left-side menu look inconsistent.
  Some panels are titled as "Manage " and some are not.

  To me, "Manage" looks unnecessary because OpenStack Dashboard  is used
  to manage compute, network, object storage,  and so on.

  Thoughts?

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1289538/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1247056] Re: Too many connections to nova-api (and not cleaning up)?

2014-03-07 Thread Matthias Runge
Hmm, this was released before Icehouse-3 release (merged to master on
Feb 25 th)

** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1247056

Title:
  Too many connections to nova-api (and not cleaning up)?

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Python client library for Nova:
  Fix Released

Bug description:
  We hit this bug while doing a tripleo/tuskar provision against 7
  baremetal machines. Basically after everything was up and running for
  a while, whilst using the Horizon UI to view the active instances (the
  7 baremetal machines that were provisioned as nova compute nodes)
  Horizon threw an error complaining about too many open files.

  
  [stack@ucl-control-live ~]$ sudo lsof -i :8774 | wc -l
  2073  

  Restarting openstack-nova-api closed them all  (put them all into
  FIN_WAIT2 / CLOSE_WAIT.)

  I was able to recreate this on a more 'standard' setup with devstack.
  To recreate:

  1. Run devstack
  2. Monitor connections to nova-api in a terminal: while true; do sudo lsof -i :8774; date; sleep 2; done
 At this point for me the output here was steady at 10.

  3. Log into Horizon and Launch an instance, or two.
  4. In Horizon, alternate between "Project-->Overview" and 
"Project-->Instances"
  5. Watch the output from lsof. In a short time I got this up to 150+. Leaving 
it idle (doing nothing more anywhere), the connections hang around (i.e. all in 
ESTABLISHED state). In fact they hang around even after I log out of Horizon.

  Is this expected behaviour?

  thanks, marios

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1247056/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1289496] [NEW] 2 error messages for 2 floating ips

2014-03-07 Thread Matthew D. Wood
Public bug reported:

When trying to associate a 2nd floating IP to an instance, to error
messages are displayed.

One error message is the neutron/quantum message, which isn't really
suitable for a normal user.  The second is a very generic "Unable to
associate IP..." message.

This should be cleaned up and a more informative message should be given
to the user.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1289496

Title:
  2 error messages for  2 floating ips

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When trying to associate a 2nd floating IP to an instance, to error
  messages are displayed.

  One error message is the neutron/quantum message, which isn't really
  suitable for a normal user.  The second is a very generic "Unable to
  associate IP..." message.

  This should be cleaned up and a more informative message should be
  given to the user.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1289496/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1209345] Re: Migration tests fail with sqlalchemy 0.8

2014-03-07 Thread Matt Riedemann
In talking with Thomas, it sounds like there is a problem with using the
latest sqlalchemy-migrate and sqlalchemy > 0.7.99 with nova. I do remember
Dan Prince needing to fix something in nova when we tried updating to
migrate 0.8.2 for sqla 0.8 support; we'd have to go back, find that change,
and backport it to stable/havana.  It was pretty trivial from what I
remember: something about the migration test code putting a cap on
migrate/sqlalchemy for a now-invalid reason.

** Changed in: nova
   Status: Invalid => New

** Changed in: nova
   Importance: Wishlist => Undecided

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1209345

Title:
  Migration tests fail with sqlalchemy 0.8

Status in OpenStack Compute (Nova):
  New

Bug description:
   File 
"/��BUILDDIR��/nova-2013.2+git201308071233~saucy/nova/db/sqlalchemy/migrate_repo/versions/206_add_instance_cleaned.py",
 line 47, in downgrade
  instances.columns.cleaned.drop()
File "/usr/lib/python2.7/dist-packages/migrate/changeset/schema.py", line 
549, in drop
  engine._run_visitor(visitorcallable, self, connection, **kwargs)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 
1479, in _run_visitor
  conn._run_visitor(visitorcallable, element, **kwargs)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 
1122, in _run_visitor
  **kwargs).traverse_single(element)
File "/usr/lib/python2.7/dist-packages/migrate/changeset/ansisql.py", line 
53, in traverse_single
  ret = super(AlterTableVisitor, self).traverse_single(elem)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/visitors.py", line 
111, in traverse_single
  return meth(obj, **kw)
File 
"/usr/lib/python2.7/dist-packages/migrate/changeset/databases/sqlite.py", line 
90, in visit_column
  super(SQLiteColumnDropper,self).visit_column(column)
File 
"/usr/lib/python2.7/dist-packages/migrate/changeset/databases/sqlite.py", line 
53, in visit_column
  self.recreate_table(table,column,delta)
File 
"/usr/lib/python2.7/dist-packages/migrate/changeset/databases/sqlite.py", line 
40, in recreate_table
  table.create(bind=self.connection)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/schema.py", line 614, in 
create
  checkfirst=checkfirst)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 
1122, in _run_visitor
  **kwargs).traverse_single(element)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/visitors.py", line 
111, in traverse_single
  return meth(obj, **kw)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/ddl.py", line 93, 
in visit_table
  self.traverse_single(index)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/visitors.py", line 
111, in traverse_single
  return meth(obj, **kw)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/ddl.py", line 105, 
in visit_index
  self.connection.execute(schema.CreateIndex(index))
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 
662, in execute
  params)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 
720, in _execute_ddl
  compiled
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 
874, in _execute_context
  context)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 
1024, in _handle_dbapi_exception
  exc_info
File "/usr/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 
195, in raise_from_cause
  reraise(type(exception), exception, tb=exc_tb)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 
867, in _execute_context
  context)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 
324, in do_execute
  cursor.execute(statement, parameters)
  OperationalError: (OperationalError) table instances has no column named 
cleaned u'CREATE INDEX instances_host_deleted_cleaned_idx ON instances (host, 
deleted, cleaned)' ()
  ======================================================================
  FAIL: process-returncode
  tags: worker-0
  ----------------------------------------------------------------------

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1209345/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1182934] Re: xenapi: set hostname is failing with latest XS 6.1 PV tools

2014-03-07 Thread John Garbutt
This was fixed in the nova agent. Clearly there was another bug or blueprint
open for the fix.

** Changed in: nova
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1182934

Title:
  xenapi: set hostname is failing with latest XS 6.1 PV tools

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  XenServer 6.1 PV tools no longer set the hostname in the way expected
  by nova's XenAPI driver.

  Need to look at using the guest agent, or in the case of cloud-init not 
setting the hostname flag.
  Also, other network related xenstore values are not really relevant when the 
agent is not being used.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1182934/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1188141] Re: Deleting images breaks rescue mode for servers built from said images

2014-03-07 Thread John Garbutt
There is a blueprint for the fix for this:
https://blueprints.launchpad.net/nova/+spec/allow-image-to-be-specified-during-rescue

** Changed in: nova
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1188141

Title:
  Deleting images breaks rescue mode for servers built from said images

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  The rescue image is considered to be the image from which the instance was
  built. If that image is deleted, the rescue fails.

  Consider this scenario:
  The customer has a snapshot of an instance. He builds another instance from
  the snapshot and deletes the snapshot. Now the customer tries to rescue the
  instance that was built from the snapshot, and it fails because the rescue
  image is not available.

  Possible solutions -

  1. Recursively find the image that's available:
  The system_metadata of an instance has the "image_base_image_ref" property
  set on it. This points to the image from which it was built.
  An image has an instance_uuid if it's a snapshot.
  We'll have to recursively find the base install and use it as the rescue
  image if the snapshot is deleted (a rough sketch follows after this list).

  2. Store the corresponding original image reference in all images. So
  that it's easier to find the rescue image.
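
  A rough sketch of the availability check either solution needs (helper and
  metadata key names are hypothetical, not actual nova code):

  def get_rescue_image_ref(image_api, context, instance,
                           default_rescue_ref=None):
      """Return a usable rescue image ref, falling back when the image the
      instance was built from has been deleted."""
      image_ref = instance.system_metadata.get('image_base_image_ref')
      try:
          image_api.show(context, image_ref)  # raises if the image is gone
          return image_ref
      except Exception:
          # The snapshot was deleted; fall back to an operator-supplied
          # rescue image instead of failing the rescue outright.
          return default_rescue_ref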

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1188141/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1280941] Re: metadata agent throwing AttributeError: 'HTTPClient' object has no attribute 'auth_tenant_id' with latest release

2014-03-07 Thread James Page
SRU'ing the required fix  for neutronclient to Ubuntu 13.10 and the
Havana Cloud Archive.

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Changed in: cloud-archive
   Status: New => Triaged

** Changed in: cloud-archive
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1280941

Title:
  metadata agent throwing AttributeError: 'HTTPClient' object has no
  attribute 'auth_tenant_id' with latest release

Status in Ubuntu Cloud Archive:
  Triaged
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  In Progress
Status in Python client library for Neutron:
  Fix Released
Status in tripleo - openstack on openstack:
  In Progress
Status in “python-neutronclient” package in Ubuntu:
  Fix Released
Status in “python-neutronclient” source package in Saucy:
  Triaged

Bug description:
  So we need a new release - this is fixed in:
  commit 02baef46968b816ac544b037297273ff6a4e8e1b

  but until a new release is done, anyone running trunk Neutron will
  have the metadata agent fail.

  And neutron itself is missing a versioned dep on the fixed client (but
  obviously that has to wait for the client release to be done)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1280941/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1280941] Re: metadata agent throwing AttributeError: 'HTTPClient' object has no attribute 'auth_tenant_id' with latest release

2014-03-07 Thread James Page
** Also affects: python-neutronclient (Ubuntu Saucy)
   Importance: Undecided
   Status: New

** Changed in: python-neutronclient (Ubuntu)
   Importance: Undecided => Critical

** Changed in: python-neutronclient (Ubuntu Saucy)
   Importance: Undecided => Critical

** Changed in: python-neutronclient (Ubuntu)
   Status: New => Fix Released

** Changed in: python-neutronclient (Ubuntu Saucy)
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1280941

Title:
  metadata agent throwing AttributeError: 'HTTPClient' object has no
  attribute 'auth_tenant_id' with latest release

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  In Progress
Status in Python client library for Neutron:
  Fix Released
Status in tripleo - openstack on openstack:
  In Progress
Status in “python-neutronclient” package in Ubuntu:
  Fix Released
Status in “python-neutronclient” source package in Saucy:
  Triaged

Bug description:
  So we need a new release - this is fixed in:
  commit 02baef46968b816ac544b037297273ff6a4e8e1b

  but until a new release is done, anyone running trunk Neutron will
  have the metadata agent fail.

  And neutron itself is missing a versioned dep on the fixed client (but
  obviously that has to wait for the client release to be done)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1280941/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1277583] Re: python-sqlalchemy version inconsistent between projects

2014-03-07 Thread James Carey
For Neutron https://review.openstack.org/#/c/71984/ was abandoned because it 
was superseded by 
https://review.openstack.org/#/c/75240/ which has merged.   The new review did 
not indicate that it fixed this bug, so I am manually going to set the status 
to Fix Released for the Neutron portion.

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1277583

Title:
  python-sqlalchemy version inconsistent between projects

Status in OpenStack Identity (Keystone):
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  When installing OpenStack on Fedora 19 or later, installation can fail
  due to different capped versions of python-sqlalchemy.

  Keystone and Neutron both require versions under 0.8: 
  https://github.com/openstack/keystone/blob/master/requirements.txt#L12 (being 
addressed by https://review.openstack.org/#/c/70240)
  https://github.com/openstack/neutron/blob/master/requirements.txt#L21

  Where other projects have bumped the version in requirements.txt: 
  https://github.com/openstack/nova/blob/master/requirements.txt#L2
  https://github.com/openstack/cinder/blob/master/requirements.txt#L25
  https://github.com/openstack/heat/blob/master/requirements.txt#L17
  https://github.com/openstack/glance/blob/master/requirements.txt#L9

  
  An installation may fail depending on the order the services are installed 
in. If Nova is installed before Neutron, the Neutron installation will fail 
because Nova installed python-sqlalchemy-0.8.* and Neutron requires 
python-sqlalchemy <=0.7.99

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1277583/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1280941] Re: metadata agent throwing AttributeError: 'HTTPClient' object has no attribute 'auth_tenant_id' with latest release

2014-03-07 Thread James Page
** Also affects: python-neutronclient (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1280941

Title:
  metadata agent throwing AttributeError: 'HTTPClient' object has no
  attribute 'auth_tenant_id' with latest release

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  In Progress
Status in Python client library for Neutron:
  Fix Released
Status in tripleo - openstack on openstack:
  In Progress
Status in “python-neutronclient” package in Ubuntu:
  Fix Released
Status in “python-neutronclient” source package in Saucy:
  Triaged

Bug description:
  So we need a new release - this is fixed in:
  commit 02baef46968b816ac544b037297273ff6a4e8e1b

  but until a new release is done, anyone running trunk Neutron will
  have the metadata agent fail.

  And neutron itself is missing a versioned dep on the fixed client (but
  obviously that has to wait for the client release to be done)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1280941/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1289466] [NEW] Host aggregates missing the metadata column

2014-03-07 Thread Julie Pichon
Public bug reported:

When the Host Aggregates table was moved to its own panel, we lost the
'metadata' column. It contains useful information and should be
displayed in the table.

Lost column:
https://github.com/openstack/horizon/blob/028332da4a/openstack_dashboard/dashboards/admin/info/tables.py#L151

I'll open a separate bug about updating metadata.
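
A short sketch of what re-adding the column could look like (illustrative
only, not the actual panel code):

from django.utils.translation import ugettext_lazy as _

from horizon import tables


class HostAggregatesTable(tables.DataTable):
    name = tables.Column('name', verbose_name=_('Name'))
    # The column that was lost in the move from the admin/info panel:
    metadata = tables.Column('metadata', verbose_name=_('Metadata'))

    class Meta:
        name = 'host_aggregates'
        verbose_name = _('Host Aggregates')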

** Affects: horizon
 Importance: Medium
 Assignee: Luis de Bethencourt (l2is)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1289466

Title:
  Host aggregates missing the metadata column

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When the Host Aggregates table was moved to its own panel, we lost the
  'metadata' column. It contains useful information and should be
  displayed in the table.

  Lost column:
  
https://github.com/openstack/horizon/blob/028332da4a/openstack_dashboard/dashboards/admin/info/tables.py#L151

  I'll open a separate bug about updating metadata.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1289466/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1073569] Re: Jenkins jobs fail because of incompatibility between sqlalchemy-migrate and the newest sqlalchemy-0.8.0b1

2014-03-07 Thread James Page
** No longer affects: keystone (Ubuntu)

** No longer affects: nova (Ubuntu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1073569

Title:
  Jenkins jobs fail because of incompatibility between sqlalchemy-
  migrate and the newest sqlalchemy-0.8.0b1

Status in Cinder:
  Fix Released
Status in Cinder folsom series:
  Fix Released
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance folsom series:
  Fix Released
Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone folsom series:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) folsom series:
  Fix Released
Status in “cinder” package in Ubuntu:
  Fix Released
Status in “glance” package in Ubuntu:
  Fix Released
Status in “glance” source package in Precise:
  Fix Released
Status in “keystone” source package in Precise:
  Fix Released
Status in “nova” source package in Precise:
  Fix Released
Status in “cinder” source package in Quantal:
  Fix Released
Status in “glance” source package in Quantal:
  Fix Released
Status in “keystone” source package in Quantal:
  Fix Released
Status in “nova” source package in Quantal:
  Fix Released

Bug description:
  Jenkins jobs currently fail with this error:

  09:52:21 ERROR: test_create_endpoint_404 (test_backend_sql.SqlCatalog)
  09:52:21 ----------------------------------------------------------------------
  09:52:21 Traceback (most recent call last):
  09:52:21   File 
"/home/jenkins/workspace/gate-keystone-python27/tests/test_backend_sql.py", 
line 42, in setUp
  09:52:21 self.catalog_man = catalog.Manager()
  09:52:21   File 
"/home/jenkins/workspace/gate-keystone-python27/.tox/py27/local/lib/python2.7/site-packages/keystone/catalog/core.py",
 line 67, in __init__
  09:52:21 super(Manager, self).__init__(CONF.catalog.driver)
  09:52:21   File 
"/home/jenkins/workspace/gate-keystone-python27/.tox/py27/local/lib/python2.7/site-packages/keystone/common/manager.py",
 line 36, in __init__
  09:52:21 self.driver = importutils.import_object(driver_name)
  09:52:21   File 
"/home/jenkins/workspace/gate-keystone-python27/.tox/py27/local/lib/python2.7/site-packages/keystone/openstack/common/importutils.py",
 line 40, in import_object
  09:52:21 return import_class(import_str)(*args, **kwargs)
  09:52:21   File 
"/home/jenkins/workspace/gate-keystone-python27/.tox/py27/local/lib/python2.7/site-packages/keystone/openstack/common/importutils.py",
 line 30, in import_class
  09:52:21 __import__(mod_str)
  09:52:21   File 
"/home/jenkins/workspace/gate-keystone-python27/.tox/py27/local/lib/python2.7/site-packages/keystone/catalog/backends/sql.py",
 line 21, in <module>
  09:52:21 from keystone.common.sql import migration
  09:52:21   File 
"/home/jenkins/workspace/gate-keystone-python27/.tox/py27/local/lib/python2.7/site-packages/keystone/common/sql/migration.py",
 line 23, in <module>
  09:52:21 from migrate.versioning import api as versioning_api
  09:52:21   File 
"/home/jenkins/workspace/gate-keystone-python27/.tox/py27/local/lib/python2.7/site-packages/migrate/versioning/api.py",
 line 33, in <module>
  09:52:21 from migrate.versioning import (repository, schema, version,
  09:52:21   File 
"/home/jenkins/workspace/gate-keystone-python27/.tox/py27/local/lib/python2.7/site-packages/migrate/versioning/schema.py",
 line 10, in <module>
  09:52:21 from sqlalchemy import exceptions as sa_exceptions
  09:52:21 ImportError: cannot import name exceptions

  
  From the sqlalchemy-migrate webpage: SQLAlchemy-migrate 0.7.2 is compatible 
with SQLAlchemy 0.6.x and 0.7.x. 

  From SQLAlchemy's 0.8.0b1 Release Notes:

  [general] [removed] The “sqlalchemy.exceptions” synonym for
  “sqlalchemy.exc” is removed fully.(link)
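
  For illustration, the kind of compatibility shim this calls for (a sketch;
  the real fix belongs in sqlalchemy-migrate):

  try:
      # Alias that was removed in SQLAlchemy 0.8.0b1.
      from sqlalchemy import exceptions as sa_exceptions
  except ImportError:
      from sqlalchemy import exc as sa_exceptions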

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1073569/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1021708] Re: no CLI interface to find all of the tenants which a given user belongs to

2014-03-07 Thread James Page
** Changed in: keystone (Ubuntu)
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1021708

Title:
  no CLI interface to find all of the tenants which a given user belongs
  to

Status in OpenStack Identity (Keystone):
  Fix Released
Status in OpenStack Command Line Client:
  Fix Released
Status in “keystone” package in Ubuntu:
  Fix Released

Bug description:
  There does not seem to be a CLI, nor does the python api appear to
  provide a way (other than iteration) to find all of the tenants to
  which a user belongs.  I basically want the rough equivalent of the
  following sql:

  SELECT tenant.* FROM tenant,user_tenant_membership WHERE
  user_tenant_membership.user_id = "$USERID" AND
  user_tenant_membership.tenant_id = tenant.id;
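
  For illustration, a hedged sketch of the iteration workaround using
  python-keystoneclient (credentials and auth_url are placeholders, and using
  roles_for_user as the membership test is an assumption, not a documented
  "tenants for user" call):

    from keystoneclient.v2_0 import client

    keystone = client.Client(username='admin', password='secret',
                             tenant_name='admin',
                             auth_url='http://127.0.0.1:35357/v2.0/')

    user_id = 'USERID'
    # there is no direct "tenants for this user" call, so walk every tenant
    # and keep the ones where the user holds at least one role
    tenants = [t for t in keystone.tenants.list()
               if keystone.roles.roles_for_user(user_id, t.id)]
    print([t.name for t in tenants])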

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1021708/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1032633] Re: Keystone's token table grows unconditionally when using SQL backend.

2014-03-07 Thread James Page
** Changed in: keystone (Ubuntu)
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1032633

Title:
  Keystone's token table grows unconditionally when using SQL backend.

Status in OpenStack Identity (Keystone):
  Fix Released
Status in “keystone” package in Ubuntu:
  Fix Released

Bug description:
  Keystone's `token` table grows unconditionally with expired tokens
  when using the SQL backend.

  Keystone should provide a backend-agnostic method to find and delete
  these tokens. This could be run via a periodic task or supplied as a
  script to run as a cron job.

  An example SQL statement (if you're using a SQL backend) to workaround
  this problem:

  sql> DELETE FROM token WHERE expired <= NOW();

  It may be ideal to allow a date smear to allow older tokens to persist
  if needed.
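
  A minimal Python sketch of a cron-able cleanup under those assumptions: the
  connection URL is a placeholder (take it from keystone.conf), and the expiry
  column name (expired vs. expires) should be checked against the deployed
  schema before running it:

    import sqlalchemy

    # hypothetical connection URL; use the one from keystone.conf [sql] connection
    engine = sqlalchemy.create_engine('mysql://keystone:secret@localhost/keystone')
    engine.execute(sqlalchemy.text(
        "DELETE FROM token WHERE expires <= NOW()"))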

  Choosing the `memcache` backend may workaround this issue, but SQL is
  the package default.

  System Information:

  $ dpkg-query --show keystone
  keystone2012.1+stable~20120608-aff45d6-0ubuntu1

  $ cat /etc/lsb-release
  DISTRIB_ID=Ubuntu
  DISTRIB_RELEASE=12.04
  DISTRIB_CODENAME=precise
  DISTRIB_DESCRIPTION="Ubuntu 12.04 LTS"

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1032633/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1289062] Re: LDAP read only config options are ignored

2014-03-07 Thread Eric Brown
These are used.  I recently fixed a bug that is related.

See
https://github.com/openstack/keystone/blob/master/keystone/common/ldap/core.py#L298

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1289062

Title:
  LDAP read only config options are ignored

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  The LDAP configuration includes a number of options such as:

  [ldap]
  user_allow_create = False
  user_allow_update = False
  user_allow_delete = False

  tenant_allow_create = True
  tenant_allow_update = True
  tenant_allow_delete = True

  role_allow_create = True
  role_allow_update = True
  role_allow_delete = True

  From what I can gather these were added in the Essex release but are
  currently being completely ignored. We either need to enforce these
  values or remove them from the configuration files, as they are
  misleading to our users.
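
  A hedged sketch of what enforcement could look like in the LDAP backend (the
  class, method name and exception are illustrative only, not keystone's actual
  code; the flag would come from CONF.ldap.user_allow_create):

    class UserApi(object):
        allow_create = False  # would be read from CONF.ldap.user_allow_create

        def create(self, values):
            if not self.allow_create:
                # keystone would raise a Forbidden-style error here; the exact
                # exception type is an assumption in this sketch
                raise RuntimeError('LDAP user creation is disabled')
            # ... write the entry to LDAP ...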

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1289062/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1262288] Re: ignored VimFaultException showing up in nova unit test logs

2014-03-07 Thread Tracy Jones
** Changed in: openstack-vmwareapi-team
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1262288

Title:
  ignored VimFaultException showing up in nova unit test logs

Status in OpenStack Compute (Nova):
  Fix Released
Status in The OpenStack VMwareAPI subTeam:
  Fix Released

Bug description:
  This started showing up recently in the nova unit tests:

  Exception nova.virt.vmwareapi.error_util.VimFaultException: 
VimFaultException() in > ignored
  Exception nova.virt.vmwareapi.error_util.VimFaultException: 
VimFaultException() in > ignored
  Exception nova.virt.vmwareapi.error_util.VimFaultException: 
VimFaultException() in > ignored

  It doesn't cause a failure, but it's ugly and should be cleaned up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1262288/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1279032] Re: openstack-dashboard not fully updated for Django 1.5

2014-03-07 Thread James Page
This is actually a problem with the way juju introduces the cloud-tools
pocket for lxc and mongodb; this was resolved in one of the recent
1.17.x releases.

** Changed in: horizon (Ubuntu)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1279032

Title:
  openstack-dashboard not fully updated for Django 1.5

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in “horizon” package in Ubuntu:
  Invalid

Bug description:
  Installing Grizzly on Ubuntu 12.04 and using the Cloud Archive, when
  accessing Horizon I get an Internal Server Error caused by using
  "from django.utils.translation import force_unicode" in
  /usr/share/openstack-dashboard/openstack_dashboard/dashboards/admin/users/forms.py

  Full traceback: http://pastebin.ubuntu.com/6916728/

  It seems to be fixed upstream:
  
https://github.com/openstack/horizon/commit/5d32caf3af3b11fcf496ebb04ccfc44f49cbe0b9
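
  The upstream commit switches to force_text; for reference, a hedged sketch of
  the usual compatibility pattern for code that still has to run on older
  Django (an illustration, not the exact upstream change):

    try:
        # Django >= 1.4.2 / 1.5
        from django.utils.encoding import force_text
    except ImportError:
        # older Django only ships force_unicode
        from django.utils.encoding import force_unicode as force_text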

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1279032/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1079611] Re: Download Juju Environment Config gives wrong credentials

2014-03-07 Thread James Page
We dropped this feature a couple of releases ago; marking won't fix.

** Changed in: horizon (Ubuntu)
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1079611

Title:
  Download Juju Environment Config gives wrong credentials

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in “horizon” package in Ubuntu:
  Won't Fix

Bug description:
  When I download the Juju configuration using "Download Juju
  Environment Config" in the dashboard, the EC2 credentials (combined-
  key and secret-key) are wrong. If I download using "Download EC2
  Credentials" then they are correct.

  Comparing the code for these two, they are suspiciously different:

  EC2:

  def find_or_create_access_keys(request, tenant_id):
  keys = api.keystone.list_ec2_credentials(request, request.user.id)
  for key in keys:
  if key.tenant_id == tenant_id:
  return key
  return api.keystone.create_ec2_credentials(request,

  Juju:

  def find_or_create_access_keys(request, tenant_id):
  keys = api.keystone.list_ec2_credentials(request, request.user.id)
  if keys:
  return keys[0]
  else:
  return api.keystone.create_ec2_credentials(request,

  Copying the EC2 logic into the Juju file fixes it. This is with
  version 2012.2-0ubuntu2~cloud0, but it looks like it's the same in
  trunk.
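
  A hedged sketch of that fix applied to the Juju helper (the trailing
  arguments to create_ec2_credentials are assumed here, since they are
  truncated in the snippets above):

    from openstack_dashboard import api

    def find_or_create_access_keys(request, tenant_id):
        keys = api.keystone.list_ec2_credentials(request, request.user.id)
        for key in keys:
            # only reuse a keypair that was issued for this tenant
            if key.tenant_id == tenant_id:
                return key
        return api.keystone.create_ec2_credentials(request, request.user.id,
                                                   tenant_id)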

  Thanks,

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1079611/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1288814] Re: limits API raises TypeError with NoopQuotaDriver

2014-03-07 Thread Matthew Edmonds
*** This bug is a duplicate of bug 1244842 ***
https://bugs.launchpad.net/bugs/1244842

I was using havana, apparently pre-backport. Marked this as a dup. Thank
you.

** This bug has been marked a duplicate of bug 1244842
   NoopQuotaDriver returns usages incorrect format

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1288814

Title:
  limits API raises TypeError with NoopQuotaDriver

Status in OpenStack Compute (Nova):
  Incomplete

Bug description:
  when quota_driver=nova.quota.NoopQuotaDriver in nova.conf, a GET
  v2/{tenant_id}/limits request fails with HTTP 400 and api.log shows
  the following stacktrace:

  2014-03-03 04:16:31.468 3182 TRACE nova.api.openstack.wsgi Traceback (most 
recent call last):
  2014-03-03 04:16:31.468 3182 TRACE nova.api.openstack.wsgi File 
"/usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py", line 1012, in 
_process_stack
  2014-03-03 04:16:31.468 3182 TRACE nova.api.openstack.wsgi action_result = 
self.dispatch(meth, request, action_args)
  2014-03-03 04:16:31.468 3182 TRACE nova.api.openstack.wsgi File 
"/usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py", line 1093, in 
dispatch
  2014-03-03 04:16:31.468 3182 TRACE nova.api.openstack.wsgi return 
method(req=request, **action_args)
  2014-03-03 04:16:31.468 3182 TRACE nova.api.openstack.wsgi File 
"/usr/lib/python2.6/site-packages/nova/api/openstack/compute/limits.py", line 
96, in index
  2014-03-03 04:16:31.468 3182 TRACE nova.api.openstack.wsgi abs_limits = 
dict((k, v['limit']) for k, v in quotas.items())
  2014-03-03 04:16:31.468 3182 TRACE nova.api.openstack.wsgi File 
"/usr/lib/python2.6/site-packages/nova/api/openstack/compute/limits.py", line 
96, in 
  2014-03-03 04:16:31.468 3182 TRACE nova.api.openstack.wsgi abs_limits = 
dict((k, v['limit']) for k, v in quotas.items())
  2014-03-03 04:16:31.468 3182 TRACE nova.api.openstack.wsgi TypeError: 'int' 
object is unsubscriptable

  This appears to be because v is a bare integer (-1) for every resource
  with the NoopQuotaDriver, rather than a dict containing a 'limit' key.
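
  A small hedged sketch of the shape mismatch and a tolerant rewrite of the
  failing expression (illustrative only; the real fix is tracked in the
  duplicate bug 1244842):

    # DbQuotaDriver-style usages vs. what NoopQuotaDriver hands back
    db_quotas = {'instances': {'limit': 10, 'in_use': 2, 'reserved': 0}}
    noop_quotas = {'instances': -1}

    def abs_limits(quotas):
        # tolerate both a per-resource dict and a bare integer
        return dict((k, v['limit'] if isinstance(v, dict) else v)
                    for k, v in quotas.items())

    print(abs_limits(db_quotas))    # {'instances': 10}
    print(abs_limits(noop_quotas))  # {'instances': -1}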

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1288814/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1287983] Re: hyperv check failure

2014-03-07 Thread Octavian Ciuhandu
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1287983

Title:
  hyperv check failure

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  http://64.119.130.115/78028/3/

  
  http://64.119.130.115/78028/3/Hyper-V_logs/hv-compute1/nova-compute.log.gz 
has lot of stacktraces

  2014-03-04 18:48:39.417 4516 TRACE nova.compute.manager [instance: 
db1c9be4-318c-450f-b4bf-5de90cc09c43] ConnectionFailed: Connection to neutron 
failed: Maximum attempts reached

  
  2014-03-04 18:54:53.631 4516 TRACE nova.virt.hyperv.vmops HyperVException: 
Operation failed with return value: 32775

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1287983/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1289361] [NEW] xenapi: unable to create instances with ephemeral disks

2014-03-07 Thread John Garbutt
Public bug reported:

The resize ephemeral disk blueprint has regressed the ability to spawn
instances with ephemeral disks.

** Affects: nova
 Importance: High
 Assignee: John Garbutt (johngarbutt)
 Status: Triaged


** Tags: xenserver

** Tags added: xenserver

** Changed in: nova
Milestone: None => icehouse-rc1

** Changed in: nova
 Assignee: (unassigned) => John Garbutt (johngarbutt)

** Changed in: nova
   Importance: Undecided => High

** Changed in: nova
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1289361

Title:
  xenapi: unable to create instances with ephemeral disks

Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  The resize ephemeral disk blueprint has regressed the ability to spawn
  instances with ephemeral disks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1289361/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1289343] [NEW] enabling SSL config in keystone not working with ssl_setup certs

2014-03-07 Thread Pradeep Kilambi
Public bug reported:

I've been trying to get SSL enabled in keystone with the default certs
generated by the ssl_setup command, but not having much luck.

Here is my setup:

1. Updated the endpoint urls to https:

public_endpoint = https://192.168.255.208:5000/v2.0/
# admin_endpoint = http://localhost:%(admin_port)s/
admin_endpoint = https://192.168.255.208:35357/v2.0/

2. Updated SSL section :

[ssl]
enable = True
certfile = /etc/keystone/ssl/certs/keystone.pem
keyfile = /etc/keystone/ssl/private/keystonekey.pem
ca_certs = /etc/keystone/ssl/certs/ca.pem
ca_key = /etc/keystone/ssl/certs/cakey.pem
key_size = 1024
valid_days = 3650
cert_required = False
cert_subject= /C=US/ST=Unset/L=Unset/O=Unset/CN=192.168.255.208

3. restart keystone

4. keystone-manage ssl_setup --keystone-user keystone --keystone-group
keystone

5. # ls -lart /etc/keystone/ssl/*
/etc/keystone/ssl/private:
total 12
drwxr-xr-x 4 keystone keystone 4096 Mar  6 15:34 ..
drwxr-x--- 2 keystone keystone 4096 Mar  6 15:34 .
-rw-r- 1 keystone keystone  891 Mar  6 15:34 keystonekey.pem

/etc/keystone/ssl/certs:
total 48
-rw-r- 1 keystone keystone2 Mar  6 15:34 serial.old
-rw-r- 1 keystone keystone 1920 Mar  6 15:34 openssl.conf
-rw-r- 1 keystone keystone0 Mar  6 15:34 index.txt.old
-rw-r- 1 keystone keystone  887 Mar  6 15:34 cakey.pem
-rw-r--r-- 1 keystone keystone  908 Mar  6 15:34 ca.pem
drwxr-xr-x 4 keystone keystone 4096 Mar  6 15:34 ..
-rw-r--r-- 1 keystone keystone  676 Mar  6 15:34 req.pem
-rw-r--r-- 1 keystone keystone3 Mar  6 15:34 serial
-rw-r--r-- 1 keystone keystone 2842 Mar  6 15:34 keystone.pem
-rw-r--r-- 1 keystone keystone   20 Mar  6 15:34 index.txt.attr
-rw-r--r-- 1 keystone keystone   64 Mar  6 15:34 index.txt
-rw-r--r-- 1 keystone keystone 2842 Mar  6 15:34 01.pem
drwxr-xr-x 2 keystone keystone 4096 Mar  6 18:05 .

6. My openrc has the following:

#!/bin/sh
export OS_NO_CACHE='true'
export OS_TENANT_NAME='openstack'
export OS_USERNAME='admin'
export OS_PASSWORD='secret'
#export OS_AUTH_URL='https://192.168.255.208:5000/v2.0/'
#export OS_AUTH_TOKEN='keystone_admin_token'
export OS_SERVICE_ENDPOINT='https://192.168.255.208:35357/v2.0/'
export OS_SERVICE_TOKEN='keystone_admin_token'
export OS_AUTH_STRATEGY='keystone'
export OS_REGION_NAME='RegionOne'

7.# keystone --debug role-list
WARNING: Bypassing authentication using a token & endpoint (authentication 
credentials are being ignored).
REQ: curl -i -X GET https://192.168.255.208:35357/v2.0/OS-KSADM/roles -H 
"User-Agent: python-keystoneclient" -H "X-Auth-Token: keystone_admin_token"
 (HTTP Unable to 
establish connection to https://192.168.255.208:35357/v2.0/OS-KSADM/roles)

the same command with --insecure flag works.
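
A hedged way to check whether the endpoint is serving TLS correctly,
independent of the keystone CLI (the IP, token and CA path are taken from the
report above; requests is used here only as a diagnostic):

    import requests

    resp = requests.get('https://192.168.255.208:35357/v2.0/',
                        headers={'X-Auth-Token': 'keystone_admin_token'},
                        verify='/etc/keystone/ssl/certs/ca.pem')
    print(resp.status_code)

If this fails with an SSL error, the generated certificate or CA is likely the
problem; if the connection cannot be established at all, keystone is probably
not actually listening with SSL enabled.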

** Affects: keystone
 Importance: Undecided
 Status: New

** Description changed:

  I've been trying to get SSL enabled in keystone with default certs
  genereated from ssl_setup command, but not having much luck.
  
  Here is my setup:
  
  1. Updated the endpoint urls to https:
  
  public_endpoint = https://192.168.255.208:5000/v2.0/
  # admin_endpoint = http://localhost:%(admin_port)s/
  admin_endpoint = https://192.168.255.208:35357/v2.0/
  
  2. Updated SSL section :
  
  [ssl]
  enable = True
  certfile = /etc/keystone/ssl/certs/keystone.pem
  keyfile = /etc/keystone/ssl/private/keystonekey.pem
  ca_certs = /etc/keystone/ssl/certs/ca.pem
  ca_key = /etc/keystone/ssl/certs/cakey.pem
  key_size = 1024
  valid_days = 3650
  cert_required = False
  cert_subject= /C=US/ST=Unset/L=Unset/O=Unset/CN=192.168.255.208
  
  3. restart keystone
  
  4. keystone-manage ssl_setup --keystone-user keystone --keystone-group
  keystone
  
  5. # ls -lart /etc/keystone/ssl/*
  /etc/keystone/ssl/private:
  total 12
  drwxr-xr-x 4 keystone keystone 4096 Mar  6 15:34 ..
  drwxr-x--- 2 keystone keystone 4096 Mar  6 15:34 .
  -rw-r- 1 keystone keystone  891 Mar  6 15:34 keystonekey.pem
  
  /etc/keystone/ssl/certs:
  total 48
  -rw-r- 1 keystone keystone2 Mar  6 15:34 serial.old
  -rw-r- 1 keystone keystone 1920 Mar  6 15:34 openssl.conf
  -rw-r- 1 keystone keystone0 Mar  6 15:34 index.txt.old
  -rw-r- 1 keystone keystone  887 Mar  6 15:34 cakey.pem
  -rw-r--r-- 1 keystone keystone  908 Mar  6 15:34 ca.pem
  drwxr-xr-x 4 keystone keystone 4096 Mar  6 15:34 ..
  -rw-r--r-- 1 keystone keystone  676 Mar  6 15:34 req.pem
  -rw-r--r-- 1 keystone keystone3 Mar  6 15:34 serial
  -rw-r--r-- 1 keystone keystone 2842 Mar  6 15:34 keystone.pem
  -rw-r--r-- 1 keystone keystone   20 Mar  6 15:34 index.txt.attr
  -rw-r--r-- 1 keystone keystone   64 Mar  6 15:34 index.txt
  -rw-r--r-- 1 keystone keystone 2842 Mar  6 15:34 01.pem
  drwxr-xr-x 2 keystone keystone 4096 Mar  6 18:05 .
  
- 
  6. My openrc has the following:
  
  #!/bin/sh
  export OS_NO_CACHE='true'
  export OS_TENANT_NAME='openstack'
  export OS_USERNAME='admin'
- export OS_PASSWORD='Cisco123'
+ export OS_PASSWORD='secret'
  #export OS_AUTH_URL='https://192.1

[Yahoo-eng-team] [Bug 1289346] [NEW] images from unreachable HTTP source are silently discarded

2014-03-07 Thread Bernhard M. Wiedemann
Public bug reported:

When trying to import an image from an HTTP URL that happens to be
unresolvable or unreachable, glance reports it as queued with an exit
code of 0 (success), but a subsequent glance image-list shows that the
new image is completely missing.

# glance image-create --name=foobar --disk-format=qcow2 --container-
format=bare --copy-from http://unreachable.example.org/yourimage.qcow2

...
| size | 0|
| status   | queued   |
# echo $?
0

an explicit
# glance image-show $ID
shows status killed

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1289346

Title:
  images from unreachable HTTP source are silently discarded

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  When trying to import an image from an HTTP URL that happens to be
  unresolvable or unreachable, glance reports it as queued with an exit
  code of 0 (success), but a subsequent glance image-list shows that the
  new image is completely missing.

  # glance image-create --name=foobar --disk-format=qcow2 --container-
  format=bare --copy-from http://unreachable.example.org/yourimage.qcow2

  ...
  | size | 0|
  | status   | queued   |
  # echo $?
  0

  an explicit
  # glance image-show $ID
  shows status killed

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1289346/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1289309] [NEW] collie not found error in failed gate test

2014-03-07 Thread Steven Hardy
Public bug reported:

Hit this error, which is not whitelisted and so causes the gate test to fail:

2014-03-07 10:54:32.474 31118 ERROR glance.store.sheepdog [-] Error in store 
configuration: Unexpected error while running command.
Command: None
Exit code: -
Stdout: "Unexpected error while running command.\nCommand: collie\nExit code: 
127\nStdout: ''\nStderr: '/bin/sh: 1: collie: not found\\n'"
Stderr: None
2014-03-07 10:54:32.475 31118 WARNING glance.store.base [-] Failed to configure 
store correctly: Store sheepdog could not be configured correctly. Reason: 
Error in store configuration: Unexpected error while running command.
Command: None
Exit code: -
Stdout: "Unexpected error while running command.\nCommand: collie\nExit code: 
127\nStdout: ''\nStderr: '/bin/sh: 1: collie: not found\\n'"
Stderr: None Disabling add method.

http://logs.openstack.org/62/72762/14/check/check-grenade-
dsvm/fb2420e/logs/new/screen-g-api.txt.gz

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1289309

Title:
  collie not found error in failed gate test

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Hit this error, which is not whitelisted and so causes the gate test to
  fail:

  2014-03-07 10:54:32.474 31118 ERROR glance.store.sheepdog [-] Error in store 
configuration: Unexpected error while running command.
  Command: None
  Exit code: -
  Stdout: "Unexpected error while running command.\nCommand: collie\nExit code: 
127\nStdout: ''\nStderr: '/bin/sh: 1: collie: not found\\n'"
  Stderr: None
  2014-03-07 10:54:32.475 31118 WARNING glance.store.base [-] Failed to 
configure store correctly: Store sheepdog could not be configured correctly. 
Reason: Error in store configuration: Unexpected error while running command.
  Command: None
  Exit code: -
  Stdout: "Unexpected error while running command.\nCommand: collie\nExit code: 
127\nStdout: ''\nStderr: '/bin/sh: 1: collie: not found\\n'"
  Stderr: None Disabling add method.

  http://logs.openstack.org/62/72762/14/check/check-grenade-
  dsvm/fb2420e/logs/new/screen-g-api.txt.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1289309/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1289305] [NEW] Plugins can not override templates

2014-03-07 Thread Matthias Runge
Public bug reported:

Currently, there is no way to extend/override e.g. base.html to pull in a
custom theme via the plugin mechanism.

Templates from apps added via openstack_dashboard/enabled are searched
with last priority, i.e. if anything else provides base.html, that one
will be taken.
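
A heavily hedged settings-level workaround sketch, not the plugin-mechanism
fix this bug asks for: Django's filesystem loader consults TEMPLATE_DIRS
before app template directories, so a theme directory listed there can shadow
base.html (the path is hypothetical and whether local_settings.py is the right
place to set it is an assumption about the deployment):

    # local_settings.py sketch: a directory in TEMPLATE_DIRS is searched by
    # Django's filesystem loader before any app's templates/ directory
    TEMPLATE_DIRS = ('/usr/share/openstack-dashboard/themes/mytheme/templates',)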

** Affects: horizon
 Importance: Medium
 Assignee: Matthias Runge (mrunge)
 Status: In Progress

** Changed in: horizon
Milestone: None => icehouse-rc1

** Changed in: horizon
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1289305

Title:
  Plugins can not override templates

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Currently, there is no way to extend/override e.g. base.html to pull in
  a custom theme via the plugin mechanism.

  Templates from apps added via openstack_dashboard/enabled are searched
  with last priority, i.e. if anything else provides base.html, that one
  will be taken.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1289305/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1289283] [NEW] Cannot boot from volume that was created from an image

2014-03-07 Thread Julie Pichon
Public bug reported:

I'm using Devstack. I discovered this using the dashboard; here's how to
reproduce it on the command line. The image is a raw Fedora 20 cloud image
(
http://download.fedoraproject.org/pub/fedora/linux/releases/20/Images/x86_64
/Fedora-x86_64-20-20131211.1-sda.raw.xz ).

$ cinder create --image-id 16e4aba3-7980-4b4a-aea5-787be62b03ef 1

Then using the id of the new volume:

$ nova boot --boot-volume 2e42b0ac-5910-4e11-ab07-5968d984b364 --flavor
1 test

The instance errors; from nova show:
| fault| {"message": "'name'", "code": 500, 
"created": "2014-03-07T11:03:53Z"} |

Nova CPU logs:

2014-03-07 11:03:53.735 DEBUG oslo.messaging._drivers.amqp [-] UNIQUE_ID is 
4ea53d662ce44c7aaae28fb8feecd08e. from (pid=11493) _add_unique_id 
/opt/stack/oslo.messaging/oslo/messaging/_drivers/amqp.py:338
2014-03-07 11:03:53.820 DEBUG nova.compute.utils 
[req-6e38fca5-410f-421a-87fc-b83fd3afede0 demo demo] [instance: 
82f1d951-b5d0-43fa-a244-922beae98ead] 'name' from (pid=11493) 
notify_about_instance_usage /opt/stack/nova/nova/compute/utils.py:335
2014-03-07 11:03:53.820 TRACE nova.compute.utils [instance: 
82f1d951-b5d0-43fa-a244-922beae98ead] Traceback (most recent call last):
2014-03-07 11:03:53.820 TRACE nova.compute.utils [instance: 
82f1d951-b5d0-43fa-a244-922beae98ead]   File 
"/opt/stack/nova/nova/compute/manager.py", line 971, in _run_instance
2014-03-07 11:03:53.820 TRACE nova.compute.utils [instance: 
82f1d951-b5d0-43fa-a244-922beae98ead] extra_usage_info = {"image_name": 
image_meta['name']}
2014-03-07 11:03:53.820 TRACE nova.compute.utils [instance: 
82f1d951-b5d0-43fa-a244-922beae98ead] KeyError: 'name'
2014-03-07 11:03:53.820 TRACE nova.compute.utils [instance: 
82f1d951-b5d0-43fa-a244-922beae98ead] 
2014-03-07 11:03:53.821 DEBUG nova.openstack.common.lockutils 
[req-6e38fca5-410f-421a-87fc-b83fd3afede0 demo demo] Semaphore / lock released 
"do_run_instance" from (pid=11493) inner 
/opt/stack/nova/nova/openstack/common/lockutils.py:252
2014-03-07 11:03:53.822 DEBUG oslo.messaging._drivers.amqpdriver [-] MSG_ID is 
fd7da43b1fcf429b9b6e7dbe964e8aba from (pid=11493) _send 
/opt/stack/oslo.messaging/oslo/messaging/_drivers/amqpdriver.py:377
2014-03-07 11:03:53.822 DEBUG oslo.messaging._drivers.amqp [-] UNIQUE_ID is 
e3df4171e97c47ce89096f0593cb02cf. from (pid=11493) _add_unique_id 
/opt/stack/oslo.messaging/oslo/messaging/_drivers/amqp.py:338
2014-03-07 11:03:53.846 DEBUG oslo.messaging._drivers.amqpdriver [-] MSG_ID is 
d8f418d961fd49d092767a310df0348c from (pid=11493) _send 
/opt/stack/oslo.messaging/oslo/messaging/_drivers/amqpdriver.py:377
2014-03-07 11:03:53.847 DEBUG oslo.messaging._drivers.amqp [-] UNIQUE_ID is 
d3eebe0a77294d209d9918a4633a6fa0. from (pid=11493) _add_unique_id 
/opt/stack/oslo.messaging/oslo/messaging/_drivers/amqp.py:338
2014-03-07 11:03:53.885 DEBUG oslo.messaging._drivers.amqpdriver [-] MSG_ID is 
08a04f80a6294785947e6c07ed1b6c57 from (pid=11493) _send 
/opt/stack/oslo.messaging/oslo/messaging/_drivers/amqpdriver.py:377
2014-03-07 11:03:53.885 DEBUG oslo.messaging._drivers.amqp [-] UNIQUE_ID is 
1f76132cc21d4d8280c45ea661b9d4fb. from (pid=11493) _add_unique_id 
/opt/stack/oslo.messaging/oslo/messaging/_drivers/amqp.py:338
2014-03-07 11:03:53.957 ERROR oslo.messaging.rpc.dispatcher [-] Exception 
during message handling: 'name'
2014-03-07 11:03:53.957 TRACE oslo.messaging.rpc.dispatcher Traceback (most 
recent call last):
2014-03-07 11:03:53.957 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/oslo.messaging/oslo/messaging/rpc/dispatcher.py", line 133, in 
_dispatch_and_reply
2014-03-07 11:03:53.957 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2014-03-07 11:03:53.957 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/oslo.messaging/oslo/messaging/rpc/dispatcher.py", line 176, in 
_dispatch
2014-03-07 11:03:53.957 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2014-03-07 11:03:53.957 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/oslo.messaging/oslo/messaging/rpc/dispatcher.py", line 122, in 
_do_dispatch
2014-03-07 11:03:53.957 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
2014-03-07 11:03:53.957 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/exception.py", line 88, in wrapped
2014-03-07 11:03:53.957 TRACE oslo.messaging.rpc.dispatcher payload)
2014-03-07 11:03:53.957 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/openstack/common/excutils.py", line 68, in __exit__
2014-03-07 11:03:53.957 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2014-03-07 11:03:53.957 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/exception.py", line 71, in wrapped
2014-03-07 11:03:53.957 TRACE oslo.messaging.rpc.dispatcher return f(self, 
context, *args, **kw)
2014-03-07 11:03:53.957 TRACE oslo.messaging.rpc.dispatcher   Fil
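
The failing line in the trace builds extra_usage_info from image_meta['name'],
which is absent when booting from a volume. A minimal hedged illustration of a
tolerant lookup (not nova's actual fix):

    # boot-from-volume: the image metadata dict may carry no 'name' key
    image_meta = {}
    extra_usage_info = {"image_name": image_meta.get('name', '')}
    print(extra_usage_info)  # {'image_name': ''}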

[Yahoo-eng-team] [Bug 1289270] [NEW] run_tests requires selenium no matter if the flag --with-selenium is used

2014-03-07 Thread Matthias Runge
Public bug reported:

When building horizon i-3 within a clean environment (no network connectivity, 
i.e. a build system):
+ ./run_tests.sh -N -P
Running Horizon application tests
SS.S..
--
Ran 106 tests in 81.812s
OK (SKIP=3)
nosetests --verbosity 1 horizon --nocapture --nologcapture 
--exclude-dir=horizon/conf/ --exclude-dir=horizon/test/customization 
--cover-package=horizon --cover-inclusive --all-modules
Creating test database for alias 'default'...
Destroying test database for alias 'default'...
Running openstack_dashboard tests
SS...SS.SSS...EES...
==
ERROR: Failure: ImportError (No module named selenium)
--
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nose/loader.py", line 413, in 
loadTestsFromName
addr.filename, addr.module)
  File "/usr/lib/python2.7/site-packages/nose/importer.py", line 47, in 
importFromPath
return self.importFromDir(dir_path, fqname)
  File "/usr/lib/python2.7/site-packages/nose/importer.py", line 94, in 
importFromDir
mod = load_module(part_fqname, fh, filename, desc)
  File 
"/builddir/build/BUILD/horizon-2014.1.b3/openstack_dashboard/test/integration_tests/helpers.py",
 line 17, in 
import selenium
ImportError: No module named selenium
==
ERROR: Failure: ImportError (No module named selenium.webdriver.common.keys)
--
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nose/loader.py", line 413, in 
loadTestsFromName
addr.filename, addr.module)
  File "/usr/lib/python2.7/site-packages/nose/importer.py", line 47, in 
importFromPath
return self.importFromDir(dir_path, fqname)
  File "/usr/lib/python2.7/site-packages/nose/importer.py", line 94, in 
importFromDir
mod = load_module(part_fqname, fh, filename, desc)
  File 
"/builddir/build/BUILD/horizon-2014.1.b3/openstack_dashboard/test/integration_tests/tests/test_login.py",
 line 15, in 
import selenium.webdriver.common.keys as keys
ImportError: No module named selenium.webdriver.common.keys
nosetests --verbosity 1 openstack_dashboard --nocapture --nologcapture 
--cover-package=openstack_dashboard --cover-inclusive --all-modules
Creating test database for alias 'default'...
Destroying test database for alias 'default'...
--
Ran 860 tests in 213.722s
FAILED (SKIP=8, errors=2)
Tests failed.
error: Bad exit status from /var/tmp/rpm-tmp.lhS6LB (%check)
RPM build errors:
Bad exit status from /var/tmp/rpm-tmp.lhS6LB (%check)
Child return code was: 1
EXCEPTION: Command failed. See logs for output.
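
A hedged sketch of the guard usually used to keep selenium an optional test
dependency (module layout and names are assumptions, not the actual horizon
fix):

    import unittest

    try:
        import selenium  # noqa
        WITH_SELENIUM = True
    except ImportError:
        WITH_SELENIUM = False

    @unittest.skipUnless(WITH_SELENIUM, "selenium is not installed")
    class BrowserTests(unittest.TestCase):
        def test_login_page_loads(self):
            from selenium import webdriver  # imported lazily, only when enabled
            self.assertTrue(webdriver)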

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1289270

Title:
  run_tests requires selenium no matter if the flag --with-selenium is
  used

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When building horizon i-3 within a clean environment (no network 
connectivity, i.e. a build system):
  + ./run_tests.sh -N -P
  Running Horizon application tests
  
SS.S..
  --
  Ran 106 tests in 81.812s
  OK (SKIP=3)
  nosetests --verbosity 1 horizon --nocapture --nologcapture 
--exclude-dir=horizon/conf/ --exclude-dir=horizon/test/customization 
--cover-package=horizon --cover-inclusive --all-modules
  Creating test database for alias 'default'...
  Destroying test database for alias 'default'...
  Running openstack_dashboard tests
  

[Yahoo-eng-team] [Bug 1289256] [NEW] Incorrect usage of sqlalchemy type Integer

2014-03-07 Thread Ann Kamyshnikova
Public bug reported:

In the folsom_initial migration, the upgrade_cisco function creates the 
vlan_id column of the nexusport_bindings table with the incorrect type 
Integer(255). This causes the following exception:
  File 
"/opt/stack/neutron/neutron/db/migration/alembic_migrations/versions/folsom_initial.py",
 line 87, in upgrade
upgrade_cisco()
  File 
"/opt/stack/neutron/neutron/db/migration/alembic_migrations/versions/folsom_initial.py",
 line 455, in upgrade_cisco
sa.Column('vlan_id', sa.Integer(255)),
TypeError: object() takes no parameters
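
For reference, a minimal illustration of the problem and the obvious
correction (sa.Integer takes no length argument):

    import sqlalchemy as sa

    # broken: Integer() accepts no arguments, so Integer(255) raises TypeError
    # sa.Column('vlan_id', sa.Integer(255))

    # corrected
    vlan_id = sa.Column('vlan_id', sa.Integer)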

** Affects: neutron
 Importance: Undecided
 Assignee: Ann Kamyshnikova (akamyshnikova)
 Status: In Progress


** Tags: cisco db

** Changed in: neutron
 Assignee: (unassigned) => Ann Kamyshnikova (akamyshnikova)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1289256

Title:
  Incorrect usage of sqlalchemy type Integer

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  In the folsom_initial migration, the upgrade_cisco function creates the 
vlan_id column of the nexusport_bindings table with the incorrect type 
Integer(255). This causes the following exception:
File 
"/opt/stack/neutron/neutron/db/migration/alembic_migrations/versions/folsom_initial.py",
 line 87, in upgrade
  upgrade_cisco()
File 
"/opt/stack/neutron/neutron/db/migration/alembic_migrations/versions/folsom_initial.py",
 line 455, in upgrade_cisco
  sa.Column('vlan_id', sa.Integer(255)),
  TypeError: object() takes no parameters

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1289256/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1289230] Re: Conversion types is missing in some strings

2014-03-07 Thread Kiyohiro Adachi
** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: (unassigned) => Kiyohiro Adachi (adachi)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1289230

Title:
  Conversion types is missing in some strings

Status in Cinder:
  In Progress
Status in OpenStack Compute (Nova):
  New

Bug description:
  Ex.:
   except nexenta.NexentaException as exc:
  -LOG.warning(_('Cannot delete snapshot %(origin): %(exc)s'),
  +LOG.warning(_('Cannot delete snapshot %(origin)s: %(exc)s'),
   {'origin': origin, 'exc': exc})

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1289230/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1289236] [NEW] BigSwitch: HTTPException type caught is too narrow

2014-03-07 Thread Kevin Benton
Public bug reported:

The BigSwitch plugin currently determines if it needs to reconnect by
checking for an httplib ImproperConnectionState exception. However, this
exception is too narrow and does not cover the other httplib exceptions
that indicate a reconnection is necessary (e.g. NotConnected).

It should just catch httplib.HTTPException.
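
A hedged sketch of the broader catch (the connection handling and names are
simplified illustrations, not the plugin's actual code):

    import httplib

    def rest_call(conn, method, url, body=None, headers=None):
        """Retry once on any httplib.HTTPException (NotConnected,
        ImproperConnectionState, ...) instead of only ImproperConnectionState."""
        try:
            conn.request(method, url, body, headers or {})
            return conn.getresponse()
        except httplib.HTTPException:
            # the connection is in a bad state: reconnect and retry once
            conn.close()
            conn.connect()
            conn.request(method, url, body, headers or {})
            return conn.getresponse()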

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1289236

Title:
  BigSwitch: HTTPException type caught is too narrow

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The BigSwitch plugin currently determines if it needs to reconnect by
  checking for an httplib ImproperConnectionState exception. However,
  this exception is too narrow and does not cover the other httplib
  exceptions that indicate a reconnection is necessary (e.g.
  NotConnected).

  It should just catch httplib.HTTPException.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1289236/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1286209] Re: unhandled trace if no namespaces in metering agent

2014-03-07 Thread James Page
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1286209

Title:
  unhandled trace if no namespaces in metering agent

Status in OpenStack Neutron (virtual network service):
  New
Status in “neutron” package in Ubuntu:
  New

Bug description:
  If the network node has no active routers on its l3-agent, the metering
  agent raises the following trace:

  
  2014-02-28 17:04:51.286 1121 DEBUG 
neutron.services.metering.agents.metering_agent [-] Get router traffic counters 
_get_traffic_counters 
/usr/lib/python2.7/dist-packages/neutron/services/metering/agents/metering_agent.py:214
  2014-02-28 17:04:51.286 1121 DEBUG neutron.openstack.common.lockutils [-] Got 
semaphore "metering-agent" for method "_invoke_driver"... inner 
/usr/lib/python2.7/dist-packages/neutron/openstack/common/lockutils.py:191
  2014-02-28 17:04:51.286 1121 DEBUG neutron.common.log [-] 
neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver
 method get_traffic_counters called with arguments 
(, [{u'status': u'ACTIVE', 
u'name': u'r', u'gw_port_id': u'86be6088-d967-45a8-bf69-8af76d956a3e', 
u'admin_state_up': True, u'tenant_id': u'1483a06525a5485e8a7dd64abaa66619', 
u'_metering_labels': [{u'rules': [{u'remote_ip_prefix': u'0.0.0.0/0', 
u'direction': u'ingress', u'metering_label_id': 
u'19de35e4-ea99-4d84-9fbf-7b0c7a390540', u'id': 
u'3991421b-50ce-46ea-b264-74bb47d09e65', u'excluded': False}, 
{u'remote_ip_prefix': u'0.0.0.0/0', u'direction': u'egress', 
u'metering_label_id': u'19de35e4-ea99-4d84-9fbf-7b0c7a390540', u'id': 
u'706e55db-e2f7-4eb9-940a-67144a075a2c', u'excluded': False}], u'id': 
u'19de35e4-ea99-4d84-9fbf-7b0c7a390540'}], u'id': 
u'5ccfe6b8-9c3b-44c4-9580-da0d74ccdcf8'}]) {} wrapper 
/usr/lib/python2.7/dist-packages/neutron/common/log.py:33
  2014-02-28 17:04:51.286 1121 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', 'ip', 'netns', 'exec', 
'qrouter-5ccfe6b8-9c3b-44c4-9580-da0d74ccdcf8', 'iptables', '-t', 'filter', 
'-L', 'neutron-meter-l-19de35e4-ea9', '-n', '-v', '-x', '-Z'] execute 
/usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py:43
  2014-02-28 17:04:51.291 1121 DEBUG neutron.agent.linux.utils [-]
  Command: ['sudo', 'ip', 'netns', 'exec', 
'qrouter-5ccfe6b8-9c3b-44c4-9580-da0d74ccdcf8', 'iptables', '-t', 'filter', 
'-L', 'neutron-meter-l-19de35e4-ea9', '-n', '-v', '-x', '-Z']
  Exit code: 1
  Stdout: ''
  Stderr: 'Cannot open network namespace: No such file or directory\n' execute 
/usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py:60
  2014-02-28 17:04:51.291 1121 ERROR neutron.openstack.common.loopingcall [-] 
in fixed duration looping call
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall 
Traceback (most recent call last):
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall   
File 
"/usr/lib/python2.7/dist-packages/neutron/openstack/common/loopingcall.py", 
line 78, in _inner
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall 
self.f(*self.args, **self.kw)
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall   
File 
"/usr/lib/python2.7/dist-packages/neutron/services/metering/agents/metering_agent.py",
 line 163, in _metering_loop
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall 
self._add_metering_infos()
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall   
File 
"/usr/lib/python2.7/dist-packages/neutron/services/metering/agents/metering_agent.py",
 line 155, in _add_metering_infos
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall 
accs = self._get_traffic_counters(self.context, self.routers.values())
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall   
File 
"/usr/lib/python2.7/dist-packages/neutron/services/metering/agents/metering_agent.py",
 line 215, in _get_traffic_counters
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall 
return self._invoke_driver(context, routers, 'get_traffic_counters')
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall   
File "/usr/lib/python2.7/dist-packages/neutron/openstack/common/lockutils.py", 
line 247, in inner
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall 
retval = f(*args, **kwargs)
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall   
File 
"/usr/lib/python2.7/dist-packages/neutron/services/metering/agents/metering_agent.py",
 line 180, in _invoke_driver
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall 
{'driver': cfg.CONF.metering_driver,
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall   
File "/usr/lib/python2.7/dist-packages/oslo/config/cfg.py", line 1648, in 
__getattr__
  2014-02-28 17:04:51.291 1121 TR

[Yahoo-eng-team] [Bug 1289205] [NEW] VMware: 404 Not Found error when create snapshot

2014-03-07 Thread David Geng
Public bug reported:

When trying to create an instance snapshot via 'nova image-create', I hit the
following error in the nova log file:

2014-03-07 03:04:35.436 7213 DEBUG nova.virt.vmwareapi.vmware_images 
[req-d82fb9c2-9325-43a4-814a-4ea1bae14c5c 8f4bfb793af946b6a25288456143e920 
e4a913fb2d1e4a33a4acacc806dd6153] [instance: 
23261217-c2e6-472c-85e9-9afbd708391d] Uploading image 
64a0b47b-ba53-4e7b-9877-f56186916d73 to the Glance image server upload_image 
/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vmware_images.py:141
2014-03-07 03:04:35.437 7213 DEBUG nova.virt.vmwareapi.read_write_util 
[req-d82fb9c2-9325-43a4-814a-4ea1bae14c5c 8f4bfb793af946b6a25288456143e920 
e4a913fb2d1e4a33a4acacc806dd6153] 
aaa-https://172.16.151.254/folder/vmware_temp/be5acd3b-b68b-4e6e-9039-43dd7bf910bf-flat.vmdk?dsName=datastore1&dcPath=smartlcouds
 __init__ 
/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/read_write_util.py:162
2014-03-07 03:04:35.438 7213 DEBUG nova.virt.vmwareapi.read_write_util 
[req-d82fb9c2-9325-43a4-814a-4ea1bae14c5c 8f4bfb793af946b6a25288456143e920 
e4a913fb2d1e4a33a4acacc806dd6153] bbb- __init__ 
/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/read_write_util.py:163
2014-03-07 03:04:35.477 7213 DEBUG nova.compute.manager 
[req-d82fb9c2-9325-43a4-814a-4ea1bae14c5c 8f4bfb793af946b6a25288456143e920 
e4a913fb2d1e4a33a4acacc806dd6153] [instance: 
23261217-c2e6-472c-85e9-9afbd708391d] Cleaning up image 
64a0b47b-ba53-4e7b-9877-f56186916d73 decorated_function 
/usr/lib/python2.6/site-packages/nova/compute/manager.py:317
2014-03-07 03:04:35.477 7213 TRACE nova.compute.manager [instance: 
23261217-c2e6-472c-85e9-9afbd708391d] Traceback (most recent call last):
2014-03-07 03:04:35.477 7213 TRACE nova.compute.manager [instance: 
23261217-c2e6-472c-85e9-9afbd708391d]   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 313, in 
decorated_function
2014-03-07 03:04:35.477 7213 TRACE nova.compute.manager [instance: 
23261217-c2e6-472c-85e9-9afbd708391d] *args, **kwargs)
2014-03-07 03:04:35.477 7213 TRACE nova.compute.manager [instance: 
23261217-c2e6-472c-85e9-9afbd708391d]   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 2525, in 
snapshot_instance
2014-03-07 03:04:35.477 7213 TRACE nova.compute.manager [instance: 
23261217-c2e6-472c-85e9-9afbd708391d] task_states.IMAGE_SNAPSHOT)
2014-03-07 03:04:35.477 7213 TRACE nova.compute.manager [instance: 
23261217-c2e6-472c-85e9-9afbd708391d]   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 2560, in 
_snapshot_instance
2014-03-07 03:04:35.477 7213 TRACE nova.compute.manager [instance: 
23261217-c2e6-472c-85e9-9afbd708391d] update_task_state)
2014-03-07 03:04:35.477 7213 TRACE nova.compute.manager [instance: 
23261217-c2e6-472c-85e9-9afbd708391d]   File 
"/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py", line 614, in 
snapshot
2014-03-07 03:04:35.477 7213 TRACE nova.compute.manager [instance: 
23261217-c2e6-472c-85e9-9afbd708391d] _vmops.snapshot(context, instance, 
name, update_task_state)
2014-03-07 03:04:35.477 7213 TRACE nova.compute.manager [instance: 
23261217-c2e6-472c-85e9-9afbd708391d]   File 
"/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vmops.py", line 939, in 
snapshot
2014-03-07 03:04:35.477 7213 TRACE nova.compute.manager [instance: 
23261217-c2e6-472c-85e9-9afbd708391d] _upload_vmdk_to_image_repository()
2014-03-07 03:04:35.477 7213 TRACE nova.compute.manager [instance: 
23261217-c2e6-472c-85e9-9afbd708391d]   File 
"/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vmops.py", line 933, in 
_upload_vmdk_to_image_repository
2014-03-07 03:04:35.477 7213 TRACE nova.compute.manager [instance: 
23261217-c2e6-472c-85e9-9afbd708391d] file_path="%s/%s-flat.vmdk" % 
(self._tmp_folder, random_name))
2014-03-07 03:04:35.477 7213 TRACE nova.compute.manager [instance: 
23261217-c2e6-472c-85e9-9afbd708391d]   File 
"/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vmware_images.py", line 
147, in upload_image
2014-03-07 03:04:35.477 7213 TRACE nova.compute.manager [instance: 
23261217-c2e6-472c-85e9-9afbd708391d] kwargs.get("file_path"))
2014-03-07 03:04:35.477 7213 TRACE nova.compute.manager [instance: 
23261217-c2e6-472c-85e9-9afbd708391d]   File 
"/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/read_write_util.py", line 
164, in __init__
2014-03-07 03:04:35.477 7213 TRACE nova.compute.manager [instance: 
23261217-c2e6-472c-85e9-9afbd708391d] conn = urllib2.urlopen(request)
2014-03-07 03:04:35.477 7213 TRACE nova.compute.manager [instance: 
23261217-c2e6-472c-85e9-9afbd708391d]   File "/usr/lib64/python2.6/urllib2.py", 
line 126, in urlopen
2014-03-07 03:04:35.477 7213 TRACE nova.compute.manager [instance: 
23261217-c2e6-472c-85e9-9afbd708391d] return _opener.open(url, data, 
timeout)
2014-03-07 03:04:35.477 7213 TRACE nova.compute.manager [instance: 
23261217-c2e6-472c-85e9-9afbd708391d]   File "/usr/lib64/python2.

[Yahoo-eng-team] [Bug 1289192] [NEW] BigSwitch: certificate file helper functions incorrect

2014-03-07 Thread Kevin Benton
Public bug reported:

The BigSwitch plugin has helper methods for writing certificates to the
file system that are incorrectly defined.

They are missing the self argument that will be passed in. 
https://github.com/openstack/neutron/blob/7255e056092f034daaeb4246a812900645d46911/neutron/plugins/bigswitch/servermanager.py#L368
https://github.com/openstack/neutron/blob/7255e056092f034daaeb4246a812900645d46911/neutron/plugins/bigswitch/servermanager.py#L319

The unit tests were also incorrect in this case since they were
refactored right before the merge to avoid any file-system writes
during unit tests.
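
A hedged illustration of the class of defect being described (the method name
is hypothetical, not the plugin's actual helper):

    class ServerPool(object):
        # broken: defined without `self`, so the implicit instance argument
        # lands in `path` and the real path shifts into `contents`
        # def _write_cert_file(path, contents): ...

        # corrected
        def _write_cert_file(self, path, contents):
            with open(path, 'w') as f:
                f.write(contents)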

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1289192

Title:
  BigSwitch: certificate file helper functions incorrect

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The BigSwitch plugin has helper methods for writing certificates to
  the file system that are incorrectly defined.

  They are missing the self argument that will be passed in. 
  
https://github.com/openstack/neutron/blob/7255e056092f034daaeb4246a812900645d46911/neutron/plugins/bigswitch/servermanager.py#L368
  
https://github.com/openstack/neutron/blob/7255e056092f034daaeb4246a812900645d46911/neutron/plugins/bigswitch/servermanager.py#L319

  The unit tests were also incorrect in this case since they were
  refactored right before the merge to avoid any file-system writes
  during unit tests.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1289192/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp