[Yahoo-eng-team] [Bug 1389583] [NEW] _handle_saml2_tokens() should be renamed to something more generic

2014-11-05 Thread Steve Martinelli
Public bug reported:

In file
https://github.com/openstack/keystone/blob/master/keystone/token/providers/common.py#L433
there is a function called _handle_saml2_tokens(). Since bp
generic-mapping-federation and its spec
http://specs.openstack.org/openstack/keystone-specs/specs/juno/generic-mapping-federation.html,
the federation process has been standardized, so the function should be
renamed accordingly, to either _handle_federation_tokens() or
_handle_mapped_tokens().
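A common way to rename a private helper without breaking out-of-tree callers is to keep a thin deprecated alias for one cycle. A minimal stand-alone sketch, assuming the new name is _handle_mapped_tokens() (one of the candidates above); the class name, payload shape, and warning text are illustrative, not the actual Keystone code:

```python
import warnings


class V3TokenDataHelper:
    def _handle_mapped_tokens(self, auth_context, project_id, domain_id):
        # Build the federation-related token payload (sketch only).
        return {'user': {'id': auth_context['user_id']},
                'project_id': project_id,
                'domain_id': domain_id}

    def _handle_saml2_tokens(self, *args, **kwargs):
        # Deprecated alias kept so external callers keep working for a cycle.
        warnings.warn('_handle_saml2_tokens() is deprecated; use '
                      '_handle_mapped_tokens() instead.', DeprecationWarning)
        return self._handle_mapped_tokens(*args, **kwargs)
```

Callers of the old name still work, but see a DeprecationWarning pointing at the new, protocol-neutral name.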

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1389583

Title:
  _handle_saml2_tokens() should be renamed to something more generic

Status in OpenStack Identity (Keystone):
  New

Bug description:
  In file
  https://github.com/openstack/keystone/blob/master/keystone/token/providers/common.py#L433
  there is a function called _handle_saml2_tokens(). Since bp
  generic-mapping-federation and its spec
  http://specs.openstack.org/openstack/keystone-specs/specs/juno/generic-mapping-federation.html,
  the federation process has been standardized, so the function should be
  renamed accordingly, to either _handle_federation_tokens() or
  _handle_mapped_tokens().

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1389583/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1389586] [NEW] lack of debug logging for federation flows

2014-11-05 Thread Steve Martinelli
Public bug reported:

There is a distinct lack of debug logging in the federation branch of
the code, making debugging certain mapping assertions harder than it
needs to be:

steve:keystone$ grep 'LOG' -r keystone/contrib/federation/*
keystone/contrib/federation/core.py:LOG = logging.getLogger(__name__)
keystone/contrib/federation/idp.py:LOG = log.getLogger(__name__)
keystone/contrib/federation/idp.py:LOG.error(msg)
keystone/contrib/federation/idp.py:LOG.error(msg)
keystone/contrib/federation/utils.py:LOG = log.getLogger(__name__)
keystone/contrib/federation/utils.py:LOG.warning(_('Ignoring user name %s'),

We should add some more debug logging.
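As a sketch of the kind of logging that would help, each step of the mapping evaluation can emit a debug line. This is illustrative only; the function, rule structure, and messages are stand-ins, not Keystone's actual mapping engine:

```python
import logging

LOG = logging.getLogger(__name__)


def map_assertion(assertion, rules):
    """Apply mapping rules to a federation assertion (illustrative)."""
    LOG.debug('evaluating %d mapping rule(s) against assertion keys: %s',
              len(rules), sorted(assertion))
    matched = []
    for i, rule in enumerate(rules):
        ok = all(assertion.get(k) == v for k, v in rule['remote'].items())
        LOG.debug('rule %d %s', i, 'matched' if ok else 'did not match')
        if ok:
            matched.append(rule['local'])
    LOG.debug('mapping produced %d local item(s)', len(matched))
    return matched
```

With debug lines at each decision point, a failing mapping assertion shows exactly which rule rejected it.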

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1389586

Title:
  lack of debug logging for federation flows

Status in OpenStack Identity (Keystone):
  New

Bug description:
  There is a distinct lack of debug logging in the federation branch of
  the code, making debugging certain mapping assertions harder than it
  needs to be:

  steve:keystone$ grep 'LOG' -r keystone/contrib/federation/*
  keystone/contrib/federation/core.py:LOG = logging.getLogger(__name__)
  keystone/contrib/federation/idp.py:LOG = log.getLogger(__name__)
  keystone/contrib/federation/idp.py:LOG.error(msg)
  keystone/contrib/federation/idp.py:LOG.error(msg)
  keystone/contrib/federation/utils.py:LOG = log.getLogger(__name__)
  keystone/contrib/federation/utils.py:LOG.warning(_('Ignoring user name %s'),

  We should add some more debug logging.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1389586/+subscriptions



[Yahoo-eng-team] [Bug 1388230] Re: Checks for DB models and migrations sync not working

2014-11-05 Thread Ann Kamyshnikova
1) If you run db-manage revision --autogenerate on master with
PostgreSQL, it won't show any extra changes that are needed
(http://paste.openstack.org/show/129531/), but on MySQL it will show
some extra indexes (http://paste.openstack.org/show/129532/). These
indexes are specific to MySQL, which creates them along with primary
keys or foreign keys. As was decided earlier, this should not be fixed
because it is dialect-specific (https://review.openstack.org/80518), so
the test skips this difference.

2) I'm not sure that alembic has any checks for changes to primary
keys, and if it doesn't, this can't be expected to work, since both
autogenerate and the test rely on alembic for these checks. So this is
not a Neutron bug.
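The primary-key case can at least be caught outside alembic by comparing the declared key columns directly. A stand-alone sketch under stated assumptions: the dict-of-sets table descriptions are illustrative stand-ins for the model metadata and the migration-built schema, not Neutron's actual structures:

```python
def primary_key_diff(model_tables, migration_tables):
    """Return tables whose declared primary-key column sets differ."""
    diffs = {}
    for name, model_pk in model_tables.items():
        migration_pk = migration_tables.get(name, set())
        if set(model_pk) != set(migration_pk):
            diffs[name] = (sorted(model_pk), sorted(migration_pk))
    return diffs


# The IPAllocation change from the report: the model made port_id part of
# the primary key instead of ip_address, but no migration reflects that.
model_pks = {'ipallocations': {'port_id', 'subnet_id'}}
migration_pks = {'ipallocations': {'ip_address', 'subnet_id'}}
```

A check like this in the sync test would flag the primary-key drift that alembic's autogenerate misses.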

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1388230

Title:
  Checks for DB models and migrations sync not working

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  I noticed a couple of issues, which might be related.

  
  1. db-manage revision --autogenerate on master with no code changes
  generates:

  def upgrade():
      op.drop_index('idx_autoinc_vr_id',
                    table_name='ha_router_vrid_allocations')


  2. With the following change to the IPAllocation() model, the revision
  is not detected. Also, the unit tests for model/migration sync do not
  give an error.

  diff --git a/neutron/db/models_v2.py b/neutron/db/models_v2.py
  --- a/neutron/db/models_v2.py
  +++ b/neutron/db/models_v2.py
  @@ -98,8 +98,8 @@ class IPAllocation(model_base.BASEV2):

       port_id = sa.Column(sa.String(36), sa.ForeignKey('ports.id',
                                                        ondelete="CASCADE"),
  -                        nullable=True)
  -    ip_address = sa.Column(sa.String(64), nullable=False, primary_key=True)
  +                        nullable=True, primary_key=True)
  +    ip_address = sa.Column(sa.String(64), nullable=False)
       subnet_id = sa.Column(sa.String(36), sa.ForeignKey('subnets.id',
                                                          ondelete="CASCADE"),
                             nullable=False, primary_key=True)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1388230/+subscriptions



[Yahoo-eng-team] [Bug 1389601] [NEW] Replace nova entries in iptables_manager with neutron

2014-11-05 Thread Elena Ezhova
Public bug reported:

In iptables_manager docstrings there are still some references to nova
left from nova/network/linux_net.py. These references need to be removed
and the docstrings updated.

** Affects: neutron
 Importance: Undecided
 Assignee: Elena Ezhova (eezhova)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Elena Ezhova (eezhova)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1389601

Title:
  Replace nova entries in iptables_manager with neutron

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In iptables_manager docstrings there are still some references to nova
  left from nova/network/linux_net.py. These references need to be
  removed and the docstrings updated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1389601/+subscriptions



[Yahoo-eng-team] [Bug 1389618] [NEW] glance member-create CLI adds non existing member/tenant with image

2014-11-05 Thread Ankur Gupta
Public bug reported:

The glance member-create CLI shares a specific image with a tenant.

usage: glance member-create [--can-share] IMAGE TENANT_ID

Positional arguments:
  IMAGE  Image to add member to.
  TENANT_ID  Tenant to add as member

If I pass a non-existing tenant as TENANT_ID, it is accepted anyway and
added as a member of that image, while it should raise an error such as:
No tenant with a name or ID of '-' exists.

Below are the command execution logs:

$ keystone tenant-list
+----------------------------------+----------+---------+
|                id                |   name   | enabled |
+----------------------------------+----------+---------+
| a1c37cc595024369aa2124b50adaa0b8 |  admin   |   True  |
| 31dd5bdca08e4ce0b208ef618142875b | cephtest |   True  |
| 944ffc3c82f088eb7f61bc77bef0     |   demo   |   True  |
| ed34d901e2314ab6a93e01ebad44e445 | service  |   True  |
+----------------------------------+----------+---------+

$ glance image-list
+--------------------------------------+--------------------+-------------+------------------+------------+--------+
| ID                                   | Name               | Disk Format | Container Format | Size       | Status |
+--------------------------------------+--------------------+-------------+------------------+------------+--------+
| 90368993-bd57-4b99-b371-98ff771b9c3f | ceph-test-image    | raw         | bare             | 13147648   | active |
| e3d88dcf-8c96-425e-b5eb-0d64c737d193 | ceph-test-snapshot | raw         | bare             | 1073741824 | active |
| 8ead940a-6ee8-43db-b45d-8895c5c59805 | ceph-test-yatin    | raw         | bare             | 13147648   | active |
+--------------------------------------+--------------------+-------------+------------------+------------+--------+

$ glance member-create 8ead940a-6ee8-43db-b45d-8895c5c59805 a1c37cc595024369   <-- with an incomplete/wrong Tenant-ID

$ glance member-list --image-id 8ead940a-6ee8-43db-b45d-8895c5c59805
+--------------------------------------+------------------+-----------+
| Image ID                             | Member ID        | Can Share |
+--------------------------------------+------------------+-----------+
| 8ead940a-6ee8-43db-b45d-8895c5c59805 | a1c37cc595024369 |           |
+--------------------------------------+------------------+-----------+
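A fix would be to look the tenant up in Keystone before creating the membership. A stand-alone sketch with a stubbed-out tenant lookup; the helper name, the dict-based member store, and the set-based tenant list are illustrative assumptions, not Glance's actual API:

```python
class TenantNotFound(Exception):
    pass


def create_member(image_members, known_tenants, image_id, tenant_id):
    """Add tenant_id as a member of image_id, validating the tenant first."""
    if tenant_id not in known_tenants:
        # Reject incomplete or unknown tenant IDs instead of storing them.
        raise TenantNotFound(
            "No tenant with a name or ID of '%s' exists." % tenant_id)
    image_members.setdefault(image_id, set()).add(tenant_id)


# Stub for what a Keystone tenant listing would return.
known_tenants = {'a1c37cc595024369aa2124b50adaa0b8'}
members = {}
```

With this check, the truncated ID from the logs above would be rejected instead of being silently stored as a member.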

** Affects: glance
 Importance: Undecided
 Status: New

** Description changed:

  glance member-create CLI Shares a specific image with a tenant.
  
  usage: glance member-create [--can-share] IMAGE TENANT_ID
  
  Positional arguments:
-   IMAGE  Image to add member to.
-   TENANT_ID  Tenant to add as member
+   IMAGE  Image to add member to.
+   TENANT_ID  Tenant to add as member
  
- 
- If I pass a non-existing tenant as TENANT ID, it also accept that and adds 
as a member of that image while it should raise 
+ If I pass a non-existing tenant as TENANT ID, it also accept that and adds 
as a member of that image while it should raise
  error as No tenant with a name or ID of '-' exists.
  
  Below are the command execution logs-
  
  $ keystone tenant-list
  +--+--+-+
  |id|   name   | enabled |
  +--+--+-+
  | a1c37cc595024369aa2124b50adaa0b8 |  admin   |   True  |
  | 31dd5bdca08e4ce0b208ef618142875b | cephtest |   True  |
  | 944ffc3c82f088eb7f61bc77bef0 |   demo   |   True  |
  | ed34d901e2314ab6a93e01ebad44e445 | service  |   True  |
  +--+--+-+
  
  $ glance image-list
  
+--++-+--+++
  | ID   | Name   | Disk 
Format | Container Format | Size   | Status |
  
+--++-+--+++
  | 90368993-bd57-4b99-b371-98ff771b9c3f | ceph-test-image| raw 
| bare | 13147648   | active |
  | e3d88dcf-8c96-425e-b5eb-0d64c737d193 | ceph-test-snapshot | raw 
| bare | 1073741824 | active |
  | 8ead940a-6ee8-43db-b45d-8895c5c59805 | ceph-test-yatin| raw 
| bare | 13147648   | active |
  
+ $ glance member-create  8ead940a-6ee8-43db-b45d-8895c5c59805
+ a1c37cc595024369 -- with incomplete or wrong Tenant-ID
  
- $ glance member-create  8ead940a-6ee8-43db-b45d-8895c5c59805 a1c37cc595024369 
-- with incomplete or wrong Tenant-ID
- 
- necadmin@nechldcst-PowerEdge-2950:~$ glance member-list  --image-id 
8ead940a-6ee8-43db-b45d-8895c5c59805 
+ $ glance member-list  --image-id 8ead940a-6ee8-43db-b45d-8895c5c59805
  +--+--+---+
  | Image ID | Member ID| Can Share |
  +--+--+---+
  | 

[Yahoo-eng-team] [Bug 1338885] Re: fwaas: admin should not be able to create firewall rule for non existing tenant

2014-11-05 Thread Eugene Nikanorov
I doubt this fits Neutron, at least for now.
Neutron is not tenant-aware, in the sense that it doesn't verify tenants
against Keystone, and I don't think that is something we could do to fix
this issue.

** Changed in: neutron
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1338885

Title:
  fwaas: admin should not be able to create firewall rule for non
  existing tenant

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
   An admin should not be able to create resources for a non-existing tenant.


  Steps to Reproduce:

  Actual Results: 
   
  root@IGA-OSC:~# neutron firewall-rule-create --protocol tcp --action deny --tenant-id bf4fbb928d574829855ebfd9e5d0e   <-- (non-existing tenant-id; the last few characters were changed)
  Created a new firewall_rule:
  +------------------------+--------------------------------------+
  | Field                  | Value                                |
  +------------------------+--------------------------------------+
  | action                 | deny                                 |
  | description            |                                      |
  | destination_ip_address |                                      |
  | destination_port       |                                      |
  | enabled                | True                                 |
  | firewall_policy_id     |                                      |
  | id                     | 7264e5a6-5752-4518-b26b-7c7395173747 |
  | ip_version             | 4                                    |
  | name                   |                                      |
  | position               |                                      |
  | protocol               | tcp                                  |
  | shared                 | False                                |
  | source_ip_address      |                                      |
  | source_port            |                                      |
  | tenant_id              | bf4fbb928d574829855ebfd9e5d0e        |
  +------------------------+--------------------------------------+
  root@IGA-OSC:~# ktl
  +----------------------------------+---------+---------+
  |                id                |   name  | enabled |
  +----------------------------------+---------+---------+
  | 0ad385e00e97476e9456945c079a21ea |  admin  |   True  |
  | 43af7b7c0dbc40bd90d03cc08df201ce | service |   True  |
  | d9481c57a11c46eea62886938b5378a7 | tenant1 |   True  |
  | bf4fbb928d574829855ebfd9e5d0e58c | tenant2 |   True  |
  +----------------------------------+---------+---------+
   
  ==

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1338885/+subscriptions



[Yahoo-eng-team] [Bug 1389623] [NEW] Duplicate code in test_v3_federation

2014-11-05 Thread Henry Nash
Public bug reported:

In the setup of sample data, the following exists:

self.TOKEN_SCOPE_DOMAIN_B_FROM_CUSTOMER = self._scope_request(
self.tokens['CUSTOMER_ASSERTION'], 'domain', self.domainB['id'])

self.TOKEN_SCOPE_DOMAIN_B_FROM_CUSTOMER = self._scope_request(
self.tokens['CUSTOMER_ASSERTION'], 'domain',
self.domainB['id'])

The second statement is a duplicate of the first (formatting aside).

** Affects: keystone
 Importance: Low
 Assignee: Henry Nash (henry-nash)
 Status: New

** Changed in: keystone
   Importance: Undecided => Low

** Changed in: keystone
 Assignee: (unassigned) => Henry Nash (henry-nash)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1389623

Title:
  Duplicate code in test_v3_federation

Status in OpenStack Identity (Keystone):
  New

Bug description:
  In the setup of sample data, the following exists:

  self.TOKEN_SCOPE_DOMAIN_B_FROM_CUSTOMER = self._scope_request(
  self.tokens['CUSTOMER_ASSERTION'], 'domain', self.domainB['id'])

  self.TOKEN_SCOPE_DOMAIN_B_FROM_CUSTOMER = self._scope_request(
  self.tokens['CUSTOMER_ASSERTION'], 'domain',
  self.domainB['id'])

  The second statement is a duplicate of the first (formatting aside).

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1389623/+subscriptions



[Yahoo-eng-team] [Bug 1389634] [NEW] MultiTenant Swift store not working

2014-11-05 Thread Malyshev Alex
Public bug reported:

There are some failures when using the multi-tenant Swift Glance store on
stable Juno. For now, to make it work, I made the following fixes:

1. https://github.com/openstack/glance_store/commit/3bb94dcad84ed4204d8809fcf95b7713daa2189b
2. https://github.com/openstack/glance/commit/867b696d884e9db707683eecd321789843798efd
3. In glance_store/_drivers/swift/store.py I replaced
'preauthtoken=context.auth_token' with 'preauthtoken=context.auth_tok'.

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1389634

Title:
  MultiTenant Swift store not working

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  There are some failures when using the multi-tenant Swift Glance store on
  stable Juno. For now, to make it work, I made the following fixes:

  1. https://github.com/openstack/glance_store/commit/3bb94dcad84ed4204d8809fcf95b7713daa2189b
  2. https://github.com/openstack/glance/commit/867b696d884e9db707683eecd321789843798efd
  3. In glance_store/_drivers/swift/store.py I replaced
  'preauthtoken=context.auth_token' with 'preauthtoken=context.auth_tok'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1389634/+subscriptions



[Yahoo-eng-team] [Bug 1389636] [NEW] EHOSTUNREACH/ ConnectionError(sockerr) occurs from time to time

2014-11-05 Thread Pete Revales
Public bug reported:

This error doesn't occur all the time; sometimes it's OK, sometimes not.

The error occurred in the ML2 plugin while creating a network.
Tested using 2014.3 Icehouse.

==
2014-10-28 21:16:48.961 2347 TRACE root   File "/usr/lib/python2.7/site-packages/requests/api.py", line 44, in request
2014-10-28 21:16:48.961 2347 TRACE root     return session.request(method=method, url=url, **kwargs)
2014-10-28 21:16:48.961 2347 TRACE root   File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 288, in request
2014-10-28 21:16:48.961 2347 TRACE root     resp = self.send(prep, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies)
2014-10-28 21:16:48.961 2347 TRACE root   File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 383, in send
2014-10-28 21:16:48.961 2347 TRACE root     r = adapter.send(request, **kwargs)
2014-10-28 21:16:48.961 2347 TRACE root   File "/usr/lib/python2.7/site-packages/requests/adapters.py", line 206, in send
2014-10-28 21:16:48.961 2347 TRACE root     raise ConnectionError(sockerr)
2014-10-28 21:16:48.961 2347 TRACE root ConnectionError: [Errno 113] EHOSTUNREACH
==
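Intermittent EHOSTUNREACH errors from a driver's REST calls are usually handled with a bounded retry. A minimal stand-alone sketch; the backoff values and the simulated flaky call are illustrative, and the local ConnectionError class stands in for requests.exceptions.ConnectionError:

```python
import time


class ConnectionError(Exception):
    """Stand-in for requests.exceptions.ConnectionError."""


def call_with_retries(func, attempts=3, delay=0.01):
    """Call func(), retrying on ConnectionError with a fixed backoff."""
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except ConnectionError:
            if attempt == attempts:
                raise  # out of retries; surface the original error
            time.sleep(delay)


calls = {'n': 0}


def flaky_create_network():
    # Fails twice with EHOSTUNREACH, then succeeds.
    calls['n'] += 1
    if calls['n'] < 3:
        raise ConnectionError('[Errno 113] EHOSTUNREACH')
    return 'network-created'
```

Whether a retry is appropriate here depends on why the controller is intermittently unreachable; the retry only papers over a transient network-level failure.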

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ml2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1389636

Title:
  EHOSTUNREACH/ ConnectionError(sockerr) occurs from time to time

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  This error doesn't occur all the time; sometimes it's OK, sometimes not.

  The error occurred in the ML2 plugin while creating a network.
  Tested using 2014.3 Icehouse.

  ==
  2014-10-28 21:16:48.961 2347 TRACE root   File "/usr/lib/python2.7/site-packages/requests/api.py", line 44, in request
  2014-10-28 21:16:48.961 2347 TRACE root     return session.request(method=method, url=url, **kwargs)
  2014-10-28 21:16:48.961 2347 TRACE root   File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 288, in request
  2014-10-28 21:16:48.961 2347 TRACE root     resp = self.send(prep, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies)
  2014-10-28 21:16:48.961 2347 TRACE root   File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 383, in send
  2014-10-28 21:16:48.961 2347 TRACE root     r = adapter.send(request, **kwargs)
  2014-10-28 21:16:48.961 2347 TRACE root   File "/usr/lib/python2.7/site-packages/requests/adapters.py", line 206, in send
  2014-10-28 21:16:48.961 2347 TRACE root     raise ConnectionError(sockerr)
  2014-10-28 21:16:48.961 2347 TRACE root ConnectionError: [Errno 113] EHOSTUNREACH
  ==

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1389636/+subscriptions



[Yahoo-eng-team] [Bug 1389690] [NEW] Unable to ping router

2014-11-05 Thread Venkata Seshadri
Public bug reported:

I am unable to ping my router, and when I go to the dashboard I see that
my external network state is DOWN. I followed all the steps in the
OpenStack documentation. I am using CentOS 7. What is the issue?

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1389690

Title:
  Unable to ping router

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I am unable to ping my router, and when I go to the dashboard I see that
  my external network state is DOWN. I followed all the steps in the
  OpenStack documentation. I am using CentOS 7. What is the issue?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1389690/+subscriptions



[Yahoo-eng-team] [Bug 1389694] [NEW] Unable to list my networks

2014-11-05 Thread Venkata Seshadri
Public bug reported:

I am unable to list my external network in
Project > Network > Network Topology.

But the network is available in the Admin section.

What is the issue?

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1389694

Title:
  Unable to list my networks

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I am unable to list my external network in
  Project > Network > Network Topology.

  But the network is available in the Admin section.

  What is the issue?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1389694/+subscriptions



[Yahoo-eng-team] [Bug 1389728] [NEW] cinder do not import image from glance

2014-11-05 Thread Vitalii
Public bug reported:

Steps to reproduce:

1. Create raw bare glance image
2. Create LVM volume group
3. Boot instance with nova:

nova --debug boot --flavor m1.small --block-device
source=image,id=<GLANCE IMAGE ID>,dest=volume,size=10,shutdown=preserve,bootindex=0
--nic net-id=<NEUTRON NET ID> test

In cinder volume log I can see the following:

2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher   File "/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher   File "/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher   File "/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher   File "/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/osprofiler/profiler.py", line 105, in wrapper
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher     return f(*args, **kwargs)
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher   File "/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/cinder/volume/manager.py", line 381, in create_volume
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher     _run_flow()
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher   File "/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/cinder/volume/manager.py", line 374, in _run_flow
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher     flow_engine.run()
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher   File "/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", line 99, in run
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher     for _state in self.run_iter():
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher   File "/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", line 156, in run_iter
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher     misc.Failure.reraise_if_any(failures.values())
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher   File "/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/taskflow/utils/misc.py", line 733, in reraise_if_any
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher     failures[0].reraise()
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher   File "/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/taskflow/utils/misc.py", line 740, in reraise
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher     six.reraise(*self._exc_info)
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher   File "/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 35, in _execute_task
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher     result = task.execute(**arguments)
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher   File "/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/cinder/volume/flows/manager/create_volume.py", line 638, in execute
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher     **volume_spec)
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher   File "/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/cinder/volume/flows/manager/create_volume.py", line 590, in _create_from_image
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher     image_id, image_location, image_service)
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher   File "/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/cinder/volume/flows/manager/create_volume.py", line 492, in _copy_image_to_volume
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher     raise exception.ImageCopyFailure(reason=ex.stderr)
2014-11-05 14:30:45.271 17750 TRACE oslo.messaging.rpc.dispatcher ImageCopyFailure: Failed to copy image to volume: qemu-img: error writing zeroes at sector 0: Invalid argument

The reason is this code:
cinder/image/image_utils.py:86

Several lines above, in the same function, there's:

    cmd = ('qemu-img', 'convert',
           '-O', out_format, source, dest)


[Yahoo-eng-team] [Bug 1389752] [NEW] Project tokens issued from a saml2 auth are missing inherited group roles

2014-11-05 Thread Henry Nash
Public bug reported:

When building the roles in a Keystone token from a saml2 token, we call
assignment_api.get_roles_for_groups() to add in any group roles. This
appears to ignore the inheritance flag on the assignment - and puts in
all group roles whether inherited or not. This means the wrong roles can
end up in the resulting Keystone token.

The implication is that project scoped tokens would not get any group
roles that should be inherited from the domain.
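The fix needs the group-role lookup to honour the inherited flag relative to the token scope. A stand-alone sketch of the intended filtering for a project-scoped token; the assignment structure and the simplification that every inherited domain role applies to the project are illustrative, not Keystone's actual schema or logic:

```python
def roles_for_groups(assignments, group_ids, project_id):
    """Return role ids for a project-scoped token, honouring inheritance.

    Direct assignments on the project apply; assignments on the owning
    domain apply only when flagged as inherited (sketch only: real code
    must also check the project actually belongs to that domain).
    """
    roles = set()
    for a in assignments:
        if a['group_id'] not in group_ids:
            continue
        if a['scope'] == ('project', project_id) and not a['inherited']:
            roles.add(a['role_id'])
        elif a['scope'][0] == 'domain' and a['inherited']:
            roles.add(a['role_id'])
    return roles


assignments = [
    # Inherited domain role: should flow down to the project token.
    {'group_id': 'g1', 'role_id': 'member',
     'scope': ('domain', 'd1'), 'inherited': True},
    # Non-inherited domain role: must NOT leak into project tokens.
    {'group_id': 'g1', 'role_id': 'admin',
     'scope': ('domain', 'd1'), 'inherited': False},
    # Direct project role.
    {'group_id': 'g1', 'role_id': 'reader',
     'scope': ('project', 'p1'), 'inherited': False},
]
```

The reported bug corresponds to dropping both inheritance checks: the token would then contain 'admin' but could miss 'member', exactly the wrong-roles symptom described above.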

** Affects: keystone
 Importance: High
 Assignee: Henry Nash (henry-nash)
 Status: New

** Changed in: keystone
   Importance: Undecided => High

** Changed in: keystone
 Assignee: (unassigned) => Henry Nash (henry-nash)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1389752

Title:
  Project tokens issued from a saml2 auth are missing inherited group
  roles

Status in OpenStack Identity (Keystone):
  New

Bug description:
  When building the roles in a Keystone token from a saml2 token, we
  call assignment_api.get_roles_for_groups() to add in any group roles.
  This appears to ignore the inheritance flag on the assignment - and
  puts in all group roles whether inherited or not. This means the wrong
  roles can end up in the resulting Keystone token.

  The implication is that project scoped tokens would not get any group
  roles that should be inherited from the domain.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1389752/+subscriptions



[Yahoo-eng-team] [Bug 1362766] Re: ConnectionFailed: Connection to neutron failed: 'HTTPSConnectionPool' object has no attribute 'insecure'

2014-11-05 Thread nikhil komawar
https://review.openstack.org/#/c/110574/9

** Also affects: glance
   Importance: Undecided
   Status: New

** Also affects: python-glanceclient
   Importance: Undecided
   Status: New

** No longer affects: glance

** Changed in: python-glanceclient
   Status: New => In Progress

** Changed in: python-glanceclient
 Assignee: (unassigned) => Flavio Percoco (flaper87)

** Changed in: python-glanceclient
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1362766

Title:
  ConnectionFailed: Connection to neutron failed: 'HTTPSConnectionPool'
  object has no attribute 'insecure'

Status in Python client library for Glance:
  In Progress
Status in Python client library for Neutron:
  Incomplete

Bug description:
  While compute manager was trying to authenticate with neutronclient,
  we see the following:

  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager Traceback (most recent call last):
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager   File "/usr/lib/python2.7/site-packages/powervc_nova/compute/manager.py", line 672, in _populate_admin_context
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager     nclient.authenticate()
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager   File "/usr/lib/python2.7/site-packages/neutronclient/client.py", line 231, in authenticate
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager     self._authenticate_keystone()
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager   File "/usr/lib/python2.7/site-packages/neutronclient/client.py", line 209, in _authenticate_keystone
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager     allow_redirects=True)
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager   File "/usr/lib/python2.7/site-packages/neutronclient/client.py", line 113, in _cs_request
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager     raise exceptions.ConnectionFailed(reason=e)
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager ConnectionFailed: Connection to neutron failed: 'HTTPSConnectionPool' object has no attribute 'insecure'

  Setting a pdb breakpoint and stepping into the code, I see that the
  requests library is getting a connection object from a pool.  The
  interesting thing is that the connection object is actually from
  glanceclient.common.https.HTTPSConnectionPool.  It seems odd to me
  that neutronclient is using a connection object from the glanceclient
  pool, but I do not know this requests code.  Here is the stack just
  before failure:

  /usr/lib/python2.7/site-packages/neutronclient/client.py(234)authenticate()
-> self._authenticate_keystone()
  /usr/lib/python2.7/site-packages/neutronclient/client.py(212)_authenticate_keystone()
-> allow_redirects=True)
  /usr/lib/python2.7/site-packages/neutronclient/client.py(106)_cs_request()
-> resp, body = self.request(*args, **kargs)
  /usr/lib/python2.7/site-packages/neutronclient/client.py(151)request()
-> **kwargs)
  /usr/lib/python2.7/site-packages/requests/api.py(44)request()
-> return session.request(method=method, url=url, **kwargs)
  /usr/lib/python2.7/site-packages/requests/sessions.py(335)request()
-> resp = self.send(prep, **send_kwargs)
  /usr/lib/python2.7/site-packages/requests/sessions.py(438)send()
-> r = adapter.send(request, **kwargs)
  /usr/lib/python2.7/site-packages/requests/adapters.py(292)send()
-> timeout=timeout
  /usr/lib/python2.7/site-packages/urllib3/connectionpool.py(454)urlopen()
-> conn = self._get_conn(timeout=pool_timeout)
  /usr/lib/python2.7/site-packages/urllib3/connectionpool.py(272)_get_conn()
-> return conn or self._new_conn()
> /usr/lib/python2.7/site-packages/glanceclient/common/https.py(100)_new_conn()
-> return VerifiedHTTPSConnection(host=self.host,

  The code about to run there is this:

  class HTTPSConnectionPool(connectionpool.HTTPSConnectionPool):
      """
      HTTPSConnectionPool will be instantiated when a new
      connection is requested to the HTTPSAdapter. This
      implementation overwrites the _new_conn method and
      returns an instance of glanceclient's VerifiedHTTPSConnection
      which handles no compression.

      ssl_compression is hard-coded to False because this will
      be used just when the user sets --no-ssl-compression.
      """

  scheme = 'https'

  def _new_conn(self):
  self.num_connections += 1
  return VerifiedHTTPSConnection(host=self.host,
 port=self.port,
 key_file=self.key_file,
 cert_file=self.cert_file,
 cacert=self.ca_certs,
 insecure=self.insecure,
   
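The failure pattern can be modeled in a few lines. The following is a deliberately simplified, hypothetical stand-in for a process-wide scheme-to-pool-class registry (it is not urllib3's or glanceclient's actual code): once one library swaps its own pool subclass into a shared registry, an unrelated client later receives that subclass, built without the extra state the subclass expects.

```python
# Simplified, hypothetical model of a shared pool-class registry.
# Not actual urllib3/glanceclient code; it only illustrates the sharing.
pool_classes_by_scheme = {}

class HTTPSConnectionPool:
    """Generic pool; the generic construction path sets no extras."""
    def _new_conn(self):
        return 'HTTPSConnection'

class GlanceHTTPSConnectionPool(HTTPSConnectionPool):
    """Stand-in for glanceclient's subclass: its _new_conn reads
    self.insecure, which only glanceclient's own setup ever assigns."""
    def _new_conn(self):
        return ('VerifiedHTTPSConnection', self.insecure)

pool_classes_by_scheme['https'] = HTTPSConnectionPool

def new_pool(scheme):
    # every client in the process resolves pools through this one registry
    return pool_classes_by_scheme[scheme]()

# one library swaps in its subclass, process-wide...
pool_classes_by_scheme['https'] = GlanceHTTPSConnectionPool

# ...so an unrelated client gets the subclass, constructed without
# 'insecure' ever being set:
try:
    new_pool('https')._new_conn()
except AttributeError as e:
    print(e)  # 'GlanceHTTPSConnectionPool' object has no attribute 'insecure'
```

This mirrors the traceback above: the pool type comes from glanceclient, but the attribute glanceclient's `_new_conn` depends on was never initialized for this caller.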

[Yahoo-eng-team] [Bug 1389850] [NEW] libvirt: Custom disk_bus setting is being lost when migration is reverted

2014-11-05 Thread Vladik Romanovsky
Public bug reported:

When a migration is reverted on a host, the custom disk_bus setting is
lost and the hypervisor default is used instead.

finish_revert_migration() should use image_meta, if it exists, when
constructing the disk_info.
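As an illustration only (the function name and data shapes below are assumed, not nova's actual finish_revert_migration() code), the idea is to consult the saved image metadata, e.g. the hw_disk_bus property, when rebuilding disk_info, rather than silently falling back to the hypervisor default:

```python
# Hedged sketch: prefer the disk bus recorded in image metadata; fall
# back to the hypervisor default only when no metadata is available.
def disk_bus_for_revert(image_meta, default_bus='virtio'):
    if image_meta:
        props = image_meta.get('properties') or {}
        return props.get('hw_disk_bus', default_bus)
    return default_bus  # no metadata: the default is all we have

assert disk_bus_for_revert({'properties': {'hw_disk_bus': 'scsi'}}) == 'scsi'
assert disk_bus_for_revert(None) == 'virtio'
```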

** Affects: nova
 Importance: Undecided
 Assignee: Vladik Romanovsky (vladik-romanovsky)
 Status: New


** Tags: libvirt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1389850

Title:
  libvirt: Custom disk_bus setting is being lost when migration is
  reverted

Status in OpenStack Compute (Nova):
  New

Bug description:
  When migration is being reverted on a host, the default disk_bus
  setting are lost .

  finish_revert_migration() should use image_meta, if it exist, when 
constructing
  the disk_info.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1389850/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1389868] [NEW] Horizon asks Sahara for object with id='None'

2014-11-05 Thread Andrew Lazarev
Public bug reported:

Steps to repro:

1. Create cluster without cluster template and node group templates
2. View cluster details

Logs from Sahara:
2014-11-05 13:22:24.700 79534 DEBUG sahara.utils.api [-] Not Found exception 
occurred: error_code=404, error_message=Object with {'id': u'None'} not found, 
error_name=NOT_FOUND not_found 
/Users/alazarev/openstack/sahara/sahara/utils/api.py:255
2014-11-05 13:22:24.700 79534 INFO sahara.cli.sahara_all [-] 127.0.0.1 - - 
[05/Nov/2014 13:22:24] GET 
/v1.1/8e44eb2ce32b4c72b82070abbcd61ba8/node-group-templates/None HTTP/1.1 404 
244 0.006754
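A minimal sketch of the guard Horizon could apply (the helper and its call shape are hypothetical, not Horizon's actual API layer): skip the lookup entirely when a cluster was built without templates, so no GET .../node-group-templates/None request is ever issued.

```python
# Hedged sketch: guard template lookups against missing ids. Note the
# log above shows the missing id serialized as the *string* 'None' in
# the URL, so guard against both the real None and its string form.
def safe_template_get(fetch, template_id):
    if template_id in (None, 'None', ''):
        return None  # nothing to fetch; render a placeholder in the view
    return fetch(template_id)

assert safe_template_get(lambda tid: {'id': tid}, 'abc') == {'id': 'abc'}
assert safe_template_get(lambda tid: {'id': tid}, 'None') is None
```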

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: sahara

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1389868

Title:
  Horizon asks Sahara for object with id='None'

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Steps to repro:

  1. Create cluster without cluster template and node group templates
  2. View cluster details

  Logs from Sahara:
  2014-11-05 13:22:24.700 79534 DEBUG sahara.utils.api [-] Not Found exception 
occurred: error_code=404, error_message=Object with {'id': u'None'} not found, 
error_name=NOT_FOUND not_found 
/Users/alazarev/openstack/sahara/sahara/utils/api.py:255
  2014-11-05 13:22:24.700 79534 INFO sahara.cli.sahara_all [-] 127.0.0.1 - - 
[05/Nov/2014 13:22:24] GET 
/v1.1/8e44eb2ce32b4c72b82070abbcd61ba8/node-group-templates/None HTTP/1.1 404 
244 0.006754

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1389868/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1389880] [NEW] VM loses connectivity on floating ip association when using DVR

2014-11-05 Thread Daniel Gauthier
Public bug reported:


Presence: Juno 2014.2-1 RDO, Ubuntu 12.04
openvswitch version on ubuntu is 2.0.2


Description:

Whenever a FIP is created on a VM, the FIP is also added to ALL other
compute nodes: a routing prefix appears in the FIP namespace and an IP
interface alias on the qrouter. However, iptables gets updated correctly,
with only the DNAT rule for the particular IP of the VM on that compute
node.
This causes the FIP proxy ARP to answer ARP requests for ALL VMs on ALL
compute nodes, which results in compute nodes answering ARPs for VMs they
do not host, effectively blackholing traffic to that IP.


 
Here is a demonstration of the problem:


Before  adding a vm+fip on compute4

[root@compute2 ~]# ip netns exec fip-616a6213-c339-4164-9dff-344ae9e04929 
ip route show
default via 173.209.44.1 dev fg-6ede0596-3a
169.254.31.28/31 dev fpr-3a90aae6-3  proto kernel  scope link  src 
169.254.31.29
173.209.44.0/24 dev fg-6ede0596-3a  proto kernel  scope link  src 
173.209.44.6
173.209.44.4 via 169.254.31.28 dev fpr-3a90aae6-3


[root@compute3 neutron]# ip netns exec 
fip-616a6213-c339-4164-9dff-344ae9e04929 ip route show
default via 173.209.44.1 dev fg-26bef858-6b
169.254.31.238/31 dev fpr-3a90aae6-3  proto kernel  scope link  src 
169.254.31.239
173.209.44.0/24 dev fg-26bef858-6b  proto kernel  scope link  src 
173.209.44.5
173.209.44.3 via 169.254.31.238 dev fpr-3a90aae6-3


[root@compute4 ~]# ip netns exec fip-616a6213-c339-4164-9dff-344ae9e04929 
ip route show
default via 173.209.44.1 dev fg-2919b6be-f4
173.209.44.0/24 dev fg-2919b6be-f4  proto kernel  scope link  src 
173.209.44.8


After creating a new VM on compute4 and attaching a floating IP to it, we
get the result below. Of course, at this point only the VM on compute4 is
able to ping the public network.


[root@compute2 ~]# ip netns exec fip-616a6213-c339-4164-9dff-344ae9e04929 
ip route show
default via 173.209.44.1 dev fg-6ede0596-3a
169.254.31.28/31 dev fpr-3a90aae6-3  proto kernel  scope link  src 
169.254.31.29
173.209.44.0/24 dev fg-6ede0596-3a  proto kernel  scope link  src 
173.209.44.6
173.209.44.4 via 169.254.31.28 dev fpr-3a90aae6-3
173.209.44.7 via 169.254.31.28 dev fpr-3a90aae6-3


[root@compute3 neutron]# ip netns exec 
fip-616a6213-c339-4164-9dff-344ae9e04929 ip route show
default via 173.209.44.1 dev fg-26bef858-6b
169.254.31.238/31 dev fpr-3a90aae6-3  proto kernel  scope link  src 
169.254.31.239
173.209.44.0/24 dev fg-26bef858-6b  proto kernel  scope link  src 
173.209.44.5
173.209.44.3 via 169.254.31.238 dev fpr-3a90aae6-3
173.209.44.7 via 169.254.31.238 dev fpr-3a90aae6-3


[root@compute4 ~]# ip netns exec fip-616a6213-c339-4164-9dff-344ae9e04929 
ip route show
default via 173.209.44.1 dev fg-2919b6be-f4
169.254.30.20/31 dev fpr-3a90aae6-3  proto kernel  scope link  src 
169.254.30.21
173.209.44.0/24 dev fg-2919b6be-f4  proto kernel  scope link  src 
173.209.44.8
173.209.44.3 via 169.254.30.20 dev fpr-3a90aae6-3
173.209.44.4 via 169.254.30.20 dev fpr-3a90aae6-3
173.209.44.7 via 169.254.30.20 dev fpr-3a90aae6-3


 **When we deleted the extra FIP routes from each compute node's
namespace, everything started to work just fine.**
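The correct behavior can be stated as a simple filter (the data shapes here are assumed for illustration, not neutron's actual agent code): a compute node's FIP namespace should only carry host routes for floating IPs whose VM is bound to that node, since routes for remote VMs are what trigger the bogus proxy-ARP replies.

```python
# Hedged sketch: given a floating-ip -> hosting-node mapping, keep only
# the host routes that belong in this node's FIP namespace.
def local_fip_routes(fip_to_host, this_host):
    return sorted(fip for fip, host in fip_to_host.items()
                  if host == this_host)

bindings = {'173.209.44.3': 'compute3',
            '173.209.44.4': 'compute2',
            '173.209.44.7': 'compute4'}
assert local_fip_routes(bindings, 'compute4') == ['173.209.44.7']
assert local_fip_routes(bindings, 'compute2') == ['173.209.44.4']
```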


 
Following are the router and floating IP information and config files:


+-----------------------+------------------------------------------------------------------+
| Field                 | Value                                                            |
+-----------------------+------------------------------------------------------------------+
| admin_state_up        | True                                                             |
| distributed           | True                                                             |
| external_gateway_info | {"network_id": "616a6213-c339-4164-9dff-344ae9e04929",           |
|                       |  "enable_snat": true, "external_fixed_ips": [{"subnet_id":       |
|                       |  "0077e2d5-3c3d-4cd2-b55c-ee380fba7867", "ip_address":           |
|                       |  "173.209.44.2"}]}                                               |
| ha                    | False                                                            |
| id                    | 3a90aae6-3107-49e4-a190-92ed37a43b1a                             |

[Yahoo-eng-team] [Bug 1389899] [NEW] nova delete shouldn't remove instance from DB if host is not up

2014-11-05 Thread Christine Wang
Public bug reported:

Under nova/compute/api.py, the instance is deleted from the DB even if
the compute node is not up. I think we should use force-delete to handle
the case where the compute node is not up: if the compute node is down,
only force-delete should be able to delete the instance.

Code flow:
delete -> _delete_instance -> _delete

_delete(...) code snippet: 
..
if not is_up:
# If compute node isn't up, just delete from DB
self._local_delete(context, instance, bdms, delete_type, cb)
quotas.commit()
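The reporter's proposal could look roughly like this (the exception name and callback parameters are made up for illustration; this is not nova's actual API): a plain delete refuses the DB-only path when the host is down, leaving it to force-delete.

```python
# Hedged sketch of gating local (DB-only) deletion behind force-delete.
class ComputeServiceUnavailable(Exception):
    pass

def delete_instance(is_up, force, local_delete, remote_delete):
    if not is_up:
        if not force:
            # plain delete refuses the DB-only path
            raise ComputeServiceUnavailable(
                'compute host is down; use force-delete')
        return local_delete()  # DB-only removal, explicitly requested
    return remote_delete()    # normal path through the compute service
```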

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1389899

Title:
  nova delete shouldn't remove instance from DB if host is not up

Status in OpenStack Compute (Nova):
  New

Bug description:
  Under nova/compute/api.py, the instance is deleted from the DB even if
  the compute node is not up. I think we should use force-delete to handle
  the case where the compute node is not up: if the compute node is down,
  only force-delete should be able to delete the instance.

  Code flow:
  delete -> _delete_instance -> _delete

  _delete(...) code snippet: 
  ..
  if not is_up:
  # If compute node isn't up, just delete from DB
  self._local_delete(context, instance, bdms, delete_type, cb)
  quotas.commit()

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1389899/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1389933] [NEW] cell create api failed with string number

2014-11-05 Thread Alex Xu
Public bug reported:

When request as below:
curl -i 'http://cloudcontroller:8774/v2/04e2ab93c10a4c2dbef1c648d04567cc/os-cells' \
  -X POST -H "Accept: application/json" -H "Content-Type: application/json" \
  -H "User-Agent: python-novaclient" -H "X-Auth-Project-Id: admin" \
  -H "X-Auth-Token: 016d26c590ab4a0b91de718d01d7a649" \
  -d '{"cell": {"name": "abc", "rpc_port": "123"}}'

Get error as below:
2014-11-06 10:41:37.099 TRACE nova.api.openstack.wsgi Traceback (most recent 
call last):
2014-11-06 10:41:37.099 TRACE nova.api.openstack.wsgi   File 
/opt/stack/nova/nova/api/openstack/wsgi.py, line 950, in _process_stack
2014-11-06 10:41:37.099 TRACE nova.api.openstack.wsgi action_result = 
self.dispatch(meth, request, action_args)
2014-11-06 10:41:37.099 TRACE nova.api.openstack.wsgi   File 
/opt/stack/nova/nova/api/openstack/wsgi.py, line 1034, in dispatch
2014-11-06 10:41:37.099 TRACE nova.api.openstack.wsgi return 
method(req=request, **action_args)
2014-11-06 10:41:37.099 TRACE nova.api.openstack.wsgi   File 
/opt/stack/nova/nova/api/openstack/compute/contrib/cells.py, line 360, in 
create
2014-11-06 10:41:37.099 TRACE nova.api.openstack.wsgi 
self._normalize_cell(cell)
2014-11-06 10:41:37.099 TRACE nova.api.openstack.wsgi   File 
/opt/stack/nova/nova/api/openstack/compute/contrib/cells.py, line 340, in 
_normalize_cell
2014-11-06 10:41:37.099 TRACE nova.api.openstack.wsgi cell['transport_url'] 
= str(transport_url)
2014-11-06 10:41:37.099 TRACE nova.api.openstack.wsgi   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/transport.py, line 318, 
in __str__
2014-11-06 10:41:37.099 TRACE nova.api.openstack.wsgi netloc += ':%d' % port
2014-11-06 10:41:37.099 TRACE nova.api.openstack.wsgi TypeError: %d format: a 
number is required, not unicode
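One possible fix can be sketched as follows (not necessarily the patch that landed): coerce rpc_port to an int during cell normalization, before the transport URL is built, since oslo.messaging formats the port with '%d'.

```python
# Hedged sketch: normalize a string rpc_port to int so TransportURL's
# '%d' formatting does not fail on unicode input.
def normalize_rpc_port(cell):
    port = cell.get('rpc_port')
    if port is not None:
        cell['rpc_port'] = int(port)  # raises ValueError on non-numeric input
    return cell

cell = {'name': 'abc', 'rpc_port': '123'}
assert normalize_rpc_port(cell)['rpc_port'] == 123
```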

** Affects: nova
 Importance: Undecided
 Assignee: Alex Xu (xuhj)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Alex Xu (xuhj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1389933

Title:
  cell create api failed with string number

Status in OpenStack Compute (Nova):
  New

Bug description:
  When request as below:
  curl -i 'http://cloudcontroller:8774/v2/04e2ab93c10a4c2dbef1c648d04567cc/os-cells' \
    -X POST -H "Accept: application/json" -H "Content-Type: application/json" \
    -H "User-Agent: python-novaclient" -H "X-Auth-Project-Id: admin" \
    -H "X-Auth-Token: 016d26c590ab4a0b91de718d01d7a649" \
    -d '{"cell": {"name": "abc", "rpc_port": "123"}}'

  Get error as below:
  2014-11-06 10:41:37.099 TRACE nova.api.openstack.wsgi Traceback (most recent 
call last):
  2014-11-06 10:41:37.099 TRACE nova.api.openstack.wsgi   File 
/opt/stack/nova/nova/api/openstack/wsgi.py, line 950, in _process_stack
  2014-11-06 10:41:37.099 TRACE nova.api.openstack.wsgi action_result = 
self.dispatch(meth, request, action_args)
  2014-11-06 10:41:37.099 TRACE nova.api.openstack.wsgi   File 
/opt/stack/nova/nova/api/openstack/wsgi.py, line 1034, in dispatch
  2014-11-06 10:41:37.099 TRACE nova.api.openstack.wsgi return 
method(req=request, **action_args)
  2014-11-06 10:41:37.099 TRACE nova.api.openstack.wsgi   File 
/opt/stack/nova/nova/api/openstack/compute/contrib/cells.py, line 360, in 
create
  2014-11-06 10:41:37.099 TRACE nova.api.openstack.wsgi 
self._normalize_cell(cell)
  2014-11-06 10:41:37.099 TRACE nova.api.openstack.wsgi   File 
/opt/stack/nova/nova/api/openstack/compute/contrib/cells.py, line 340, in 
_normalize_cell
  2014-11-06 10:41:37.099 TRACE nova.api.openstack.wsgi 
cell['transport_url'] = str(transport_url)
  2014-11-06 10:41:37.099 TRACE nova.api.openstack.wsgi   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/transport.py, line 318, 
in __str__
  2014-11-06 10:41:37.099 TRACE nova.api.openstack.wsgi netloc += ':%d' % 
port
  2014-11-06 10:41:37.099 TRACE nova.api.openstack.wsgi TypeError: %d format: a 
number is required, not unicode

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1389933/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1389941] [NEW] reload not change listen and listen_port

2014-11-05 Thread Tiantian Gao
Public bug reported:

The "reload" action means that when a daemon such as nova-api receives a
SIGHUP signal, it reloads its config files and restarts itself.

Ideally, reload should pick up any changed config. But options like
'osapi_compute_listen' and 'osapi_compute_listen_port' currently do not
take effect.

[To reproduce]
1. run nova-api as a daemon
2. change 'osapi_compute_listen_port' in /etc/nova/nova.conf
3. kill -HUP $pid_of_nova_api_parent

You will then find that the bind address and bind port still do not change.
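A toy model of the reported behavior (not nova's actual service code; class and option names are used only for illustration): reload re-reads the config, but unless the listening socket is closed and re-bound, the process keeps serving on the old address/port.

```python
# Hedged sketch: SIGHUP-style reload refreshes config values, but the
# bound port only moves when the listener is explicitly re-bound.
class ToyAPIServer:
    def __init__(self, conf):
        self.conf = dict(conf)
        self.bound_port = conf['osapi_compute_listen_port']  # bound at start

    def sighup_reload(self, new_conf, rebind=False):
        self.conf = dict(new_conf)  # config values are refreshed...
        if rebind:
            # ...but only an explicit rebind moves the listener
            self.bound_port = new_conf['osapi_compute_listen_port']

srv = ToyAPIServer({'osapi_compute_listen_port': 8774})
srv.sighup_reload({'osapi_compute_listen_port': 9999})
assert srv.conf['osapi_compute_listen_port'] == 9999  # config reloaded
assert srv.bound_port == 8774  # but the bound port did not change (the bug)
```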

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1389941

Title:
  reload not change listen and listen_port

Status in OpenStack Compute (Nova):
  New

Bug description:
  The "reload" action means that when a daemon such as nova-api receives a
  SIGHUP signal, it reloads its config files and restarts itself.

  Ideally, reload should pick up any changed config. But options like
  'osapi_compute_listen' and 'osapi_compute_listen_port' currently do not
  take effect.

  [To reproduce]
  1. run nova-api as a daemon
  2. change 'osapi_compute_listen_port' in /etc/nova/nova.conf
  3. kill -HUP $pid_of_nova_api_parent

  You will then find that the bind address and bind port still do not change.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1389941/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1389694] Re: Unable to list my networks

2014-11-05 Thread Mithil Arun
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1389694

Title:
  Unable to list my networks

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  I am unable to list my external network under
  Project -> Network -> Network Topology.

  But the network is available in Admin section.

  What is the ISSUE??

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1389694/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp