[Yahoo-eng-team] [Bug 1606822] [NEW] cannot update lbaas pool name

2016-07-27 Thread li,chen
Public bug reported:

Steps to reproduce:
1. Create a lb

2. Create a pool:
neutron lbaas-pool-create --name pool1 --lb-algorithm ROUND_ROBIN --protocol TCP --loadbalancer lb1

3. Update pool name:
neutron lbaas-pool-update --name pool2 pool1

Expected:
The pool name is updated.

Actual result:

usage: neutron lbaas-pool-update [-h] [--request-format {json}]
 [--admin-state-up {True,False}]
 [--session-persistence 
type=TYPE[,cookie_name=COOKIE_NAME]]
 [--description DESCRIPTION] [--name NAME]
 --lb-algorithm
 {ROUND_ROBIN,LEAST_CONNECTIONS,SOURCE_IP}
 POOL
neutron lbaas-pool-update: error: argument --lb-algorithm is required
Try 'neutron help lbaas-pool-update' for more information.
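
Workaround (a sketch, re-sending the algorithm the pool already has from
step 2, which satisfies the client-side argument check):

neutron lbaas-pool-update --lb-algorithm ROUND_ROBIN --name pool2 pool1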

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1606822

Title:
  cannot update lbaas pool name

Status in neutron:
  New

Bug description:
  Steps to reproduce:
  1. Create a lb

  2. Create a pool:
  neutron lbaas-pool-create --name pool1 --lb-algorithm ROUND_ROBIN --protocol TCP --loadbalancer lb1

  3. Update pool name:
  neutron lbaas-pool-update --name pool2 pool1

  Expected:
  The pool name is updated.

  Actual result:

  usage: neutron lbaas-pool-update [-h] [--request-format {json}]
   [--admin-state-up {True,False}]
   [--session-persistence 
type=TYPE[,cookie_name=COOKIE_NAME]]
   [--description DESCRIPTION] [--name NAME]
   --lb-algorithm
   {ROUND_ROBIN,LEAST_CONNECTIONS,SOURCE_IP}
   POOL
  neutron lbaas-pool-update: error: argument --lb-algorithm is required
  Try 'neutron help lbaas-pool-update' for more information.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1606822/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1498348] Re: lbaas:Error message appears when updating backend member from one pool to another pool.

2015-12-13 Thread li,chen
This bug does not exist on master.

** Changed in: neutron
   Status: Confirmed => Invalid

** Changed in: neutron
   Status: Invalid => New

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1498348

Title:
  lbaas:Error message appears when updating backend member from one pool
  to another pool.

Status in neutron:
  Invalid

Bug description:
  1. Create two LB pools (pool1 and pool2) and a VIP.
  2. Add backend servers to the pools.
  3. Update (Edit) a member: select a member and change its pool from pool1
  to pool2. An error is then printed:

  Error: Failed to update member 43fb9987-7b3f-444e-b274-43b67b01f72d

  Actually, the member has been updated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1498348/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1498314] Re: lbaas healthmonitor is associated to pool from dashboard even when there is no healthmonitor associated.

2015-12-13 Thread li,chen
This bug should have been fixed by https://review.openstack.org/#/c/179895/

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1498314

Title:
  lbaas healthmonitor is associated to pool from dashboard even when there
  is no healthmonitor associated.

Status in neutron:
  Invalid

Bug description:
  1. Create 4 healthmonitors (PING/HTTP/HTTPS/TCP) and associate them to 4 pools.
  2. Create a new pool and VIP.
  3. Get the newly created pool's details from the dashboard. You will find
  all healthmonitors associated to this new pool; actually, there is no
  healthmonitor associated if you check from the CLI.

  Pool Details

  ID: f4dc7898-5862-44e4-a3d4-bb35937c4650
  Name: ssh
  Description: -
  Project ID: 3c931fc81f3f436095cdf747b517c3c9
  VIP: ssh
  Provider: haproxy
  Subnet: (a363a442-f065) 2.0.1.0/24
  Protocol: TCP
  Load Balancing Method: SOURCE_IP
  Members: 2.0.1.104:22, 2.0.1.107:22
  Health Monitors:
    PING delay:2 retries:3 timeout:2
    TCP delay:2 retries:3 timeout:2
    HTTP: url:/ method:GET codes:200 delay:2 retries:3 timeout:2
    HTTPS: url:/ method:GET codes:200 delay:2 retries:3 timeout:2
  Admin State Up: Yes
  Status: ACTIVE

  root@nsj5:~# neutron lb-pool-show ssh
  +------------------------+--------------------------------------+
  | Field                  | Value                                |
  +------------------------+--------------------------------------+
  | admin_state_up         | True                                 |
  | description            |                                      |
  | health_monitors        |                                      |
  | health_monitors_status |                                      |
  | id                     | f4dc7898-5862-44e4-a3d4-bb35937c4650 |
  | lb_method              | SOURCE_IP                            |
  | members                | c0b927da-d371-473a-9615-365a05ad2de3 |
  |                        | c8a37d10-6720-4c7f-ba9d-1e72966df677 |
  | name                   | ssh                                  |
  | protocol               | TCP                                  |
  | provider               | haproxy                              |
  | status                 | ACTIVE                               |
  | status_description     |                                      |
  | subnet_id              | a363a442-f065-45d0-9078-127d8ac78d54 |
  | tenant_id              | 3c931fc81f3f436095cdf747b517c3c9     |
  | vip_id                 | 1eef91db-185d-483d-9f9e-ae4d0b6f7232 |
  +------------------------+--------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1498314/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1267724] Re: How to enable gre/vxlan/vlan/flat networks in one cloud at the same time?

2014-06-05 Thread li,chen
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1267724

Title:
  How to enable gre/vxlan/vlan/flat networks in one cloud at the same
  time?

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Hi all,

  I’m doing some functional testing based on neutron + the ml2 plugin.

  I want my cloud to support all kinds of networks, so I can do further
  comparison tests between the different network types.

  So, I created 4 networks:

  neutron net-list
  +--------------------------------------+---------+------------------------------------------------------+
  | id                                   | name    | subnets                                              |
  +--------------------------------------+---------+------------------------------------------------------+
  | 1314f7bb-9b52-4db8-a677-a751e52aad0e | gre-1   | c0774200-7aff-44bd-b122-4264368947da 20.1.100.0/24   |
  | 4e7d06f0-3547-446d-98ca-3adac416e370 | flat-1  | 83df18e1-ab2e-4983-8892-66d7699c4e9a 192.168.13.0/24 |
  | c7e26ebc-078b-4375-b313-795a89a9d8bd | vlan-1  | 22789dfc-e41e-412c-a325-10a210f176c5 30.1.100.0/24   |
  | fcd5c1a8-34ab-4e0c-9e4d-d99d168aa300 | vxlan-3 | 534558b0-c0a4-4c7e-add5-1f0abcb91cc3 40.1.100.0/24   |
  +--------------------------------------+---------+------------------------------------------------------+

  Because my machines have only 1 NIC port that can be used for the
  instance data network, I started two DHCP agents:

  neutron agent-list
  +--------------------------------------+--------------------+-------------+-------+----------------+
  | id                                   | agent_type         | host        | alive | admin_state_up |
  +--------------------------------------+--------------------+-------------+-------+----------------+
  | 05e23822-0966-4c7c-9b16-687484385383 | Open vSwitch agent | b-compute05 | :-)   | True           |
  | 1267a2c6-f7cb-49d9-b579-18e986139878 | Open vSwitch agent | b-compute06 | :-)   | True           |
  | 55f457bf-9ffe-417b-ad50-5878c8a71aab | DHCP agent         | b-compute05 | :-)   | True           |
  | 928495d3-fac0-4fbf-b958-36c3627d9b18 | Open vSwitch agent | b-compute01 | :-)   | True           |
  | 934c721b-8c7d-4605-8e03-400676665afc | Open vSwitch agent | b-network01 | :-)   | True           |
  | bd491c90-3597-45ea-b4a0-f37610f2ed9b | DHCP agent         | b-network01 | :-)   | True           |
  | e07c8133-a3f6-4864-adb2-318f2233fe63 | Linux bridge agent | b-compute02 | xxx   | True           |
  | e1070c1e-fcb6-43fc-b2a0-a81e688b814a | Open vSwitch agent | b-compute02 | :-)   | True           |
  +--------------------------------------+--------------------+-------------+-------+----------------+

  The DHCP agent started on b-compute05 serves networks flat-1 and vlan-1.
  The DHCP agent started on b-network01 serves networks gre-1 and vxlan-3.

  The Open vSwitch agents on b-compute05 and b-compute06 are configured to
  work for flat and vlan.
  The Open vSwitch agents on b-compute01 and b-compute02 are configured to
  work for vxlan and gre.

  Then I started to create new instances.

  Here are the issues:

  1. The network will not be automatically scheduled to the right DHCP agent.
  Neutron just randomly chooses one of the active DHCP agents, ignoring
  whether that agent can work for that type of network or not.
  No error message can be found in /var/log/neutron/dhcp-agent.log.
  Everything looks just fine.
  Only, active instances will never get IP addresses from DHCP.
  I have to assign the network to the right DHCP agent by hand.
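
  (A sketch of that manual assignment using the agent-scheduler commands,
  with the b-compute05 DHCP agent ID from the listing above; the wrongly
  chosen agent's ID is a placeholder:

  neutron dhcp-agent-network-remove <wrong-agent-id> flat-1
  neutron dhcp-agent-network-add 55f457bf-9ffe-417b-ad50-5878c8a71aab flat-1 )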

  2. A similar issue exists for nova-scheduler.
  Because nova-scheduler schedules instances without awareness of which
  network types a compute node supports, it will schedule instances onto
  compute nodes that do not actually support the requested kind of network.
  These instances will end up in error status, with this error message in
  /var/log/nova/compute.log:

  2014-01-10 14:59:48.454 9085 ERROR nova.compute.manager 
[req-f3863a12-30e9-420d-a44a-0dd9c0bd1412 c4633e89685d41c4a2d20a2234b5025e 
45c69667e2a64c889719ef8d8e0dd098] [instance: 
d477a7c1-590b-485a-ac1a-055a6fdaca3a] Instance failed to spawn
  2014-01-10 14:59:48.454 9085 TRACE nova.compute.manager [instance: 
d477a7c1-590b-485a-ac1a-055a6fdaca3a] Traceback (most recent call last):
  2014-01-10 14:59:48.454 9085 TRACE nova.compute.manager [instance: 
d477a7c1-590b-485a-ac1a-055a6fdaca3a]   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 1413, in _spawn
  2014-01-10 14:59:48.454 9085 TRACE nova.compute.manager [instance: 
d477a7c1-590b-485a-ac1a-055a6fdaca3a] block_device_info)
  2014-01-10 14:59:48.454 9085 TRACE nova.compute.manager [instance: 

[Yahoo-eng-team] [Bug 1288506] Re: issue when using PKI for the token format

2014-03-10 Thread li,chen
Yes, after running "keystone-manage pki_setup" and setting the correct
directory in [signing], the issue is gone.
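
For reference, a sketch of that fix (assuming keystone runs as the
"keystone" user and group):

keystone-manage pki_setup --keystone-user keystone --keystone-group keystone

Afterwards certfile, keyfile, and ca_certs in [signing] should point at the
generated files (by default under /etc/keystone/ssl).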

** Changed in: keystone
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1288506

Title:
  issue when using PKI for the token format

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  Hi,

  I'm working under CentOS 6.4 + Havana; my keystone version is:
    openstack-keystone.noarch 2013.2.2-1.el6 @openstack-havana

  When I run the command "keystone user-list", I get the error:
   Authorization Failed: Unable to sign token. (HTTP 500)

  I can get error information in both keystone-startup.log and
  keystone.log:

  2014-03-06 09:31:29.999 18693 ERROR keystone.common.cms [-] Signing error: 
Unable to load certificate - ensure you've configured PKI with 'keystone-manage 
pki_setup'
  2014-03-06 09:31:29.999 18693 ERROR keystone.token.providers.pki [-] Unable 
to sign token
  2014-03-06 09:31:29.999 18693 TRACE keystone.token.providers.pki Traceback 
(most recent call last):
  2014-03-06 09:31:29.999 18693 TRACE keystone.token.providers.pki   File 
/usr/lib/python2.6/site-packages/keystone/token/providers/pki.py, line 39, in 
_get_token_id
  2014-03-06 09:31:29.999 18693 TRACE keystone.token.providers.pki 
CONF.signing.keyfile)
  2014-03-06 09:31:29.999 18693 TRACE keystone.token.providers.pki   File 
/usr/lib/python2.6/site-packages/keystone/common/cms.py, line 144, in 
cms_sign_token
  2014-03-06 09:31:29.999 18693 TRACE keystone.token.providers.pki output = 
cms_sign_text(text, signing_cert_file_name, signing_key_file_name)
  2014-03-06 09:31:29.999 18693 TRACE keystone.token.providers.pki   File 
/usr/lib/python2.6/site-packages/keystone/common/cms.py, line 139, in 
cms_sign_text
  2014-03-06 09:31:29.999 18693 TRACE keystone.token.providers.pki raise 
environment.subprocess.CalledProcessError(retcode, openssl)
  2014-03-06 09:31:29.999 18693 TRACE keystone.token.providers.pki 
CalledProcessError: Command 'openssl' returned non-zero exit status 3
  2014-03-06 09:31:29.999 18693 TRACE keystone.token.providers.pki
  2014-03-06 09:31:30.000 18693 WARNING keystone.common.wsgi [-] Unable to sign 
token.
  ~


  Does anyone know why this happened?

  
  Thanks.
  -chen

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1288506/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1289078] [NEW] can't use file-based backend for catalog

2014-03-06 Thread li,chen
Public bug reported:

Hi list,
 
I’m working under CentOS 6.4 + Havana.
 
I want to use the file-based backend for the keystone catalog.
But after I configured it, the commands “keystone service-list” and
“keystone endpoint-list” return nothing.

Does anyone know why this happens?

If I’m using
[catalog]
driver = keystone.catalog.backends.sql.Catalog

everything is fine.

While I’m using the file-based backend, even with
env | grep SERVICE
SERVICE_ENDPOINT=http://host-keystone:35357/v2.0
SERVICE_TOKEN=ADMIN

I get nothing from “keystone endpoint-list”.

 
I was able to do this successfully on Grizzly.
 
 
Thanks.
-chen


 
Here is my /etc/keystone/keystone.conf:

[DEFAULT]

[sql]
connection = mysql://keystone:keystone@host-db/keystone

[identity]

[credential]

[trust]

[os_inherit]

[catalog]
driver = keystone.catalog.backends.templated.TemplatedCatalog
template_file = /etc/keystone/default_catalog.templates

[endpoint_filter]

[token]
driver = keystone.token.backends.memcache.Token

[cache]

[policy]

[ec2]

[assignment]

[oauth1]

[ssl]

[signing]
token_format = UUID

[ldap]

[auth]
methods = external,password,token,oauth1
password = keystone.auth.plugins.password.Password
token = keystone.auth.plugins.token.Token
oauth1 = keystone.auth.plugins.oauth1.OAuth

[paste_deploy]


 
Here is my /etc/keystone/default_catalog.templates:

catalog.RegionOne.identity.publicURL = http://host-keystone:$(public_port)s/v2.0
catalog.RegionOne.identity.adminURL = http://host-keystone:$(admin_port)s/v2.0
catalog.RegionOne.identity.internalURL = http://host-keystone:$(public_port)s/v2.0
catalog.RegionOne.identity.name = Identity Service

catalog.RegionOne.compute.publicURL = http://host-nova:$(compute_port)s/v1.1/$(tenant_id)s
catalog.RegionOne.compute.adminURL = http://host-nova:$(compute_port)s/v1.1/$(tenant_id)s
catalog.RegionOne.compute.internalURL = http://host-nova:$(compute_port)s/v1.1/$(tenant_id)s
catalog.RegionOne.compute.name = Compute Service

catalog.RegionOne.volume.publicURL = http://host-cinder:8776/v1/$(tenant_id)s
catalog.RegionOne.volume.adminURL = http://host-cinder:8776/v1/$(tenant_id)s
catalog.RegionOne.volume.internalURL = http://host-cinder:8776/v1/$(tenant_id)s
catalog.RegionOne.volume.name = Volume Service

catalog.RegionOne.image.publicURL = http://host-glance:9292/v1
catalog.RegionOne.image.adminURL = http://host-glance:9292/v1
catalog.RegionOne.image.internalURL = http://host-glance:9292/v1
catalog.RegionOne.image.name = Image Service

catalog.RegionOne.network.publicURL = http://host-neutron:9696/
catalog.RegionOne.network.adminURL = http://host-neutron:9696/
catalog.RegionOne.network.internalURL = http://host-neutron:9696/
catalog.RegionOne.network.name = Network Service


keystone --debug endpoint-list

 
REQ: curl -i -X POST http://host-keystone:5000/v2.0/tokens -H "Content-Type: application/json" -H "User-Agent: python-keystoneclient"
REQ BODY: {"auth": {"tenantName": "test", "passwordCredentials": {"username": "lichen", "password": "lichen"}}}

RESP: [200] {'date': 'Thu, 06 Mar 2014 02:14:28 GMT', 'content-type': 'application/json', 'content-length': '1897', 'vary': 'X-Auth-Token'}
RESP BODY: {"access": {"token": {"issued_at": "2014-03-06T02:14:28.417502", "expires": "2014-03-07T02:14:28Z", "id": "1a4f03fbec6a41ddbff76afe9d238f83", "tenant": {"description": null, "enabled": true, "id": "1e57be810f854bcdb73901567140ac48", "name": "test"}}, "serviceCatalog": [{"endpoints": [{"adminURL": "http://host-cinder:8776/v1/1e57be810f854bcdb73901567140ac48", "region": "RegionOne", "publicURL": "http://host-cinder:8776/v1/1e57be810f854bcdb73901567140ac48", "internalURL": "http://host-cinder:8776/v1/1e57be810f854bcdb73901567140ac48"}], "endpoints_links": [], "type": "volume", "name": "Volume Service"}, {"endpoints": [{"adminURL": "http://host-glance:9292/v1", "region": "RegionOne", "publicURL": "http://host-glance:9292/v1", "internalURL": "http://host-glance:9292/v1"}], "endpoints_links": [], "type": "image", "name": "Image Service"}, {"endpoints": [{"adminURL": "http://host-nova:8774/v1.1/1e57be810f854bcdb73901567140ac48", "region": "RegionOne", "publicURL": "http://host-nova:8774/v1.1/1e57be810f854bcdb73901567140ac48", "internalURL": "http://host-nova:8774/v1.1/1e57be810f854bcdb73901567140ac48"}], "endpoints_links": [], "type": "compute", "name": "Compute Service"}, {"endpoints": [{"adminURL": "http://host-neutron:9696/", "region": "RegionOne", "publicURL": "http://host-neutron:9696/", "internalURL": "http://host-neutron:9696/"}], "endpoints_links": [], "type": "network", "name": "Network Service"}, {"endpoints": [{"adminURL": "http://host-keystone:35357/v2.0", "region": "RegionOne", "publicURL": "http://host-keystone:5000/v2.0", "internalURL": "http://host-keystone:5000/v2.0"}], "endpoints_links": [], "type": "identity", "name": "Identity Service"}],
[Yahoo-eng-team] [Bug 1288508] Re: issue when using pki as the token provider

2014-03-06 Thread li,chen
The configuration provider = keystone.token.providers.pki was wrong.

It should be
provider = keystone.token.providers.pki.Provider
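
With that fix, the [token] section from the description below reads:

[token]
driver = keystone.token.backends.memcache.Token
provider = keystone.token.providers.pki.Provider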


** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1288508

Title:
  issue when using pki as the token provider

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  Hi,

  I'm working under CentOS + Havana.

  When I start keystone, I get an error in both keystone.log and
  keystone-startup.log:
 2014-03-06 09:38:18.214 20199 INFO keystone.common.environment [-] 
Environment configured as: eventlet
  2014-03-06 09:38:18.413 20199 CRITICAL keystone [-] Class pki cannot 
be found (['Traceback (most recent call last):\n', '  File 
/usr/lib/python2.6/site-packages/keystone/openstack/common/importutils.py, 
line 31, in import_class\nreturn getattr(sys.modules[mod_str], 
class_str)\n', AttributeError: 'module' object has no attribute 'pki'\n])


  
  My /etc/keystone/keystone.conf is :
  [DEFAULT]

  [sql]
  connection = mysql://keystone:keystone@host-db/keystone

  [identity]

  [credential]

  [trust]

  [os_inherit]

  [catalog]
  driver = keystone.catalog.backends.sql.Catalog

  [endpoint_filter]

  [token]
  driver = keystone.token.backends.memcache.Token
  provider = keystone.token.providers.pki

  [cache]

  [policy]

  [ec2]

  [assignment]

  [oauth1]

  
  [ssl]

  [signing]

  [ldap]

  [auth]
  methods = external,password,token,oauth1
  password = keystone.auth.plugins.password.Password
  token = keystone.auth.plugins.token.Token
  oauth1 = keystone.auth.plugins.oauth1.OAuth

  [paste_deploy]

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1288508/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1289082] [NEW] neutron does not schedule the network to a DHCP agent

2014-03-06 Thread li,chen
Public bug reported:

Hi list,

I’m working on CentOS 6.4 + Havana.

When I create a new network, I get an error message in
/var/log/neutron/server.log:

Run command:

neutron net-create test-network03


Output in /var/log/neutron/server.log:

2014-03-06 13:28:37.078 21277 ERROR
neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api [-] No DHCP agents are
associated with network 'b02112bf-9fe5-4053-8ced-e08bd2547b49'. Unable
to send notification for 'network_create_end' with payload: {'network':
{'status': 'ACTIVE', 'subnets': [], 'name': u'test-network03',
'provider:physical_network': None, 'admin_state_up': True, 'tenant_id':
u'1e57be810f854bcdb73901567140ac48', 'provider:network_type': u'gre',
'shared': False, 'id': 'b02112bf-9fe5-4053-8ced-e08bd2547b49',
'provider:segmentation_id': 3L}}


Run command:

neutron subnet-create test-network03 90.1.130.0/24

Output in /var/log/neutron/server.log:

2014-03-06 13:30:43.414 21277 INFO urllib3.connectionpool [-] Starting new HTTP 
connection (1): host-keystone
2014-03-06 13:30:43.516 21277 ERROR 
neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api [-] No DHCP agents are 
associated with network 'b02112bf-9fe5-4053-8ced-e08bd2547b49'. Unable to send 
notification for 'subnet_create_end' with payload: {'subnet': {'name': '', 
'enable_dhcp': True, 'network_id': u'b02112bf-9fe5-4053-8ced-e08bd2547b49', 
'tenant_id': u'1e57be810f854bcdb73901567140ac48', 'dns_nameservers': [], 
'allocation_pools': [{'start': '90.1.130.2', 'end': '90.1.130.254'}], 
'host_routes': [], 'ip_version': 4, 'gateway_ip': '90.1.130.1', 'cidr': 
u'90.1.130.0/24', 'id': '25d1653f-d043-41e6-9206-d020cab041b1'}}

I do have an active DHCP agent:

neutron agent-list
+--------------------------------------+--------------------+-------------+-------+----------------+
| id                                   | agent_type         | host        | alive | admin_state_up |
+--------------------------------------+--------------------+-------------+-------+----------------+
| 246dd973-ebfd-4948-b655-6171f5866b19 | DHCP agent         | b-compute05 | :-)   | True           |
| 5c6260cf-15c2-4757-9170-68be2d5e5d8b | Open vSwitch agent | b-compute05 | :-)   | True           |
+--------------------------------------+--------------------+-------------+-------+----------------+
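
(Which DHCP agents, if any, are hosting the network can be checked with the
scheduler command:

neutron dhcp-agent-list-hosting-net test-network03 )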

Does anyone know why this happens?

Thanks.
-chen

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

  Hi list,
  
  I’m working on CentOS 6.4 + Havana.
  
  When I create a new network, I get error message in
  /var/log/neutron/server.log:
  
- Run command:
- neutron net-create test-network03
+ Run command:
  
- Output in /var/log/neutron/server.log:
+ neutron net-create test-network03
  
- 2014-03-06 13:28:37.078 21277 ERROR
+ 
+ Output in /var/log/neutron/server.log:
+ 
+ 2014-03-06 13:28:37.078 21277 ERROR
  neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api [-] No DHCP agents are
  associated with network 'b02112bf-9fe5-4053-8ced-e08bd2547b49'. Unable
  to send notification for 'network_create_end' with payload: {'network':
  {'status': 'ACTIVE', 'subnets': [], 'name': u'test-network03',
  'provider:physical_network': None, 'admin_state_up': True, 'tenant_id':
  u'1e57be810f854bcdb73901567140ac48', 'provider:network_type': u'gre',
  'shared': False, 'id': 'b02112bf-9fe5-4053-8ced-e08bd2547b49',
  'provider:segmentation_id': 3L}}
  
- Run command:
- neutron subnet-create test-network03 
90.1.130.0/24
  
-Output in /var/log/neutron/server.log:
- 2014-03-06 13:30:43.414 21277 INFO 
urllib3.connectionpool [-] Starting new HTTP connection (1): host-keystone
- 2014-03-06 13:30:43.516 21277 ERROR 
neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api [-] No DHCP agents are 
associated with network 'b02112bf-9fe5-4053-8ced-e08bd2547b49'. Unable to send 
notification for 'subnet_create_end' with payload: {'subnet': {'name': '', 
'enable_dhcp': True, 'network_id': u'b02112bf-9fe5-4053-8ced-e08bd2547b49', 
'tenant_id': u'1e57be810f854bcdb73901567140ac48', 'dns_nameservers': [], 
'allocation_pools': [{'start': '90.1.130.2', 'end': '90.1.130.254'}], 
'host_routes': [], 'ip_version': 4, 'gateway_ip': '90.1.130.1', 'cidr': 
u'90.1.130.0/24', 'id': '25d1653f-d043-41e6-9206-d020cab041b1'}}
+ Run command:
  
+ neutron subnet-create test-network03 90.1.130.0/24
+ 
+ Output in /var/log/neutron/server.log:
+ 
+ 2014-03-06 13:30:43.414 21277 INFO urllib3.connectionpool [-] Starting new 
HTTP connection (1): host-keystone
+ 2014-03-06 13:30:43.516 21277 ERROR 
neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api [-] No DHCP agents are 
associated with network 'b02112bf-9fe5-4053-8ced-e08bd2547b49'. Unable to send 
notification for 'subnet_create_end' with payload: {'subnet': {'name': '', 
'enable_dhcp': True, 

[Yahoo-eng-team] [Bug 1288506] [NEW] issue when using PKI for the token format

2014-03-05 Thread li,chen
Public bug reported:

Hi,

I'm working under CentOS 6.4 + Havana; my keystone version is:
  openstack-keystone.noarch 2013.2.2-1.el6 @openstack-havana

When I run the command "keystone user-list", I get the error:
 Authorization Failed: Unable to sign token. (HTTP 500)

I can get error information in both keystone-startup.log and
keystone.log:

2014-03-06 09:31:29.999 18693 ERROR keystone.common.cms [-] Signing error: 
Unable to load certificate - ensure you've configured PKI with 'keystone-manage 
pki_setup'
2014-03-06 09:31:29.999 18693 ERROR keystone.token.providers.pki [-] Unable to 
sign token
2014-03-06 09:31:29.999 18693 TRACE keystone.token.providers.pki Traceback 
(most recent call last):
2014-03-06 09:31:29.999 18693 TRACE keystone.token.providers.pki   File 
/usr/lib/python2.6/site-packages/keystone/token/providers/pki.py, line 39, in 
_get_token_id
2014-03-06 09:31:29.999 18693 TRACE keystone.token.providers.pki 
CONF.signing.keyfile)
2014-03-06 09:31:29.999 18693 TRACE keystone.token.providers.pki   File 
/usr/lib/python2.6/site-packages/keystone/common/cms.py, line 144, in 
cms_sign_token
2014-03-06 09:31:29.999 18693 TRACE keystone.token.providers.pki output = 
cms_sign_text(text, signing_cert_file_name, signing_key_file_name)
2014-03-06 09:31:29.999 18693 TRACE keystone.token.providers.pki   File 
/usr/lib/python2.6/site-packages/keystone/common/cms.py, line 139, in 
cms_sign_text
2014-03-06 09:31:29.999 18693 TRACE keystone.token.providers.pki raise 
environment.subprocess.CalledProcessError(retcode, openssl)
2014-03-06 09:31:29.999 18693 TRACE keystone.token.providers.pki 
CalledProcessError: Command 'openssl' returned non-zero exit status 3
2014-03-06 09:31:29.999 18693 TRACE keystone.token.providers.pki
2014-03-06 09:31:30.000 18693 WARNING keystone.common.wsgi [-] Unable to sign 
token.
~


Does anyone know why this happened?


Thanks.
-chen

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1288506

Title:
  issue when using PKI for the token format

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Hi,

  I'm working under CentOS 6.4 + Havana; my keystone version is:
    openstack-keystone.noarch 2013.2.2-1.el6 @openstack-havana

  When I run the command "keystone user-list", I get the error:
   Authorization Failed: Unable to sign token. (HTTP 500)

  I can get error information in both keystone-startup.log and
  keystone.log:

  2014-03-06 09:31:29.999 18693 ERROR keystone.common.cms [-] Signing error: 
Unable to load certificate - ensure you've configured PKI with 'keystone-manage 
pki_setup'
  2014-03-06 09:31:29.999 18693 ERROR keystone.token.providers.pki [-] Unable 
to sign token
  2014-03-06 09:31:29.999 18693 TRACE keystone.token.providers.pki Traceback 
(most recent call last):
  2014-03-06 09:31:29.999 18693 TRACE keystone.token.providers.pki   File 
/usr/lib/python2.6/site-packages/keystone/token/providers/pki.py, line 39, in 
_get_token_id
  2014-03-06 09:31:29.999 18693 TRACE keystone.token.providers.pki 
CONF.signing.keyfile)
  2014-03-06 09:31:29.999 18693 TRACE keystone.token.providers.pki   File 
/usr/lib/python2.6/site-packages/keystone/common/cms.py, line 144, in 
cms_sign_token
  2014-03-06 09:31:29.999 18693 TRACE keystone.token.providers.pki output = 
cms_sign_text(text, signing_cert_file_name, signing_key_file_name)
  2014-03-06 09:31:29.999 18693 TRACE keystone.token.providers.pki   File 
/usr/lib/python2.6/site-packages/keystone/common/cms.py, line 139, in 
cms_sign_text
  2014-03-06 09:31:29.999 18693 TRACE keystone.token.providers.pki raise 
environment.subprocess.CalledProcessError(retcode, openssl)
  2014-03-06 09:31:29.999 18693 TRACE keystone.token.providers.pki 
CalledProcessError: Command 'openssl' returned non-zero exit status 3
  2014-03-06 09:31:29.999 18693 TRACE keystone.token.providers.pki
  2014-03-06 09:31:30.000 18693 WARNING keystone.common.wsgi [-] Unable to sign 
token.
  ~


  Does anyone know why this happened?

  
  Thanks.
  -chen

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1288506/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1288508] [NEW] issue when using pki as the token provider

2014-03-05 Thread li,chen
Public bug reported:

Hi,

I'm working under CentOS + Havana.

When I start keystone, I get an error in both keystone.log and
keystone-startup.log:
   2014-03-06 09:38:18.214 20199 INFO keystone.common.environment [-] 
Environment configured as: eventlet
2014-03-06 09:38:18.413 20199 CRITICAL keystone [-] Class pki cannot be 
found (['Traceback (most recent call last):\n', '  File 
/usr/lib/python2.6/site-packages/keystone/openstack/common/importutils.py, 
line 31, in import_class\nreturn getattr(sys.modules[mod_str], 
class_str)\n', AttributeError: 'module' object has no attribute 'pki'\n])


My /etc/keystone/keystone.conf is :
[DEFAULT]

[sql]
connection = mysql://keystone:keystone@host-db/keystone

[identity]

[credential]

[trust]

[os_inherit]

[catalog]
driver = keystone.catalog.backends.sql.Catalog

[endpoint_filter]

[token]
driver = keystone.token.backends.memcache.Token
provider = keystone.token.providers.pki

[cache]

[policy]

[ec2]

[assignment]

[oauth1]


[ssl]

[signing]

[ldap]

[auth]
methods = external,password,token,oauth1
password = keystone.auth.plugins.password.Password
token = keystone.auth.plugins.token.Token
oauth1 = keystone.auth.plugins.oauth1.OAuth

[paste_deploy]

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1288508

Title:
  issue when using pki as the token provider

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Hi,

  I'm working under CentOS + Havana.

  When I start keystone, I get an error in both keystone.log and
  keystone-startup.log:
 2014-03-06 09:38:18.214 20199 INFO keystone.common.environment [-] 
Environment configured as: eventlet
  2014-03-06 09:38:18.413 20199 CRITICAL keystone [-] Class pki cannot 
be found (['Traceback (most recent call last):\n', '  File 
/usr/lib/python2.6/site-packages/keystone/openstack/common/importutils.py, 
line 31, in import_class\nreturn getattr(sys.modules[mod_str], 
class_str)\n', AttributeError: 'module' object has no attribute 'pki'\n])


  
  My /etc/keystone/keystone.conf is :
  [DEFAULT]

  [sql]
  connection = mysql://keystone:keystone@host-db/keystone

  [identity]

  [credential]

  [trust]

  [os_inherit]

  [catalog]
  driver = keystone.catalog.backends.sql.Catalog

  [endpoint_filter]

  [token]
  driver = keystone.token.backends.memcache.Token
  provider = keystone.token.providers.pki

  [cache]

  [policy]

  [ec2]

  [assignment]

  [oauth1]

  
  [ssl]

  [signing]

  [ldap]

  [auth]
  methods = external,password,token,oauth1
  password = keystone.auth.plugins.password.Password
  token = keystone.auth.plugins.token.Token
  oauth1 = keystone.auth.plugins.oauth1.OAuth

  [paste_deploy]

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1288508/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1265441] [NEW] dhcp agent issue when working with ovs vxlan

2014-01-01 Thread li,chen
Public bug reported:

Hi list,

I'm working under CentOS 6.4 + Havana.

I'm trying to enable vxlan.

I successfully started an instance, but the instance can't get an IP
address from the DHCP agent.

I get an error message in dhcp.log:
2014-01-02 14:38:33.150 17728 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'qdhcp-1314f7bb-9b52-4db8-a677-a751e52aad0e', 'ip', '-o', 
'link', 'show', 'ns-45bfc42b-93'] execute 
/usr/lib/python2.6/site-packages/neutron/agent/linux/utils.py:43
2014-01-02 14:38:33.161 17728 DEBUG neutron.agent.linux.utils [-]
Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'qdhcp-d250b6d9-0077-41ae-88e1-8736dfd0e165', 'ip', 'route', 
'replace', 'default', 'via', '30.1.100.1', 'dev', 'ns-8d8fdae7-1f']
Exit code: 2
Stdout: ''
Stderr: 'RTNETLINK answers: No such process\n' execute 
/usr/lib/python2.6/site-packages/neutron/agent/linux/utils.py:60
2014-01-02 14:38:33.161 17728 ERROR neutron.agent.dhcp_agent [-] Unable to 
enable dhcp.
2014-01-02 14:38:33.161 17728 TRACE neutron.agent.dhcp_agent Traceback (most 
recent call last):
2014-01-02 14:38:33.161 17728 TRACE neutron.agent.dhcp_agent   File 
/usr/lib/python2.6/site-packages/neutron/agent/dhcp_agent.py, line 126, in 
call_driver
2014-01-02 14:38:33.161 17728 TRACE neutron.agent.dhcp_agent 
getattr(driver, action)(**action_kwargs)
2014-01-02 14:38:33.161 17728 TRACE neutron.agent.dhcp_agent   File 
/usr/lib/python2.6/site-packages/neutron/agent/linux/dhcp.py, line 167, in 
enable
2014-01-02 14:38:33.161 17728 TRACE neutron.agent.dhcp_agent 
reuse_existing=True)
2014-01-02 14:38:33.161 17728 TRACE neutron.agent.dhcp_agent   File 
/usr/lib/python2.6/site-packages/neutron/agent/linux/dhcp.py, line 724, in 
setup
2014-01-02 14:38:33.161 17728 TRACE neutron.agent.dhcp_agent 
self._set_default_route(network)
2014-01-02 14:38:33.161 17728 TRACE neutron.agent.dhcp_agent   File 
/usr/lib/python2.6/site-packages/neutron/agent/linux/dhcp.py, line 612, in 
_set_default_route
2014-01-02 14:38:33.161 17728 TRACE neutron.agent.dhcp_agent 
device.route.add_gateway(subnet.gateway_ip)
2014-01-02 14:38:33.161 17728 TRACE neutron.agent.dhcp_agent   File 
/usr/lib/python2.6/site-packages/neutron/agent/linux/ip_lib.py, line 359, in 
add_gateway
2014-01-02 14:38:33.161 17728 TRACE neutron.agent.dhcp_agent 
self._as_root(*args)
2014-01-02 14:38:33.161 17728 TRACE neutron.agent.dhcp_agent   File 
/usr/lib/python2.6/site-packages/neutron/agent/linux/ip_lib.py, line 208, in 
_as_root
2014-01-02 14:38:33.161 17728 TRACE neutron.agent.dhcp_agent 
kwargs.get('use_root_namespace', False))
2014-01-02 14:38:33.161 17728 TRACE neutron.agent.dhcp_agent   File 
/usr/lib/python2.6/site-packages/neutron/agent/linux/ip_lib.py, line 65, in 
_as_root
2014-01-02 14:38:33.161 17728 TRACE neutron.agent.dhcp_agent namespace)
2014-01-02 14:38:33.161 17728 TRACE neutron.agent.dhcp_agent   File 
/usr/lib/python2.6/site-packages/neutron/agent/linux/ip_lib.py, line 76, in 
_execute
2014-01-02 14:38:33.161 17728 TRACE neutron.agent.dhcp_agent 
root_helper=root_helper)
2014-01-02 14:38:33.161 17728 TRACE neutron.agent.dhcp_agent   File 
/usr/lib/python2.6/site-packages/neutron/agent/linux/utils.py, line 62, in 
execute
2014-01-02 14:38:33.161 17728 TRACE neutron.agent.dhcp_agent raise 
RuntimeError(m)
2014-01-02 14:38:33.161 17728 TRACE neutron.agent.dhcp_agent RuntimeError:
2014-01-02 14:38:33.161 17728 TRACE neutron.agent.dhcp_agent Command: ['sudo', 
'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'qdhcp-d250b6d9-0077-41ae-88e1-8736dfd0e165', 'ip', 'route', 'replace', 
'default', 'via', '30.1.100.1', 'dev', 'ns-8d8fdae7-1f']
2014-01-02 14:38:33.161 17728 TRACE neutron.agent.dhcp_agent Exit code: 2
2014-01-02 14:38:33.161 17728 TRACE neutron.agent.dhcp_agent Stdout: ''
2014-01-02 14:38:33.161 17728 TRACE neutron.agent.dhcp_agent Stderr: 'RTNETLINK 
answers: No such process\n'
2014-01-02 14:38:33.161 17728 TRACE neutron.agent.dhcp_agent
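
(Re-running the failing call by hand shows whether the namespace device has
an address on the gateway's subnet; "RTNETLINK answers: No such process" for
a route replace usually means the gateway 30.1.100.1 is not reachable on
that device. A sketch, using the namespace and device names from the trace
above:

ip netns exec qdhcp-d250b6d9-0077-41ae-88e1-8736dfd0e165 ip addr show ns-8d8fdae7-1f
ip netns exec qdhcp-d250b6d9-0077-41ae-88e1-8736dfd0e165 ip route replace default via 30.1.100.1 dev ns-8d8fdae7-1f )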
2014-01-02 14:38:33.267 17728 DEBUG neutron.agent.linux.utils [-]
Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'qdhcp-1314f7bb-9b52-4db8-a677-a751e52aad0e', 'ip', '-o', 
'link', 'show', 'ns-45bfc42b-93']
Exit code: 0
Stdout: '22: ns-45bfc42b-93: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc 
pfifo_fast state UP qlen 1000\\link/ether fa:16:3e:b0:33:b6 brd 
ff:ff:ff:ff:ff:ff\n'
Stderr: '' execute 
/usr/lib/python2.6/site-packages/neutron/agent/linux/utils.py:60
2014-01-02 14:38:33.268 17728 DEBUG neutron.agent.linux.dhcp [-] Reusing 
existing device: ns-45bfc42b-93. setup 
/usr/lib/python2.6/site-packages/neutron/agent/linux/dhcp.py:696
2014-01-02 14:38:33.269 17728 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 

[Yahoo-eng-team] [Bug 1249194] Re: nova add-fixed-ip does not work

2013-12-29 Thread li,chen
I have asked this question on the mailing list, but still no one has answered me.

** Changed in: nova
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1249194

Title:
  nova add-fixed-ip does not work

Status in OpenStack Compute (Nova):
  New

Bug description:
  Hi list,

  I'm working under CentOS 6.4 + Havana.

  I noticed there is some command support such as:
   nova add-fixed-ip, nova interface-attach

  Also, in neutron,
   neutron port-create

  So, I guess it should be possible to add a new virtual NIC port to a
  running instance, right?

  But, after I run command: nova add-fixed-ip ${instance_name} ${net-id}

  I get error in nova-compute.log:

  2013-11-06 14:16:14.816 11803 DEBUG qpid.messaging.io.ops [-]
  SENT[28db2d8]: SessionCompleted(commands=[0-39]) write_op
  /usr/lib/python2.6/site-packages/qpid/messaging/driver.py:686
  2013-11-06 14:16:14.818 11803 ERROR nova.openstack.common.rpc.amqp
  [req-fdb9abd1-f952-4e90-afb9-5803d3200810
  c4633e89685d41c4a2d20a2234b5025e 45c69667e2a64c889719ef8d8e0dd098]
  Exception during message handling 2013-11-06 14:16:14.818 11803 TRACE
  nova.openstack.common.rpc.amqp Traceback (most recent call last):
  2013-11-06 14:16:14.818 11803 TRACE nova.openstack.common.rpc.amqp
  File /usr/lib/python2.6/site-
  packages/nova/openstack/common/rpc/amqp.py, line 461, in
  _process_data 2013-11-06 14:16:14.818 11803 TRACE
  nova.openstack.common.rpc.amqp *args) 2013-11-06 14:16:14.818 11803
  TRACE nova.openstack.common.rpc.amqp File /usr/lib/python2.6/site-
  packages/nova/openstack/common/rpc/dispatcher.py, line 172, in
  dispatch 2013-11-06 14:16:14.818 11803 TRACE
  nova.openstack.common.rpc.amqp result = getattr(proxyobj,
  method)(ctxt, *kwargs) 2013-11-06 14:16:14.818 11803 TRACE
  nova.openstack.common.rpc.amqp File /usr/lib/python2.6/site-
  packages/nova/exception.py, line 90, in wrapped 2013-11-06
  14:16:14.818 11803 TRACE nova.openstack.common.rpc.amqp payload)
  2013-11-06 14:16:14.818 11803 TRACE nova.openstack.common.rpc.amqp
  File /usr/lib/python2.6/site-packages/nova/exception.py, line 73, in
  wrapped 2013-11-06 14:16:14.818 11803 TRACE
  nova.openstack.common.rpc.amqp return f(self, context, args, *kw)
  2013-11-06 14:16:14.818 11803 TRACE nova.openstack.common.rpc.amqp
  File /usr/lib/python2.6/site-packages/nova/compute/manager.py, line
  243, in decorated_function 2013-11-06 14:16:14.818 11803 TRACE
  nova.openstack.common.rpc.amqp pass 2013-11-06 14:16:14.818 11803
  TRACE nova.openstack.common.rpc.amqp File /usr/lib/python2.6/site-
  packages/nova/compute/manager.py, line 229, in decorated_function
  2013-11-06 14:16:14.818 11803 TRACE nova.openstack.common.rpc.amqp
  return function(self, context, args, *kwargs) 2013-11-06 14:16:14.818
  11803 TRACE nova.openstack.common.rpc.amqp File /usr/lib/python2.6
  /site-packages/nova/compute/manager.py, line 271, in
  decorated_function 2013-11-06 14:16:14.818 11803 TRACE
  nova.openstack.common.rpc.amqp e, sys.exc_info()) 2013-11-06
  14:16:14.818 11803 TRACE nova.openstack.common.rpc.amqp File
  /usr/lib/python2.6/site-packages/nova/compute/manager.py, line 258,
  in decorated_function 2013-11-06 14:16:14.818 11803 TRACE
  nova.openstack.common.rpc.amqp return function(self, context, args,
  *kwargs) 2013-11-06 14:16:14.818 11803 TRACE
  nova.openstack.common.rpc.amqp File /usr/lib/python2.6/site-
  packages/nova/compute/manager.py, line 3169, in
  add_fixed_ip_to_instance 2013-11-06 14:16:14.818 11803 TRACE
  nova.openstack.common.rpc.amqp network_id,
  conductor_api=self.conductor_api) 2013-11-06 14:16:14.818 11803 TRACE
  nova.openstack.common.rpc.amqp File /usr/lib/python2.6/site-
  packages/nova/network/api.py, line 49, in wrapper 2013-11-06
  14:16:14.818 11803 TRACE nova.openstack.common.rpc.amqp res = f(self,
  context, args, *kwargs) 2013-11-06 14:16:14.818 11803 TRACE
  nova.openstack.common.rpc.amqp File /usr/lib/python2.6/site-
  packages/nova/network/neutronv2/api.py, line 513, in
  add_fixed_ip_to_instance 2013-11-06 14:16:14.818 11803 TRACE
  nova.openstack.common.rpc.amqp instance_id=instance['uuid'])
  2013-11-06 14:16:14.818 11803 TRACE nova.openstack.common.rpc.amqp
  NetworkNotFoundForInstance: Network could not be found for instance
  27d1f715-cec2-4514-83e2-1066842a745a. 2013-11-06 14:16:14.818 11803
  TRACE nova.openstack.common.rpc.amqp

  Then I checked /usr/lib/python2.6/site-
  packages/nova/network/neutronv2/api.py function
  add_fixed_ip_to_instance, find the value for data =
  neutronv2.get_client(context).list_ports(**search_opts) is empty.

  So, I run command to create the port:
  neutron port-create --tenant-id ${tenant-id} --device-id ${instance_id} ${net-name}

  But the value for data =
  neutronv2.get_client(context).list_ports(**search_opts) is still
  empty.
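
  (A sketch of checking from the CLI what that search would return, using
  the neutron client's pass-through filter syntax; the IDs are the shell
  variables from above:

  neutron port-list -- --device_id=${instance_id} )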

  Then I get into database and 

[Yahoo-eng-team] [Bug 1264932] [NEW] can't get IP when working with ml2 + GRE

2013-12-29 Thread li,chen
Public bug reported:

Hi list,

I'm working under CentOS 6.4 + Havana.

I have three nodes working:
server node: running neutron-server
network node: running neutron-openvswitch-agent and neutron-dhcp-agent
compute node: running neutron-openvswitch-agent

I created a GRE network and booted an instance successfully.
No errors can be found in the logs.
But my instance can't get an IP from the DHCP agent.

Does anyone know why this happens?

Thanks.
-chen


My network:

neutron net-show 1314f7bb-9b52-4db8-a677-a751e52aad0e
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 1314f7bb-9b52-4db8-a677-a751e52aad0e |
| name                      | gre-1                                |
| provider:network_type     | gre                                  |
| provider:physical_network |                                      |
| provider:segmentation_id  | 1                                    |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | c0774200-7aff-44bd-b122-4264368947da |
| tenant_id                 | 45c69667e2a64c889719ef8d8e0dd098     |
+---------------------------+--------------------------------------+


My instance:
nova show fc1e18df-448c-4c3a-ad9d-b0da9d79b8c6
+--------------------------------------+------------------------------------------------------------+
| Property                             | Value                                                      |
+--------------------------------------+------------------------------------------------------------+
| status                               | ACTIVE                                                     |
| updated                              | 2013-12-30T06:09:22Z                                       |
| OS-EXT-STS:task_state                | None                                                       |
| OS-EXT-SRV-ATTR:host                 | b-compute01                                                |
| key_name                             | None                                                       |
| image                                | base_image_on_file (a9879545-65a0-4204-81c7-a668947c126d)  |
| gre-1 network                        | 20.1.100.2                                                 |
| hostId                               | bea3a6565d82258df38fac1fc061bce013ad12c9a67d82baf0ace8b8   |
| OS-EXT-STS:vm_state                  | active                                                     |
| OS-EXT-SRV-ATTR:instance_name        | instance-0339                                              |
| OS-SRV-USG:launched_at               | 2013-12-30T06:09:22.00                                     |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | b-compute01                                                |
| flavor                               | m1.tiny (1)                                                |
| id                                   | fc1e18df-448c-4c3a-ad9d-b0da9d79b8c6                       |
| security_groups                      | [{u'name': u'default'}]                                    |
| OS-SRV-USG:terminated_at             | None                                                       |
| user_id                              | c4633e89685d41c4a2d20a2234b5025e                           |
| name                                 | test-gre-1                                                 |
| created                              | 2013-12-30T06:09:16Z                                       |
| tenant_id                            | 45c69667e2a64c889719ef8d8e0dd098                           |
| OS-DCF:diskConfig                    | MANUAL                                                     |
| metadata                             | {}                                                         |
| os-extended-volumes:volumes_attached | []                                                         |
| accessIPv4                           |                                                            |
| accessIPv6                           |                                                            |
| progress                             | 0                                                          |
| OS-EXT-STS:power_state               | 1                                                          |
| OS-EXT-AZ:availability_zone          | nova                                                       |
| config_drive                         |                                                            |
+--------------------------------------+------------------------------------------------------------+


My ovs on 

[Yahoo-eng-team] [Bug 1264464] [NEW] issues after enabling ml2 for neutron

2013-12-27 Thread li,chen
Public bug reported:

Hi list,

Everything works fine when I'm working with the openvswitch plugin.

But after I enabled ml2, I met some issues:

1. After I start neutron-server, the log complains that it can't create
table 'ovs_ml2.networkdhcpagentbindings':

2013-12-27 08:27:53.840 32275 INFO neutron.plugins.ml2.managers [-] Configured 
mechanism driver names: []
2013-12-27 08:27:53.841 32275 INFO neutron.plugins.ml2.managers [-] Loaded 
mechanism driver names: []
2013-12-27 08:27:53.841 32275 INFO neutron.plugins.ml2.managers [-] Registered 
mechanism drivers: []
2013-12-27 08:27:53.994 32275 INFO neutron.db.api [-] Database registration 
exception: (OperationalError) (1005, Can't create table 
'ovs_ml2.networkdhcpagentbindings' (errno: 150)) '\nCREATE TABLE 
networkdhcpagentbindings (\n\tnetwork_id VARCHAR(36) NOT NULL, 
\n\tdhcp_agent_id VARCHAR(36) NOT NULL, \n\tPRIMARY KEY (network_id, 
dhcp_agent_id), \n\tFOREIGN KEY(network_id) REFERENCES networks (id) ON DELETE 
CASCADE, \n\tFOREIGN KEY(dhcp_agent_id) REFERENCES agents (id) ON DELETE 
CASCADE\n)ENGINE=InnoDB\n\n' ()
2013-12-27 08:27:53.995 32275 INFO neutron.plugins.ml2.managers [-] 
Initializing driver for type 'flat'
2013-12-27 08:27:53.995 32275 INFO neutron.plugins.ml2.drivers.type_flat [-] 
ML2 FlatTypeDriver initialization complete
2013-12-27 08:27:53.995 32275 INFO neutron.plugins.ml2.managers [-] 
Initializing driver for type 'vlan'
2013-12-27 08:27:54.017 32275 INFO neutron.plugins.ml2.drivers.type_vlan [-] 
VlanTypeDriver initialization complete
2013-12-27 08:27:54.017 32275 INFO neutron.plugins.ml2.managers [-] 
Initializing driver for type 'local'
2013-12-27 08:27:54.017 32275 INFO neutron.plugins.ml2.managers [-] 
Initializing driver for type 'gre'
2013-12-27 08:27:54.017 32275 INFO neutron.plugins.ml2.drivers.type_tunnel [-] 
gre ID ranges: []
2013-12-27 08:27:54.019 32275 INFO neutron.plugins.ml2.managers [-] 
Initializing driver for type 'vxlan'
2013-12-27 08:27:54.019 32275 INFO neutron.plugins.ml2.drivers.type_tunnel [-] 
vxlan ID ranges: []
2013-12-27 08:27:54.079 32275 INFO neutron.openstack.common.rpc.impl_qpid [-] 
Connected to AMQP server on 192.168.11.11:5672
2013-12-27 08:27:54.084 32275 INFO neutron.plugins.ml2.plugin [-] Modular L2 
Plugin initialization complete
2013-12-27 08:27:54.084 32275 INFO neutron.manager [-] Loading Plugin: 
neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
2013-12-27 08:27:54.194 32275 INFO neutron.db.api [-] Database registration 
exception: (OperationalError) (1005, Can't create table 
'ovs_ml2.networkdhcpagentbindings' (errno: 150)) '\nCREATE TABLE 
networkdhcpagentbindings (\n\tnetwork_id VARCHAR(36) NOT NULL, 
\n\tdhcp_agent_id VARCHAR(36) NOT NULL, \n\tPRIMARY KEY (network_id, 
dhcp_agent_id), \n\tFOREIGN KEY(network_id) REFERENCES networks (id) ON DELETE 
CASCADE, \n\tFOREIGN KEY(dhcp_agent_id) REFERENCES agents (id) ON DELETE 
CASCADE\n)ENGINE=InnoDB\n\n' ()

2. The table securitygroupportbindings is missing.
Because I have instances already booted in the cloud, neutron-server keeps
reporting errors about not finding the table securitygroupportbindings.

Does anyone know why this happens?

Thanks.
-chen
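
For reference, the usual way to get the missing ml2 tables created is a
schema upgrade (a sketch, assuming the Havana packages' alembic migration
config):

neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head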

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

  Hi list,
  
  Everything works fine when I'm working under plugin openvswitch.
  
  While, after I enable ml2.
  
  I met some issues:
  
  1.After I start neutron-server, I get message in neutron-server, it
- complains aboutCan't create table 'ovs_ml2.networkdhcpagentbindings :
- 
+ complains about can't create table 'ovs_ml2.networkdhcpagentbindings :
  
  2013-12-27 08:27:53.840 32275 INFO neutron.plugins.ml2.managers [-] 
Configured mechanism driver names: []
  2013-12-27 08:27:53.841 32275 INFO neutron.plugins.ml2.managers [-] Loaded 
mechanism driver names: []
  2013-12-27 08:27:53.841 32275 INFO neutron.plugins.ml2.managers [-] 
Registered mechanism drivers: []
  2013-12-27 08:27:53.994 32275 INFO neutron.db.api [-] Database registration 
exception: (OperationalError) (1005, Can't create table 
'ovs_ml2.networkdhcpagentbindings' (errno: 150)) '\nCREATE TABLE 
networkdhcpagentbindings (\n\tnetwork_id VARCHAR(36) NOT NULL, 
\n\tdhcp_agent_id VARCHAR(36) NOT NULL, \n\tPRIMARY KEY (network_id, 
dhcp_agent_id), \n\tFOREIGN KEY(network_id) REFERENCES networks (id) ON DELETE 
CASCADE, \n\tFOREIGN KEY(dhcp_agent_id) REFERENCES agents (id) ON DELETE 
CASCADE\n)ENGINE=InnoDB\n\n' ()
  2013-12-27 08:27:53.995 32275 INFO neutron.plugins.ml2.managers [-] 
Initializing driver for type 'flat'
  2013-12-27 08:27:53.995 32275 INFO neutron.plugins.ml2.drivers.type_flat [-] 
ML2 FlatTypeDriver initialization complete
  2013-12-27 08:27:53.995 32275 INFO neutron.plugins.ml2.managers [-] 
Initializing driver for type 'vlan'
  2013-12-27 08:27:54.017 32275 INFO neutron.plugins.ml2.drivers.type_vlan [-] 
VlanTypeDriver initialization complete
  2013-12-27 08:27:54.017 32275 INFO 

[Yahoo-eng-team] [Bug 1264482] [NEW] can't create network after enabling ml2

2013-12-27 Thread li,chen
Public bug reported:

Hi list,

When I run the command "neutron net-create vxlan-1", I get an error in
the neutron-server log:

2013-12-27 09:55:37.493 32679 DEBUG qpid.messaging.io.ops [-] SENT[503df80]: 
SessionCommandPoint(command_id=serial(0), command_offset=0) write_op 
/usr/lib/python2.6/site-packages/qpid/messaging/driver.py:686
2013-12-27 09:55:37.496 32679 ERROR neutron.api.v2.resource [-] create failed
2013-12-27 09:55:37.496 32679 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
2013-12-27 09:55:37.496 32679 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.6/site-packages/neutron/api/v2/resource.py, line 84, in 
resource
2013-12-27 09:55:37.496 32679 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
2013-12-27 09:55:37.496 32679 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.6/site-packages/neutron/api/v2/base.py, line 407, in create
2013-12-27 09:55:37.496 32679 TRACE neutron.api.v2.resource obj)})
2013-12-27 09:55:37.496 32679 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.6/site-packages/neutron/api/v2/base.py, line 386, in notify
2013-12-27 09:55:37.496 32679 TRACE neutron.api.v2.resource notifier_method)
2013-12-27 09:55:37.496 32679 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.6/site-packages/neutron/api/v2/base.py, line 268, in 
_send_dhcp_notification
2013-12-27 09:55:37.496 32679 TRACE neutron.api.v2.resource 
self._dhcp_agent_notifier.notify(context, data, methodname)
2013-12-27 09:55:37.496 32679 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.6/site-packages/neutron/api/rpc/agentnotifiers/dhcp_rpc_agent_api.py,
 line 132, in notify
2013-12-27 09:55:37.496 32679 TRACE neutron.api.v2.resource 
self._notification(context, methodname, data, network_id)
2013-12-27 09:55:37.496 32679 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.6/site-packages/neutron/api/rpc/agentnotifiers/dhcp_rpc_agent_api.py,
 line 79, in _notification
2013-12-27 09:55:37.496 32679 TRACE neutron.api.v2.resource for (host, 
topic) in self._get_dhcp_agents(context, network_id):
2013-12-27 09:55:37.496 32679 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.6/site-packages/neutron/api/rpc/agentnotifiers/dhcp_rpc_agent_api.py,
 line 49, in _get_dhcp_agents
2013-12-27 09:55:37.496 32679 TRACE neutron.api.v2.resource context, 
[network_id], active=True)
2013-12-27 09:55:37.496 32679 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.6/site-packages/neutron/db/agentschedulers_db.py, line 123, 
in get_dhcp_agents_hosting_networks
2013-12-27 09:55:37.496 32679 TRACE neutron.api.v2.resource for binding in 
query
2013-12-27 09:55:37.496 32679 TRACE neutron.api.v2.resource   File 
/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/orm/query.py,
 line 2227, in __iter__
2013-12-27 09:55:37.496 32679 TRACE neutron.api.v2.resource return 
self._execute_and_instances(context)
2013-12-27 09:55:37.496 32679 TRACE neutron.api.v2.resource   File 
/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/orm/query.py,
 line 2242, in _execute_and_instances
2013-12-27 09:55:37.496 32679 TRACE neutron.api.v2.resource result = 
conn.execute(querycontext.statement, self._params)
2013-12-27 09:55:37.496 32679 TRACE neutron.api.v2.resource   File 
/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py,
 line 1449, in execute
2013-12-27 09:55:37.496 32679 TRACE neutron.api.v2.resource params)
2013-12-27 09:55:37.496 32679 TRACE neutron.api.v2.resource   File 
/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py,
 line 1584, in _execute_clauseelement
2013-12-27 09:55:37.496 32679 TRACE neutron.api.v2.resource compiled_sql, 
distilled_params
2013-12-27 09:55:37.496 32679 TRACE neutron.api.v2.resource   File 
/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py,
 line 1698, in _execute_context
2013-12-27 09:55:37.496 32679 TRACE neutron.api.v2.resource context)
2013-12-27 09:55:37.496 32679 TRACE neutron.api.v2.resource   File 
/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py,
 line 1851, in _handle_dbapi_exception
2013-12-27 09:55:37.496 32679 TRACE neutron.api.v2.resource None, 
sys.exc_info()[2]
2013-12-27 09:55:37.496 32679 TRACE neutron.api.v2.resource ProgrammingError: 
(ProgrammingError) (1146, Table 'ovs_ml2.networkdhcpagentbindings' doesn't 
exist) 'SELECT networkdhcpagentbindings.network_id AS 
networkdhcpagentbindings_network_id, networkdhcpagentbindings.dhcp_agent_id AS 
networkdhcpagentbindings_dhcp_agent_id, agents_1.id AS agents_1_id, 
agents_1.agent_type AS agents_1_agent_type, agents_1.`binary` AS 
agents_1_binary, agents_1.topic AS agents_1_topic, agents_1.host AS 
agents_1_host, agents_1.admin_state_up AS agents_1_admin_state_up, 

[Yahoo-eng-team] [Bug 1263637] Re: can't enable ml2

2013-12-24 Thread li,chen
Yes, it looks like the stevedore version was the issue.

Can I ask a little more about what stevedore is used for?

Thanks.
-chen



** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1263637

Title:
  can't enable ml2

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Hi list,
  I'm working under CentOS 6.4 + Havana:
  openstack-neutron.noarch               2013.2.1-1.el6   @openstack-havana
  openstack-neutron-linuxbridge.noarch   2013.2.1-1.el6   @openstack-havana
  openstack-neutron-ml2.noarch           2013.2.1-1.el6   @openstack-havana
  openstack-neutron-openvswitch.noarch   2013.2.1-1.el6   @openstack-havana
  python-neutron.noarch                  2013.2.1-1.el6   @openstack-havana
  python-neutronclient.noarch            2.3.1-1.el6      @openstack-havana

  
  Everything works when I use
  core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2.

  But after I set core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin,
  I can't start neutron-server.

  The log in /var/log/neutron/server.log is:

  2013-12-23 08:35:00.660 37948 INFO neutron.common.config [-] Logging enabled!
  2013-12-23 08:35:00.661 37948 ERROR neutron.common.legacy [-] Skipping 
unknown group key: firewall_driver
  2013-12-23 08:35:00.664 37948 INFO neutron.common.config [-] Config paste 
file: /etc/neutron/api-paste.ini
  2013-12-23 08:35:00.693 37948 INFO neutron.manager [-] Loading Plugin: 
neutron.plugins.ml2.plugin.Ml2Plugin
  2013-12-23 08:35:00.735 37948 INFO neutron.plugins.ml2.managers [-] 
Configured type driver names: ['local', 'flat', 'vlan', 'gre', 'vxlan']
  2013-12-23 08:35:00.750 37948 INFO neutron.plugins.ml2.drivers.type_flat [-] 
Allowable flat physical_network names: []
  2013-12-23 08:35:00.752 37948 INFO neutron.plugins.ml2.drivers.type_vlan [-] 
Network VLAN ranges: {}
  2013-12-23 08:35:00.752 37948 INFO neutron.plugins.ml2.drivers.type_local [-] 
ML2 LocalTypeDriver initialization complete
  2013-12-23 08:35:00.758 37948 INFO neutron.plugins.ml2.managers [-] Loaded 
type driver names: ['flat', 'vlan', 'local', 'gre', 'vxlan']
  2013-12-23 08:35:00.758 37948 INFO neutron.plugins.ml2.managers [-] 
Registered types: ['flat', 'vlan', 'local', 'gre', 'vxlan']
  2013-12-23 08:35:00.758 37948 INFO neutron.plugins.ml2.managers [-] Tenant 
network_types: ['local', 'flat', 'vlan', 'gre', 'vxlan']
  2013-12-23 08:35:00.758 37948 INFO neutron.plugins.ml2.managers [-] 
Configured mechanism driver names: ['openvswitch', 'linuxbridge']
  2013-12-23 08:35:00.759 37948 ERROR neutron.service [-] Unrecoverable error: 
please check log for details.
  2013-12-23 08:35:00.759 37948 TRACE neutron.service Traceback (most recent 
call last):
  2013-12-23 08:35:00.759 37948 TRACE neutron.service   File 
/usr/lib/python2.6/site-packages/neutron/service.py, line 99, in serve_wsgi
  2013-12-23 08:35:00.759 37948 TRACE neutron.service service.start()
  2013-12-23 08:35:00.759 37948 TRACE neutron.service   File 
/usr/lib/python2.6/site-packages/neutron/service.py, line 68, in start
  2013-12-23 08:35:00.759 37948 TRACE neutron.service self.wsgi_app = 
_run_wsgi(self.app_name)
  2013-12-23 08:35:00.759 37948 TRACE neutron.service   File 
/usr/lib/python2.6/site-packages/neutron/service.py, line 112, in _run_wsgi
  2013-12-23 08:35:00.759 37948 TRACE neutron.service app = 
config.load_paste_app(app_name)
  2013-12-23 08:35:00.759 37948 TRACE neutron.service   File 
/usr/lib/python2.6/site-packages/neutron/common/config.py, line 144, in 
load_paste_app
  2013-12-23 08:35:00.759 37948 TRACE neutron.service app = 
deploy.loadapp(config:%s % config_path, name=app_name)
  2013-12-23 08:35:00.759 37948 TRACE neutron.service   File 
/usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/loadwsgi.py,
 line 247, in loadapp
  2013-12-23 08:35:00.759 37948 TRACE neutron.service return loadobj(APP, 
uri, name=name, **kw)
  2013-12-23 08:35:00.759 37948 TRACE neutron.service   File 
/usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/loadwsgi.py,
 line 272, in loadobj
  2013-12-23 08:35:00.759 37948 TRACE neutron.service return 
context.create()
  2013-12-23 08:35:00.759 37948 TRACE neutron.service   File 
/usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/loadwsgi.py,
 line 710, in create
  2013-12-23 08:35:00.759 37948 TRACE neutron.service return 
self.object_type.invoke(self)
  2013-12-23 08:35:00.759 37948 TRACE neutron.service   File 
/usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/loadwsgi.py,
 line 144, in invoke
  

[Yahoo-eng-team] [Bug 1263637] [NEW] can't enable ml2

2013-12-23 Thread li,chen
Public bug reported:

Hi list,
I'm working under CentOS 6.4 + Havana:
openstack-neutron.noarch              2013.2.1-1.el6   @openstack-havana
openstack-neutron-linuxbridge.noarch  2013.2.1-1.el6   @openstack-havana
openstack-neutron-ml2.noarch          2013.2.1-1.el6   @openstack-havana
openstack-neutron-openvswitch.noarch  2013.2.1-1.el6   @openstack-havana
python-neutron.noarch                 2013.2.1-1.el6   @openstack-havana
python-neutronclient.noarch           2.3.1-1.el6      @openstack-havana


Everything works when core_plugin =
neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2.

But after I set core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin,
I can't start neutron-server.

The log in /var/log/neutron/server.log is:

2013-12-23 08:35:00.660 37948 INFO neutron.common.config [-] Logging enabled!
2013-12-23 08:35:00.661 37948 ERROR neutron.common.legacy [-] Skipping unknown 
group key: firewall_driver
2013-12-23 08:35:00.664 37948 INFO neutron.common.config [-] Config paste file: 
/etc/neutron/api-paste.ini
2013-12-23 08:35:00.693 37948 INFO neutron.manager [-] Loading Plugin: 
neutron.plugins.ml2.plugin.Ml2Plugin
2013-12-23 08:35:00.735 37948 INFO neutron.plugins.ml2.managers [-] Configured 
type driver names: ['local', 'flat', 'vlan', 'gre', 'vxlan']
2013-12-23 08:35:00.750 37948 INFO neutron.plugins.ml2.drivers.type_flat [-] 
Allowable flat physical_network names: []
2013-12-23 08:35:00.752 37948 INFO neutron.plugins.ml2.drivers.type_vlan [-] 
Network VLAN ranges: {}
2013-12-23 08:35:00.752 37948 INFO neutron.plugins.ml2.drivers.type_local [-] 
ML2 LocalTypeDriver initialization complete
2013-12-23 08:35:00.758 37948 INFO neutron.plugins.ml2.managers [-] Loaded type 
driver names: ['flat', 'vlan', 'local', 'gre', 'vxlan']
2013-12-23 08:35:00.758 37948 INFO neutron.plugins.ml2.managers [-] Registered 
types: ['flat', 'vlan', 'local', 'gre', 'vxlan']
2013-12-23 08:35:00.758 37948 INFO neutron.plugins.ml2.managers [-] Tenant 
network_types: ['local', 'flat', 'vlan', 'gre', 'vxlan']
2013-12-23 08:35:00.758 37948 INFO neutron.plugins.ml2.managers [-] Configured 
mechanism driver names: ['openvswitch', 'linuxbridge']
2013-12-23 08:35:00.759 37948 ERROR neutron.service [-] Unrecoverable error: 
please check log for details.
2013-12-23 08:35:00.759 37948 TRACE neutron.service Traceback (most recent call 
last):
2013-12-23 08:35:00.759 37948 TRACE neutron.service   File 
/usr/lib/python2.6/site-packages/neutron/service.py, line 99, in serve_wsgi
2013-12-23 08:35:00.759 37948 TRACE neutron.service service.start()
2013-12-23 08:35:00.759 37948 TRACE neutron.service   File 
/usr/lib/python2.6/site-packages/neutron/service.py, line 68, in start
2013-12-23 08:35:00.759 37948 TRACE neutron.service self.wsgi_app = 
_run_wsgi(self.app_name)
2013-12-23 08:35:00.759 37948 TRACE neutron.service   File 
/usr/lib/python2.6/site-packages/neutron/service.py, line 112, in _run_wsgi
2013-12-23 08:35:00.759 37948 TRACE neutron.service app = 
config.load_paste_app(app_name)
2013-12-23 08:35:00.759 37948 TRACE neutron.service   File 
/usr/lib/python2.6/site-packages/neutron/common/config.py, line 144, in 
load_paste_app
2013-12-23 08:35:00.759 37948 TRACE neutron.service app = 
deploy.loadapp(config:%s % config_path, name=app_name)
2013-12-23 08:35:00.759 37948 TRACE neutron.service   File 
/usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/loadwsgi.py,
 line 247, in loadapp
2013-12-23 08:35:00.759 37948 TRACE neutron.service return loadobj(APP, 
uri, name=name, **kw)
2013-12-23 08:35:00.759 37948 TRACE neutron.service   File 
/usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/loadwsgi.py,
 line 272, in loadobj
2013-12-23 08:35:00.759 37948 TRACE neutron.service return context.create()
2013-12-23 08:35:00.759 37948 TRACE neutron.service   File 
/usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/loadwsgi.py,
 line 710, in create
2013-12-23 08:35:00.759 37948 TRACE neutron.service return 
self.object_type.invoke(self)
2013-12-23 08:35:00.759 37948 TRACE neutron.service   File 
/usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/loadwsgi.py,
 line 144, in invoke
2013-12-23 08:35:00.759 37948 TRACE neutron.service **context.local_conf)
2013-12-23 08:35:00.759 37948 TRACE neutron.service   File 
/usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/util.py,
 line 59, in fix_call
2013-12-23 08:35:00.759 37948 TRACE neutron.service reraise(*exc_info)
2013-12-23 08:35:00.759 37948 TRACE neutron.service   File 
/usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/compat.py,
 line 22, in reraise
2013-12-23 08:35:00.759 37948 TRACE 
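
For reference, a minimal ml2 setup that starts cleanly on Havana looks
roughly like this sketch (file paths per the RDO packages; option names may
vary by release):

  # /etc/neutron/neutron.conf
  core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin

  # /etc/neutron/plugins/ml2/ml2_conf.ini
  [ml2]
  type_drivers = local,flat,vlan,gre,vxlan
  tenant_network_types = vlan
  mechanism_drivers = openvswitch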

[Yahoo-eng-team] [Bug 1258421] Re: NotRegistered: Dashboard with slug router is not registered.

2013-12-08 Thread li,chen
** Changed in: horizon
   Status: Incomplete => Invalid
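
If the config came from the distro packages, the usual culprit (an
assumption, not confirmed by this report) is a 'router' entry in the
dashboards tuple of /etc/openstack-dashboard/local_settings that names a
dashboard module which is not actually installed; removing it clears the
error:

  HORIZON_CONFIG = {
      'dashboards': ('project', 'admin', 'settings'),  # 'router' removed
      'default_dashboard': 'project',
  }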

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1258421

Title:
   NotRegistered: Dashboard with slug router is not registered.

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  I'm working under CentOS 6.4 + OpenStack Havana; everything works
  fine except Horizon.

  I get error in /var/log/httpd/error_log:

  [Fri Dec 06 01:27:42 2013] [error] REQ: curl -i -X GET 
http://192.168.11.11:35357/v2.0/tenants -H "User-Agent: python-keystoneclient" 
-H "Forwarded: for=192.168.4.254;by=python-keystoneclient" -H "X-Auth-Token: 
3626890532059c0fc72580224dd15ab1"
  [Fri Dec 06 01:27:42 2013] [error] INFO:urllib3.connectionpool:Starting new 
HTTP connection (1): 192.168.11.11
  [Fri Dec 06 01:27:42 2013] [error] DEBUG:urllib3.connectionpool:GET 
/v2.0/tenants HTTP/1.1 200 381
  [Fri Dec 06 01:27:42 2013] [error] RESP: [200] {'date': 'Fri, 06 Dec 2013 
07:27:42 GMT', 'content-type': 'application/json', 'content-length': '381', 
'vary': 'X-Auth-Token'}
  [Fri Dec 06 01:27:42 2013] [error] RESP BODY: {"tenants_links": [], 
"tenants": [{"description": "admin tenant", "enabled": true, "id": 
"45c69667e2a64c889719ef8d8e0dd098", "name": "admin"}, {"description": "Tenant 
for the openstack services", "enabled": true, "id": 
"4cc060c11bc046178c253aa9521aa152", "name": "services"}, {"description": null, 
"enabled": true, "id": "ca47670e792e46d48363dee7e7e43688", "name": 
"policy_test"}]}
  [Fri Dec 06 01:27:42 2013] [error]
  [Fri Dec 06 01:27:42 2013] [error] ERROR:django.request:Internal Server 
Error: /dashboard/admin/
  [Fri Dec 06 01:27:42 2013] [error] Traceback (most recent call last):
  [Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/core/handlers/base.py, line 136, in 
get_response
  [Fri Dec 06 01:27:42 2013] [error] response = response.render()
  [Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/response.py, line 104, in 
render
  [Fri Dec 06 01:27:42 2013] [error] 
self._set_content(self.rendered_content)
  [Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/response.py, line 81, in 
rendered_content
  [Fri Dec 06 01:27:42 2013] [error] content = template.render(context)
  [Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/base.py, line 140, in render
  [Fri Dec 06 01:27:42 2013] [error] return self._render(context)
  [Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/base.py, line 134, in _render
  [Fri Dec 06 01:27:42 2013] [error] return self.nodelist.render(context)
  [Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/base.py, line 823, in render
  [Fri Dec 06 01:27:42 2013] [error] bit = self.render_node(node, context)
  [Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/debug.py, line 74, in 
render_node
  [Fri Dec 06 01:27:42 2013] [error] return node.render(context)
  [Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/loader_tags.py, line 123, in 
render
  [Fri Dec 06 01:27:42 2013] [error] return compiled_parent._render(context)
  [Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/base.py, line 134, in _render
  [Fri Dec 06 01:27:42 2013] [error] return self.nodelist.render(context)
  [Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/base.py, line 823, in render
  [Fri Dec 06 01:27:42 2013] [error] bit = self.render_node(node, context)
  [Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/debug.py, line 74, in 
render_node
  [Fri Dec 06 01:27:42 2013] [error] return node.render(context)
  [Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/loader_tags.py, line 62, in 
render
  [Fri Dec 06 01:27:42 2013] [error] result = block.nodelist.render(context)
  [Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/base.py, line 823, in render
  [Fri Dec 06 01:27:42 2013] [error] bit = self.render_node(node, context)
  [Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/debug.py, line 74, in 
render_node
  [Fri Dec 06 01:27:42 2013] [error] return node.render(context)
  [Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/loader_tags.py, line 62, in 
render
  [Fri Dec 06 01:27:42 2013] [error] result = block.nodelist.render(context)
  [Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/base.py, line 823, in render
  [Fri Dec 06 01:27:42 2013] [error] bit = 

[Yahoo-eng-team] [Bug 1258421] [NEW] NotRegistered: Dashboard with slug router is not registered.

2013-12-05 Thread li,chen
Public bug reported:

I'm working under CentOS 6.4 + OpenStack Havana; everything works fine
except Horizon.

I get error in /var/log/httpd/error_log:

[Fri Dec 06 01:27:42 2013] [error] REQ: curl -i -X GET 
http://192.168.11.11:35357/v2.0/tenants -H "User-Agent: python-keystoneclient" 
-H "Forwarded: for=192.168.4.254;by=python-keystoneclient" -H "X-Auth-Token: 
3626890532059c0fc72580224dd15ab1"
[Fri Dec 06 01:27:42 2013] [error] INFO:urllib3.connectionpool:Starting new 
HTTP connection (1): 192.168.11.11
[Fri Dec 06 01:27:42 2013] [error] DEBUG:urllib3.connectionpool:GET 
/v2.0/tenants HTTP/1.1 200 381
[Fri Dec 06 01:27:42 2013] [error] RESP: [200] {'date': 'Fri, 06 Dec 2013 
07:27:42 GMT', 'content-type': 'application/json', 'content-length': '381', 
'vary': 'X-Auth-Token'}
[Fri Dec 06 01:27:42 2013] [error] RESP BODY: {"tenants_links": [], "tenants": 
[{"description": "admin tenant", "enabled": true, "id": 
"45c69667e2a64c889719ef8d8e0dd098", "name": "admin"}, {"description": "Tenant 
for the openstack services", "enabled": true, "id": 
"4cc060c11bc046178c253aa9521aa152", "name": "services"}, {"description": null, 
"enabled": true, "id": "ca47670e792e46d48363dee7e7e43688", "name": 
"policy_test"}]}
[Fri Dec 06 01:27:42 2013] [error]
[Fri Dec 06 01:27:42 2013] [error] ERROR:django.request:Internal Server Error: 
/dashboard/admin/
[Fri Dec 06 01:27:42 2013] [error] Traceback (most recent call last):
[Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/core/handlers/base.py, line 136, in 
get_response
[Fri Dec 06 01:27:42 2013] [error] response = response.render()
[Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/response.py, line 104, in 
render
[Fri Dec 06 01:27:42 2013] [error] self._set_content(self.rendered_content)
[Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/response.py, line 81, in 
rendered_content
[Fri Dec 06 01:27:42 2013] [error] content = template.render(context)
[Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/base.py, line 140, in render
[Fri Dec 06 01:27:42 2013] [error] return self._render(context)
[Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/base.py, line 134, in _render
[Fri Dec 06 01:27:42 2013] [error] return self.nodelist.render(context)
[Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/base.py, line 823, in render
[Fri Dec 06 01:27:42 2013] [error] bit = self.render_node(node, context)
[Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/debug.py, line 74, in 
render_node
[Fri Dec 06 01:27:42 2013] [error] return node.render(context)
[Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/loader_tags.py, line 123, in 
render
[Fri Dec 06 01:27:42 2013] [error] return compiled_parent._render(context)
[Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/base.py, line 134, in _render
[Fri Dec 06 01:27:42 2013] [error] return self.nodelist.render(context)
[Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/base.py, line 823, in render
[Fri Dec 06 01:27:42 2013] [error] bit = self.render_node(node, context)
[Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/debug.py, line 74, in 
render_node
[Fri Dec 06 01:27:42 2013] [error] return node.render(context)
[Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/loader_tags.py, line 62, in 
render
[Fri Dec 06 01:27:42 2013] [error] result = block.nodelist.render(context)
[Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/base.py, line 823, in render
[Fri Dec 06 01:27:42 2013] [error] bit = self.render_node(node, context)
[Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/debug.py, line 74, in 
render_node
[Fri Dec 06 01:27:42 2013] [error] return node.render(context)
[Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/loader_tags.py, line 62, in 
render
[Fri Dec 06 01:27:42 2013] [error] result = block.nodelist.render(context)
[Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/base.py, line 823, in render
[Fri Dec 06 01:27:42 2013] [error] bit = self.render_node(node, context)
[Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/debug.py, line 74, in 
render_node
[Fri Dec 06 01:27:42 2013] [error] return node.render(context)
[Fri Dec 06 01:27:42 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/template/loader_tags.py, line 155, in 
render
[Fri Dec 06 01:27:42 2013] [error] return 
self.render_template(self.template, context)
[Fri Dec 

[Yahoo-eng-team] [Bug 1249196] Re: nova list lost network information after run nova interface-attach

2013-11-25 Thread li,chen
Duplicate of https://bugs.launchpad.net/nova/+bug/1223859, which is
already fixed.


Thanks.
-chen


** Changed in: nova
   Status: In Progress => Invalid

** Changed in: nova
 Assignee: li,chen (chen-li) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1249196

Title:
   nova list lost network information after run nova interface-attach

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Hi list,

  I'm working under CentOS 6.4 + Havana.

  I have an instance that works fine in the cloud, displayed like:

  nova list
  +--------------------------------------+---------------+--------+----------------------------+
  | ID                                   | Name          | Status | Networks                   |
  +--------------------------------------+---------------+--------+----------------------------+
  | d3b16acd-2cae-47de--b3435db19399     | test-havana-4 | ACTIVE | flat_physnet1=191.101.0.14 |
  +--------------------------------------+---------------+--------+----------------------------+

  After I run the command "nova interface-attach --net-id ${net_id}
  test-havana-4", I get:

  nova list
  +--------------------------------------+---------------+--------+---------------+
  | ID                                   | Name          | Status | Networks      |
  +--------------------------------------+---------------+--------+---------------+
  | d3b16acd-2cae-47de--b3435db19399     | test-havana-4 | ACTIVE | vlan2=9.1.0.2 |
  +--------------------------------------+---------------+--------+---------------+

  The original network disappeared.

  But actually, in the VM, everything works fine.
  I can see two virtual NICs in the VM:
  2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast 
state UP mode DEFAULT qlen 1000
      link/ether fa:16:3e:c7:a7:d6 brd ff:ff:ff:ff:ff:ff
  3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast 
state UP mode DEFAULT qlen 1000
      link/ether fa:16:3e:23:eb:cc brd ff:ff:ff:ff:ff:ff

  So the result shown by nova list is not correct.
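
  A quick way to cross-check what the API actually thinks is attached is
  novaclient's interface-list (assuming a client new enough to ship it):

  nova interface-list test-havana-4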

  Thanks.
  -chen

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1249196/+subscriptions



[Yahoo-eng-team] [Bug 1220505] Re: IP will be allocated automate even it is a floating IP

2013-09-04 Thread li,chen
** Changed in: neutron
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1220505

Title:
  IP will be allocated automate  even it is a floating IP

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I'm working under CentOS 6.4 + Grizzly.

  I have created two networks: one for the instances' private network, and
  another for the public network (used for floating IPs).

  Everything works fine. But if I create an instance without specifying the
  private network id, such as:
  nova boot --flavor m1.tiny --image c4302a6f-196d-4d3e-be64-c9413e8d1f71 test1

  the instance will be started with both networks:
  | d99fd089-5afe-4397-b51b-767485b43383 | test1 | ACTIVE | public=192.168.14.29; private=10.1.0.243 |

  The network works fine, but I don't want the instance to have the public
  IP.

  I think that because I already assigned this public network to a router, it
  is clearly not an auto-assigned IP. And if it can be auto-assigned to an
  instance, it should behave like a floating IP, not like what it is now.

  Any ideas?
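
  One workaround is to pin the instance to the private network explicitly
  at boot with --nic (a sketch; net-id is the private network's UUID):

  nova boot --flavor m1.tiny --image c4302a6f-196d-4d3e-be64-c9413e8d1f71 \
      --nic net-id=<private-net-id> test1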

  Thanks.
  -chen

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1220505/+subscriptions



[Yahoo-eng-team] [Bug 1211224] Re: nova volume-attach failed with error info libvirtError: internal error unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk3' could not

2013-08-15 Thread li,chen
http://waipeng.wordpress.com/2013/05/20/centos-openstack-cinder-ceph/
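
The usual fix in that setup is registering the Ceph auth key as a libvirt
secret on each compute node so QEMU can open the rbd drive. A sketch of the
steps (the secret uuid and the client.cinder user name are assumptions;
adapt to your cluster):

  # on each compute node
  virsh secret-define --file secret.xml      # XML carrying the chosen uuid
  virsh secret-set-value --secret <uuid> \
      --base64 $(ceph auth get-key client.cinder)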

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1211224

Title:
  nova volume-attach failed with error info libvirtError: internal error
  unable to execute QEMU command '__com.redhat_drive_add': Device
  'drive-virtio-disk3' could not be initialized

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  2013-08-12 08:24:36.046 7483 TRACE nova.openstack.common.rpc.amqp pass
  2013-08-12 08:24:36.046 7483 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib64/python2.6/contextlib.py, line 23, in __exit__
  2013-08-12 08:24:36.046 7483 TRACE nova.openstack.common.rpc.amqp 
self.gen.next()
  2013-08-12 08:24:36.046 7483 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 195, in 
decorated_function
  2013-08-12 08:24:36.046 7483 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-08-12 08:24:36.046 7483 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 237, in 
decorated_function
  2013-08-12 08:24:36.046 7483 TRACE nova.openstack.common.rpc.amqp e, 
sys.exc_info())
  2013-08-12 08:24:36.046 7483 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib64/python2.6/contextlib.py, line 23, in __exit__
  2013-08-12 08:24:36.046 7483 TRACE nova.openstack.common.rpc.amqp 
self.gen.next()
  2013-08-12 08:24:36.046 7483 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 224, in 
decorated_function
  2013-08-12 08:24:36.046 7483 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-08-12 08:24:36.046 7483 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 2852, in 
attach_volume
  2013-08-12 08:24:36.046 7483 TRACE nova.openstack.common.rpc.amqp 
context, instance, mountpoint)
  2013-08-12 08:24:36.046 7483 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib64/python2.6/contextlib.py, line 23, in __exit__
  2013-08-12 08:24:36.046 7483 TRACE nova.openstack.common.rpc.amqp 
self.gen.next()
  2013-08-12 08:24:36.046 7483 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 2847, in 
attach_volume
  2013-08-12 08:24:36.046 7483 TRACE nova.openstack.common.rpc.amqp 
mountpoint, instance)
  2013-08-12 08:24:36.046 7483 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 2887, in 
_attach_volume
  2013-08-12 08:24:36.046 7483 TRACE nova.openstack.common.rpc.amqp 
connector)
  2013-08-12 08:24:36.046 7483 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib64/python2.6/contextlib.py, line 23, in __exit__
  2013-08-12 08:24:36.046 7483 TRACE nova.openstack.common.rpc.amqp 
self.gen.next()
  2013-08-12 08:24:36.046 7483 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 2878, in 
_attach_volume
  2013-08-12 08:24:36.046 7483 TRACE nova.openstack.common.rpc.amqp 
mountpoint)
  2013-08-12 08:24:36.046 7483 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py, line 981, in 
attach_volume
  2013-08-12 08:24:36.046 7483 TRACE nova.openstack.common.rpc.amqp 
disk_dev)
  2013-08-12 08:24:36.046 7483 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib64/python2.6/contextlib.py, line 23, in __exit__
  2013-08-12 08:24:36.046 7483 TRACE nova.openstack.common.rpc.amqp 
self.gen.next()
  2013-08-12 08:24:36.046 7483 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py, line 968, in 
attach_volume
  2013-08-12 08:24:36.046 7483 TRACE nova.openstack.common.rpc.amqp 
virt_dom.attachDeviceFlags(conf.to_xml(), flags)
  2013-08-12 08:24:36.046 7483 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/eventlet/tpool.py, line 187, in doit
  2013-08-12 08:24:36.046 7483 TRACE nova.openstack.common.rpc.amqp result 
= proxy_call(self._autowrap, f, *args, **kwargs)
  2013-08-12 08:24:36.046 7483 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/eventlet/tpool.py, line 147, in proxy_call
  2013-08-12 08:24:36.046 7483 TRACE nova.openstack.common.rpc.amqp rv = 
execute(f,*args,**kwargs)
  2013-08-12 08:24:36.046 7483 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/eventlet/tpool.py, line 76, in tworker
  2013-08-12 08:24:36.046 7483 TRACE nova.openstack.common.rpc.amqp rv = 
meth(*args,**kwargs)
  2013-08-12 08:24:36.046 7483 TRACE nova.openstack.common.rpc.amqp   File 

[Yahoo-eng-team] [Bug 1192873] Re: wrong password set in api-paste.ini, but still pass the auth

2013-06-22 Thread li,chen
It looks like this is by design with Keystone's PKI token mode.
More information is here:
http://blog.chmouel.com/2013/05/02/keystone-pki-tokens-overview/
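
With PKI tokens the auth_token middleware validates a token's CMS signature
locally against cached signing certificates, so once the certs are
downloaded, nova-api needs neither a per-request call to keystone nor the
admin credentials in api-paste.ini. A rough way to see the cached material
on the nova-api host (the signing_dir default is an assumption from the
Grizzly api-paste.ini):

  ls /tmp/keystone-signing-nova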

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1192873

Title:
  wrong password set in api-paste.ini, but still pass the auth

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I'm working on Grizzly, and I saw a really strange phenomenon in the
  keystone log.

  When I run the command nova list, I get two INFO lines:
  2013-06-19 15:01:26 INFO [access] 192.168.11.12 - - [19/Jun/2013:07:01:26 
+] POST http://keystone:5000/v2.0/tokens HTTP/1.0 200 5143
  2013-06-19 15:01:26 INFO [access] 192.168.11.11 - - [19/Jun/2013:07:01:26 
+] GET http://keystone:35357/v2.0/tokens/revoked HTTP/1.0 200 504

  I think this matches my understanding of how auth works, although I have
  questions about the revoked-token list.
  First, the user gets a new token, then nova verifies the token.

  Then, suddenly, the second log line disappeared; I now only get:
  2013-06-20 16:35:45 INFO [access] 192.168.11.12 - - [20/Jun/2013:08:35:45 
+] POST http://keystone:5000/v2.0/tokens HTTP/1.0 200 5143

  This raises a question: how does nova-api verify the user's token?
  So I edited /etc/nova/api-paste.ini, changed admin_password to a wrong
  value, cleaned all tokens in keystone, and restarted nova-api.
  I supposed this would cause nova list to fail auth.
  But I still get my instance list.

  How could this happen?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1192873/+subscriptions



[Yahoo-eng-team] [Bug 1186112] Re: keystone never delete expires token in database

2013-05-30 Thread li,chen
** Changed in: keystone
   Status: New => Invalid

** Converted to question:
   https://answers.launchpad.net/keystone/+question/229964

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1186112

Title:
  keystone never delete expires token in database

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  Keystone never deletes expired tokens from the database. And I noticed the
  token table has no index on “expired” or “valid”.
  Is this a bug, or is it designed to work this way? Why?

  Thanks.
  -chen
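
  For what it's worth, newer keystone releases grew a purge command for
  exactly this; on releases without it, a periodic SQL delete works. Both
  are sketches and assume the sql token backend:

  keystone-manage token_flush
  # or, from cron:
  mysql keystone -e "DELETE FROM token WHERE expires < NOW();"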

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1186112/+subscriptions



[Yahoo-eng-team] [Bug 1068048] Re: nova live-migration with ceph backend attempts to detach vda

2013-03-22 Thread li,chen
My error was due to missing libvirt configuration.
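
For the record, the missing pieces were the libvirt listener settings that
nova's default live_migration_uri (qemu+tcp) relies on. A sketch of the
usual settings (taken from the nova docs of that era; verify for your
distro):

  # /etc/libvirt/libvirtd.conf
  listen_tls = 0
  listen_tcp = 1
  auth_tcp = "none"

  # /etc/sysconfig/libvirtd
  LIBVIRTD_ARGS="--listen"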

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1068048

Title:
  nova live-migration with ceph backend attempts to detach vda

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Migration of an instance between two Folsom Compute nodes more often
  than not results in failure.

  When attempting to migrate a VM with
  block_device_mapping=vda:ceph_volume, the destination compute node attempts
  to detach vda.

  Destination host on failed migration: http://pastebin.com/3GRnZzkL
  Source host on failed migration: http://pastebin.com/dMann0vL

  About 4 attempts later, after no changes.
  Destination host on successful migration: http://pastebin.com/aSDN4Myn
  Source host on successful migration: http://pastebin.com/PW1LbeW8

  
  Command being run:
  [admin:admin] root@openstack-control:~# nova live-migration 
8c5cde28-904c-49ee-ac18-2885e87925b7 openstack-compute1

  
  Info:
  [demo:demo] root@openstack-control:~# nova boot --flavor 1 --image 
7c55a705-c642-4223-914a-44e777beb859 --key_name mykey --block_device_mapping 
vda=923f330d-a084-491d-a32b-b4e38ccc9ca1::10:0 l-vol-vm_1
  [demo:demo] root@openstack-control:~# nova boot --flavor 1 --image 
7c55a705-c642-4223-914a-44e777beb859 --key_name mykey --block_device_mapping 
vda=bb9a9ae7-e3c6-43f6-9aae-2701e1543adc::10:0 l-vol-vm_2
  [demo:demo] root@openstack-control:~# nova boot --flavor 1 --image 
7c55a705-c642-4223-914a-44e777beb859 --key_name mykey --block_device_mapping 
vda=74dcc66f-53fc-43ec-9f79-16e8442d4b5c::10:0 l-vol-vm_3

  [demo:demo] root@openstack-control:~# nova list
  +--------------------------------------+------------+--------+-------------------+
  | ID                                   | Name       | Status | Networks          |
  +--------------------------------------+------------+--------+-------------------+
  | 050c406b-3a3b-41b3-85ae-7ab4765f8d6e | l-vol-vm_1 | ACTIVE | demo-net=10.7.7.7 |
  | 8c5cde28-904c-49ee-ac18-2885e87925b7 | l-vol-vm_2 | ACTIVE | demo-net=10.7.7.8 |
  | 1fb5935f-8ab0-4464-8eb7-35b355f3c2ad | l-vol-vm_3 | ACTIVE | demo-net=10.7.7.3 |
  +--------------------------------------+------------+--------+-------------------+

  [demo:demo] root@openstack-control:~# cinder list
  +--------------------------------------+-----------+------------------+------+-------------+--------------------------------------+
  | ID                                   | Status    | Display Name     | Size | Volume Type | Attached to                          |
  +--------------------------------------+-----------+------------------+------+-------------+--------------------------------------+
  | 74dcc66f-53fc-43ec-9f79-16e8442d4b5c | in-use    | linux-vm_3-vol   | 10   | None        | 1fb5935f-8ab0-4464-8eb7-35b355f3c2ad |
  | 923f330d-a084-491d-a32b-b4e38ccc9ca1 | in-use    | linux-vm_1-vol   | 10   | None        | 050c406b-3a3b-41b3-85ae-7ab4765f8d6e |
  | bb9a9ae7-e3c6-43f6-9aae-2701e1543adc | in-use    | linux-vm_2-vol   | 10   | None        | 8c5cde28-904c-49ee-ac18-2885e87925b7 |
  | ccb71b41-efdf-4ae3-8b3e-ccb299a378e5 | available | Linux-Master-Vol | 10   | None        |                                      |
  +--------------------------------------+-----------+------------------+------+-------------+--------------------------------------+

  
  Compute nodes are both running:
  ceph-common 0.53-1precise
  libvirt-bin 0.9.8-2ubuntu17.4
  libvirt0 0.9.8-2ubuntu17.4
  nova-common 2012.2+git201210091907~precise-0ubuntu1
  nova-compute 2012.2+git201210091907~precise-0ubuntu1
  nova-compute-kvm 2012.2+git201210091907~precise-0ubuntu1
  openvswitch-common 1.4.0-1ubuntu1.3
  openvswitch-datapath-dkms 1.4.0-1ubuntu1.3
  openvswitch-switch 1.4.0-1ubuntu1.3
  python-libvirt 0.9.8-2ubuntu17.4
  python-nova 2012.2+git201210091907~precise-0ubuntu1
  python-novaclient 1:2.9.0.10+git201210101300~precise-0ubuntu1
  python-quantum 2012.2+git201209271425~precise-0ubuntu1
  python-quantumclient 1:2.1.1+git201209200900~precise-0ubuntu1
  quantum-common 2012.2+git201209271425~precise-0ubuntu1
  quantum-plugin-openvswitch 2012.2+git201209271425~precise-0ubuntu1
  quantum-plugin-openvswitch-agent 2012.2+git201209271425~precise-0ubuntu1

  Control node is running:
  ceph-common 0.53-1precise
  cinder-api 2012.2+git201209252100~precise-0ubuntu1
  cinder-common 2012.2+git201209252100~precise-0ubuntu1
  cinder-scheduler 2012.2+git201209252100~precise-0ubuntu1
  cinder-volume 2012.2+git201209252100~precise-0ubuntu1
  glance 2012.2+git201209250330~precise-0ubuntu1
  glance-api 2012.2+git201209250330~precise-0ubuntu1
  glance-common 2012.2+git201209250330~precise-0ubuntu1
  glance-registry 2012.2+git201209250330~precise-0ubuntu1
  keystone 

[Yahoo-eng-team] [Bug 1068048] Re: nova live-migration with ceph backend attempts to detach vda

2013-03-20 Thread li,chen
I have met the same error.

But when I run the libvirt command directly:
virsh migrate --live instance-0782 qemu+ssh://root@compute-node-2/system \
    --verbose

it works fine.
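
Since qemu+ssh works by hand while nova fails, the difference is likely the
URI and flags nova passes to libvirt. Worth checking in nova.conf (these are
the Folsom-era defaults as I recall them, so treat as a sketch):

  live_migration_uri = qemu+tcp://%s/system
  live_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER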

** Changed in: nova
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1068048

Title:
  nova live-migration with ceph backend attempts to detach vda

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  Migration of an instance between two Folsom Compute nodes more often
  than not results in failure.

  When attempting to migrate a VM with
  block_device_mapping=vda:ceph_volume, the destination compute node attempts
  to detach vda.

  Destination host on failed migration: http://pastebin.com/3GRnZzkL
  Source host on failed migration: http://pastebin.com/dMann0vL

  About 4 attempts later, after no changes.
  Destination host on successful migration: http://pastebin.com/aSDN4Myn
  Source host on successful migration: http://pastebin.com/PW1LbeW8

  
  Command being run:
  [admin:admin] root@openstack-control:~# nova live-migration 
8c5cde28-904c-49ee-ac18-2885e87925b7 openstack-compute1

  
  Info:
  [demo:demo] root@openstack-control:~# nova boot --flavor 1 --image 
7c55a705-c642-4223-914a-44e777beb859 --key_name mykey --block_device_mapping 
vda=923f330d-a084-491d-a32b-b4e38ccc9ca1::10:0 l-vol-vm_1
  [demo:demo] root@openstack-control:~# nova boot --flavor 1 --image 
7c55a705-c642-4223-914a-44e777beb859 --key_name mykey --block_device_mapping 
vda=bb9a9ae7-e3c6-43f6-9aae-2701e1543adc::10:0 l-vol-vm_2
  [demo:demo] root@openstack-control:~# nova boot --flavor 1 --image 
7c55a705-c642-4223-914a-44e777beb859 --key_name mykey --block_device_mapping 
vda=74dcc66f-53fc-43ec-9f79-16e8442d4b5c::10:0 l-vol-vm_3

  [demo:demo] root@openstack-control:~# nova list
  +--------------------------------------+------------+--------+-------------------+
  | ID                                   | Name       | Status | Networks          |
  +--------------------------------------+------------+--------+-------------------+
  | 050c406b-3a3b-41b3-85ae-7ab4765f8d6e | l-vol-vm_1 | ACTIVE | demo-net=10.7.7.7 |
  | 8c5cde28-904c-49ee-ac18-2885e87925b7 | l-vol-vm_2 | ACTIVE | demo-net=10.7.7.8 |
  | 1fb5935f-8ab0-4464-8eb7-35b355f3c2ad | l-vol-vm_3 | ACTIVE | demo-net=10.7.7.3 |
  +--------------------------------------+------------+--------+-------------------+

  [demo:demo] root@openstack-control:~# cinder list
  +--------------------------------------+-----------+------------------+------+-------------+--------------------------------------+
  | ID                                   | Status    | Display Name     | Size | Volume Type | Attached to                          |
  +--------------------------------------+-----------+------------------+------+-------------+--------------------------------------+
  | 74dcc66f-53fc-43ec-9f79-16e8442d4b5c | in-use    | linux-vm_3-vol   | 10   | None        | 1fb5935f-8ab0-4464-8eb7-35b355f3c2ad |
  | 923f330d-a084-491d-a32b-b4e38ccc9ca1 | in-use    | linux-vm_1-vol   | 10   | None        | 050c406b-3a3b-41b3-85ae-7ab4765f8d6e |
  | bb9a9ae7-e3c6-43f6-9aae-2701e1543adc | in-use    | linux-vm_2-vol   | 10   | None        | 8c5cde28-904c-49ee-ac18-2885e87925b7 |
  | ccb71b41-efdf-4ae3-8b3e-ccb299a378e5 | available | Linux-Master-Vol | 10   | None        |                                      |
  +--------------------------------------+-----------+------------------+------+-------------+--------------------------------------+

  
  Compute nodes are both running:
  ceph-common 0.53-1precise
  libvirt-bin 0.9.8-2ubuntu17.4
  libvirt0 0.9.8-2ubuntu17.4
  nova-common 2012.2+git201210091907~precise-0ubuntu1
  nova-compute 2012.2+git201210091907~precise-0ubuntu1
  nova-compute-kvm 2012.2+git201210091907~precise-0ubuntu1
  openvswitch-common 1.4.0-1ubuntu1.3
  openvswitch-datapath-dkms 1.4.0-1ubuntu1.3
  openvswitch-switch 1.4.0-1ubuntu1.3
  python-libvirt 0.9.8-2ubuntu17.4
  python-nova 2012.2+git201210091907~precise-0ubuntu1
  python-novaclient 1:2.9.0.10+git201210101300~precise-0ubuntu1
  python-quantum 2012.2+git201209271425~precise-0ubuntu1
  python-quantumclient 1:2.1.1+git201209200900~precise-0ubuntu1
  quantum-common 2012.2+git201209271425~precise-0ubuntu1
  quantum-plugin-openvswitch 2012.2+git201209271425~precise-0ubuntu1
  quantum-plugin-openvswitch-agent 2012.2+git201209271425~precise-0ubuntu1

  Control node is running:
  ceph-common 0.53-1precise
  cinder-api 2012.2+git201209252100~precise-0ubuntu1
  cinder-common 2012.2+git201209252100~precise-0ubuntu1
  cinder-scheduler 2012.2+git201209252100~precise-0ubuntu1
  cinder-volume 2012.2+git201209252100~precise-0ubuntu1
  glance 2012.2+git201209250330~precise-0ubuntu1
  glance-api 2012.2+git201209250330~precise-0ubuntu1
  glance-common 

[Yahoo-eng-team] [Bug 955191] Re: Wrong UUIDs accepted and reflected in db during cli cmd exectution

2013-02-01 Thread li,chen
I have tried this with the newest code; this bug does not exist any more.

** Changed in: keystone
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/955191

Title:
  Wrong UUIDs accepted and reflected in db during cli cmd exectution

Status in OpenStack Identity (Keystone):
  Fix Released

Bug description:
  After a devstack install, playing with keystone cmds. For the sub-cmd
  user-role-add, by mistake I used the role UUID for both the role and user
  UUIDs. The cmd executed successfully and the values were reflected in the db
  (tables: metadata and user_tenant_membership).

  deepak@deepak-devvm:~/devstack$ keystone role-list
  +----------------------------------+----------------------+
  | id                               | name                 |
  +----------------------------------+----------------------+
  | 3bc97204d7df40788c4bfa1b66ff3d14 | anotherrole          |
  | 597e1ab461df42d2847b02ae053112f7 | Member               |
  | 5ceef439c8ab4cfc8abee359ced4758c | admin                |
  | 650abec8e72645928ce2bfae1222b192 | KeystoneAdmin        |
  | b8cf5415a4d84791aa8c1049b4fc7c50 | KeystoneServiceAdmin |
  +----------------------------------+----------------------+
  deepak@deepak-devvm:~/devstack$ keystone user-role-add 
--user=3bc97204d7df40788c4bfa1b66ff3d14 --role=3bc97204d7df40788c4bfa1b66ff3d14 
--tenant_id=6d7ccff941e843ee86340a3a964720b7

  This is also true for user-role-remove  subcmd.

  Similarly, trying to use the tenant UUID for all three options, I get the
  error: 'NoneType' object has no attribute 'iteritems'

  deepak@deepak-devvm:~/devstack$ keystone tenant-list
  +----------------------------------+--------------------+---------+
  | id                               | name               | enabled |
  +----------------------------------+--------------------+---------+
  | 36e434a5c60445a6a46cb7c0c779f26f | demo               | True    |
  | 6d7ccff941e843ee86340a3a964720b7 | service            | True    |
  | c3ceb42f641a4227bcef9719fde82d82 | admin              | True    |
  | fb1fbc76098b4d9ea8fef457069a3175 | invisible_to_admin | True    |
  +----------------------------------+--------------------+---------+

  deepak@deepak-devvm:~/devstack$ keystone user-role-add 
--user=6d7ccff941e843ee86340a3a964720b7 --role=6d7ccff941e843ee86340a3a964720b7 
--tenant_id=6d7ccff941e843ee86340a3a964720b7
  'NoneType' object has no attribute 'iteritems'

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/955191/+subscriptions
