[Yahoo-eng-team] [Bug 1367575] [NEW] Some server actions do not work for v2.1 API

2014-09-10 Thread Ghanshyam Mann
Public bug reported:

The server actions below do not work for the V2.1 API.
1. start server
2. stop server
3. confirm resize
4. revert resize

These need to be converted to V2.1 from the V3 base code.
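For reference, each of these four actions is invoked by POSTing a small JSON body to the server's action endpoint, so a V2.1 port mostly needs to keep accepting the same bodies. A minimal sketch (the helper name is hypothetical; the bodies follow the Compute API action documents):

```python
import json

# Hypothetical helper: build the JSON body for each of the four actions
# listed above, as POSTed to /v2.1/{project_id}/servers/{server_id}/action.
def action_body(action):
    bodies = {
        "start": {"os-start": None},                # 1. start server
        "stop": {"os-stop": None},                  # 2. stop server
        "confirm_resize": {"confirmResize": None},  # 3. confirm resize
        "revert_resize": {"revertResize": None},    # 4. revert resize
    }
    return json.dumps(bodies[action])
```

Keeping these request bodies identical to V2 is what lets existing clients keep working against V2.1.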

** Affects: nova
 Importance: Undecided
 Assignee: Ghanshyam Mann (ghanshyammann)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Ghanshyam Mann (ghanshyammann)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367575

Title:
  Some server actions do not work for v2.1 API

Status in OpenStack Compute (Nova):
  New

Bug description:
  The server actions below do not work for the V2.1 API.
  1. start server
  2. stop server
  3. confirm resize
  4. revert resize

  These need to be converted to V2.1 from the V3 base code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1367575/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367588] [NEW] When a VM with FloatingIP is directly deleted without disassociating a FIP, the fip agent gateway port is not deleted.

2014-09-10 Thread Swaminathan Vasudevan
Public bug reported:

When a VM with FloatingIP is deleted without disassociating a FIP, the
internal FIP agent gateway port on that particular compute node is not
deleted.

1. Create a dvr router.
2. Attach a subnet to the router
3. Attach a Gateway to the router
4. Create a Floating IP
5. Create a VM on the above subnet
6. Associate the Floating IP to the VM's private IP.
7. Now do a port-list; you will see a port whose device_owner is router:floatingip_agent_gw.
8. Delete the VM (nova delete VM-name).
9. The port still remains.
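The missing cleanup can be sketched as plain bookkeeping (all names below are hypothetical, not Neutron code): the agent gateway port on a compute node should go away once the last floating IP hosted there is gone, and deleting a VM implicitly disassociates its floating IPs, so VM deletion must run the same cleanup path:

```python
# Hypothetical sketch: track which floating IPs are hosted on each
# compute node, and drop the router:floatingip_agent_gw port once the
# last one is gone -- whether the FIP was disassociated explicitly or
# implicitly by deleting the VM (the missing case in this bug).
class FipGatewayPorts:
    def __init__(self):
        self.fips_by_host = {}   # host -> set of floating IP ids
        self.gw_ports = set()    # hosts that currently have an agent gw port

    def associate(self, host, fip_id):
        self.fips_by_host.setdefault(host, set()).add(fip_id)
        self.gw_ports.add(host)

    def disassociate(self, host, fip_id):
        fips = self.fips_by_host.get(host, set())
        fips.discard(fip_id)
        if not fips:
            self.gw_ports.discard(host)  # last FIP gone: delete gw port

    def delete_vm(self, host, fip_ids):
        # Deleting a VM implicitly disassociates its floating IPs, so the
        # same cleanup must run here.
        for fip_id in fip_ids:
            self.disassociate(host, fip_id)
```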

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-dvr-backlog

** Tags added: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1367588

Title:
  When a VM with FloatingIP is directly deleted without disassociating a
  FIP, the fip agent gateway port is not deleted.

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When a VM with FloatingIP is deleted without disassociating a FIP, the
  internal FIP agent gateway port on that particular compute node is not
  deleted.

  1. Create a dvr router.
  2. Attach a subnet to the router
  3. Attach a Gateway to the router
  4. Create a Floating IP
  5. Create a VM on the above subnet
  6. Associate the Floating IP to the VM's private IP.
  7. Now do a port-list; you will see a port whose device_owner is router:floatingip_agent_gw.
  8. Delete the VM (nova delete VM-name).
  9. The port still remains.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1367588/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367590] [NEW] File exists: '/opt/stack/new/horizon/static/scss/assets'

2014-09-10 Thread Joshua Harlow
Public bug reported:

Seems like some kind of asset problem is occurring that is breaking the
integrated gate.

File exists: '/opt/stack/new/horizon/static/scss/assets'

http://logs.openstack.org/81/120281/3/check/check-tempest-dsvm-neutron-
full/fbe5341/logs/screen-horizon.txt.gz

This causes:

tempest.scenario.test_dashboard_basic_ops.TestDashboardBasicOps to
fail:

http://logs.openstack.org/81/120281/3/check/check-tempest-dsvm-neutron-
full/fbe5341/logs/testr_results.html.gz
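The "File exists" message is the classic failure mode of an os.makedirs call that is not tolerant of re-runs or races (Python 2 has no exist_ok flag). Whatever the actual Horizon fix turns out to be, the pattern it likely needs is a sketch like this (helper name hypothetical):

```python
import errno
import os
import tempfile

def ensure_dir(path):
    # Create the directory tree, tolerating it already existing;
    # re-raise any other OSError (permissions, path is a file, etc.).
    try:
        os.makedirs(path)
    except OSError as e:
        if e.errno != errno.EEXIST or not os.path.isdir(path):
            raise

base = tempfile.mkdtemp()
assets = os.path.join(base, "static", "scss", "assets")
ensure_dir(assets)
ensure_dir(assets)  # a second run no longer dies with "File exists"
```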

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1367590

Title:
  File exists: '/opt/stack/new/horizon/static/scss/assets'

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Seems like some kind of asset problem is occurring that is breaking
  the integrated gate.

  File exists: '/opt/stack/new/horizon/static/scss/assets'

  http://logs.openstack.org/81/120281/3/check/check-tempest-dsvm-
  neutron-full/fbe5341/logs/screen-horizon.txt.gz

  This causes:

  tempest.scenario.test_dashboard_basic_ops.TestDashboardBasicOps to
  fail:

  http://logs.openstack.org/81/120281/3/check/check-tempest-dsvm-
  neutron-full/fbe5341/logs/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1367590/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367596] [NEW] admin_state_up=False on l3-agent doesn't affect active routers

2014-09-10 Thread Yair Fried
Public bug reported:

When a cloud admin shuts down an l3-agent via the API (without evacuating its 
routers first), it stands to reason that they would like traffic to stop being 
routed via this agent (either for maintenance, or perhaps because a security 
breach was found).
Currently, even when an agent is administratively down, traffic keeps being 
routed as if nothing had changed.

The agent should set all interfaces inside the router namespaces to DOWN
so that no more traffic is routed.

Alternatively, the agent should set the routers' admin state to DOWN,
assuming this actually affects the routers.

Either way, the end result should be that traffic is not routed via the
agent when the admin brings it down.
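The first option amounts to generating, per router namespace, the ip link commands that bring its interfaces down. A hedged sketch (function name hypothetical; the qrouter-/qr-/qg- naming follows the usual l3-agent conventions):

```python
# Sketch of the first option: when the agent's admin_state_up goes False,
# build the commands that set every interface in each router namespace
# to DOWN. The qrouter-<id> namespace name is the usual l3-agent layout.
def down_commands(router_id, interfaces):
    ns = "qrouter-%s" % router_id
    return [
        ["ip", "netns", "exec", ns, "ip", "link", "set", "dev", dev, "down"]
        for dev in interfaces
    ]

cmds = down_commands("r1", ["qr-aaa", "qg-bbb"])
```

These command lists would then be handed to the agent's usual privileged-execution layer rather than run directly.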

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

  When cloud admin is shutting down l3-agent via API (without evacuating 
routers first) it stands to reason he would like traffic to stop routing via 
this agent (either for maintenance or maybe because a security breach was 
found...).
  Currently, even when an agent is down, traffic keeps routing without any 
effect.
  
  Agent should set all interfaces inside router namespace  to DOWN so no
  more traffic is being routed.
  
  Alternatively, agent should set routers admin state to DOWN, assuming
- this actually effects the router.
+ this actually affects the router.
  
  Either way, end result should be - traffic is not routed via agent when
  admin brings it down.

** Description changed:

  When cloud admin is shutting down l3-agent via API (without evacuating 
routers first) it stands to reason he would like traffic to stop routing via 
this agent (either for maintenance or maybe because a security breach was 
found...).
- Currently, even when an agent is down, traffic keeps routing without any 
effect.
+ Currently, even when an agent is down, traffic keeps routing without any 
affect.
  
  Agent should set all interfaces inside router namespace  to DOWN so no
  more traffic is being routed.
  
  Alternatively, agent should set routers admin state to DOWN, assuming
  this actually affects the router.
  
  Either way, end result should be - traffic is not routed via agent when
  admin brings it down.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1367596

Title:
  admin_state_up=False on l3-agent doesn't affect active routers

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When a cloud admin shuts down an l3-agent via the API (without evacuating its 
routers first), it stands to reason that they would like traffic to stop being 
routed via this agent (either for maintenance, or perhaps because a security 
breach was found).
  Currently, even when an agent is administratively down, traffic keeps being 
routed as if nothing had changed.

  The agent should set all interfaces inside the router namespaces to DOWN
  so that no more traffic is routed.

  Alternatively, the agent should set the routers' admin state to DOWN,
  assuming this actually affects the routers.

  Either way, the end result should be that traffic is not routed via the
  agent when the admin brings it down.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1367596/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362048] Re: SQLite timeout in glance image_cache

2014-09-10 Thread Jordan Pittier
Yeah, I thought for a minute that, to trigger a reverify in the gate, a
bug had to be logged against Tempest. But that's obviously wrong.

** No longer affects: tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1362048

Title:
  SQLite timeout in glance image_cache

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Hi,
  Sometimes I get the following stack trace in glance-api:

  GET /v1/images/42646b2b-cf0b-4b15-b011-19d0a6880ffb HTTP/1.1 200 4970175 2.403391
  for chunk in image_iter:
    File "/opt/stack/new/glance/glance/api/middleware/cache.py", line 281, in get_from_cache
      yield chunk
    File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
      self.gen.next()
    File "/opt/stack/new/glance/glance/image_cache/drivers/sqlite.py", line 373, in open_for_read
      with self.get_db() as db:
    File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
      return self.gen.next()
    File "/opt/stack/new/glance/glance/image_cache/drivers/sqlite.py", line 391, in get_db
      conn.execute('PRAGMA synchronous = NORMAL')
    File "/opt/stack/new/glance/glance/image_cache/drivers/sqlite.py", line 77, in execute
      return self._timeout(lambda: sqlite3.Connection.execute(
    File "/opt/stack/new/glance/glance/image_cache/drivers/sqlite.py", line 74, in _timeout
      sleep(0.05)
    File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 31, in sleep
      hub.switch()
    File "/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 187, in switch
      return self.greenlet.switch()
  Timeout: 2 seconds

  It also happens from time to time in the gate. See the following
  logstash request:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwicmV0dXJuIHNlbGYuZ3JlZW5sZXQuc3dpdGNoKClcIiBBTkQgZmlsZW5hbWU6XCJsb2dzL3NjcmVlbi1nLWFwaS50eHRcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwOTEyNjQ1NjU3NywibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

  
  This caused the gate failure of:
http://logs.openstack.org/22/116622/2/check/check-tempest-dsvm-postgres-full/f079ef9/logs/screen-g-api.txt.gz?
  (wait for this page to fully load, then grep for "Timeout: 2 seconds")

  Sorry for not being able to investigate more.

  Jordan

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1362048/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1249319] Re: evacuate on ceph backed volume fails

2014-09-10 Thread Dmitry Mescheryakov
The bug is mirrored in MOS there:
https://bugs.launchpad.net/mos/+bug/1367610

** No longer affects: mos

** Tags removed: customer-found

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1249319

Title:
  evacuate on ceph backed volume fails

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  When using nova evacuate to move an instance from one compute host to
  another, the command silently fails. The issue seems to be that the
  rebuild process builds an incorrect libvirt.xml file that no longer
  correctly references the ceph volume.

  Specifically under the disk section I see:

  <source protocol='rbd' name='volumes/instance-0004_disk'/>

  where in the original libvirt.xml the file was:

  <source protocol='rbd' name='volumes/volume-9e1a7835-b780-495c-a88a-4558be784dde'/>

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1249319/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367615] [NEW] lbaas.pool db schema has no foreign key to subnet_id

2014-09-10 Thread Li Ma
Public bug reported:

The lbaas.pool DB schema needs a foreign key constraint to subnet.id.

Otherwise, when invoking api.lbaas.pool_list in Horizon,
it will throw an exception because the subnet may have already been deleted.
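An illustration of the point with plain SQLite rather than the actual Neutron schema (table and column names simplified): with a foreign key on pool.subnet_id, the subnet can no longer be deleted out from under a pool, so pool_list never sees a dangling reference.

```python
import sqlite3

# Simplified pool/subnet schema showing the effect of the missing
# foreign key constraint. Names are illustrative, not Neutron's schema.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this per connection
conn.execute("CREATE TABLE subnet (id TEXT PRIMARY KEY)")
conn.execute(
    "CREATE TABLE pool (id TEXT PRIMARY KEY, "
    "subnet_id TEXT REFERENCES subnet(id))"
)
conn.execute("INSERT INTO subnet VALUES ('s1')")
conn.execute("INSERT INTO pool VALUES ('p1', 's1')")

# With the constraint in place, deleting the subnet while a pool still
# points at it is rejected instead of leaving a dangling subnet_id.
try:
    conn.execute("DELETE FROM subnet WHERE id = 's1'")
    deleted = True
except sqlite3.IntegrityError:
    deleted = False
```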

** Affects: horizon
 Importance: Undecided
 Assignee: Li Ma (nick-ma-z)
 Status: New

** Affects: neutron
 Importance: Undecided
 Assignee: Li Ma (nick-ma-z)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Li Ma (nick-ma-z)

** Also affects: horizon
   Importance: Undecided
   Status: New

** Changed in: horizon
 Assignee: (unassigned) => Li Ma (nick-ma-z)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1367615

Title:
  lbaas.pool db schema has no foreign key to subnet_id

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The lbaas.pool DB schema needs a foreign key constraint to subnet.id.

  Otherwise, when invoking api.lbaas.pool_list in Horizon,
  it will throw an exception because the subnet may have already been deleted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1367615/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367619] [NEW] MetadefNamespace.namespaces column should indicate nullable=False

2014-09-10 Thread Wayne
Public bug reported:

The metadef_namespaces table definition indicates the namespace column
as not accepting nulls. The related MetadefNamespace ORM class should
also indicate that the namespace column does not accept nulls with
nullable=False in the column definition.
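Illustration only (plain SQLite rather than the actual SQLAlchemy models): the table definition already rejects NULL in the namespace column, which is exactly the behavior that nullable=False on the ORM column would document.

```python
import sqlite3

# The NOT NULL constraint on the namespace column is what the ORM's
# nullable=False should mirror. Simplified schema for illustration.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE metadef_namespaces ("
    "id INTEGER PRIMARY KEY, namespace TEXT NOT NULL)"
)
try:
    conn.execute("INSERT INTO metadef_namespaces (namespace) VALUES (NULL)")
    inserted = True
except sqlite3.IntegrityError:
    inserted = False
```

Keeping the model in sync avoids the ORM happily flushing a row that the database will reject.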

** Affects: glance
 Importance: Undecided
 Assignee: Wayne (wayne-okuma)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Wayne (wayne-okuma)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1367619

Title:
  MetadefNamespace.namespaces column should indicate nullable=False

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  The metadef_namespaces table definition indicates the namespace column
  as not accepting nulls. The related MetadefNamespace ORM class should
  also indicate that the namespace column does not accept nulls with
  nullable=False in the column definition.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1367619/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1334164] Re: nova error migrating VMs with floating ips: 'FixedIP' object has no attribute '_sa_instance_state'

2014-09-10 Thread Roman Podoliaka
** No longer affects: mos/6.0.x

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1334164

Title:
  nova error migrating VMs with floating ips: 'FixedIP' object has no
  attribute '_sa_instance_state'

Status in Fuel: OpenStack installer that works:
  Fix Committed
Status in Fuel for OpenStack 5.0.x series:
  Fix Released
Status in Mirantis OpenStack:
  Fix Committed
Status in Mirantis OpenStack 5.0.x series:
  Fix Released
Status in Mirantis OpenStack 5.1.x series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Seeing this in conductor logs when migrating a VM with a floating IP
  assigned:

  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 122, in _do_dispatch
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/conductor/manager.py", line 1019, in network_migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher     migration)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/conductor/manager.py", line 527, in network_migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher     self.network_api.migrate_instance_start(context, instance, migration)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/network/api.py", line 94, in wrapped
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher     return func(self, context, *args, **kwargs)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/network/api.py", line 543, in migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher     self.network_rpcapi.migrate_instance_start(context, **args)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/network/rpcapi.py", line 350, in migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher     floating_addresses=floating_addresses)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/client.py", line 150, in call
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher     wait_for_reply=True, timeout=timeout)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/oslo/messaging/transport.py", line 90, in _send
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher     timeout=timeout)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 409, in send
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher     return self._send(target, ctxt, message, wait_for_reply, timeout)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 402, in _send
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher     raise result
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher AttributeError: 'FixedIP' object has no attribute '_sa_instance_state'

To manage notifications about this bug go to:
https://bugs.launchpad.net/fuel/+bug/1334164/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367633] [NEW] Server action 'createImage' does not work for v2.1 API

2014-09-10 Thread Ghanshyam Mann
Public bug reported:

The 'createImage' server action does not work for the V2.1 API.

This needs to be converted to V2.1 from the V3 base code.

** Affects: nova
 Importance: Undecided
 Assignee: Ghanshyam Mann (ghanshyammann)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Ghanshyam Mann (ghanshyammann)

** Summary changed:

- Server actions 'create image' does not work for v2.1 API
+ Server actions 'createImage' does not work for v2.1 API

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367633

Title:
  Server action 'createImage' does not work for v2.1 API

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  The 'createImage' server action does not work for the V2.1 API.

  This needs to be converted to V2.1 from the V3 base code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1367633/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367642] [NEW] revertResize/confirmResize server actions do not work for v2.1 API

2014-09-10 Thread Ghanshyam Mann
Public bug reported:

The revertResize/confirmResize server actions do not work for the v2.1 API.

These need to be converted to V2.1 from the V3 base code.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367642

Title:
  revertResize/confirmResize server actions do not work for v2.1 API

Status in OpenStack Compute (Nova):
  New

Bug description:
  The revertResize/confirmResize server actions do not work for the v2.1 API.

  These need to be converted to V2.1 from the V3 base code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1367642/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367650] [NEW] Use existing Metadef{type}NotFound Exceptions instead of MetadefRecordNotFound

2014-09-10 Thread Wayne
Public bug reported:

Currently, when metadef namespaces, objects, or properties are not found
by ID, a general MetadefRecordNotFound exception is thrown. It is better
to use the existing MetadefNamespace/Object/PropertyNotFound exceptions
instead of the too-general MetadefRecordNotFound exception.
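A sketch of the intended shape (the lookup helper and its signature are hypothetical; the exception names follow the ones in this report): keeping the general exception as a base class means existing callers that catch it still work, while new callers can catch the specific subtype.

```python
# Hypothetical sketch: specific NotFound exceptions sharing a common base.
class MetadefRecordNotFound(Exception):
    """Too general: says a record is missing, but not which kind."""

class MetadefNamespaceNotFound(MetadefRecordNotFound):
    pass

class MetadefObjectNotFound(MetadefRecordNotFound):
    pass

class MetadefPropertyNotFound(MetadefRecordNotFound):
    pass

def get_namespace(namespaces, ns_id):
    # Raise the specific exception instead of the general one, so callers
    # can distinguish a missing namespace from a missing object/property.
    try:
        return namespaces[ns_id]
    except KeyError:
        raise MetadefNamespaceNotFound(ns_id)
```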

** Affects: glance
 Importance: Undecided
 Assignee: Wayne (wayne-okuma)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Wayne (wayne-okuma)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1367650

Title:
  Use existing Metadef{type}NotFound Exceptions instead of
  MetadefRecordNotFound

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Currently, when metadef namespaces, objects, or properties are not
  found by ID, a general MetadefRecordNotFound exception is thrown. It is
  better to use the existing MetadefNamespace/Object/PropertyNotFound
  exceptions instead of the too-general MetadefRecordNotFound exception.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1367650/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367673] [NEW] Bug on auto-filled filename on container upload in object storage page

2014-09-10 Thread Ambroise CHRISTEA
Public bug reported:

When you choose a file for the first time, or without changing anything, the 
filename input is correctly filled with the chosen file's name.
But if, after choosing a file, you empty the filename input and then choose 
another file, the input is not auto-filled any more.

Test:

1) Go to the object storage page and choose a container
2) Click on Upload Object
3) Choose a file to upload
(its name is correctly put in the filename input)
4) Empty the filename input
5) Choose another file to upload
Result: the filename input stays empty.

** Affects: horizon
 Importance: Undecided
 Assignee: Ambroise CHRISTEA (ambroise-christea)
 Status: New

** Description changed:

- There's a little bug when you chose a file to upload.
  When you chose a file for the first time or without changing anything, the 
filename input is correctly filled with the chosen file's name.
  But if, after chosing a file, you empty the filename input, then chose 
another file, the input is not auto-filled anymore.
  
  Test:
  
  1) Go to object storage page and chose a container
  2) Click on Upload Object
  3) Chose a file to upload
  (his name is correctly put in the filename input)
  4) Empty the filename input
  5) Chose another file to upload
  Result : The filename input stays empty.

** Changed in: horizon
 Assignee: (unassigned) => Ambroise CHRISTEA (ambroise-christea)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1367673

Title:
  Bug on auto-filled filename on container upload in object storage page

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When you choose a file for the first time, or without changing anything, the 
filename input is correctly filled with the chosen file's name.
  But if, after choosing a file, you empty the filename input and then choose 
another file, the input is not auto-filled any more.

  Test:

  1) Go to the object storage page and choose a container
  2) Click on Upload Object
  3) Choose a file to upload
  (its name is correctly put in the filename input)
  4) Empty the filename input
  5) Choose another file to upload
  Result: the filename input stays empty.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1367673/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1366770] Re: RBDVolumeProxy is used incorrectly

2014-09-10 Thread Sean Dague
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1366770

Title:
  RBDVolumeProxy is used incorrectly

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  The constructor method for the class has the following signature:

  def __init__(self, driver, name, pool=None, snapshot=None,
   read_only=False):

  
  While it's used in multiple places without passing the driver argument:

  with RBDVolumeProxy(self, name) as vol:
  vol.resize(size)

  
  This probably means that the code does not work and is not covered by unit 
tests.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1366770/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367697] [NEW] Fix execution on udevadm program inside neutron agents

2014-09-10 Thread Geaaru
Public bug reported:

Hi,

inside the files ovs_neutron_agent.py and ofa_neutron_agent.py, the
udevadm program is called with an absolute path.

An absolute path is not the best solution here, because the path can
differ between distros. For example, on the Gentoo distro the udevadm
path is /usr/bin/udevadm.

There are two possible solutions:

a) use the name of the program directly, without an absolute path, so in
this case just udevadm (and eventually add a note that this program must
be present on the PATH of the neutron agent/daemon);

b) manage a variable like root_helper, for example udevadm_helper, to
permit configuration of the path from the configuration file.

Solution a) is probably sufficient in this case.
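On modern Python, option a) can also be combined with an explicit PATH lookup; a small sketch (the function name is hypothetical, not agent code):

```python
import shutil

# Sketch of option a): resolve udevadm from PATH instead of hard-coding
# an absolute path, so /sbin, /usr/bin (Gentoo), etc. all work. Falls
# back to the bare name and lets the execution layer report any error.
def udevadm_cmd(*args):
    prog = shutil.which("udevadm") or "udevadm"
    return [prog] + list(args)
```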

Thanks
Geaaru

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1367697

Title:
  Fix execution on udevadm program inside neutron agents

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Hi,

  inside the files ovs_neutron_agent.py and ofa_neutron_agent.py, the
  udevadm program is called with an absolute path.

  An absolute path is not the best solution here, because the path can
  differ between distros. For example, on the Gentoo distro the udevadm
  path is /usr/bin/udevadm.

  There are two possible solutions:

  a) use the name of the program directly, without an absolute path, so
  in this case just udevadm (and eventually add a note that this program
  must be present on the PATH of the neutron agent/daemon);

  b) manage a variable like root_helper, for example udevadm_helper, to
  permit configuration of the path from the configuration file.

  Solution a) is probably sufficient in this case.

  Thanks
  Geaaru

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1367697/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367091] Re: delimiter of Swift Container pseudo folder is *DOUBLE* slash

2014-09-10 Thread Akihiro Motoki
** Project changed: neutron => horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1367091

Title:
  delimiter of Swift Container pseudo folder is *DOUBLE* slash

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The delimiter of the Swift Container pseudo folder in the Container
  table is *two* slashes. A single slash would be sufficient.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1367091/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1241051] Re: inappropriate exception raised when getting a nonexistent interface of a server

2014-09-10 Thread Sean Dague
The code in this area seems to have addressed it:
https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L708-L726

Marking as Invalid now, please reopen if this is still an issue.

** Changed in: nova
 Assignee: wingwj (wingwj) = (unassigned)

** Changed in: nova
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1241051

Title:
  inappropriate exception raised when getting a nonexistent interface of
  a server

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I found this when I ran tempest after  adding some testcases of
  attach_interface.

  When getting a nonexistent interface of a server, a ComputeFault
  exception was raised, which is not appropriate, see below:
  {
      "computeFault": {
          "message": "The server has either erred or is incapable of performing the requested operation.",
          "code": 500
      }
  }

  the exception should be NotFound.

  I'll fix it ASAP.
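  A minimal sketch of the intended fix, under the assumption that the
  handler should translate a missing-port lookup into a 404 instead of
  letting it surface as a 500 (class and function names here are
  hypothetical, not Nova's actual code):

```python
# Hypothetical sketch: map an internal "not found" condition to a
# 404-style error rather than a generic 500 ComputeFault.
class NotFound(Exception):
    code = 404


def get_interface(interfaces, port_id):
    """Look up a port attachment; raise NotFound for a missing port."""
    try:
        return interfaces[port_id]
    except KeyError:
        # Without this translation, the bare KeyError would bubble up
        # and be rendered as a 500 ComputeFault.
        raise NotFound("Interface %s could not be found." % port_id)


try:
    get_interface({}, "missing-port")
except NotFound as e:
    print(e.code)  # 404
```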

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1241051/+subscriptions



[Yahoo-eng-team] [Bug 1343596] Re: utils not imported in store/swift.py

2014-09-10 Thread Zhi Yan Liu
Currently Glance uses glance_store instead of maintaining backend
drivers in its own tree, and the issue has already been fixed in
glance_store:
https://github.com/openstack/glance_store/blob/master/glance_store/_drivers/swift/store.py#L121

** Changed in: glance
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1343596

Title:
  utils not imported in store/swift.py

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  glance/store/swift.py does not import utils from glance.common, yet
  utils is used here:
  https://github.com/openstack/glance/blob/master/glance/store/swift.py#L127

  This causes the following traceback if swift returns an error (a 408
  in my case):

  2014-07-17 16:45:19.949 22001 TRACE glance.api.v1.upload_utils Traceback (most recent call last):
  2014-07-17 16:45:19.949 22001 TRACE glance.api.v1.upload_utils   File "/opt/stack/glance/glance/api/v1/upload_utils.py", line 96, in upload_data_to_store
  2014-07-17 16:45:19.949 22001 TRACE glance.api.v1.upload_utils     store)
  2014-07-17 16:45:19.949 22001 TRACE glance.api.v1.upload_utils   File "/opt/stack/glance/glance/store/__init__.py", line 338, in store_add_to_backend
  2014-07-17 16:45:19.949 22001 TRACE glance.api.v1.upload_utils     (location, size, checksum, metadata) = store.add(image_id, data, size)
  2014-07-17 16:45:19.949 22001 TRACE glance.api.v1.upload_utils   File "/opt/stack/glance/glance/store/swift.py", line 563, in add
  2014-07-17 16:45:19.949 22001 TRACE glance.api.v1.upload_utils     "Got error from Swift: %s") % utils.exception_to_str(e))
  2014-07-17 16:45:19.949 22001 TRACE glance.api.v1.upload_utils NameError: global name 'utils' is not defined
  2014-07-17 16:45:19.949 22001 TRACE glance.api.v1.upload_utils
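  The fix is simply importing utils from glance.common where it is used.
  As a stand-in for what `utils.exception_to_str` presumably does
  (behaviour inferred from the name, not from the actual Glance source):

```python
def exception_to_str(exc):
    """Stand-in for glance.common.utils.exception_to_str (assumed
    behaviour): render an exception as a string, with a fallback if
    str() itself fails."""
    try:
        return str(exc)
    except Exception:
        return "Unprintable exception %s" % type(exc).__name__


# With the import in place, the add() error path can format its message
# without raising a NameError:
message = "Got error from Swift: %s" % exception_to_str(ValueError("boom"))
print(message)  # Got error from Swift: boom
```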

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1343596/+subscriptions



[Yahoo-eng-team] [Bug 1367685] Re: neutron port-create returns "No more IP addresses available on network" error when the subnet has two allocation pools with the same start and end IPs

2014-09-10 Thread Numan Siddique
Thanks Sridhar.


** Changed in: neutron
   Status: New = Invalid

** Changed in: neutron
 Assignee: Numan Siddique (numansiddique) = (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1367685

Title:
  neutron port-create returns "No more IP addresses available on
  network" error when the subnet has two allocation pools with the same
  start and end IPs

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  neutron port-create returns a "No more IP addresses available on
  network" error when the subnet has two allocation pools with the same
  start and end IPs.
  On running 'neutron port-list' the port is listed.

  
   neutron net-create test
  Created a new network:
  +-----------------+--------------------------------------+
  | Field           | Value                                |
  +-----------------+--------------------------------------+
  | admin_state_up  | True                                 |
  | id              | 4151b7e5-7fdd-4975-a5f1-b45ee8a0ae73 |
  | name            | test                                 |
  | router:external | False                                |
  | shared          | False                                |
  | status          | ACTIVE                               |
  | subnets         |                                      |
  | tenant_id       | 5227a52545934d1ca0ad3b3fdb163863     |
  +-----------------+--------------------------------------+
  ubuntu@oc-ovsvm:~$ neutron subnet-create test 30.0.0.0/24 --allocation-pool start=30.0.0.2,end=30.0.0.2 --allocation-pool start=30.0.0.5,end=30.0.0.5
  Created a new subnet:
  +-------------------+------------------------------------------+
  | Field             | Value                                    |
  +-------------------+------------------------------------------+
  | allocation_pools  | {"start": "30.0.0.2", "end": "30.0.0.2"} |
  |                   | {"start": "30.0.0.5", "end": "30.0.0.5"} |
  | cidr              | 30.0.0.0/24                              |
  | dns_nameservers   |                                          |
  | enable_dhcp       | True                                     |
  | gateway_ip        | 30.0.0.1                                 |
  | host_routes       |                                          |
  | id                | 41b9e7db-3be0-4fa0-954c-6693119ba6ce     |
  | ip_version        | 4                                        |
  | ipv6_address_mode |                                          |
  | ipv6_ra_mode      |                                          |
  | name              |                                          |
  | network_id        | 4151b7e5-7fdd-4975-a5f1-b45ee8a0ae73     |
  | tenant_id         | 5227a52545934d1ca0ad3b3fdb163863         |
  +-------------------+------------------------------------------+
  ubuntu@oc-ovsvm:~$ 
  ubuntu@oc-ovsvm:~$ 
  ubuntu@oc-ovsvm:~$ neutron port-create test
  Created a new port:
  +-----------------------+---------------------------------------------------------------------------------+
  | Field                 | Value                                                                           |
  +-----------------------+---------------------------------------------------------------------------------+
  | admin_state_up        | True                                                                            |
  | allowed_address_pairs |                                                                                 |
  | binding:vnic_type     | normal                                                                          |
  | device_id             |                                                                                 |
  | device_owner          |                                                                                 |
  | fixed_ips             | {"subnet_id": "41b9e7db-3be0-4fa0-954c-6693119ba6ce", "ip_address": "30.0.0.2"} |
  | id                    | dbb80785-47ae-4a79-89a8-657c667e9bd2                                            |
  | mac_address           | fa:16:3e:27:77:6b                                                               |
  | name                  |                                                                                 |
  | network_id            | 4151b7e5-7fdd-4975-a5f1-b45ee8a0ae73                                            |
  | security_groups       | ac026f4e-8b28-4523-88dd-4191c2420aae                                            |
  | status                | DOWN                                                                            |
  | tenant_id             | 5227a52545934d1ca0ad3b3fdb163863                                                |
  +-----------------------+---------------------------------------------------------------------------------+

[Yahoo-eng-team] [Bug 1367705] [NEW] HA router transition state should be logged to l3-agent

2014-09-10 Thread Yair Fried
Public bug reported:

Any transition (master, backup, fault) should be logged to the relevant l3-agents:
fault  - ERROR level
backup - INFO (not DEBUG)
master - INFO
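The requested level mapping could be sketched as follows (a hypothetical
helper with assumed state names, not the actual l3-agent code):

```python
import logging

LOG = logging.getLogger("neutron.agent.l3")

# Levels as requested in the report: fault at ERROR, the two normal
# states at INFO so they show up without enabling debug logging.
TRANSITION_LEVELS = {
    "fault": logging.ERROR,
    "backup": logging.INFO,
    "master": logging.INFO,
}


def log_transition(router_id, state):
    # Log the HA state change at the level the report asks for.
    LOG.log(TRANSITION_LEVELS[state],
            "Router %s transitioned to %s", router_id, state)


log_transition("router-1", "master")
```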

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1367705

Title:
  HA router transition state should be logged to l3-agent

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Any transition (master, backup, fault) should be logged to the relevant l3-agents:
  fault  - ERROR level
  backup - INFO (not DEBUG)
  master - INFO

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1367705/+subscriptions



[Yahoo-eng-team] [Bug 1366911] Re: Nova does not ensure a valid token is available if snapshot process exceeds token lifetime

2014-09-10 Thread Sean Dague
I think the real issue here is that the clients need to revalidate
tokens, which they don't, so in this case it's really a glanceclient
bug.

** Also affects: python-glanceclient
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1366911

Title:
  Nova does not ensure a valid token is available if snapshot process
  exceeds token lifetime

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in OpenStack Compute (Nova):
  Invalid
Status in Python client library for Glance:
  New

Bug description:
  Recently we encountered the following issue due to the change in
  Icehouse for the default lifetime of a token before it expires. It's
  now 1 hour, while previously it was 8.

  If a snapshot process takes longer than an hour, when it goes to the
  next phase it will fail with a 401 Unauthorized error because it has
  an invalid token.

  In our specific example the following would take place:

  1. User would set a snapshot to begin and a token would be associated with 
this request.
  2. Snapshot would be created, compression time would take about 55 minutes. 
Enough to just push the snapshotting of this instance over the 60 minute mark.
  3. Upon Image Upload (Uploading image data for image in the logs) Nova 
would then return a 401 Unauthorized error stating This server could not 
verify that you are authorized to access the document you requested. Either you 
supplied the wrong credentials (e.g., bad password), or your browser does not 
understand how to supply the credentials required.

  Icehouse 2014.1.2, KVM as the hypervisor.

  The workaround is to specify a longer token timeout - however, this
  limits the ability to set short token expirations.

  A possible solution may be to get a new/refresh the token if the time
  has exceeded the timeout.
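  The suggested solution could look roughly like this (a hypothetical
  sketch, not nova's or glanceclient's actual code): track the token's
  expiry and re-issue it on use once the lifetime has been exceeded.

```python
import time


class TokenManager:
    """Hypothetical sketch of the proposed fix: re-issue a token when
    the current one has expired, instead of reusing it blindly."""

    def __init__(self, issue_fn, lifetime):
        self._issue = issue_fn          # callable that gets a new token
        self._lifetime = lifetime       # token lifetime in seconds
        self._token = None
        self._expires_at = 0.0

    def get_token(self, now=None):
        now = time.time() if now is None else now
        if self._token is None or now >= self._expires_at:
            self._token = self._issue()
            self._expires_at = now + self._lifetime
        return self._token


counter = {"n": 0}


def issue():
    counter["n"] += 1
    return "token-%d" % counter["n"]


mgr = TokenManager(issue, lifetime=3600)
print(mgr.get_token(now=0))     # token-1: freshly issued
print(mgr.get_token(now=100))   # token-1: still valid, reused
print(mgr.get_token(now=4000))  # token-2: expired, refreshed
```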

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1366911/+subscriptions



[Yahoo-eng-team] [Bug 1367716] [NEW] Caching of main menu panel list

2014-09-10 Thread Radomir Dopieralski
Public bug reported:

As we are adding more logic to panels for hiding or showing them
depending on what is available in other services, we will inevitably
have to call out to those other services' APIs to check things. Since
the main menu with the panel list is displayed practically on every
page, those calls would be made on almost every single request to
Horizon. This would slow things considerably, and is also very
inconvenient to mock in tests.

The solution to this is to introduce a caching mechanism, which would
keep the list of dashboards and panels to be displayed in the session's
cache, and which could be conveniently mocked as a whole in tests.
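The proposed mechanism can be sketched as follows (hypothetical names,
with a plain dict standing in for the Django session; not Horizon's
actual implementation):

```python
# Hypothetical sketch of the proposed caching: compute the panel list
# once per session and reuse it, so the expensive service API calls run
# only on a cache miss.
calls = {"n": 0}


def compute_panel_list():
    calls["n"] += 1  # stands in for calls out to other services' APIs
    return ["instances", "volumes", "images"]


def get_panels(session):
    """Return the panel list, computing it only on the first request."""
    if "panel_list" not in session:
        session["panel_list"] = compute_panel_list()
    return session["panel_list"]


session = {}  # a dict standing in for request.session
print(get_panels(session))
print(get_panels(session))  # second call is served from the cache
print(calls["n"])           # 1
```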

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1367716

Title:
  Caching of main menu panel list

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  As we are adding more logic to panels for hiding or showing them
  depending on what is available in other services, we will inevitably
  have to call out to those other services' APIs to check things. Since
  the main menu with the panel list is displayed practically on every
  page, those calls would be made on almost every single request to
  Horizon. This would slow things considerably, and is also very
  inconvenient to mock in tests.

  The solution to this is to introduce a caching mechanism, which would
  keep the list of dashboards and panels to be displayed in the
  session's cache, and which could be conveniently mocked as a whole in
  tests.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1367716/+subscriptions



[Yahoo-eng-team] [Bug 1367720] [NEW] Upgrade of jquery-ui

2014-09-10 Thread Radomir Dopieralski
Public bug reported:

The version of jquery-ui library that is currently bundled in Horizon is
quite ancient and buggy. Most distributions will try to replace those
files with their own versions, which are much higher than what we have.
This can result in unpredictable breakage, that we won't be able to
reproduce in our development environments.

The solution is to upgrade jquery-ui to a recent version, so that we can
test it more realistically before the release.

** Affects: horizon
 Importance: Undecided
 Status: New

** Changed in: horizon
Milestone: None = juno-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1367720

Title:
  Upgrade of jquery-ui

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The version of jquery-ui library that is currently bundled in Horizon
  is quite ancient and buggy. Most distributions will try to replace
  those files with their own versions, which are much higher than what
  we have. This can result in unpredictable breakage, that we won't be
  able to reproduce in our development environments.

  The solution is to upgrade jquery-ui to a recent version, so that we
  can test it more realistically before the release.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1367720/+subscriptions



[Yahoo-eng-team] [Bug 1362513] Re: libvirt: connect_volume scans all LUNs, which takes 2 mins when host is connected with about 900 Luns

2014-09-10 Thread Sean Dague
Honestly, I consider this a performance wishlist item. If there are
patches that's cool, but this seems like a very edge case configuration.

** Changed in: nova
   Status: New = Opinion

** Changed in: nova
   Importance: Undecided = Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362513

Title:
  libvirt: connect_volume scans all LUNs, which takes 2 mins  when host
  is connected with about 900 Luns

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  Tested OpenStack version: IceHouse 2014.1, master branch still has this issue.
  Host version: CentOS 6, 2.6.32-431.el6.x86_64

  I have done some work to test the performance of LUN scanning with multipath, using the same approach as Nova does.
  In my test, the host was connected with almost 900 LUNs.
  1. I use 'iscsiadm' with '--rescan' to discover LUNs, which takes almost 15s. It seems '--rescan' causes the kernel to rescan all the LUNs which have already been connected to the host.
  2. I use 'multipath -r' to construct multipath devices, which takes almost 2 minutes. I found that 'multipath -r' reconstructs all multipath devices against the LUNs.
  The two steps scan all of the LUNs, and in total cost more than 2 minutes.

  According to connect_volume in nova.virt.libvirt.volume.py:
  https://github.com/openstack/nova/blob/master/nova/virt/libvirt/volume.py#L252,
  Nova also uses these two steps to detect a new multipath volume. The
  two steps scan all of the LUNs, including all the others already
  connected. So if a host has a large number of LUNs connected to it,
  connect_volume will be very slow.

  I think connect_volume needn't scan all of the LUNs; it only needs to
  scan the LUN specified by connection_info.
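  One known kernel mechanism for scanning a single LUN is writing a
  specific "channel target lun" triple to the host's sysfs scan file,
  instead of the wildcard "- - -" that rescans everything. A sketch that
  only builds the path and payload (a hypothetical helper, not
  necessarily the fix Nova would adopt):

```python
def targeted_scan_command(host, channel, target, lun):
    """Hypothetical helper: build the sysfs write that asks the kernel
    to scan one specific LUN rather than rescanning every device on the
    host (the wildcard payload would be "- - -")."""
    path = "/sys/class/scsi_host/host%d/scan" % host
    payload = "%d %d %d" % (channel, target, lun)
    return path, payload


# e.g. scan only LUN 42 on host3, channel 0, target 0:
path, payload = targeted_scan_command(3, 0, 0, 42)
print(path, payload)
```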

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362513/+subscriptions



[Yahoo-eng-team] [Bug 1361631] Re: Do not query datetime type field when it is not needed

2014-09-10 Thread Sean Dague
This is a really deep optimization, I think something like this needs to
come up as a spec on database optimization not a one off bug.

** Changed in: nova
   Status: New = Opinion

** Changed in: nova
   Importance: Undecided = Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361631

Title:
  Do not query datetime type field when it is not needed

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  Creating a datetime object is more expensive than any other type used
  in the database.

  Creating the datetime object is especially expensive for MySQL
  drivers, because creating the object from a datetime string
  representation is expensive.

  When listing 4k instances with details without the volumes_extension,
  approximately 2 seconds are spent in the MySQL driver, of which 1
  second is spent parsing the datetime (DateTime_or_None).

  The datetime format is only useful when you intend to present the time
  to an end user; for the system, float or integer representations are
  more efficient.

  * consider changing the store type to float or int
  * exclude the datetime fields from the query when they will not be
    part of an API response
  * remove the datetime fields from the database where they are not
    really needed
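  A rough illustration of the cost difference between parsing a datetime
  string and converting a float (exact numbers depend on the machine):

```python
import datetime
import timeit


def parse_datetime():
    # Roughly what a per-row datetime conversion does: parse a string.
    return datetime.datetime.strptime("2014-09-10 12:46:48",
                                      "%Y-%m-%d %H:%M:%S")


def parse_float():
    # The cheaper alternative representation suggested in the report.
    return float("1410353208.0")


t_datetime = timeit.timeit(parse_datetime, number=10000)
t_float = timeit.timeit(parse_float, number=10000)
print("datetime parsing slower by a factor of %.1f" % (t_datetime / t_float))
```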

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1361631/+subscriptions



[Yahoo-eng-team] [Bug 1360260] Re: 'allow_same_net_traffic=true' has no effect

2014-09-10 Thread Sean Dague
** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1360260

Title:
  'allow_same_net_traffic=true' has no effect

Status in OpenStack Compute (Nova):
  New
Status in OpenStack Manuals:
  New

Bug description:
  environment: Ubuntu trusty, icehouse from repos. 
  Setup per 'Openstack Installation Guide for Ubuntu 12.04/14.04 LTS' 

  **brief**

  two instances X and Y are members of security group A. Despite the
  following explicit setting in nova.conf:

  allow_same_net_traffic=True

  ...the instances are only allowed to communicate according to the
  rules defined in security group A.

  
  **detail**

  I first noticed this attempting to run iperf between two instances on
  the same security network; they were unable to connect via the default
  TCP port 5001.

  They were able to ping... Looking at the rules for the security group
  they are associated with, ping was allowed, so I then suspected the
  security group rules were being applied to all communication, despite
  them being in the same security group.

  To test, I added rules to group A that allowed all communication, and
  associated the rules with itself (i.e. security group A) and voila,
  they could talk!

  I then thought I had remembered incorrectly that by default all
  traffic is allowed between instances on the same security group, so I
  double-checked the documentation, but according to the documentation I
  had remembered correctly:

  allow_same_net_traffic = True (BoolOpt) Whether to allow network
  traffic from same network

  ...I searched through my nova.conf files, but there was no
  'allow_same_net_traffic' entry, so the default ought to be True,
  right? Just to be sure, I explicitly added:

  allow_same_net_traffic = True

  to nova.conf and restarted nova services, but the security group rules
  are still being applied to communication between instances that are
  associated with the same security group.

  I thought the 'default' security group might be a special case, so I
  tested on another security group, but still get the same behaviour.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1360260/+subscriptions



[Yahoo-eng-team] [Bug 1367729] [NEW] glance-manage db metadefs commands don't use transactions

2014-09-10 Thread Pawel Koniszewski
Public bug reported:

The current approach of loading metadata definitions into the database
does not use transactions. Data is inserted without transactions, so if
something fails inside a single file, e.g. inserting properties, the
user has to manually remove all related data from the database, repair
the JSON file and call 'db load_metadefs' again.

To prevent such a scenario, db load_metadefs should use transactions, so
that if something fails the user won't have to care about the
consistency of the data in the database. Also, to keep the data seeding
script consistent, all methods should be rewritten to use sessions
instead of engines.
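The proposed behaviour can be illustrated with stdlib sqlite3 (the real
fix would use SQLAlchemy sessions, as the report says): wrap one file's
inserts in a transaction, so a mid-file failure leaves nothing behind.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE props (name TEXT NOT NULL)")

# The last row violates NOT NULL, standing in for a broken metadef file.
rows = [("cpu",), ("ram",), (None,)]
try:
    with conn:  # one transaction per file: commit on success, else rollback
        conn.executemany("INSERT INTO props VALUES (?)", rows)
except sqlite3.IntegrityError:
    pass

# Nothing was committed, so no manual cleanup is needed before retrying.
count = conn.execute("SELECT COUNT(*) FROM props").fetchone()[0]
print(count)  # 0
```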

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1367729

Title:
  glance-manage db metadefs commands don't use transactions

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  The current approach of loading metadata definitions into the database
  does not use transactions. Data is inserted without transactions, so
  if something fails inside a single file, e.g. inserting properties,
  the user has to manually remove all related data from the database,
  repair the JSON file and call 'db load_metadefs' again.

  To prevent such a scenario, db load_metadefs should use transactions,
  so that if something fails the user won't have to care about the
  consistency of the data in the database. Also, to keep the data
  seeding script consistent, all methods should be rewritten to use
  sessions instead of engines.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1367729/+subscriptions



[Yahoo-eng-team] [Bug 1359808] Re: extended_volumes slows down the nova instance list by 40..50%

2014-09-10 Thread Sean Dague
Per similar bugs I've found in here, performance improvements are really
hard to track as bugs because they are a point in time behavior that
doesn't really have a repeat scenario. We should take a push on specs
for performance improvements. I think we all know that large numbers of
API calls take a while, but what's acceptable is still up for debate.

** Changed in: nova
   Status: New = Opinion

** Changed in: nova
   Importance: Undecided = Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1359808

Title:
  extended_volumes slows down the nova instance list by 40..50%

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  When listing ~4096 instances, the nova API (n-api) service has high
  CPU (100%) usage because it issues an individual SELECT for every
  server's block_device_mapping. This adds ~20-25 sec to the response
  time.

  Please use more efficient way for getting the block_device_mapping,
  when multiple instance queried.

  This line initiates the individual SELECT:
  
https://github.com/openstack/nova/blob/4b414adce745c07fbf2003ec25a5e554e634c8b7/nova/api/openstack/compute/contrib/extended_volumes.py#L32
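  The suggested optimisation amounts to one query with an IN clause
  instead of one SELECT per instance. A sketch against an in-memory
  table (a hypothetical helper, not the actual Nova code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bdm (instance_uuid TEXT, volume_id TEXT)")
conn.executemany("INSERT INTO bdm VALUES (?, ?)",
                 [("i1", "v1"), ("i2", "v2"), ("i2", "v3")])


def bdms_for_instances(conn, uuids):
    """Fetch block device mappings for many instances in one query."""
    marks = ",".join("?" * len(uuids))
    rows = conn.execute(
        "SELECT instance_uuid, volume_id FROM bdm "
        "WHERE instance_uuid IN (%s)" % marks, uuids).fetchall()
    result = {u: [] for u in uuids}  # instances with no BDMs map to []
    for uuid, vol in rows:
        result[uuid].append(vol)
    return result


print(bdms_for_instances(conn, ["i1", "i2", "i3"]))
```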

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1359808/+subscriptions



[Yahoo-eng-team] [Bug 1358667] Re: No need to check suffix in _create_image method

2014-09-10 Thread Sean Dague
** Changed in: nova
   Status: New = Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1358667

Title:
  No need to check suffix in _create_image method

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  I don't think we need to check the suffix in the _create_image method:
  https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L2715
  because it just wastes time. Is that a bug?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1358667/+subscriptions



[Yahoo-eng-team] [Bug 1360987] Re: shelve command should restore instance's original state

2014-09-10 Thread Sean Dague
** Changed in: nova
   Status: New = Opinion

** Changed in: nova
   Importance: Undecided = Wishlist

** Changed in: nova
   Status: Opinion = Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1360987

Title:
  shelve command should restore instance's original state

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  A paused instance can be shelved and then unshelved, but the instance
  then ends up ACTIVE instead of in the PAUSED state.

  This also applies to the SUSPENDED state.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1360987/+subscriptions



[Yahoo-eng-team] [Bug 1367741] [NEW] The `fault` should be included to log error message when vmware error happens

2014-09-10 Thread Jaroslav Henner
Public bug reported:

... because it can contain important information. For example:

(TaskInfo){
   key = task-34928
   task = 
  (task){
 value = task-34928
 _type = Task
  }
   description = 
  (LocalizableMessage){
 key = com.vmware.vim.vpxd.vpx.vmprov.CreateDestinationVm
 message = Copying Virtual Machine configuration
  }
   name = CreateVM_Task
   descriptionId = Folder.createVm
   entity = 
  (entity){
 value = group-v3
 _type = Folder
  }
   entityName = vm
   state = error
   cancelled = False
   cancelable = False
   error = 
  (LocalizedMethodFault){
 fault = 
(PlatformConfigFault){
   text = Failed to attach port
}
 localizedMessage = An error occurred during host configuration.
  }
   reason = 
  (TaskReasonUser){
 userName = root
  }
   queueTime = 2014-09-10 12:46:48.283593
   startTime = 2014-09-10 12:46:48.290384
   completeTime = 2014-09-10 12:46:49.798797
   eventChainId = 157130
 }

Currently, only the localizedMessage is used to produce the log line in
nova/virt/vmwareapi/driver.py _poll_task(). In this case, that message
is too general; the important reason is in error.fault.text, so it
should be reported as well.
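A sketch of the proposed change, using stub objects whose attribute
names follow the dump above (not the actual driver code):

```python
# Stub objects mirroring the shape of the suds TaskInfo error above.
class Fault:
    text = "Failed to attach port"


class Error:
    fault = Fault()
    localizedMessage = "An error occurred during host configuration."


def format_task_error(error):
    """Build a log message that includes the nested fault text, not just
    the generic localizedMessage."""
    msg = error.localizedMessage
    fault_text = getattr(getattr(error, "fault", None), "text", None)
    if fault_text:
        msg = "%s (%s)" % (msg, fault_text)
    return msg


print(format_task_error(Error()))
# An error occurred during host configuration. (Failed to attach port)
```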

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367741

Title:
  The `fault` should be included to log error message when vmware error
  happens

Status in OpenStack Compute (Nova):
  New

Bug description:
  ... because it can contain important information. For example:

  (TaskInfo){
 key = task-34928
 task = 
(task){
   value = task-34928
   _type = Task
}
 description = 
(LocalizableMessage){
   key = com.vmware.vim.vpxd.vpx.vmprov.CreateDestinationVm
   message = Copying Virtual Machine configuration
}
 name = CreateVM_Task
 descriptionId = Folder.createVm
 entity = 
(entity){
   value = group-v3
   _type = Folder
}
 entityName = vm
 state = error
 cancelled = False
 cancelable = False
 error = 
(LocalizedMethodFault){
   fault = 
  (PlatformConfigFault){
 text = Failed to attach port
  }
   localizedMessage = An error occurred during host configuration.
}
 reason = 
(TaskReasonUser){
   userName = root
}
 queueTime = 2014-09-10 12:46:48.283593
 startTime = 2014-09-10 12:46:48.290384
 completeTime = 2014-09-10 12:46:49.798797
 eventChainId = 157130
   }

  Currently, only the localizedMessage is used to produce the log line
  in nova/virt/vmwareapi/driver.py _poll_task(). In this case, that
  message is too general; the important reason is in error.fault.text,
  so it should be reported as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1367741/+subscriptions



[Yahoo-eng-team] [Bug 1367740] [NEW] Assignment backends raise non-suggestive exception in grant CRUD

2014-09-10 Thread Samuel de Medeiros Queiroz
Public bug reported:

When getting or deleting a grant, if something goes wrong, a
RoleNotFound exception is thrown. [1]-[6]

In cases where the role exists and the combination of other arguments is
invalid, this is a non-suggestive exception, because it tells us "Could
not find role: %(role_id)s".

We should create a new exception called GrantNotFound and use it in
those cases.

[1] 
https://github.com/openstack/keystone/blob/master/keystone/assignment/backends/sql.py#L201-L202
[2] 
https://github.com/openstack/keystone/blob/master/keystone/assignment/backends/sql.py#L219-L220

[3] 
https://github.com/openstack/keystone/blob/master/keystone/assignment/backends/kvs.py#L527-L528
[4] 
https://github.com/openstack/keystone/blob/master/keystone/assignment/backends/kvs.py#L549-L550

[5] 
https://github.com/openstack/keystone/blob/master/keystone/assignment/backends/ldap.py#L353-L354
[6] 
https://github.com/openstack/keystone/blob/master/keystone/assignment/backends/ldap.py#L381-L382
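The proposed GrantNotFound could be modelled on the existing NotFound
message pattern; a hypothetical sketch (the exact message format and
parameter names are assumptions, not Keystone's actual code):

```python
class NotFound(Exception):
    """Minimal stand-in for a keystone-style not-found exception base."""
    message_format = "Not found"

    def __init__(self, **kwargs):
        super().__init__(self.message_format % kwargs)


class GrantNotFound(NotFound):
    # Names the whole failing combination instead of blaming the role.
    message_format = ("Could not find grant for role %(role_id)s "
                      "on %(target_id)s for %(actor_id)s")


e = GrantNotFound(role_id="r1", target_id="project-1", actor_id="user-1")
print(str(e))
# Could not find grant for role r1 on project-1 for user-1
```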

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1367740

Title:
  Assignment backends raise non-suggestive exception in grant CRUD

Status in OpenStack Identity (Keystone):
  New

Bug description:
  When getting or deleting a grant, if something goes wrong, a
  RoleNotFound exception is thrown. [1]-[6]

  In cases where the role exists and the combination of other arguments
  is invalid, this is a non-suggestive exception, because it tells us
  "Could not find role: %(role_id)s".

  We should create a new exception called GrantNotFound and use it in
  those cases.

  [1] 
https://github.com/openstack/keystone/blob/master/keystone/assignment/backends/sql.py#L201-L202
  [2] 
https://github.com/openstack/keystone/blob/master/keystone/assignment/backends/sql.py#L219-L220

  [3] 
https://github.com/openstack/keystone/blob/master/keystone/assignment/backends/kvs.py#L527-L528
  [4] 
https://github.com/openstack/keystone/blob/master/keystone/assignment/backends/kvs.py#L549-L550

  [5] 
https://github.com/openstack/keystone/blob/master/keystone/assignment/backends/ldap.py#L353-L354
  [6] 
https://github.com/openstack/keystone/blob/master/keystone/assignment/backends/ldap.py#L381-L382

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1367740/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1360720] Re: nova network always report error when boot VM

2014-09-10 Thread Sean Dague
The error message is correct: flat_interface can't be a bridge.

** Changed in: nova
   Status: New = Invalid

** Changed in: ubuntu
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1360720

Title:
  nova network always report error when boot VM

Status in OpenStack Compute (Nova):
  Invalid
Status in Ubuntu:
  Invalid

Bug description:
  When booting a VM with nova network, it always reports NovaException:
  Failed to add interface: device br100 is a bridge device itself; can't
  enslave a bridge device to a bridge device., and this causes my VM to
  fail to start.

  Reproduce steps:
  1) Install OpenStack with Devstack
  jay@jay001:~/src/devstack$ cat localrc 
  HOST_IP=192.168.0.103
  ADMIN_PASSWORD=nova
  MYSQL_PASSWORD=nova
  RABBIT_PASSWORD=nova
  SERVICE_PASSWORD=nova
  SERVICE_TOKEN=tokentoken
  FLAT_INTERFACE=br100
  #VIRT_DRIVER=docker
  #RECLONE=yes
   
  VERBOSE=True
  LOG_COLOR=True
  SCREEN_LOGDIR=/opt/stack/logs
   
  #disable_service horizon
   
  #OFFLINE=False
  #OFFLINE=True
  #ENABLED_SERVICES+=,heat,h-api-cfn,h-api-cw,h-eng
  ENABLED_SERVICES+=,heat,h-api,h-api-cfn,h-api-cw,h-eng
  
#IMAGE_URLS+=,http://fedorapeople.org/groups/heat/prebuilt-jeos-images/F16-x86_64-cfntools.qcow2,http://fedorapeople.org/groups/heat/prebuilt-jeos-images/F16-i386-cfntools.qcow2;
  
#ENABLED_SERVICES+=ceilometer-acompute,ceilometer-acentral,ceilometer-collector,ceilometer-api,ceilometer-alarm-notify,ceilometer-alarm-eval
  #CEILOMETER_BACKEND=mysql
  2) After install finished, boot a VM
  jay@jay001:~/src/devstack$ nova boot --image  cirros-0.3.2-x86_64-uec 
--flavor 1 vm1
  
  +--------------------------------------+----------------------------------------------------------------+
  | Property                             | Value                                                          |
  +--------------------------------------+----------------------------------------------------------------+
  | OS-DCF:diskConfig                    | MANUAL                                                         |
  | OS-EXT-AZ:availability_zone          | nova                                                           |
  | OS-EXT-SRV-ATTR:host                 | -                                                              |
  | OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                              |
  | OS-EXT-SRV-ATTR:instance_name        | instance-0002                                                  |
  | OS-EXT-STS:power_state               | 0                                                              |
  | OS-EXT-STS:task_state                | scheduling                                                     |
  | OS-EXT-STS:vm_state                  | building                                                       |
  | OS-SRV-USG:launched_at               | -                                                              |
  | OS-SRV-USG:terminated_at             | -                                                              |
  | accessIPv4                           |                                                                |
  | accessIPv6                           |                                                                |
  | adminPass                            | F5NXNAVJMXNi                                                   |
  | config_drive                         |                                                                |
  | created                              | 2014-08-23T23:54:50Z                                           |
  | flavor                               | m1.tiny (1)                                                    |
  | hostId                               |                                                                |
  | id                                   | 48eec530-4279-423c-a134-0bbb19287d72                           |
  | image                                | cirros-0.3.2-x86_64-uec (b8e84ec2-a63c-4f24-b9bb-6532f507668e) |
  | key_name                             | -                                                              |
  | metadata                             | {}                                                             |
  | name                                 | vm1                                                            |
  | os-extended-volumes:volumes_attached | []                                                             |
  | progress                             | 0                                                              |
  | security_groups                      | default                                                        |
  | status                               | BUILD                                                          |
  | tenant_id                            | 0694df50d3c34d128160d9a4a90db5ff                               |

[Yahoo-eng-team] [Bug 1354258] Re: nova-api will go wrong if AZ name has space in it when memcache is used

2014-09-10 Thread Sean Dague
This is fundamentally a memcache issue. If it is not addressed there, we
could URI-encode the keys ourselves.

** Changed in: nova
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1354258

Title:
  nova-api will go wrong if AZ name has space in it when memcache is used

Status in OpenStack Compute (Nova):
  Invalid
Status in The Oslo library incubator:
  Triaged

Bug description:
  Description:
  1. memcache is enabled
  2. AZ name has a space in it, such as vmware region

  Then the nova-api will go wrong:
  [root@rs-144-1 init.d]# nova list
  ERROR: The server has either erred or is incapable of performing the 
requested operation. (HTTP 500) (Request-ID: 
req-a26c1fd3-ce08-4875-aacf-f8db8f73b089)

  Reason:
  Memcache uses the AZ name as part of the cache key and validates it. It
  raises an error if there are unexpected characters in the key.

  LOG in /var/log/api.log

  2014-08-08 03:22:50.525 23184 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/nova/availability_zones.py, line 145, in 
get_instance_availability_zone
  2014-08-08 03:22:50.525 23184 TRACE nova.api.openstack az = 
cache.get(cache_key)
  2014-08-08 03:22:50.525 23184 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/memcache.py, line 898, in get
  2014-08-08 03:22:50.525 23184 TRACE nova.api.openstack return 
self._get('get', key)
  2014-08-08 03:22:50.525 23184 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/memcache.py, line 847, in _get
  2014-08-08 03:22:50.525 23184 TRACE nova.api.openstack self.check_key(key)
  2014-08-08 03:22:50.525 23184 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/memcache.py, line 1065, in check_key
  2014-08-08 03:22:50.525 23184 TRACE nova.api.openstack #Control 
characters not allowed)
  2014-08-08 03:22:50.525 23184 TRACE nova.api.openstack 
MemcachedKeyCharacterError: Control characters not allowed
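  A minimal sketch of the "URI encode ourselves" workaround suggested in the
  comment above: percent-encode the AZ name before it becomes part of the
  memcache key, so spaces and control characters never reach
  python-memcached's check_key(). The 'azcache-' key format is an assumption
  for illustration, not the exact nova key layout:

```python
# Percent-encode the AZ name so keys like "vmware region" become
# memcache-safe ("vmware%20region").
try:
    from urllib.parse import quote  # Python 3
except ImportError:
    from urllib import quote  # Python 2, as in the 2014 codebase


def make_az_cache_key(instance_uuid, az_name):
    # safe='' encodes every reserved character, including spaces
    return 'azcache-%s-%s' % (instance_uuid, quote(az_name, safe=''))
```

  With this, get_instance_availability_zone could pass the encoded key to
  cache.get() without tripping MemcachedKeyCharacterError.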

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1354258/+subscriptions



[Yahoo-eng-team] [Bug 1352405] Re: Storage on hypervisors page incorrect for shared storage

2014-09-10 Thread Sean Dague
Not really a nova issue; shared-storage accounting in the cluster is not
really its job.

** Changed in: nova
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1352405

Title:
  Storage on hypervisors page incorrect for shared storage

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  The storage total and storage used shown on the Hypervisors page do not take 
the shared-storage case into account.
  We have shared storage for /var/lib/nova/instances (currently using Gluster), 
and Horizon computes a simple sum of the usage across the compute nodes, so the 
total and used figures are incorrect.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1352405/+subscriptions



[Yahoo-eng-team] [Bug 1355623] [NEW] nova floating-ip-create need pool name

2014-09-10 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

#
# help menu
#
[root@cnode35-m ~(keystone_admin)]# nova help floating-ip-create
usage: nova floating-ip-create [floating-ip-pool]

Allocate a floating IP for the current tenant.

Positional arguments:
  floating-ip-pool Name of Floating IP Pool. (Optional)

#
# error log
#
[root@cnode35-m ~(keystone_admin)]# nova floating-ip-create
ERROR: FloatingIpPoolNotFound: Floating ip pool not found. (HTTP 404) 
(Request-ID: req-224995d7-b1bf-4b82-83f6-d9259c1ca265)

** Affects: nova
 Importance: Undecided
 Status: Confirmed

-- 
nova floating-ip-create need pool name
https://bugs.launchpad.net/bugs/1355623
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).



[Yahoo-eng-team] [Bug 1355623] Re: nova floating-ip-create need pool name

2014-09-10 Thread vishal yadav
** Changed in: python-novaclient
   Status: New = Confirmed

** Project changed: python-novaclient = nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1355623

Title:
  nova floating-ip-create need pool name

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  #
  # help menu
  #
  [root@cnode35-m ~(keystone_admin)]# nova help floating-ip-create
  usage: nova floating-ip-create [floating-ip-pool]

  Allocate a floating IP for the current tenant.

  Positional arguments:
floating-ip-pool Name of Floating IP Pool. (Optional)

  #
  # error log
  #
  [root@cnode35-m ~(keystone_admin)]# nova floating-ip-create
  ERROR: FloatingIpPoolNotFound: Floating ip pool not found. (HTTP 404) 
(Request-ID: req-224995d7-b1bf-4b82-83f6-d9259c1ca265)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1355623/+subscriptions



[Yahoo-eng-team] [Bug 1352105] Re: can not get network info from metadata server when cloud is set to use dhcp

2014-09-10 Thread Sean Dague
Isn't that the design point? If you've configured your cloud to use DHCP,
that should be how networks are configured.

Marking as Opinion; please move it back if you think there is a bug here.

** Summary changed:

- can not get network info from metadata server
+ can not get network info from metadata server when cloud is set to use dhcp

** Changed in: nova
   Status: New = Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1352105

Title:
  can not get network info from metadata server when cloud is set to use
  dhcp

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  I want to use cloud-init to fetch the network info and write it into the 
network configuration file, but this failed because cloud-init never received 
the network info.
  In the vm, I used curl to test the metadata as below
  #curl http://169.254.169.254/openstack/latest/meta_data.json
  the response didn't contain any network info.
  See following code in nova/virt/netutils.py
  if subnet_v4:
  if subnet_v4.get_meta('dhcp_server') is not None:
  continue
  It seems that when the VM uses neutron DHCP, the network info is omitted 
from meta_data.json
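  A minimal, self-contained sketch of the check quoted above (simplified from
  nova/virt/netutils.py; representing subnets as plain dicts is an assumption
  for illustration):

```python
def subnets_in_metadata(subnets_v4):
    """Return the v4 subnets whose network info would reach meta_data.json.

    Subnets with a DHCP server are skipped, mirroring the quoted
    'continue' in netutils -- which is why a VM on a neutron-DHCP
    network sees no network info in the metadata.
    """
    return [s for s in subnets_v4 if s.get('dhcp_server') is None]
```

  So for a subnet dict carrying a 'dhcp_server' address, nothing is emitted,
  matching the behavior reported here.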

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1352105/+subscriptions



[Yahoo-eng-team] [Bug 1358624] Re: Config Drive not created when boot from volume

2014-09-10 Thread Sean Dague
Please provide the heat templates as well as the nova config; I expect part
of the issue is the options that heat is passing to Nova.

** Changed in: nova
   Status: New = Incomplete

** Also affects: heat
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1358624

Title:
  Config Drive not created when boot from volume

Status in Orchestration API (Heat):
  New
Status in OpenStack Compute (Nova):
  Incomplete

Bug description:
  I am trying to launch my instance from cinder volume with heat template, but 
no success. 
  My template will first create a volume from a glance image and then boot from 
this volume.
  It seems that libvirt is looking for 
/var/lib/nova/instances/instance-id/disk; since my instance is booted from 
volume, the instance-id directory does not even exist.

  In scheduler.log, I see:
  Error from last host: compute-08 (node compute-08.local): [u'Traceback (most 
recent call last):\n', u'  File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 1305, in 
_build_instance\nset_access_ip=set_access_ip)\n', u'  File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 393, in 
decorated_function\nreturn function(self, context, *args, **kwargs)\n', u'  
File /usr/lib/python2.6/site-packages/nova/compute/manager.py, line 1717, in 
_spawn\nLOG.exception(_(\'Instance failed to spawn\'), 
instance=instance)\n', u'  File 
/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py, line 68, 
in __exit__\nsix.reraise(self.type_, self.value, self.tb)\n', u'  File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 1714, in 
_spawn\nblock_device_info)\n', u'  File 
/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py, line 2265, in 
spawn\nblock_device_info)\n', u'  File /usr/lib/python2.6/site-packages/
 nova/virt/libvirt/driver.py, line 3656, in _create_domain_and_network\n
power_on=power_on)\n', u'  File 
/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py, line 3559, in 
_create_domain\ndomain.XMLDesc(0))\n', u'  File 
/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py, line 68, 
in __exit__\nsix.reraise(self.type_, self.value, self.tb)\n', u'  File 
/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py, line 3554, in 
_create_domain\ndomain.createWithFlags(launch_flags)\n', u'  File 
/usr/lib/python2.6/site-packages/eventlet/tpool.py, line 179, in doit\n
result = proxy_call(self._autowrap, f, *args, **kwargs)\n', u'  File 
/usr/lib/python2.6/site-packages/eventlet/tpool.py, line 139, in proxy_call\n 
   rv = execute(f,*args,**kwargs)\n', u'  File 
/usr/lib/python2.6/site-packages/eventlet/tpool.py, line 77, in tworker\n
rv = meth(*args,**kwargs)\n', u'  File 
/usr/lib64/python2.6/site-packages/libvirt.py, line 708, in crea
 teWithFlags\nif ret == -1: raise libvirtError 
(\'virDomainCreateWithFlags() failed\', dom=self)\n', ulibvirtError: cannot 
open file '/var/lib/nova/instances/3310bf49-82d0-467d-8b9b-de8d453cdef8/disk': 
No such file or directory\n]

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1358624/+subscriptions



[Yahoo-eng-team] [Bug 1284232] Re: test_pause_paused_server fails with Cannot 'unpause' while instance is in vm_state active

2014-09-10 Thread Sean Dague
This bug is old enough that we no longer have the logs. Marking as
invalid as we can't really move forward on this particular instance.

** Changed in: nova
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1284232

Title:
  test_pause_paused_server fails with Cannot 'unpause' while instance
  is in vm_state active

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Tempest test is failing a negative test, test_pause_paused_server.

  From http://logs.openstack.org/29/74229/5/check/check-tempest-dsvm-
  postgres-full/0878106/

  2014-02-21 22:54:26.645 | 
  2014-02-21 22:54:26.645 | traceback-1: {{{
  2014-02-21 22:54:26.645 | Traceback (most recent call last):
  2014-02-21 22:54:26.645 |   File 
tempest/services/compute/json/servers_client.py, line 371, in unpause_server
  2014-02-21 22:54:26.646 | return self.action(server_id, 'unpause', None, 
**kwargs)
  2014-02-21 22:54:26.646 |   File 
tempest/services/compute/json/servers_client.py, line 196, in action
  2014-02-21 22:54:26.646 | post_body)
  2014-02-21 22:54:26.646 |   File tempest/common/rest_client.py, line 177, 
in post
  2014-02-21 22:54:26.646 | return self.request('POST', url, headers, body)
  2014-02-21 22:54:26.646 |   File tempest/common/rest_client.py, line 352, 
in request
  2014-02-21 22:54:26.646 | resp, resp_body)
  2014-02-21 22:54:26.647 |   File tempest/common/rest_client.py, line 406, 
in _error_checker
  2014-02-21 22:54:26.647 | raise exceptions.Conflict(resp_body)
  2014-02-21 22:54:26.647 | Conflict: An object with that identifier already 
exists
  2014-02-21 22:54:26.647 | Details: {u'message': uCannot 'unpause' while 
instance is in vm_state active, u'code': 409}
  2014-02-21 22:54:26.647 | }}}
  2014-02-21 22:54:26.647 | 
  2014-02-21 22:54:26.648 | Traceback (most recent call last):
  2014-02-21 22:54:26.648 |   File 
tempest/api/compute/servers/test_servers_negative.py, line 135, in 
test_pause_paused_server
  2014-02-21 22:54:26.648 | 
self.client.wait_for_server_status(self.server_id, 'PAUSED')
  2014-02-21 22:54:26.648 |   File 
tempest/services/compute/json/servers_client.py, line 160, in 
wait_for_server_status
  2014-02-21 22:54:26.648 | raise_on_error=raise_on_error)
  2014-02-21 22:54:26.648 |   File tempest/common/waiters.py, line 89, in 
wait_for_server_status
  2014-02-21 22:54:26.649 | raise exceptions.TimeoutException(message)
  2014-02-21 22:54:26.649 | TimeoutException: Request timed out
  2014-02-21 22:54:26.649 | Details: Server 
e09307a3-d2b3-4b43-895e-b14952a15aea failed to reach PAUSED status and task 
state None within the required time (196 s). Current status: ACTIVE. Current 
task state: pausing.
  2014-02-21 22:54:26.649 |

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1284232/+subscriptions



[Yahoo-eng-team] [Bug 1356736] Re: When executing 'vm resize' command there is no response after a long time if the vm is down

2014-09-10 Thread Sean Dague
It's not really clear what the bug is here; there is nothing actionable in
this report. Please describe the sequence of events required to reproduce,
the OpenStack version, and what you believe the expected behavior is.

** Changed in: nova
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1356736

Title:
  When executing 'vm resize' command there is no response after a long
  time if the vm is down

Status in OpenStack Compute (Nova):
  Invalid
Status in OpenStack Core Infrastructure:
  Invalid

Bug description:
  It seems that if the VM is down when the resize command is sent, the
  command hangs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1356736/+subscriptions



[Yahoo-eng-team] [Bug 1367757] [NEW] nuage_net_partition_router_mapping model does not match migration

2014-09-10 Thread Henry Gessau
Public bug reported:

class NetPartitionRouter(model_base.BASEV2):
__tablename__ = nuage_net_partition_router_mapping
net_partition_id = sa.Column(sa.String(36),
 sa.ForeignKey('nuage_net_partitions.id',
 ondelete=CASCADE),
 primary_key=True)
router_id = sa.Column(sa.String(36),
  sa.ForeignKey('routers.id', ondelete=CASCADE),
  primary_key=True)
nuage_router_id = sa.Column(sa.String(36))


op.create_table(
'net_partition_router_mapping',
sa.Column('net_partition_id', sa.String(length=36), nullable=False),
sa.Column('router_id', sa.String(length=36), nullable=False),
sa.Column('nuage_router_id', sa.String(length=36), nullable=True),
sa.ForeignKeyConstraint(['net_partition_id'], ['net_partitions.id'],
ondelete='CASCADE'),
sa.ForeignKeyConstraint(['router_id'], ['routers.id'],
ondelete='CASCADE'),
sa.PrimaryKeyConstraint('router_id'),
)

The migration is missing PK constraint on net_partition_id.

** Affects: neutron
 Importance: Undecided
 Assignee: Ann Kamyshnikova (akamyshnikova)
 Status: New


** Tags: db

** Tags added: db

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1367757

Title:
  nuage_net_partition_router_mapping model does not match migration

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  class NetPartitionRouter(model_base.BASEV2):
  __tablename__ = nuage_net_partition_router_mapping
  net_partition_id = sa.Column(sa.String(36),
   sa.ForeignKey('nuage_net_partitions.id',
   ondelete=CASCADE),
   primary_key=True)
  router_id = sa.Column(sa.String(36),
sa.ForeignKey('routers.id', ondelete=CASCADE),
primary_key=True)
  nuage_router_id = sa.Column(sa.String(36))

  
  op.create_table(
  'net_partition_router_mapping',
  sa.Column('net_partition_id', sa.String(length=36), nullable=False),
  sa.Column('router_id', sa.String(length=36), nullable=False),
  sa.Column('nuage_router_id', sa.String(length=36), nullable=True),
  sa.ForeignKeyConstraint(['net_partition_id'], ['net_partitions.id'],
  ondelete='CASCADE'),
  sa.ForeignKeyConstraint(['router_id'], ['routers.id'],
  ondelete='CASCADE'),
  sa.PrimaryKeyConstraint('router_id'),
  )

  The migration is missing PK constraint on net_partition_id.
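  The intended fix can be sketched with plain SQLAlchemy metadata: the
  migration's create_table should declare a composite primary key over both
  mapping columns, matching the NetPartitionRouter model. This is an
  illustrative standalone model (table and column names taken from the report
  above), not the actual alembic script:

```python
import sqlalchemy as sa

metadata = sa.MetaData()

# What the corrected migration should produce: a composite PK spanning
# net_partition_id AND router_id, as in the model's primary_key=True
# markers, instead of router_id alone.
mapping = sa.Table(
    'nuage_net_partition_router_mapping', metadata,
    sa.Column('net_partition_id', sa.String(36), nullable=False),
    sa.Column('router_id', sa.String(36), nullable=False),
    sa.Column('nuage_router_id', sa.String(36), nullable=True),
    sa.PrimaryKeyConstraint('net_partition_id', 'router_id'),
)
```

  In the alembic migration itself this would amount to changing
  sa.PrimaryKeyConstraint('router_id') to
  sa.PrimaryKeyConstraint('net_partition_id', 'router_id').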

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1367757/+subscriptions



[Yahoo-eng-team] [Bug 1285530] Re: exception message should use gettextutils

2014-09-10 Thread Sean Dague
** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1285530

Title:
  exception message should use gettextutils

Status in Cinder:
  Incomplete
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Triaged
Status in OpenStack Neutron (virtual network service):
  Incomplete
Status in The Oslo library incubator:
  Triaged
Status in Messaging API for OpenStack:
  Incomplete
Status in Python client library for Ironic:
  In Progress
Status in Python client library for Neutron:
  In Progress
Status in OpenStack Object Storage (Swift):
  In Progress
Status in Tuskar:
  In Progress
Status in Tuskar UI:
  In Progress

Bug description:
  What To Translate

  At present the convention is to translate all user-facing strings.
  This means API messages, CLI responses, documentation, help text, etc.

  There has been a lack of consensus about the translation of log
  messages; the current ruling is that while it is not against policy to
  mark log messages for translation if your project feels strongly about
  it, translating log messages is not actively encouraged.

  Exception text should not be marked for translation, becuase if an
  exception occurs there is no guarantee that the translation machinery
  will be functional.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1285530/+subscriptions



[Yahoo-eng-team] [Bug 1261631] Re: Reconnect on failure for multiple servers always connects to first server

2014-09-10 Thread Sean Dague
** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1261631

Title:
  Reconnect on failure for multiple servers always connects to first
  server

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  In Progress
Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Fix Released
Status in OpenStack Identity (Keystone):
  Triaged
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released
Status in The Oslo library incubator:
  Fix Released
Status in oslo-incubator havana series:
  Fix Committed
Status in Messaging API for OpenStack:
  Fix Released

Bug description:
  In attempting to reconnect to an AMQP server when a communication
  failure occurs, both the qpid and rabbit drivers target the configured
  servers in the order in which they were provided.  If a connection to
  the first server had failed, the subsequent reconnection attempt would
  be made to that same server instead of trying one that had not yet
  failed.  This could increase the time to failover to a working server.

  A plausible workaround for qpid would be to decrease the value for
  qpid_timeout, but since the problem only occurs if the failed server
  is the first configured, the results of the workaround would depend on
  the order that the failed server appears in the configuration.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1261631/+subscriptions



[Yahoo-eng-team] [Bug 1291677] Re: delete interface times out in check-tempest-dsvm-neutron

2014-09-10 Thread Sean Dague
** No longer affects: nova

** No longer affects: neutron

** Changed in: tempest
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1291677

Title:
  delete interface times out in check-tempest-dsvm-neutron

Status in Tempest:
  Invalid

Bug description:
  
  This occurred during a tempest run on https://review.openstack.org/#/c/78459/ 
(in Keystone). check-tempest-dsvm-neutron failed with a single test failure: 
tempest.api.compute.v3.servers.test_attach_interfaces.AttachInterfacesV3Test.test_create_list_show_delete_interfaces

  Looks like the test gets a token, starts booting an instance:

  POST http://127.0.0.1:5000/v2.0/tokens - Status: 200
  POST http://127.0.0.1:8774/v3/servers - Status: 202

  ... this is all probably expected... eventually it does a DELETE:

  2014-03-12 20:59:32,797 
  Request: DELETE 
http://127.0.0.1:8774/v3/servers/f4433c4f-4d27-492d-8794-77674a634c3f/os-attach-interfaces/f145579d-aaa5-4d44-941c-9856500f65b5
  Response Status: 202

  Then it starts requesting status... and eventually it gives up:

  2014-03-12 21:02:48,016
  GET 
http://127.0.0.1:8774/v3/servers/f4433c4f-4d27-492d-8794-77674a634c3f/os-attach-interfaces
  Response Status: 200
  Response Body: ... port_state: ACTIVE,  ... port_state: ACTIVE ... 
port_state: ACTIVE

  The test failed in _test_delete_interface:

  File tempest/api/compute/v3/servers/test_attach_interfaces.py, line 127, in 
test_create_list_show_delete_interfaces
  File tempest/api/compute/v3/servers/test_attach_interfaces.py, line 96, in 
_test_delete_interface
  Details: Failed to delete interface within the required time: 196 sec.

  The timings at the end show the slowpoke:

  
tempest.api.compute.v3.servers.test_attach_interfaces.AttachInterfacesV3Test.test_create_list_show_delete_interfaces[gate,smoke]
  218.999

  

  Neutron's last entry for the instance in q-svc.txt is

  [12/Mar/2014 21:02:52] GET /v2.0/ports.json?device_id=f4433c4f-
  4d27-492d-8794-77674a634c3f HTTP/1.1 200 1912 0.027403

  I grepped through the q-svc.txt log file and I don't see anyplace
  where the DELETE is forwarded on.

  

  Did the test not wait long enough for the operation to complete? Was
  the shutdown request ignored?

To manage notifications about this bug go to:
https://bugs.launchpad.net/tempest/+bug/1291677/+subscriptions



[Yahoo-eng-team] [Bug 1289135] Re: cinderclient AmbiguousEndpoints in Nova API when deleting nested stack

2014-09-10 Thread Sean Dague
Assuming this is in the cinder client.
** Changed in: nova
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1289135

Title:
  cinderclient AmbiguousEndpoints in Nova API when deleting nested stack

Status in OpenStack Compute (Nova):
  Invalid
Status in Python client library for Cinder:
  New

Bug description:
  While chasing down some errors I found the first one was the
  following, found in the log from the Nova API process.

  2014-03-06 22:17:41.713 ERROR nova.api.openstack 
[req-0a2e7b6b-8ea8-48f1-b6c9-4c6a20ba27b4 admin admin] Caught error: 
AmbiguousEndpoints: [{u'url': 
u'http://10.10.0.24:8776/v1/a4c140dd5649439987f9c61d0a91c76e', u'region': 
u'RegionOne', u'legacy_endpoint_id': u'58ac5510148c4641ab48e1499c0bb4ec', 
'serviceName': None, u'interface': u'admin', u'id': 
u'154c830dce20478a8b269b5f85f7bca3'}, {u'url': 
u'http://10.10.0.24:8776/v1/a4c140dd5649439987f9c61d0a91c76e', u'region': 
u'RegionOne', u'legacy_endpoint_id': u'58ac5510148c4641ab48e1499c0bb4ec', 
'serviceName': None, u'interface': u'public', u'id': 
u'4129f440fa42491f997984455b9727af'}, {u'url': 
u'http://10.10.0.24:8776/v1/a4c140dd5649439987f9c61d0a91c76e', u'region': 
u'RegionOne', u'legacy_endpoint_id': u'58ac5510148c4641ab48e1499c0bb4ec', 
'serviceName': None, u'interface': u'internal', u'id': 
u'7f2013973d0248f1ba64ece67e3df7bb'}]
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack Traceback (most recent 
call last):
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/__init__.py, line 125, in __call__
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack return 
req.get_response(self.application)
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/request.py, line 1296, in send
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/request.py, line 1260, in 
call_application
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack   File 
/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py, 
line 596, in __call__
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack return 
self.app(env, start_response)
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/routes/middleware.py, line 131, in __call__
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack response = 
self.app(environ, start_response)
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 130, in __call__
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 195, in call_func
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack return 
self.func(req, *args, **kwargs)
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/wsgi.py, line 925, in __call__
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack content_type, 
body, accept)
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/wsgi.py, line 987, in _process_stack
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/wsgi.py, line 1074, in dispatch
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack return 
method(req=request, **action_args)
  2014-03-06 22:17:41.713 17339 TRACE 

[Yahoo-eng-team] [Bug 1341128] Re: Several inaccuracies in wiki PCI_passthrough_SRIOV_support

2014-09-10 Thread Sean Dague
Not a nova problem, please just update the wiki

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1341128

Title:
  Several inaccuracies in wiki PCI_passthrough_SRIOV_support

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  https://wiki.openstack.org/wiki/PCI_passthrough_SRIOV_support

  There are several inaccuracies below:
  1) create PCI flavor
  # The name 'bigGPU' below should be 'bigGPU2'.
   nova pci-flavor-create  name 'bigGPU'  description 'passthrough Intel's 
on-die GPU'
   nova pci-flavor-update  name 'bigGPU2'   set'vendor_id'='8086'   
'product_id': '0002'
  2)create flavor and boot with it
   nova flavor-key m1.small set pci_passthrough:pci_flavor= '1:bigGPU,bigGPU2;'
   nova boot  mytest  --flavor m1.tiny  --image=cirros-0.3.1-x86_64-uec
  # The flavors above should be the same.

  Should they be treated as one bug? I am not sure :)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1341128/+subscriptions



[Yahoo-eng-team] [Bug 1301824] Re: Hypervisors only shows the disk size of /root

2014-09-10 Thread Sean Dague
Nova is showing the space in the filesystem where the instances will be
stored. Showing inaccessible local storage makes no sense.

** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1301824

Title:
  Hypervisors only shows the disk size of /root

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  Dashboard => Admin => Hypervisors shows Storage total=48G; however, the
  system is ~300G and only /root is ~48G. It looks like the dashboard only
  shows the disk size of /root rather than the whole system's disk.

  [root@HPBlade1 ~]# df -h
  FilesystemSize  Used Avail Use% Mounted on
  /dev/mapper/vg_hpblade1-lv_root
 50G   35G   13G  74% /
  tmpfs  32G   76K   32G   1% /dev/shm
  /dev/sda1 485M   63M  397M  14% /boot
  /dev/mapper/vg_hpblade1-lv_home
198G   23G  165G  13% /home
  [root@HPBlade1 ~]#
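The maintainer's reply above can be illustrated with a small statvfs sketch (fs_capacity_gb is a hypothetical helper, not Nova's code): the reported numbers come from whichever single filesystem contains the instances directory, so space on other mounts such as /home is invisible.

```python
import os

def fs_capacity_gb(path):
    """Total and free GiB of the filesystem that contains `path` --
    only that one filesystem, which is why /home is not counted."""
    st = os.statvfs(path)
    gib = 1024.0 ** 3
    return st.f_frsize * st.f_blocks / gib, st.f_frsize * st.f_bavail / gib

total, free = fs_capacity_gb("/")  # the configured instances path would be used here
print("total=%.1f GiB, free=%.1f GiB" % (total, free))
```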

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1301824/+subscriptions



[Yahoo-eng-team] [Bug 1282232] Re: The error message when adding an invalid Router is not formatted

2014-09-10 Thread Akihiro Motoki
** Changed in: python-neutronclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1282232

Title:
  The error message when adding an invalid Router is not formatted

Status in OpenStack Dashboard (Horizon):
  In Progress
Status in Python client library for Neutron:
  Fix Released

Bug description:
  I added an invalid Router (external network without a subnet). The problem is 
the displayed message:
  Error: Failed to set gateway 400-{u'NeutronError': {u'message': u'Bad Update 
Router request: No subnets defined on network 47..', 
u'type': u'BadRequest', u'detail': u''}}
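A sketch of the kind of formatting fix the client could apply (format_neutron_error is a hypothetical helper; the payload is a tidied reconstruction of the message quoted above):

```python
def format_neutron_error(payload):
    """Extract the human-readable message from a NeutronError payload
    instead of displaying the raw dict to the user."""
    err = payload.get("NeutronError", {})
    return err.get("message") or str(payload)

payload = {"NeutronError": {
    "message": "Bad Update Router request: No subnets defined on network 47..",
    "type": "BadRequest",
    "detail": "",
}}
print(format_neutron_error(payload))
# -> Bad Update Router request: No subnets defined on network 47..
```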

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1282232/+subscriptions



[Yahoo-eng-team] [Bug 1355857] Re: HyperV: resize of instance fails when trying migration across host

2014-09-10 Thread Alessandro Pilotti
Python raises a very misleading error when failing to access a network
share due to security issues.

Make sure you run the Nova compute service with a user which has SMB
access to the target node (e.g. Administrator). In particular, the service
user must not be LOCALSYSTEM.


** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1355857

Title:
  HyperV: resize of instance fails when trying migration across host

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I have a devstack setup and 2 Hyper-V hosts. Both hosts are in the same 
domain, and the compute service and live migration are enabled on both hosts.
  Resizing a provisioned instance succeeds if it is resized on the same host. 
However, when it tries to resize and migrate across hosts, it fails with the 
following error:

  Compute.log
  2014-08-12 14:03:57.533 2992 DEBUG nova.virt.hyperv.migrationops 
[req-98eaeb46-1272-40af-9205-09fff169f450 None] [instance: 
904fa8eb-7377-4c42-9268-eae266b52648] migrate_disk_and_power_off called 
migrate_disk_and_power_off C:\Program Files (x86)\Cloudbase 
Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\virt\hyperv\migrationops.py:114
  2014-08-12 14:03:57.533 2992 DEBUG nova.virt.hyperv.vmops 
[req-98eaeb46-1272-40af-9205-09fff169f450 None] [instance: 
904fa8eb-7377-4c42-9268-eae266b52648] Power off instance power_off C:\Program 
Files (x86)\Cloudbase 
Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\virt\hyperv\vmops.py:425
  2014-08-12 14:03:57.908 2992 DEBUG nova.virt.hyperv.vmutils 
[req-98eaeb46-1272-40af-9205-09fff169f450 None] WMI job succeeded: Turning Off 
Virtual Machine, Elapsed=00.217830:000 _wait_for_job C:\Program 
Files (x86)\Cloudbase 
Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\virt\hyperv\vmutils.py:481
  2014-08-12 14:03:57.908 2992 DEBUG nova.virt.hyperv.vmutils 
[req-98eaeb46-1272-40af-9205-09fff169f450 None] Successfully changed vm state 
of instance-05d6 to 3 set_vm_state C:\Program Files (x86)\Cloudbase 
Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\virt\hyperv\vmutils.py:394
  2014-08-12 14:03:57.908 2992 DEBUG nova.virt.hyperv.vmops 
[req-98eaeb46-1272-40af-9205-09fff169f450 None] Successfully changed state of 
VM instance-05d6 to: 3 _set_vm_state C:\Program Files (x86)\Cloudbase 
Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\virt\hyperv\vmops.py:440
  2014-08-12 14:03:59.096 2992 DEBUG nova.virt.hyperv.migrationops 
[req-98eaeb46-1272-40af-9205-09fff169f450 None] Migration target host: 
10.1.4.214 _migrate_disk_files C:\Program Files (x86)\Cloudbase 
Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\virt\hyperv\migrationops.py:53
  2014-08-12 14:04:04.753 2992 DEBUG nova.virt.hyperv.pathutils 
[req-98eaeb46-1272-40af-9205-09fff169f450 None] Creating directory: 
\\10.1.4.214\C$$\OpenStack\Instances\instance-05d6 _check_create_dir 
C:\Program Files (x86)\Cloudbase 
Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\virt\hyperv\pathutils.py:96
  2014-08-12 14:07:20.177 2992 ERROR nova.compute.manager 
[req-98eaeb46-1272-40af-9205-09fff169f450 None] [instance: 
904fa8eb-7377-4c42-9268-eae266b52648] Setting instance vm_state to ERROR
  2014-08-12 14:07:20.177 2992 TRACE nova.compute.manager [instance: 
904fa8eb-7377-4c42-9268-eae266b52648] Traceback (most recent call last):
  2014-08-12 14:07:20.177 2992 TRACE nova.compute.manager [instance: 
904fa8eb-7377-4c42-9268-eae266b52648]   File C:\Program Files (x86)\Cloudbase 
Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\compute\manager.py, 
line 5780, in _error_out_instance_on_exception
  2014-08-12 14:07:20.177 2992 TRACE nova.compute.manager [instance: 
904fa8eb-7377-4c42-9268-eae266b52648] yield
  2014-08-12 14:07:20.177 2992 TRACE nova.compute.manager [instance: 
904fa8eb-7377-4c42-9268-eae266b52648]   File C:\Program Files (x86)\Cloudbase 
Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\compute\manager.py, 
line 3569, in resize_instance
  2014-08-12 14:07:20.177 2992 TRACE nova.compute.manager [instance: 
904fa8eb-7377-4c42-9268-eae266b52648] block_device_info)
  2014-08-12 14:07:20.177 2992 TRACE nova.compute.manager [instance: 
904fa8eb-7377-4c42-9268-eae266b52648]   File C:\Program Files (x86)\Cloudbase 
Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\virt\hyperv\driver.py,
 line 191, in migrate_disk_and_power_off
  2014-08-12 14:07:20.177 2992 TRACE nova.compute.manager [instance: 
904fa8eb-7377-4c42-9268-eae266b52648] block_device_info)
  2014-08-12 14:07:20.177 2992 TRACE nova.compute.manager [instance: 
904fa8eb-7377-4c42-9268-eae266b52648]   File C:\Program Files (x86)\Cloudbase 
Solutions\OpenStack\Nova\Python27\lib\site-packages\nova\virt\hyperv\migrationops.py,
 line 126, in migrate_disk_and_power_off
 

[Yahoo-eng-team] [Bug 1367741] Re: The `fault` should be included to log error message when vmware error happens

2014-09-10 Thread Jaroslav Henner
** Also affects: oslo.vmware
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367741

Title:
  The `fault` should be included to log error message when vmware error
  happens

Status in OpenStack Compute (Nova):
  Confirmed
Status in Oslo VMware library for OpenStack projects:
  New

Bug description:
  ... because it can contain important information. For example:

  (TaskInfo){
 key = task-34928
 task = 
(task){
   value = task-34928
   _type = Task
}
 description = 
(LocalizableMessage){
   key = com.vmware.vim.vpxd.vpx.vmprov.CreateDestinationVm
   message = Copying Virtual Machine configuration
}
 name = CreateVM_Task
 descriptionId = Folder.createVm
 entity = 
(entity){
   value = group-v3
   _type = Folder
}
 entityName = vm
 state = error
 cancelled = False
 cancelable = False
 error = 
(LocalizedMethodFault){
   fault = 
  (PlatformConfigFault){
 text = Failed to attach port
  }
   localizedMessage = An error occurred during host configuration.
}
 reason = 
(TaskReasonUser){
   userName = root
}
 queueTime = 2014-09-10 12:46:48.283593
 startTime = 2014-09-10 12:46:48.290384
 completeTime = 2014-09-10 12:46:49.798797
 eventChainId = 157130
   }

  Currently, only the localizedMessage is used to produce the log line
  in nova/virt/vmwareapi/driver.py _poll_task(). In this case, the
  message is too general. The important reason is given in
  error.fault.text, so it should be reported as well.
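A minimal sketch of what including the fault could look like (format_task_error is a hypothetical helper; the error dict mimics the LocalizedMethodFault structure shown above):

```python
def format_task_error(error):
    """Build a log message from both the general localizedMessage and
    the specific fault text, when present."""
    msg = error.get("localizedMessage", "unknown error")
    fault = error.get("fault") or {}
    text = fault.get("text")
    return "%s (fault: %s)" % (msg, text) if text else msg

# Mimics TaskInfo.error from the example above.
error = {
    "localizedMessage": "An error occurred during host configuration.",
    "fault": {"text": "Failed to attach port"},
}
print(format_task_error(error))
# -> An error occurred during host configuration. (fault: Failed to attach port)
```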

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1367741/+subscriptions



[Yahoo-eng-team] [Bug 1367771] [NEW] glance-manage db load_metadefs will fail if DB is not empty

2014-09-10 Thread Pawel Koniszewski
Public bug reported:

To insert data into the DB, 'glance-manage db load_metadefs' uses IDs for
namespaces which are generated by Python's built-in function -
enumerate:

for namespace_id, json_schema_file in enumerate(json_schema_files,
start=1):

For an empty database it works fine, but this causes problems when there
are already metadata namespaces in the database. The problem is that when
there are already metadata definitions in the DB, every invocation of
glance-manage db load_metadefs leads to IntegrityErrors because of
duplicated IDs.

There are two approaches to fix this:
1. Ask for the namespace just after inserting it. Unfortunately, in the current 
implementation we need to do one more query.
2. When this goes live - https://review.openstack.org/#/c/120414/ - we won't 
need to do another query, because the ID is available just after inserting a 
namespace into the DB (namespace.save(session=session)).
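The collision can be reproduced in miniature with SQLite: assigning namespace IDs with enumerate(start=1) works once, but a second load reuses the same IDs. This is an illustrative sketch, not Glance's actual schema or code; the fix shown (letting the DB assign the ID and reading it back immediately) corresponds to approach 2.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE namespaces (id INTEGER PRIMARY KEY, name TEXT)")

json_schema_files = ["compute.json", "storage.json"]

# First load, mimicking enumerate(json_schema_files, start=1): inserts ids 1, 2.
for namespace_id, json_schema_file in enumerate(json_schema_files, start=1):
    conn.execute("INSERT INTO namespaces VALUES (?, ?)",
                 (namespace_id, json_schema_file))

# A second load with the same scheme starts again at id=1 and collides.
try:
    conn.execute("INSERT INTO namespaces VALUES (?, ?)", (1, "network.json"))
except sqlite3.IntegrityError as exc:
    print("IntegrityError:", exc)

# Fix: let the database assign the id, then read it back immediately.
cur = conn.execute("INSERT INTO namespaces (name) VALUES (?)", ("network.json",))
print("assigned id:", cur.lastrowid)  # -> 3
```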

** Affects: glance
 Importance: Undecided
 Assignee: Pawel Koniszewski (pawel-koniszewski)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1367771

Title:
  glance-manage db load_metadefs will fail if DB is not empty

Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress

Bug description:
  To insert data into the DB, 'glance-manage db load_metadefs' uses IDs for
  namespaces which are generated by Python's built-in function -
  enumerate:

  for namespace_id, json_schema_file in enumerate(json_schema_files,
  start=1):

  For an empty database it works fine, but this causes problems when there
  are already metadata namespaces in the database. The problem is that when
  there are already metadata definitions in the DB, every invocation of
  glance-manage db load_metadefs leads to IntegrityErrors because of
  duplicated IDs.

  There are two approaches to fix this:
  1. Ask for the namespace just after inserting it. Unfortunately, in the 
current implementation we need to do one more query.
  2. When this goes live - https://review.openstack.org/#/c/120414/ - we 
won't need to do another query, because the ID is available just after 
inserting a namespace into the DB (namespace.save(session=session)).

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1367771/+subscriptions



[Yahoo-eng-team] [Bug 1297701] Re: Create VM use another tenant's port, the VM can't communicate with other

2014-09-10 Thread Sean Dague
** Changed in: nova
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1297701

Title:
  Create VM use another tenant's port, the VM can't communicate with
  other

Status in OpenStack Neutron (virtual network service):
  Opinion
Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  An admin user creates a port for another project, then uses this port
  to create a VM. The VM can't communicate with others because the
  security group rules do not take effect, and the VM in nova does not
  show an IP.

  root@ubuntu01:/var/log/neutron# neutron port-show 
66c2d6bd-7d39-4948-b561-935cb9d264eb
  
+---+---+
  | Field | Value   
  |
  
+---+---+
  | admin_state_up| True
  |
  | allowed_address_pairs | {ip_address: 169.254.16.253, mac_address: 
fa:16:3e:48:73:a7}  |
  | binding:capabilities  | {port_filter: false}  
  |
  | binding:host_id   | 
  |
  | binding:vif_type  | unbound 
  |
  | device_id | 
  |
  | device_owner  | 
  |
  | extra_dhcp_opts   | 
  |
  | fixed_ips | {subnet_id: 
5519e015-fc83-44c2-99ad-d669b3c2c9d7, ip_address: 10.10.10.4} |
  | id| 66c2d6bd-7d39-4948-b561-935cb9d264eb
  |
  | mac_address   | fa:16:3e:48:73:a7   
  |
  | name  | 
  |
  | network_id| 255f3e92-5a6e-44a5-bbf9-1a62bf5d5935
  |
  | security_groups   | 94ad554f-392d-4dd5-8184-357f37b75111
  |
  | status| DOWN
  |
  | tenant_id | 3badf700bbc749ec9d9869fddc63899f
  |
  
+---+---+

  root@ubuntu01:/var/log/neutron# keystone tenant-list
  +--+-+-+
  |id|   name  | enabled |
  +--+-+-+
  | 34fddbc22c184214b823be267837ef81 |  admin  |   True  |
  | 48eb4330b6e74a9f9e74d3e191a0fa2e | service |   True  |
  +--+-+-+

  root@ubuntu01:/var/log/neutron# nova list
  
+--+---+++-+--+
  | ID   | Name  | Status | Task State | Power 
State | Networks |
  
+--+---+++-+--+
  | 5ce98599-75cb-49db-aa76-668491ee3bd0 | test3 | ACTIVE | None   | 
Running |  |
  
+--+---+++-+--+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1297701/+subscriptions



[Yahoo-eng-team] [Bug 1290468] Re: AttributeError: 'NoneType' object has no attribute '_sa_instance_state'

2014-09-10 Thread Sean Dague
I think this was resolved upstream with the oslo fixes

** Changed in: nova
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1290468

Title:
  AttributeError: 'NoneType' object has no attribute
  '_sa_instance_state'

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  Fix Released
Status in The Oslo library incubator:
  Fix Released

Bug description:
  Dan Smith was seeing this in some nova testing:

  http://paste.openstack.org/show/73043/

  Looking at logstash, this is showing up a lot since 3/7 which is when
  lazy translation was enabled in Cinder:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQXR0cmlidXRlRXJyb3I6IFxcJ05vbmVUeXBlXFwnIG9iamVjdCBoYXMgbm8gYXR0cmlidXRlIFxcJ19zYV9pbnN0YW5jZV9zdGF0ZVxcJ1wiIEFORCBmaWxlbmFtZTpsb2dzKnNjcmVlbi1jLWFwaS50eHQiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTQ0NzI5Nzg4MDV9

  
https://review.openstack.org/#/q/status:merged+project:openstack/cinder+branch:master+topic:bug/1280826,n,z

  Logstash shows a 99% success rate when this shows up but it can't stay
  like this, but right now it looks to be more cosmetic than functional.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1290468/+subscriptions



[Yahoo-eng-team] [Bug 1367778] [NEW] Extract Assignment related tests from IdentityTestCase

2014-09-10 Thread Samuel de Medeiros Queiroz
Public bug reported:

IdentityTestCase is intended to have only tests for users and groups.
However, it also has tests for domains, projects, roles and grants/role 
assignments.

Every test that is not for users or groups has to be extracted from
IdentityTestCase (test_v3_identity) and put in AssignmentTestCase
(test_v3_assignment, to be created).

** Affects: keystone
 Importance: Low
 Assignee: Samuel de Medeiros Queiroz (samuel-z)
 Status: Triaged


** Tags: test-improvement

** Changed in: keystone
  Assignee: (unassigned) => Samuel de Medeiros Queiroz (samuel-z)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1367778

Title:
  Extract Assignment related tests from IdentityTestCase

Status in OpenStack Identity (Keystone):
  Triaged

Bug description:
  IdentityTestCase is intended to have only tests for users and groups.
  However, it also has tests for domains, projects, roles and grants/role 
assignments.

  Every test that is not for users or groups has to be extracted from
  IdentityTestCase (test_v3_identity) and put in
  AssignmentTestCase (test_v3_assignment, to be created).

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1367778/+subscriptions



[Yahoo-eng-team] [Bug 1367786] [NEW] Hyper-V driver should log a clear error message during migrations for remote node permissions errors

2014-09-10 Thread Alessandro Pilotti
Public bug reported:

When failing to access a remote SMB UNC path, Python raises the
following exception:

WindowsError: [Error 123] The filename, directory name, or volume
label syntax is incorrect: ''

This is definitely misleading when troubleshooting the issue, which
occurs during resize / cold migrations.

The Nova driver should report a clear error message, making sure the
user understands the full context.

** Affects: nova
 Importance: Low
 Status: Triaged


** Tags: hyper-v

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367786

Title:
  Hyper-V driver should log a clear error message during migrations for
  remote node permissions errors

Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  When failing to access a remote SMB UNC path, Python raises the
  following exception:

  WindowsError: [Error 123] The filename, directory name, or volume
  label syntax is incorrect: ''

  This is definitely misleading when troubleshooting the issue, which
  occurs during resize / cold migrations.

  The Nova driver should report a clear error message, making sure the
  user understands the full context.
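A sketch of the kind of wrapper the driver could use (check_migration_target is a hypothetical helper, not an actual Nova function; a plain nonexistent path stands in for the UNC share here):

```python
import os

def check_migration_target(unc_path):
    """Probe the migration target path and convert an opaque OSError
    (e.g. WindowsError 123 on Windows) into an actionable message."""
    try:
        os.listdir(unc_path)
    except OSError as exc:
        raise RuntimeError(
            "Cannot access migration target %r (%s). Ensure the service "
            "account running nova-compute has SMB access to the remote "
            "node and is not LocalSystem." % (unc_path, exc.strerror))

try:
    check_migration_target("/no/such/unc/share")
except RuntimeError as exc:
    print(exc)
```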

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1367786/+subscriptions



[Yahoo-eng-team] [Bug 1367780] [NEW] Can't connect to console in dashboard using haproxy

2014-09-10 Thread Bobby Yakovich
Public bug reported:

I am having an issue connecting to the console in the Horizon dashboard using an 
HAProxy front end.
When launching the console I receive page cannot be displayed.
If I try the noVNC URL directly with the token info in a web browser it works fine.
It appears the Horizon dashboard is passing an incorrect URL string.

Set up:

2 HAproxy's in front of 2 controllers.
compute node novnc URL pointed at public IP of Haproxy, Haproxy forwards 
request to controller (tried both public and private IP of controllers).

Running ubuntu 14.04 and openstack icehouse

Error in console:  Not Found
The requested URL /130.245.183.130:6080/vnc_auto.html was not found on this 
server

Per diagnostics, it appears the noVNC URL is being appended to the Horizon URL, 
causing an issue with locating the noVNC instance.
The first IP is Horizon; the second IP is HAProxy pointed at the controller for 
noVNC. If I just use the controller IP it works.
http://130.245.183.133/130.245.183.130:6080/vnc_auto.html?token=10850e75-8079-4cb0-b781-64be4a505c7ftitle=test1_ub12(60a30b21-e3c7-4663-a71b-9a1cfd6d61df)
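The symptom (the Horizon URL prefixed to the noVNC address) is the classic signature of a proxy base URL the browser treats as relative, e.g. because the scheme is missing. A nova.conf fragment worth double-checking on the compute nodes (a sketch; the IP is taken from this report, and exact option placement may differ per release):

```ini
[DEFAULT]
# Must be an absolute URL including the scheme. Without "http://" the
# browser resolves the value relative to the Horizon page, yielding
# http://<horizon>/<proxy>:6080/vnc_auto.html as seen above.
novncproxy_base_url = http://130.245.183.130:6080/vnc_auto.html
```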

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1367780

Title:
  Can't connect to console in dashboard using haproxy

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I am having an issue connecting to the console in the Horizon dashboard using 
an HAProxy front end.
  When launching the console I receive page cannot be displayed.
  If I try the noVNC URL directly with the token info in a web browser it works fine.
  It appears the Horizon dashboard is passing an incorrect URL string.

  Set up:

  2 HAproxy's in front of 2 controllers.
  compute node novnc URL pointed at public IP of Haproxy, Haproxy forwards 
request to controller (tried both public and private IP of controllers).

  Running ubuntu 14.04 and openstack icehouse

  Error in console:  Not Found
  The requested URL /130.245.183.130:6080/vnc_auto.html was not found on this 
server

  Per diagnostics, it appears the noVNC URL is being appended to the Horizon URL, 
causing an issue with locating the noVNC instance.
  The first IP is Horizon; the second IP is HAProxy pointed at the controller for 
noVNC. If I just use the controller IP it works.
  
http://130.245.183.133/130.245.183.130:6080/vnc_auto.html?token=10850e75-8079-4cb0-b781-64be4a505c7ftitle=test1_ub12(60a30b21-e3c7-4663-a71b-9a1cfd6d61df)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1367780/+subscriptions



[Yahoo-eng-team] [Bug 1361795] Re: context referenced in pci_manager.__init__, but not defined

2014-09-10 Thread Sean Dague
https://review.openstack.org/#/c/102298/ is actually the review for this

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361795

Title:
  context referenced in pci_manager.__init__, but not defined

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  The variable context is referenced in pci_manager.__init__(), but it
  is not passed in as an argument or defined anywhere, so an exception
  will be thrown when it is referenced.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1361795/+subscriptions



[Yahoo-eng-team] [Bug 1331882] Re: trustor_user_id not available in v2 trust token

2014-09-10 Thread Dolph Mathews
** Also affects: ossn
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1331882

Title:
  trustor_user_id not available in v2 trust token

Status in OpenStack Identity (Keystone):
  In Progress
Status in OpenStack Security Notes:
  New

Bug description:
  The trust information in the v2 token is missing the trustor_user_id
  and impersonation values. This means you are unable to tell who gave
  you the trust.

  The following two examples were generated with the same information.
  (They are printed from client.auth_ref which is why they are missing
  some structure information)

  v2 Trust token:

  {u'metadata': {u'is_admin': 0,
 u'roles': [u'136bc06cef2f496f842a76644feaed03',
u'7d42773abeff45ea90fdb4067f6b3a9f']},
   u'serviceCatalog': [...],
   u'token': {u'expires': u'2014-06-19T02:41:19Z',
  u'id': u'4b8d23d9707a4c9f8a270759725dfcf8',
  u'issued_at': u'2014-06-19T01:41:19.811417',
  u'tenant': {u'description': u'Default Tenant',
  u'enabled': True,
  u'id': u'9029b226bc894fa3a23ec24fd9f4796c',
  u'name': u'demo'}},
   u'trust': {u'id': u'0b16de31a8c64fd5b0054054db468a00',
  u'trustee_user_id': u'f6cce259563e40acb3f841f5d89c6191'},
   u'user': {u'id': u'f6cce259563e40acb3f841f5d89c6191',
 u'name': u'bob',
 u'roles': [{u'name': u'can_create'}, {u'name': u'can_delete'}],
 u'roles_links': [],
 u'username': u'bob'}}

  
  v3 Trust token: 

  {u'OS-TRUST:trust': {u'id': u'0b16de31a8c64fd5b0054054db468a00',
   u'impersonation': False,
   u'trustee_user': {u'id': 
u'f6cce259563e40acb3f841f5d89c6191'},
   u'trustor_user': {u'id': 
u'5fcb10539aa646ea8b0fe3c80e15d33d'}},
   'auth_token': '0b8a2d2e081e4e6e8ae3ad5dfedcf9db',
   u'catalog': [...],
   u'expires_at': u'2014-06-19T02:41:19.935302Z',
   u'extras': {},
   u'issued_at': u'2014-06-19T01:41:19.935330Z',
   u'methods': [u'password'],
   u'project': {u'domain': {u'id': u'default', u'name': u'Default'},
u'id': u'9029b226bc894fa3a23ec24fd9f4796c',
u'name': u'demo'},
   u'roles': [{u'id': u'136bc06cef2f496f842a76644feaed03',
   u'name': u'can_create'},
  {u'id': u'7d42773abeff45ea90fdb4067f6b3a9f',
   u'name': u'can_delete'}],
   u'user': {u'domain': {u'id': u'default', u'name': u'Default'},
 u'id': u'f6cce259563e40acb3f841f5d89c6191',
 u'name': u'bob'}}
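Reduced to plain dicts (trimmed copies of the trust sections quoted above), the asymmetry is easy to assert: the v3 trust block carries the trustor and the impersonation flag, while the v2 block does not.

```python
# Trimmed copies of the trust sections of the two tokens quoted above.
v2_trust = {
    "id": "0b16de31a8c64fd5b0054054db468a00",
    "trustee_user_id": "f6cce259563e40acb3f841f5d89c6191",
}
v3_trust = {
    "id": "0b16de31a8c64fd5b0054054db468a00",
    "impersonation": False,
    "trustee_user": {"id": "f6cce259563e40acb3f841f5d89c6191"},
    "trustor_user": {"id": "5fcb10539aa646ea8b0fe3c80e15d33d"},
}

# Only the v3 token can answer "who delegated this trust to me?"
print("v3 trustor:", v3_trust["trustor_user"]["id"])
print("v2 has trustor_user_id:", "trustor_user_id" in v2_trust)  # -> False
print("v2 has impersonation:", "impersonation" in v2_trust)      # -> False
```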

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1331882/+subscriptions



[Yahoo-eng-team] [Bug 1306972] Re: Enabling/disabling a down service causes service-list to show it as up

2014-09-10 Thread Sean Dague
Why do you believe it should stay down?

** Changed in: nova
  Assignee: Mohammed Naser (mnaser) => (unassigned)

** Changed in: nova
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1306972

Title:
  Enabling/disabling a down service causes service-list to show it as
  up

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  Seen in Grizzly, suspect it might be present in master too from
  looking at the code.

  Before:
  ==

  openstack41@lab6-n13:~$ nova service-list
  
+--+--+--+--+---++
  | Binary   | Host | Zone | Status   | State | Updated_at  
   |
  
+--+--+--+--+---++
  | nova-compute | lab5-n03 | nova | disabled | down  | 
2014-04-12T15:14:13.00 |
  ... snipped ...
  
+--+--+--+--+---++

  
  Enable the service:
  ==

  openstack41@lab6-n13:~$ nova service-enable lab5-n03 nova-compute
  +--+--+-+
  | Host | Binary   | Status  |
  +--+--+-+
  | lab5-n03 | nova-compute | enabled |
  +--+--+-+

  After enabling:
  ==

  openstack41@lab6-n13:~$ nova service-list
  
+--+--+--+--+---++
  | Binary   | Host | Zone | Status   | State | Updated_at  
   |
  
+--+--+--+--+---++
  | nova-compute | lab5-n03 | nova | enabled  | up| 
2014-04-12T15:26:11.00 |
  ... snipped ...
  
+--+--+--+--+---++

  
  What I think should happen is that the state should stay down even though 
the record was updated.
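The reported behavior is consistent with deriving the up/down column purely from the age of the service record's last-updated timestamp: any write that refreshes the record (such as an enable/disable call) makes a dead service look alive until the heartbeat window expires again. A simplified sketch of that derivation (not Nova's exact code; the 60-second default is an assumption mirroring the service_down_time option):

```python
import datetime

SERVICE_DOWN_TIME = 60  # seconds; assumed default for the heartbeat window

def service_is_up(updated_at, now=None):
    """A service counts as "up" iff its updated_at heartbeat is recent."""
    now = now or datetime.datetime.utcnow()
    return (now - updated_at).total_seconds() <= SERVICE_DOWN_TIME

now = datetime.datetime.utcnow()
stale = now - datetime.timedelta(minutes=12)

print(service_is_up(stale, now))  # -> False: shown as "down"
# If service-enable also rewrites updated_at, the same dead service
# immediately reads as "up" again:
print(service_is_up(now, now))    # -> True
```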

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1306972/+subscriptions



[Yahoo-eng-team] [Bug 1295601] Re: Allow for different Cephx users for Cinder and Nova (images_type=rbd)

2014-09-10 Thread Sean Dague
** Changed in: nova
   Status: New = Opinion

** Changed in: nova
   Importance: Undecided = Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1295601

Title:
  Allow for different Cephx users for Cinder and Nova (images_type=rbd)

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  Right now, there's only one set of options for rbd_user and
  rbd_secret_uuid in nova.conf.  When it is set, it will be used

  - for images when images_type=rbd is set: 
https://github.com/openstack/nova/blob/c15dff2e9978fe851c73e92ab7f9b46e27de81ba/nova/virt/libvirt/imagebackend.py#L79-L80
  - for volumes, it will override whatever glance has provided: 
https://github.com/openstack/nova/blob/c15dff2e9978fe851c73e92ab7f9b46e27de81ba/nova/virt/libvirt/volume.py#L217-L229

  Therefore, when you intend to use RBD for images and for volumes,
  you'd have to use the same user for _both_.
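To make the limitation concrete, a nova.conf fragment showing the single credential pair described above (illustrative values; the option group placement varies by release, and the UUID is the usual documentation example):

```ini
[libvirt]
images_type = rbd
# This one user/secret pair is used for ephemeral RBD images AND
# overrides the per-volume auth that Cinder provides, which is the
# limitation being reported here.
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
```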

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1295601/+subscriptions



[Yahoo-eng-team] [Bug 1362528] Re: cirros starts with file system in read only mode

2014-09-10 Thread Armando Migliaccio
Why would this bug report no longer affect nova? Last time I checked
Nova is used to create VMs.

If the file system boots up in read only mode, why Neutron would be at
fault, here?

** Description changed:

  Query:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiU3RhcnRpbmcgZHJvcGJlYXIgc3NoZDogbWtkaXI6IGNhbid0IGNyZWF0ZSBkaXJlY3RvcnkgJy9ldGMvZHJvcGJlYXInOiBSZWFkLW9ubHkgZmlsZSBzeXN0ZW1cIiBBTkQgdGFnczpcImNvbnNvbGVcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwOTIxNzMzOTM5OSwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==
  
  The VM boots incorrectly, the SSH service does not start, and the
  connection fails.
  
  http://logs.openstack.org/16/110016/7/gate/gate-tempest-dsvm-neutron-pg-
- full/603e3c6/console.html#_2014-08-26_08_59_39_951
- 
+ full/603e3c6/console.html.gz#_2014-08-26_08_59_39_951
  
  Only observed with neutron, 1 gate hit in 7 days.
  No hint about the issue in syslog or libvirt logs.

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1362528

Title:
  cirros starts with file system in read only mode

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  Query:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiU3RhcnRpbmcgZHJvcGJlYXIgc3NoZDogbWtkaXI6IGNhbid0IGNyZWF0ZSBkaXJlY3RvcnkgJy9ldGMvZHJvcGJlYXInOiBSZWFkLW9ubHkgZmlsZSBzeXN0ZW1cIiBBTkQgdGFnczpcImNvbnNvbGVcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwOTIxNzMzOTM5OSwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

  The VM boots incorrectly, the SSH service does not start, and the
  connection fails.

  http://logs.openstack.org/16/110016/7/gate/gate-tempest-dsvm-neutron-
  pg-full/603e3c6/console.html.gz#_2014-08-26_08_59_39_951

  Only observed with neutron, 1 gate hit in 7 days.
  No hint about the issue in syslog or libvirt logs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1362528/+subscriptions



[Yahoo-eng-team] [Bug 1365678] Re: Sync with openstack/requirements

2014-09-10 Thread Doug Hellmann
oslotest fix: https://review.openstack.org/#/c/119201/


** Changed in: oslotest
   Status: In Progress = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1365678

Title:
  Sync with openstack/requirements

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Oslo test framework and fixture library:
  Fix Released

Bug description:
  Our last sync with openstack/requirements was around the beginning of
  August. We haven't been able to sync because we have unit test
  failures on the sync job since around August 20th:

https://review.openstack.org/#/c/111620/

  Likely, our tests need to be fixed to support whatever dependency is
  breaking us.

   Traceback (most recent call last):
   File 
/home/jenkins/workspace/gate-keystone-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py,
 line 1404, in stop
 return self.__exit__()
   File 
/home/jenkins/workspace/gate-keystone-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py,
 line 1376, in __exit__
 raise RuntimeError('stop called on unstarted patcher')
   RuntimeError: stop called on unstarted patcher
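For anyone chasing the failure mode: the traceback above comes from tearing down a mock patcher that was never started. A minimal illustration of the expected lifecycle (the patched target here is arbitrary):

```python
from unittest import mock
import os

# start() must precede stop(); the gate failure happens when a fixture
# calls stop() on a patcher whose start() never ran (or already failed).
patcher = mock.patch("os.getcwd", return_value="/tmp/fake")
patcher.start()
patched_value = os.getcwd()   # the mocked value while the patch is active
patcher.stop()
restored = os.getcwd() != "/tmp/fake"   # original behaviour is back
```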

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1365678/+subscriptions



[Yahoo-eng-team] [Bug 1223611] Re: Can not attach a volume to a running compute instance

2014-09-10 Thread Sean Dague
** Changed in: nova
   Status: Incomplete = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1223611

Title:
  Can not attach a volume to a running compute instance

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When I attempt to attach a volume using the nova cli or horizon:

  $ nova volume-attach 7a0a3ffa-603a-4758-8533-32ffd9974d03 
66e834db-0f98-45fd-bfa0-6386f4a965a4 /dev/vdb
  +--+--+
  | Property | Value|
  +--+--+
  | device   | /dev/hdb |
  | serverId | 7a0a3ffa-603a-4758-8533-32ffd9974d03 |
  | id   | 66e834db-0f98-45fd-bfa0-6386f4a965a4 |
  | volumeId | 66e834db-0f98-45fd-bfa0-6386f4a965a4 |
  +--+--+

  (note the returned device is /dev/hdb not /dev/vdb)

  In the error logs on the compute host:

  ... a large stack dump and ...
  /var/log/nova/nova-compute.log: 2013-09-10 16:57:26.979 4774 TRACE 
nova.openstack.common.rpc.amqp libvirtError: unsupported configuration: disk 
bus 'ide' cannot be hotplugged.
  /var/log/libvirt/libvirtd.log: 2013-09-10 23:57:23.529+: 1886: error : 
qemuDomainAttachDeviceDiskLive:5870 : unsupported configuration: disk bus 'ide' 
cannot be hotplugged.

  
  libvirt uses the device name as a hint as to what driver to use when 
attaching a new device. The instance has been booted from another volume which 
means volumes can be attached successfully (but only on boot.)

  I have also tried specifying /dev/sdb, /dev/xvdb with the same
  failure. Specifying just 'vdb' raises InvalidDevicePath.

  I am running nova-compute on Ubuntu from a package, version
  2013.1.2-0ubuntu1.
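The mechanism behind the error: libvirt infers the disk bus from the device-name prefix, and the 'ide' bus cannot be hot-plugged on a running qemu domain. A simplified sketch of that prefix-to-bus mapping (illustrative, not the actual nova/libvirt table):

```python
def disk_bus_for_device(dev):
    # Longest prefix wins so "xvd" is checked before "vd"/"sd"/"hd".
    prefixes = {"vd": "virtio", "sd": "scsi", "hd": "ide", "xvd": "xen"}
    name = dev.split("/")[-1]
    for prefix, bus in sorted(prefixes.items(), key=lambda kv: -len(kv[0])):
        if name.startswith(prefix):
            return bus
    raise ValueError("unknown device name: %s" % dev)

# /dev/vdb maps to the hot-pluggable virtio bus; a returned /dev/hdb
# device (as in the report) maps to ide, which cannot be hot-plugged.
assert disk_bus_for_device("/dev/vdb") == "virtio"
assert disk_bus_for_device("/dev/hdb") == "ide"
```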

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1223611/+subscriptions



[Yahoo-eng-team] [Bug 1357677] Re: Instances fail to boot from volume

2014-09-10 Thread Sean Dague
That output is the kind that would always be dumped to the console; it's
not really a useful bug report.

** Changed in: nova
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357677

Title:
  Instances fail to boot from volume

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Logstash query for full console outputs which do not contain 'info:
  initramfs loading root from /dev/vda', but do contain the previous boot
  message.

  These issues look like an ssh connectivity issue, but the instance is
  not booted, and it happens regardless of the network type.

  message: Freeing unused kernel memory AND message: Initializing
  cgroup subsys cpuset AND NOT message: initramfs loading root from
  AND tags:console

  49 incident/week.

  Example console log:
  
http://logs.openstack.org/75/113175/1/gate/check-tempest-dsvm-neutron-full/827c854/console.html.gz#_2014-08-14_11_23_30_120

  It failed when it tried to ssh to the 3rd server.
  WARNING: The console.log contains two instances' serial console output;
  try not to mix them up when reading.

  The fail point in the test code was here:
  
https://github.com/openstack/tempest/blob/b7144eb08175d010e1300e14f4f75d04d9c63c98/tempest/scenario/test_volume_boot_pattern.py#L175

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357677/+subscriptions



[Yahoo-eng-team] [Bug 1283599] Re: TestNetworkBasicOps occasionally fails to delete resources

2014-09-10 Thread Armando Migliaccio
I believe the issue lies in the management for the cache layer between
Nova and Neutron. From what I can tell by looking at this run:

http://logs.openstack.org/02/117902/2/gate/gate-tempest-dsvm-neutron-
full/d5bafe2/logs

I can see that the VM's port is created here:

http://logs.openstack.org/02/117902/2/gate/gate-tempest-dsvm-neutron-
full/d5bafe2/logs/screen-n-cpu.txt.gz#_2014-09-09_05_54_12_795

But then for some reason is not found here:

http://logs.openstack.org/02/117902/2/gate/gate-tempest-dsvm-neutron-
full/d5bafe2/logs/screen-n-cpu.txt.gz#_2014-09-09_05_54_22_403

In fact, looking at the Nova code:

https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L4695

It shows that if the for loop does not yield any port, a PortNotFound
exception is raised.

However the port does exist in Neutron, and Neutron does barf
legitimately if one attempts to delete a subnet when ports are still
being allocated.

Looking more deeply, I can see that the cache update is done here:

http://logs.openstack.org/02/117902/2/gate/gate-tempest-dsvm-neutron-
full/d5bafe2/logs/screen-n-cpu.txt.gz#_2014-09-09_05_54_13_124

But subsequently here:

http://logs.openstack.org/02/117902/2/gate/gate-tempest-dsvm-neutron-
full/d5bafe2/logs/screen-n-cpu.txt.gz#_2014-09-09_05_54_13_389

A new update wipes out the port details from _heal_instance_info_cache;
it's possible that the issue here is a dirty read:
_heal_instance_info_cache gets network info, then it is preempted by an
interface event; when it resumes, it overwrites the most up-to-date info
with the older one. I believe this is primarily an issue on the Nova end.
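The lost-update interleaving described above can be sketched in a few lines (illustrative, not the actual nova code):

```python
# A periodic heal task reads the cache, is preempted by an event handler
# that writes fresher data, then resumes and writes back its stale copy.
cache = {"ports": ["port-a"]}

snapshot = list(cache["ports"])        # 1. heal task reads the cache
cache["ports"] = ["port-a", "port-b"]  # 2. interface event adds a port
cache["ports"] = snapshot              # 3. heal task resumes: stale write

# The newer port entry is lost, matching the PortNotFound seen later.
assert cache["ports"] == ["port-a"]
```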

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: Confirmed = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1283599

Title:
  TestNetworkBasicOps occasionally fails to delete resources

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  Invalid

Bug description:
  Network, Subnet and security group appear to be in use when they are deleted.
  Observed in: 
http://logs.openstack.org/84/75284/3/check/check-tempest-dsvm-neutron-full/d792a7a/logs

  Observed so far with neutron full job only.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1283599/+subscriptions



[Yahoo-eng-team] [Bug 1367596] Re: admin_state_up=False on l3-agent doesn't affect active routers

2014-09-10 Thread Armando Migliaccio
I disagree. Management plane != Control plane != Data plane.

Disabling the agent should not affect the data plane.

** Changed in: neutron
   Status: New = Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1367596

Title:
  admin_state_up=False on l3-agent doesn't affect active routers

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  When a cloud admin shuts down an l3-agent via the API (without evacuating 
routers first), it stands to reason that they would like traffic to stop 
routing via this agent (either for maintenance or maybe because a security 
breach was found...).
  Currently, even when an agent is down, traffic keeps routing unaffected.

  The agent should set all interfaces inside the router namespace to DOWN
  so no more traffic is routed.

  Alternatively, agent should set routers admin state to DOWN, assuming
  this actually affects the router.

  Either way, end result should be - traffic is not routed via agent
  when admin brings it down.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1367596/+subscriptions



[Yahoo-eng-team] [Bug 1366859] Re: Ironic: extra_spec requirement 'amd64' does not match 'x86_64'

2014-09-10 Thread Dan Prince
** Changed in: ironic
   Status: New = Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1366859

Title:
  Ironic: extra_spec requirement 'amd64' does not match 'x86_64'

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Won't Fix
Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Using the latest Nova Ironic compute drivers (either from Ironic or
  Nova) I'm hitting scheduling ERRORS:

  Sep 08 15:26:45 localhost nova-scheduler[29761]: 2014-09-08
  15:26:45.620 29761 DEBUG
  nova.scheduler.filters.compute_capabilities_filter [req-9e34510e-268c-
  40de-8433-d7b41017b54e None] extra_spec requirement 'amd64' does not
  match 'x86_64' _satisfies_extra_specs
  /opt/stack/venvs/nova/lib/python2.7/site-
  packages/nova/scheduler/filters/compute_capabilities_filter.py:70

  I've gone ahead and patched in
  https://review.openstack.org/#/c/117555/.

  The issue seems to be that ComputeCapabilitiesFilter does not itself
  canonicalize instance_types when comparing them, which breaks
  existing TripleO baremetal clouds using x86_64 (amd64).
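The fix amounts to normalizing architecture aliases on both sides before comparing. A minimal sketch of that idea (the alias table is illustrative; nova has its own canonicalization helper):

```python
# Map well-known architecture aliases onto a canonical name, then compare
# canonical forms instead of raw strings.
ARCH_ALIASES = {"amd64": "x86_64", "x64": "x86_64", "i686": "i386"}

def canonicalize(arch):
    arch = arch.lower()
    return ARCH_ALIASES.get(arch, arch)

def satisfies_arch(required, provided):
    return canonicalize(required) == canonicalize(provided)

# 'amd64' and 'x86_64' now match, instead of failing the extra_spec check.
assert satisfies_arch("amd64", "x86_64")
assert not satisfies_arch("amd64", "aarch64")
```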

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1366859/+subscriptions



[Yahoo-eng-team] [Bug 1367845] [NEW] Traceback is logged when deleting an instance.

2014-09-10 Thread Danny Choi
Public bug reported:

OpenStack version: Icehouse

Issue: a traceback is logged in nova-compute.log when deleting an instance.

Setup: 3 nodes, namely Controller, Compute and Network; with nova-
compute running solely on Compute node.

Steps to reproduce:
1. Create an instance using the cirros 0.3.2 image.
2. Verify instance is running: nova list
3. Delete the instance: nova delete name
4. Check nova-compute.log at Compute node.

root@Controller:/home/guest# nova --version
2.17.0
root@Controller:/home/guest# nova service-list
+--++--+-+---++-+
| Binary   | Host   | Zone | Status  | State | Updated_at   
  | Disabled Reason |
+--++--+-+---++-+
| nova-cert| Controller | internal | enabled | up| 
2014-09-10T17:12:34.00 | -   |
| nova-conductor   | Controller | internal | enabled | up| 
2014-09-10T17:12:26.00 | -   |
| nova-consoleauth | Controller | internal | enabled | up| 
2014-09-10T17:12:28.00 | -   |
| nova-scheduler   | Controller | internal | enabled | up| 
2014-09-10T17:12:31.00 | -   |
| nova-compute | Compute| nova | enabled | up| 
2014-09-10T17:12:34.00 | -   |
+--++--+-+---++-+
root@Controller:/home/guest# nova boot --image cirros-0.3.2-x86_64 --flavor 1 
--nic net-id=75375f9b-0f26-4e1a-aedc-24457192f265 cirros
+--++
| Property | Value  
|
+--++
| OS-DCF:diskConfig| MANUAL 
|
| OS-EXT-AZ:availability_zone  | nova   
|
| OS-EXT-SRV-ATTR:host | -  
|
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -  
|
| OS-EXT-SRV-ATTR:instance_name| instance-0046  
|
| OS-EXT-STS:power_state   | 0  
|
| OS-EXT-STS:task_state| scheduling 
|
| OS-EXT-STS:vm_state  | building   
|
| OS-SRV-USG:launched_at   | -  
|
| OS-SRV-USG:terminated_at | -  
|
| accessIPv4   |
|
| accessIPv6   |
|
| adminPass| jFmNDB5Jsd77   
|
| config_drive |
|
| created  | 2014-09-10T17:13:06Z   
|
| flavor   | m1.tiny (1)
|
| hostId   |
|
| id   | bc01c570-c40f-4088-a17c-0278fc6c3315   
|
| image| cirros-0.3.2-x86_64 
(38f00c62-f9df-4133-abf2-7c9ba948d414) |
| key_name | -  
|
| metadata | {} 
|
| name | cirros 
|
| os-extended-volumes:volumes_attached | [] 
|
| progress | 0  
|
| security_groups  | default
|
| status   | BUILD  
|
| tenant_id| 73a095bf078443c9b340d871deaabcc3   
|
| updated  | 2014-09-10T17:13:06Z   
|
| user_id  | 77bdd3f911744f72af7038d40d722439   
|

[Yahoo-eng-team] [Bug 1367780] Re: Can't connect to console in dashboard using haproxy

2014-09-10 Thread Bobby Yakovich
Disregard; the issue is resolved. It was a typo in my config.

** Changed in: horizon
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1367780

Title:
  Can't connect to console in dashboard using haproxy

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  I am having an issue connecting to the console in the Horizon dashboard
  using an HAproxy front end.
  When launching the console I receive 'page cannot be displayed'.
  If I try the novnc URL directly with the token info in a web browser, it
  works fine.
  It appears the Horizon dashboard is passing an incorrect URL string.

  Set up:

  2 HAproxy's in front of 2 controllers.
  compute node novnc URL pointed at public IP of Haproxy, Haproxy forwards 
request to controller (tried both public and private IP of controllers).

  Running ubuntu 14.04 and openstack icehouse

  Error in console:  Not Found
  The requested URL /130.245.183.130:6080/vnc_auto.html was not found on this 
server

  Per the diagnostics, it appears the novnc URL is being appended to the
  Horizon URL, causing an issue locating the novnc instance.
  The first IP is Horizon; the second IP is the HAproxy pointed at the
  controller for novnc. If I just use the controller IP, it works.
  
http://130.245.183.133/130.245.183.130:6080/vnc_auto.html?token=10850e75-8079-4cb0-b781-64be4a505c7ftitle=test1_ub12(60a30b21-e3c7-4663-a71b-9a1cfd6d61df)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1367780/+subscriptions



[Yahoo-eng-team] [Bug 1367858] [NEW] New network mark network address as required field

2014-09-10 Thread Bradley Jones
Public bug reported:

The network address for a subnet is a required field if a subnet is
being created.

This field should be marked with a * to indicate that it must be filled
in.

** Affects: horizon
 Importance: Undecided
 Assignee: Bradley Jones (bradjones)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) = Bradley Jones (bradjones)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1367858

Title:
  New network mark network address as required field

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The network address for a subnet is a required field if a subnet is
  being created.

  This field should be marked with a * to indicate that it must be
  filled in.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1367858/+subscriptions



[Yahoo-eng-team] [Bug 1367860] [NEW] metadef namespace OS::Compute::HostCapabilities description improvement

2014-09-10 Thread Travis Tripp
Public bug reported:

metadef namespace OS::Compute::HostCapabilities description improvement

The OS::Compute::HostCapabilities namespace should include more
information about how the properties are used by Nova.  Let admins know
that the ComputeCapabilitiesFilter needs to be enabled.

** Affects: glance
 Importance: Undecided
 Assignee: Travis Tripp (travis-tripp)
 Status: New


** Tags: low-hanging-fruit

** Changed in: glance
 Assignee: (unassigned) = Travis Tripp (travis-tripp)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1367860

Title:
  metadef namespace OS::Compute::HostCapabilities description
  improvement

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  metadef namespace OS::Compute::HostCapabilities description
  improvement

  The OS::Compute::HostCapabilities namespace should include more
  information about how the properties are used by Nova.  Let admins
  know that the ComputeCapabilitiesFilter needs to be enabled.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1367860/+subscriptions



[Yahoo-eng-team] [Bug 1367864] [NEW] User and group association not deleted after router delete in nuage plugin

2014-09-10 Thread Chirag Shahani
Public bug reported:

In nuage plugin when a router delete operation is performed, the user
and group association is not deleted. This is a bug which is caused by a
check for a nuage zone attached to the router even after the router is
deleted.

This commit associated with this bug will fix the above issue.

** Affects: neutron
 Importance: Undecided
 Assignee: Chirag Shahani (chirag-shahani)
 Status: New


** Tags: nuage

** Changed in: neutron
 Assignee: (unassigned) = Chirag Shahani (chirag-shahani)

** Summary changed:

- User and group not deleted after router delete in nuage plugin
+ User and group association not deleted after router delete in nuage plugin

** Description changed:

  In nuage plugin when a router delete operation is performed, the user
- and group is not deleted. This is a bug which is caused by a check for a
- nuage zone attached to the router even after the router is deleted.
+ and group association is not deleted. This is a bug which is caused by a
+ check for a nuage zone attached to the router even after the router is
+ deleted.
  
  This commit associated with this bug will fix the above issue.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1367864

Title:
  User and group association not deleted after router delete in nuage
  plugin

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In nuage plugin when a router delete operation is performed, the user
  and group association is not deleted. This is a bug which is caused by
  a check for a nuage zone attached to the router even after the router
  is deleted.

  This commit associated with this bug will fix the above issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1367864/+subscriptions



[Yahoo-eng-team] [Bug 1358857] Re: test_load_balancer_basic mismatch error

2014-09-10 Thread Armando Migliaccio
By looking at the gate configuration, this currently tests the LB in the
scenario where there is a single instance with two fake HTTP servers. If
we do get responses during _send_requests, there is no reason to believe
that Neutron is at fault here. I attribute this spurious issue to the
fact that 10 attempts is too low a threshold. If we raise the retries
slightly, the test runtime should not increase much; however, if we end
up getting stuck with one server, then we can say there's something
wrong with a higher degree of confidence.

** Changed in: neutron
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1358857

Title:
  test_load_balancer_basic mismatch error

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in Tempest:
  New

Bug description:
  Gate failed check-tempest-dsvm-neutron-full on this (unrelated) patch change: 
  https://review.openstack.org/#/c/114693/

  http://logs.openstack.org/93/114693/1/check/check-tempest-dsvm-
  neutron-full/2755713/console.html

  
  2014-08-19 01:11:40.597 | ==
  2014-08-19 01:11:40.597 | Failed 1 tests - output below:
  2014-08-19 01:11:40.597 | ==
  2014-08-19 01:11:40.597 | 
  2014-08-19 01:11:40.597 | 
tempest.scenario.test_load_balancer_basic.TestLoadBalancerBasic.test_load_balancer_basic[compute,gate,network,smoke]
  2014-08-19 01:11:40.597 | 

  2014-08-19 01:11:40.597 | 
  2014-08-19 01:11:40.597 | Captured traceback:
  2014-08-19 01:11:40.597 | ~~~
  2014-08-19 01:11:40.598 | Traceback (most recent call last):
  2014-08-19 01:11:40.598 |   File tempest/test.py, line 128, in wrapper
  2014-08-19 01:11:40.598 | return f(self, *func_args, **func_kwargs)
  2014-08-19 01:11:40.598 |   File 
tempest/scenario/test_load_balancer_basic.py, line 297, in 
test_load_balancer_basic
  2014-08-19 01:11:40.598 | self._check_load_balancing()
  2014-08-19 01:11:40.598 |   File 
tempest/scenario/test_load_balancer_basic.py, line 277, in 
_check_load_balancing
  2014-08-19 01:11:40.598 | self._send_requests(self.vip_ip, 
set([server1, server2]))
  2014-08-19 01:11:40.598 |   File 
tempest/scenario/test_load_balancer_basic.py, line 289, in _send_requests
  2014-08-19 01:11:40.598 | set(resp))
  2014-08-19 01:11:40.598 |   File 
/usr/local/lib/python2.7/dist-packages/testtools/testcase.py, line 321, in 
assertEqual
  2014-08-19 01:11:40.598 | self.assertThat(observed, matcher, message)
  2014-08-19 01:11:40.599 |   File 
/usr/local/lib/python2.7/dist-packages/testtools/testcase.py, line 406, in 
assertThat
  2014-08-19 01:11:40.599 | raise mismatch_error
  2014-08-19 01:11:40.599 | MismatchError: set(['server1', 'server2']) != 
set(['server1'])
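The retry-budget argument from the comment above can be sketched as follows (names are illustrative, not the actual tempest code): with only a few requests, one backend can legitimately answer every time, so a low budget makes the membership assertion flaky.

```python
def observed_members(next_response, expected, attempts):
    """Collect responders until all expected members are seen or the
    attempt budget runs out."""
    seen = set()
    for _ in range(attempts):
        seen.add(next_response())
        if seen == expected:
            break
    return seen

expected = {"server1", "server2"}
replies = ["server1"] * 9 + ["server2"]  # unlucky but valid balancing

low = observed_members(iter(replies).__next__, expected, attempts=5)
high = observed_members(iter(replies).__next__, expected, attempts=20)
assert low == {"server1"}   # would trip the MismatchError above
assert high == expected     # a larger budget tolerates the skew
```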

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1358857/+subscriptions



[Yahoo-eng-team] [Bug 1367881] [NEW] l2pop RPC code throwing an exception in fdb_chg_ip_tun()

2014-09-10 Thread Brian Haley
Public bug reported:

I'm seeing an error in the l2pop code where it's failing to add a flow
for the ARP entry responder.

This is sometimes leading to DHCP failures for VMs, although a soft
reboot typically fixes that problem.

Here is the trace:

2014-09-10 15:10:36.954 9351 ERROR neutron.agent.linux.ovs_lib 
[req-de0c2985-1fac-46a8-a42b-f0bad5a43805 None] OVS flows could not be applied 
on bridge br-tun
2014-09-10 15:10:36.954 9351 TRACE neutron.agent.linux.ovs_lib Traceback (most 
recent call last):
2014-09-10 15:10:36.954 9351 TRACE neutron.agent.linux.ovs_lib   File 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py,
 line 407, in _fdb_chg_ip 
2014-09-10 15:10:36.954 9351 TRACE neutron.agent.linux.ovs_lib 
self.local_ip, self.local_vlan_map)
2014-09-10 15:10:36.954 9351 TRACE neutron.agent.linux.ovs_lib   File 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/common/log.py,
 line 36, in wrapper
2014-09-10 15:10:36.954 9351 TRACE neutron.agent.linux.ovs_lib return 
method(*args, **kwargs)
2014-09-10 15:10:36.954 9351 TRACE neutron.agent.linux.ovs_lib   File 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/l2population_rpc.py,
 line 250, in fdb_chg_ip_tun
2014-09-10 15:10:36.954 9351 TRACE neutron.agent.linux.ovs_lib for mac, ip 
in after:
2014-09-10 15:10:36.954 9351 TRACE neutron.agent.linux.ovs_lib TypeError: 
'NoneType' object is not iterable
2014-09-10 15:10:36.954 9351 TRACE neutron.agent.linux.ovs_lib 
2014-09-10 15:10:36.955 9351 ERROR oslo.messaging.rpc.dispatcher 
[req-de0c2985-1fac-46a8-a42b-f0bad5a43805 ] Exception during message handling: 
'NoneType' object is not iterable
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 134, in _dispatch_and_reply
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 177, in _dispatch
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 123, in _do_dispatch
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/common/log.py,
 line 36, in wrapper
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher return 
method(*args, **kwargs)
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/l2population_rpc.py,
 line 55, in update_fdb_entries
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher 
self.fdb_update(context, fdb_entries)
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/common/log.py,
 line 36, in wrapper
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher return 
method(*args, **kwargs)
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/l2population_rpc.py,
 line 212, in fdb_update
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher 
getattr(self, method)(context, values)
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py,
 line 407, in _fdb_chg_ip
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher 
self.local_ip, self.local_vlan_map)
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/common/log.py,
 line 36, in wrapper
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher return 
method(*args, **kwargs)
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/l2population_rpc.py,
 line 250, in fdb_chg_ip_tun
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher for mac, 
ip in after:
2014-09-10 15:10:36.955 9351 TRACE oslo.messaging.rpc.dispatcher TypeError: 
'NoneType' object is not iterable
2014-09-10 15:10:36.955 9351 TRACE 
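The TypeError above comes from the `for mac, ip in after:` loop in fdb_chg_ip_tun receiving None when the FDB diff carries no "after" entries. A minimal sketch of a defensive guard (the function shape and FDB payload here are illustrative, not Neutron's actual l2population code):

```python
def fdb_chg_ip_tun(agent_ports):
    """Sketch: apply per-agent FDB diffs, tolerating missing entries.

    agent_ports maps an agent IP to {"before": [...], "after": [...]};
    either list may be absent or None, which is what triggered the
    TypeError in the traceback above.
    """
    added = []
    for agent_ip, state in agent_ports.items():
        # "state.get(...) or []" guards both a missing key and an explicit
        # None value, so the unpacking loop never sees a non-iterable.
        for mac, ip in (state.get("after") or []):
            added.append((agent_ip, mac, ip))
    return added

# A payload with "after": None no longer raises TypeError:
print(fdb_chg_ip_tun({"10.0.0.5": {"before": [("fa:16:3e:aa", "10.1.0.9")],
                                   "after": None}}))  # -> []
```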

[Yahoo-eng-team] [Bug 1367892] [NEW] delete port fails with RouterNotHostedByL3Agent exception

2014-09-10 Thread Ed Bak
Public bug reported:

When deleting a vm, port_delete sometimes fails with a
RouterNotHostedByL3Agent exception.  This error is created by a script
which boots a vm, associates a floating ip, tests that the vm is
pingable, disassociates the fip and then deletes the vm.  The following
stack trace has been seen multiple times.

2014-09-09 11:55:59 7648 DEBUG neutronclient.v2_0.client 
[req-16883a09-7ec6-4159-9580-9cfa1880f786 73ae929bd62c4eddbe2f38a709265f2b 
3d4668d03b5e4ac7b316aac9ff88e2db] Error message: {NeutronError: {message: 
The router 0ffc5634-d7ff-4bc7-8dca-cbdb10414924 is not hosted by L3 agent 
35f71627-3c41-4226-96dd-15faa6ec44c3., type: RouterNotHostedByL3Agent, 
detail: }} _handle_fault_response 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/neutronclient/v2_0/client.py:1202
2014-09-09 11:55:59 7648 ERROR nova.network.neutronv2.api 
[req-16883a09-7ec6-4159-9580-9cfa1880f786 73ae929bd62c4eddbe2f38a709265f2b 
3d4668d03b5e4ac7b316aac9ff88e2db] Failed to delete neutron port 
41b8e31b-f459-4159-9311-d8701885f43a
2014-09-09 11:55:59.153 7648 TRACE nova.network.neutronv2.api Traceback (most 
recent call last):
2014-09-09 11:55:59.153 7648 TRACE nova.network.neutronv2.api File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/network/neutronv2/api.py,
 line 448, in deallocate_for_instance
2014-09-09 11:55:59.153 7648 TRACE nova.network.neutronv2.api 
neutron.delete_port(port)
2014-09-09 11:55:59.153 7648 TRACE nova.network.neutronv2.api File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/neutronclient/v2_0/client.py,
 line 101, in with_params
2014-09-09 11:55:59.153 7648 TRACE nova.network.neutronv2.api ret = 
self.function(instance, *args, **kwargs)
2014-09-09 11:55:59.153 7648 TRACE nova.network.neutronv2.api File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/neutronclient/v2_0/client.py,
 line 328, in delete_port
2014-09-09 11:55:59.153 7648 TRACE nova.network.neutronv2.api return 
self.delete(self.port_path % (port))
2014-09-09 11:55:59.153 7648 TRACE nova.network.neutronv2.api File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/neutronclient/v2_0/client.py,
 line 1311, in delete
2014-09-09 11:55:59.153 7648 TRACE nova.network.neutronv2.api headers=headers, 
params=params)
2014-09-09 11:55:59.153 7648 TRACE nova.network.neutronv2.api File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/neutronclient/v2_0/client.py,
 line 1300, in retry_request
2014-09-09 11:55:59.153 7648 TRACE nova.network.neutronv2.api headers=headers, 
params=params)
2014-09-09 11:55:59.153 7648 TRACE nova.network.neutronv2.api File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/neutronclient/v2_0/client.py,
 line 1243, in do_request
2014-09-09 11:55:59.153 7648 TRACE nova.network.neutronv2.api 
self._handle_fault_response(status_code, replybody)
2014-09-09 11:55:59.153 7648 TRACE nova.network.neutronv2.api File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/neutronclient/v2_0/client.py,
 line 1211, in _handle_fault_response
2014-09-09 11:55:59.153 7648 TRACE nova.network.neutronv2.api 
exception_handler_v20(status_code, des_error_body)
2014-09-09 11:55:59.153 7648 TRACE nova.network.neutronv2.api File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/neutronclient/v2_0/client.py,
 line 68, in exception_handler_v20
2014-09-09 11:55:59.153 7648 TRACE nova.network.neutronv2.api 
status_code=status_code)
2014-09-09 11:55:59.153 7648 TRACE nova.network.neutronv2.api Conflict: The 
router 0ffc5634-d7ff-4bc7-8dca-cbdb10414924 is not hosted by L3 agent 
35f71627-3c41-4226-96dd-15faa6ec44c3.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1367892

Title:
  delete port fails with RouterNotHostedByL3Agent exception

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When deleting a vm, port_delete sometimes fails with a
  RouterNotHostedByL3Agent exception.  This error is created by a script
  which boots a vm, associates a floating ip, tests that the vm is
  pingable, disassociates the fip and then deletes the vm.  The
  following stack trace has been seen multiple times.

  2014-09-09 11:55:59 7648 DEBUG neutronclient.v2_0.client 
[req-16883a09-7ec6-4159-9580-9cfa1880f786 73ae929bd62c4eddbe2f38a709265f2b 
3d4668d03b5e4ac7b316aac9ff88e2db] Error message: {NeutronError: {message: 
The router 0ffc5634-d7ff-4bc7-8dca-cbdb10414924 is not hosted by L3 agent 
35f71627-3c41-4226-96dd-15faa6ec44c3., type: RouterNotHostedByL3Agent, 
detail: }} _handle_fault_response 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/neutronclient/v2_0/client.py:1202
  2014-09-09 11:55:59 7648 ERROR nova.network.neutronv2.api 
[req-16883a09-7ec6-4159-9580-9cfa1880f786 73ae929bd62c4eddbe2f38a709265f2b 
3d4668d03b5e4ac7b316aac9ff88e2db] Failed to delete neutron port 

[Yahoo-eng-team] [Bug 1367899] [NEW] cloud-init rsyslog config uses deprecated syntax

2014-09-10 Thread Craig Miskell
Public bug reported:

The rsyslog config snippet /etc/rsyslog.d/21-cloudinit.conf ends with the line
 & ~

As of Trusty (well, after Precise) this syntax is deprecated in the shipped 
rsyslog, resulting in a warning message at rsyslog startup, and should be 
replaced with
 & stop
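For context, a before/after sketch of the snippet, assuming it uses rsyslog's legacy "& ~" discard action (the filter line is illustrative; the report only quotes the trailing discard line):

```
# /etc/rsyslog.d/21-cloudinit.conf -- legacy form (warns on Trusty's rsyslog):
:syslogtag, isequal, "[CLOUDINIT]" /var/log/cloud-init.log
& ~

# Non-deprecated replacement:
:syslogtag, isequal, "[CLOUDINIT]" /var/log/cloud-init.log
& stop
```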

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1367899

Title:
  cloud-init rsyslog config uses deprecated syntax

Status in Init scripts for use on cloud images:
  New

Bug description:
  The rsyslog config snippet /etc/rsyslog.d/21-cloudinit.conf ends with the line
   & ~

  As of Trusty (well, after Precise) this syntax is deprecated in the shipped 
rsyslog, resulting in a warning message at rsyslog startup, and should be 
replaced with
   & stop

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1367899/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367908] [NEW] Update glance docs to reflect the changes in metadefs api's

2014-09-10 Thread Lakshmi N Sampath
Public bug reported:

Glance docs need to be updated to synchronize with the changes in the metadefs APIs:
 - rename resource_type to resource_type_associations in the namespace API 
input/output
 - add created_at/updated_at in the resource_type_associations block of the 
namespace API input/output.
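For illustration, a before/after sketch of the namespace API body the changes above describe (field values are hypothetical):

```
Before:
  "resource_type": [{"name": "OS::Nova::Flavor", "prefix": "hw:"}]

After:
  "resource_type_associations": [{"name": "OS::Nova::Flavor",
                                  "prefix": "hw:",
                                  "created_at": "2014-09-10T12:00:00Z",
                                  "updated_at": "2014-09-10T12:00:00Z"}]
```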

** Affects: glance
 Importance: Undecided
 Assignee: Lakshmi N Sampath (lakshmi-sampath)
 Status: New


** Tags: low-hanging-fruit

** Changed in: glance
 Assignee: (unassigned) => Lakshmi N Sampath (lakshmi-sampath)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1367908

Title:
  Update glance docs to reflect the changes in metadefs api's

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Glance docs need to be updated to synchronize with the changes in the metadefs 
APIs:
   - rename resource_type to resource_type_associations in the namespace API 
input/output
   - add created_at/updated_at in the resource_type_associations block of the 
namespace API input/output.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1367908/+subscriptions



[Yahoo-eng-team] [Bug 1367918] [NEW] Xenapi attached volume with no VM leaves instance in undeletable state

2014-09-10 Thread Andrew Laski
Public bug reported:

As shown by the stack trace below, when a volume is attached but the VM
is not present, the volume can't be cleaned up by Cinder and an exception
is raised which puts the instance into an error state.  The volume
attachment isn't removed because an if statement is hit in the xenapi
destroy method which logs "VM is not present, skipping destroy..." and
then moves on to trying to clean up the volume in Cinder.  This is
because most operations in xen rely on finding the vm_ref and then
cleaning up resources that are attached there.  But if the volume is
attached to an SR but not associated with an instance, it ends up being
orphaned.
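One possible shape of a fix is to run the volume-attachment cleanup even when the vm_ref lookup fails. A minimal sketch under that assumption (function and callback names are illustrative, not nova's actual xenapi driver code):

```python
def destroy(vm_ref, attached_volumes, terminate_connection, delete_bdm):
    """Sketch of a destroy path that still cleans up volume attachments
    when the VM itself is already gone (vm_ref is None)."""
    if vm_ref is None:
        # The current driver logs "VM is not present, skipping destroy..."
        # and effectively stops here, orphaning SR-attached volumes.
        # This sketch instead falls through to the attachment cleanup.
        pass
    for vol in attached_volumes:
        terminate_connection(vol)  # disconnect the volume from the SR/host
        delete_bdm(vol)            # remove the block_device_mapping row

cleaned = []
destroy(None, ["vol-1", "vol-2"],
        terminate_connection=cleaned.append,
        delete_bdm=lambda v: None)
print(cleaned)  # -> ['vol-1', 'vol-2']
```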


014-08-29 15:54:02.836 8766 DEBUG nova.volume.cinder 
[req-341cd17d-0f2f-4d64-929f-a94f8c0fa295 None] Cinderclient connection created 
using URL: https://localhost/v1/tenant
cinderclient 
/opt/rackstack/879.28/nova/lib/python2.6/site-packages/nova/volume/cinder.py:108
2014-08-29 15:54:03.251 8766 ERROR nova.compute.manager 
[req-341cd17d-0f2f-4d64-929f-a94f8c0fa295 None] [instance: uuid] Setting 
instance vm_state to ERROR
2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: uuid] 
Traceback (most recent call last):
2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: uuid] File 
/opt/rackstack/879.28/nova/lib/python2.6/site-packages/nova/compute/manager.py,
 line 2443, in do_terminate_instance
2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: uuid] 
self._delete_instance(context, instance, bdms, quotas)
2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: uuid] File 
/opt/rackstack/879.28/nova/lib/python2.6/site-packages/nova/hooks.py, line 
131, in inner
2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: uuid] rv = 
f(*args, **kwargs)
2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: uuid] File 
/opt/rackstack/879.28/nova/lib/python2.6/site-packages/nova/compute/manager.py,
 line 2412, in delete_instance
2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: uuid] 
quotas.rollback()
2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: uuid] File 
/opt/rackstack/879.28/nova/lib/python2.6/site-packages/nova/openstack/common/excutils.py,
 line 82, in exit
2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: uuid] 
six.reraise(self.type, self.value, self.tb)
2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: uuid] File 
/opt/rackstack/879.28/nova/lib/python2.6/site-packages/nova/compute/manager.py,
 line 2390, in _delete_instance
2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: uuid] 
self._shutdown_instance(context, instance, bdms)
2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: uuid] File 
/opt/rackstack/879.28/nova/lib/python2.6/site-packages/nova/compute/manager.py,
 line 2335, in _shutdown_instance
2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: uuid] 
connector)
2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: uuid] File 
/opt/rackstack/879.28/nova/lib/python2.6/site-packages/nova/volume/cinder.py, 
line 189, in wrapper
2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: uuid] res 
= method(self, ctx, volume_id, *args, **kwargs)
2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: uuid] File 
/opt/rackstack/879.28/nova/lib/python2.6/site-packages/nova/volume/cinder.py, 
line 309, in terminate_connection
2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: uuid] 
connector)
2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: uuid] File 
/opt/rackstack/879.28/nova/lib/python2.6/site-packages/cinderclient/v1/volumes.py,
 line 331, in terminate_connection
2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: uuid] 
{'connector': connector})
2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: uuid] File 
/opt/rackstack/879.28/nova/lib/python2.6/site-packages/cinderclient/v1/volumes.py,
 line 250, in _action
2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: uuid] 
return self.api.client.post(url, body=body)
2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: uuid] File 
/opt/rackstack/879.28/nova/lib/python2.6/site-packages/cinderclient/client.py,
 line 223, in post
2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: uuid] 
return self._cs_request(url, 'POST', **kwargs)
2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: uuid] File 
/opt/rackstack/879.28/nova/lib/python2.6/site-packages/cinderclient/client.py,
 line 187, in _cs_request
2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: uuid] 
**kwargs)
2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: uuid] File 
/opt/rackstack/879.28/nova/lib/python2.6/site-packages/cinderclient/client.py,
 line 170, in request
2014-08-29 15:54:03.251 8766 TRACE nova.compute.manager [instance: uuid] 
raise 

[Yahoo-eng-team] [Bug 1360260] Re: 'allow_same_net_traffic=true' has no effect

2014-09-10 Thread Sean Dague
It's intended behavior that security groups always trump this option.
This is possibly a documentation fix to ensure that it's clear.

** Tags removed: allowsamenettraffic nova security
** Tags added: network

** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1360260

Title:
  'allow_same_net_traffic=true' has no effect

Status in OpenStack Compute (Nova):
  Won't Fix
Status in OpenStack Manuals:
  New

Bug description:
  environment: Ubuntu trusty, icehouse from repos. 
  Setup per 'Openstack Installation Guide for Ubuntu 12.04/14.04 LTS' 

  **brief**

  two instances X and Y are members of security group A. Despite the
  following explicit setting in nova.conf:

  allow_same_net_traffic=True

  ...the instances are only allowed to communicate according to the
  rules defined in security group A.

  
  **detail**

  I first noticed this attempting to run iperf between two instances on
  the same security network; they were unable to connect via the default
  TCP port 5001.

  They were able to ping...looking at rules for the security group they
  are are associated with, ping was allowed, so I then suspected the
  security group rules were being applied to all communication, despite
  them being on the same security group.

  To test, I added rules to group A that allowed all communication, and
  associated the rules with itself (i.e. security group A) and voila,
  they could talk!

  I then thought I had remembered incorrectly that by default all
  traffic is allowed between instances on the same security group, so I
  double-checked the documentation, but according to the documentation I
  had remembered correctly:

  allow_same_net_traffic = True (BoolOpt) Whether to allow network
  traffic from same network

  ...I searched through my nova.conf files, but there was no
  'allow_same_net_traffic' entry, so the default ought to be True,
  right? Just to be sure, I explicitly added:

  allow_same_net_traffic = True

  to nova.conf and restarted nova services, but the security group rules
  are still being applied to communication between instances that are
  associated with the same security group.

  I thought the 'default' security group might be a special case, so I
  tested on another security group, but still get the same behaviour.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1360260/+subscriptions



[Yahoo-eng-team] [Bug 1360260] Re: 'allow_same_net_traffic=true' has no effect

2014-09-10 Thread Sean Dague
Apparently, I'm wrong. Vishy said the behavior changed with:
https://review.openstack.org/#/c/4110/

** Changed in: nova
   Status: Won't Fix => Confirmed

** Changed in: nova
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1360260

Title:
  'allow_same_net_traffic=true' has no effect

Status in OpenStack Compute (Nova):
  Incomplete
Status in OpenStack Manuals:
  New

Bug description:
  environment: Ubuntu trusty, icehouse from repos. 
  Setup per 'Openstack Installation Guide for Ubuntu 12.04/14.04 LTS' 

  **brief**

  two instances X and Y are members of security group A. Despite the
  following explicit setting in nova.conf:

  allow_same_net_traffic=True

  ...the instances are only allowed to communicate according to the
  rules defined in security group A.

  
  **detail**

  I first noticed this attempting to run iperf between two instances on
  the same security network; they were unable to connect via the default
  TCP port 5001.

  They were able to ping...looking at rules for the security group they
  are are associated with, ping was allowed, so I then suspected the
  security group rules were being applied to all communication, despite
  them being on the same security group.

  To test, I added rules to group A that allowed all communication, and
  associated the rules with itself (i.e. security group A) and voila,
  they could talk!

  I then thought I had remembered incorrectly that by default all
  traffic is allowed between instances on the same security group, so I
  double-checked the documentation, but according to the documentation I
  had remembered correctly:

  allow_same_net_traffic = True (BoolOpt) Whether to allow network
  traffic from same network

  ...I searched through my nova.conf files, but there was no
  'allow_same_net_traffic' entry, so the default ought to be True,
  right? Just to be sure, I explicitly added:

  allow_same_net_traffic = True

  to nova.conf and restarted nova services, but the security group rules
  are still being applied to communication between instances that are
  associated with the same security group.

  I thought the 'default' security group might be a special case, so I
  tested on another security group, but still get the same behaviour.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1360260/+subscriptions



[Yahoo-eng-team] [Bug 1367944] [NEW] tenant usage information api is consuming lot of memory

2014-09-10 Thread Tushar Patil
Public bug reported:

I have noticed that when the tenant usage information API is invoked for
a particular tenant owning a large number of instances (both active and
terminated), there is a sudden increase in nova-api process memory
consumption, from 500 MB up to 2.3 GB.

It is due to a SQL query retrieving a large number of
instance_system_metadata records for those instances using a WHERE ... IN
clause.

At the time of getting the tenant usage information, there were approx.
120,000 instances in the db for a particular tenant (a few active and
the rest terminated).

Also, this plugin unnecessarily fetches the following information about
the instances from the db, further degrading the performance of the API:
1. metadata
2. info_cache
3. security_groups
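A common mitigation for this pattern is to chunk the id list so that each query, and each result set held in memory, stays bounded. A minimal sketch, independent of nova's actual DB API (the fetch callback stands in for the real query):

```python
def query_in_chunks(ids, fetch, chunk_size=500):
    """Run fetch(subset) over bounded slices of ids instead of one huge
    WHERE ... IN (...) query, streaming the merged results."""
    for start in range(0, len(ids), chunk_size):
        for row in fetch(ids[start:start + chunk_size]):
            yield row

# Stand-in fetch that returns one "row" per id; 120,000 ids would be
# issued as 240 queries of 500 ids each rather than one giant IN list.
rows = list(query_in_chunks(list(range(1200)), lambda subset: subset))
print(len(rows))  # -> 1200
```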

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: ntt

** Tags added: ntt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367944

Title:
  tenant usage information api is consuming lot of memory

Status in OpenStack Compute (Nova):
  New

Bug description:
  I have noticed that when a tenant usage information API is invoked for
  a particular tenant owning large number of instances (both active 
  terminated), then I see a sudden increase in nova-api process memory
  consumption from 500 MB up to 2.3 GB.

  It is due to a SQL retrieving large number of records of
  instance_system_metadata for instances using where in clause.

  At the time of getting tenant usage information, I had approx. 120,000
  instances in the db for a particular tenant (few were active and
  remaining terminated)

  Also in this plugin, it unnecessarily  gets following information of the 
instances from the db further degrading the performance of the API.
  1. metadata
  2. info_cache
  3. security_groups

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1367944/+subscriptions



[Yahoo-eng-team] [Bug 1367964] [NEW] Unable to recover from timeout of detaching cinder volume

2014-09-10 Thread Tomoki Sekiyama
Public bug reported:

When cinder-volume is under heavy load, the RPC call for terminate_connection of
a cinder volume may take more time than the RPC timeout.
When the timeout occurs, nova gives up detaching the volume and reverts the
volume state to 'in-use', but doesn't reattach the volume.
This leaves the DB in an inconsistent state:

  (1) libvirt has already detached the volume from the instance
  (2) the cinder volume is disconnected from the host by the terminate_connection
RPC (but nova doesn't know this because of the timeout)
  (3) the nova.block_device_mapping entry still remains because of the timeout in (2)

and the volume becomes impossible to re-attach or to detach completely.
If volume-detach is issued again, it will fail with the exception
exception.DiskNotFound:


2014-07-17 10:58:17.333 2586 AUDIT nova.compute.manager 
[req-e251f834-9653-47aa-969c-b9524d4a683d f8c2ac613325450fa6403a89d48ac644 
4be531199d5240f79733fb071e090e46] [instance: 
48c19bff-ec39-44c5-a63b-cac01ee813eb] Detach volume 
f7d90bc8-eb55-4d46-a2c4-294dc9c6a92a from mountpoint /dev/vdb
2014-07-17 10:58:17.337 2586 ERROR nova.compute.manager 
[req-e251f834-9653-47aa-969c-b9524d4a683d f8c2ac613325450fa6403a89d48ac644 
4be531199d5240f79733fb071e090e46] [instance: 
48c19bff-ec39-44c5-a63b-cac01ee813eb] Failed to detach volume 
f7d90bc8-eb55-4d46-a2c4-294dc9c6a92a from /dev/vdb
2014-07-17 10:58:17.337 2586 TRACE nova.compute.manager [instance: 
48c19bff-ec39-44c5-a63b-cac01ee813eb] Traceback (most recent call last):
2014-07-17 10:58:17.337 2586 TRACE nova.compute.manager [instance: 
48c19bff-ec39-44c5-a63b-cac01ee813eb]   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 4169, in 
_detach_volume
2014-07-17 10:58:17.337 2586 TRACE nova.compute.manager [instance: 
48c19bff-ec39-44c5-a63b-cac01ee813eb] encryption=encryption)
2014-07-17 10:58:17.337 2586 TRACE nova.compute.manager [instance: 
48c19bff-ec39-44c5-a63b-cac01ee813eb]   File 
/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py, line 1365, in 
detach_volume
2014-07-17 10:58:17.337 2586 TRACE nova.compute.manager [instance: 
48c19bff-ec39-44c5-a63b-cac01ee813eb] raise 
exception.DiskNotFound(location=disk_dev)
2014-07-17 10:58:17.337 2586 TRACE nova.compute.manager [instance: 
48c19bff-ec39-44c5-a63b-cac01ee813eb] DiskNotFound: No disk at vdb
2014-07-17 10:58:17.337 2586 TRACE nova.compute.manager [instance: 
48c19bff-ec39-44c5-a63b-cac01ee813eb] 


We should have a way to recover from this situation.

For instance, we need something like volume-detach --force
which ignores the DiskNotFound exception and continues to delete the
nova.block_device_mapping entry.
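The proposed forced detach could look roughly like this sketch: treat DiskNotFound from the hypervisor as "already detached" and continue the remaining cleanup (the exception class and callback names here are illustrative stand-ins, not nova's actual code):

```python
class DiskNotFound(Exception):
    """Stand-in for nova's exception.DiskNotFound (illustrative)."""

def guest_detach():
    # Simulates libvirt/xenapi failing because the disk is already gone.
    raise DiskNotFound("No disk at vdb")

def detach_volume(driver_detach, terminate_connection, delete_bdm, force=False):
    """Sketch: finish host-side and DB cleanup even when the guest disk
    is already gone, instead of erroring out as in the traceback above."""
    try:
        driver_detach()
    except DiskNotFound:
        if not force:
            raise
        # force mode: treat "disk already gone from the guest" as success
        # and continue with the remaining cleanup steps.
    terminate_connection()  # disconnect the volume on the cinder side
    delete_bdm()            # drop the block_device_mapping entry

steps = []
detach_volume(guest_detach,
              terminate_connection=lambda: steps.append("terminate"),
              delete_bdm=lambda: steps.append("bdm"),
              force=True)
print(steps)  # -> ['terminate', 'bdm']
```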

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367964

Title:
  Unable to recover from timeout of detaching cinder volume

Status in OpenStack Compute (Nova):
  New

Bug description:
  When cinder-volume is under heavy load, the RPC call for terminate_connection
of a cinder volume may take more time than the RPC timeout.
  When the timeout occurs, nova gives up detaching the volume and reverts the
volume state to 'in-use', but doesn't reattach the volume.
  This leaves the DB in an inconsistent state:

    (1) libvirt has already detached the volume from the instance
    (2) the cinder volume is disconnected from the host by the
terminate_connection RPC (but nova doesn't know this because of the timeout)
    (3) the nova.block_device_mapping entry still remains because of the timeout
in (2)

  and the volume becomes impossible to re-attach or to detach completely.
  If volume-detach is issued again, it will fail with the exception 
exception.DiskNotFound:

  
  2014-07-17 10:58:17.333 2586 AUDIT nova.compute.manager 
[req-e251f834-9653-47aa-969c-b9524d4a683d f8c2ac613325450fa6403a89d48ac644 
4be531199d5240f79733fb071e090e46] [instance: 
48c19bff-ec39-44c5-a63b-cac01ee813eb] Detach volume 
f7d90bc8-eb55-4d46-a2c4-294dc9c6a92a from mountpoint /dev/vdb
  2014-07-17 10:58:17.337 2586 ERROR nova.compute.manager 
[req-e251f834-9653-47aa-969c-b9524d4a683d f8c2ac613325450fa6403a89d48ac644 
4be531199d5240f79733fb071e090e46] [instance: 
48c19bff-ec39-44c5-a63b-cac01ee813eb] Failed to detach volume 
f7d90bc8-eb55-4d46-a2c4-294dc9c6a92a from /dev/vdb
  2014-07-17 10:58:17.337 2586 TRACE nova.compute.manager [instance: 
48c19bff-ec39-44c5-a63b-cac01ee813eb] Traceback (most recent call last):
  2014-07-17 10:58:17.337 2586 TRACE nova.compute.manager [instance: 
48c19bff-ec39-44c5-a63b-cac01ee813eb]   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 4169, in 
_detach_volume
  2014-07-17 10:58:17.337 2586 TRACE nova.compute.manager [instance: 
48c19bff-ec39-44c5-a63b-cac01ee813eb] encryption=encryption)
  2014-07-17 10:58:17.337 2586 TRACE nova.compute.manager [instance: 
48c19bff-ec39-44c5-a63b-cac01ee813eb]   File 

[Yahoo-eng-team] [Bug 1367968] [NEW] Data loss on reboot following Ceph volume-backed instance rebuild

2014-09-10 Thread Tim Goddard
Public bug reported:

Under particular circumstances, volume backed instances that are rebuilt
may revert to a pre-rebuild state, losing any data added since that
rebuild.

Our environment:

* OpenStack Havana
* Ceph Emperor Backend
* Using Ceph for Glance, Cinder and Object Store

All instances were created and started through Horizon.

Steps to reproduce:

* Create a new instance, using Instance Boot Source: Boot from image (creates a 
new volume).
  - Note at this point the instance details will show Image Name: (not found)
* Log in through SSH, create some identifiable pre-rebuild test files.
* Rebuild Instance, selecting the same original image.
  - Note at this point the instance details will show, for example, Image Name: 
ubuntu-12.04-x86_64
* Log back in through SSH. As expected the test files should not be present.
* Create a second set of post-rebuild test files.
* Shut down the instance from inside itself - shutdown -h now.
  - relevant note - bug was *not* triggered by a soft restart.
* Start the instance again from Horizon.
* Log in. The post-rebuild test files will no longer be present, and the 
pre-rebuild test files re-appear. The instance has reverted to its pre-rebuild state.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367968

Title:
  Data loss on reboot following Ceph volume-backed instance rebuild

Status in OpenStack Compute (Nova):
  New

Bug description:
  Under particular circumstances, volume backed instances that are
  rebuilt may revert to a pre-rebuild state, losing any data added since
  that rebuild.

  Our environment:

  * OpenStack Havana
  * Ceph Emperor Backend
  * Using Ceph for Glance, Cinder and Object Store

  All instances were created and started through Horizon.

  Steps to reproduce:

  * Create a new instance, using Instance Boot Source: Boot from image (creates 
a new volume).
- Note at this point the instance details will show Image Name: (not found)
  * Log in through SSH, create some identifiable pre-rebuild test files.
  * Rebuild Instance, selecting the same original image.
- Note at this point the instance details will show, for example, Image 
Name: ubuntu-12.04-x86_64
  * Log back in through SSH. As expected the test files should not be present.
  * Create a second set of post-rebuild test files.
  * Shut down the instance from inside itself - shutdown -h now.
- relevant note - bug was *not* triggered by a soft restart.
  * Start the instance again from Horizon.
  * Log in. The post-rebuild test files will no longer be present, and the 
pre-rebuild test files re-appear. The instance has reverted to its pre-rebuild state.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1367968/+subscriptions



[Yahoo-eng-team] [Bug 1367976] [NEW] Horizon failure when floating IP and port are in different projects

2014-09-10 Thread Brad Pokorny
Public bug reported:

The Admin -> Overview screen returns a 500 error page if a floating IP
is assigned to a different project than one of the ports associated with
the floating IP.

Example floating IP and port:

$ neutron floatingip-show 9ebcde9f-9d9c-497d-bace-799a5fb8de56
+-+--+
| Field   | Value|
+-+--+
| fixed_ip_address| 10.100.0.253 |
| floating_ip_address | 10.116.126.157   |
| floating_network_id | 76c8b58a-5d59-45ec-a48a-6ed22f3648f4 |
| id  | 9ebcde9f-9d9c-497d-bace-799a5fb8de56 |
| port_id | d87e7af3-ad35-409b-b60d-88ebc49f6931 |
| router_id   |  |
| tenant_id   | 3013df710d604ed68ce8e3daf8089386 |
+-+--+
$ neutron port-show d87e7af3-ad35-409b-b60d-88ebc49f6931
+-----------------------+-----------------------------------------------------------------------------+
| Field                 | Value                                                                       |
+-----------------------+-----------------------------------------------------------------------------+
| admin_state_up        | True                                                                        |
| binding:capabilities  | {port_filter: true}                                                         |
| binding:vif_type      | vrouter                                                                     |
| device_id             | 82193f2f-cae1-436e-a1ca-d4a81f753967                                        |
| device_owner          |                                                                             |
| fixed_ips             | {subnet_id: 495f8399-ef52-4a28-9c57-7321148bf38d, ip_address: 10.100.0.253} |
| id                    | d87e7af3-ad35-409b-b60d-88ebc49f6931                                        |
| mac_address           | 02:d8:7e:7a:f3:ad                                                           |
| name                  | d87e7af3-ad35-409b-b60d-88ebc49f6931                                        |
| network_id            | 915b3c98-1ffd-4a26-a193-be8ef55c6999                                        |
| port_security_enabled | True                                                                        |
| security_groups       | 42714b40-acfe-4231-a6ca-5b5bd7145a45                                        |
| status                | ACTIVE                                                                      |
| tenant_id             | 2a62c9ba436e46c59a9621a7f1618a83                                            |
+-----------------------+-----------------------------------------------------------------------------+


Error seen on the Horizon page when debug is enabled:

KeyError at /admin/

u'd87e7af3-ad35-409b-b60d-88ebc49f6931'

Request Method: GET
Request URL:http://example.net/horizon/admin/
Django Version: 1.6.1
Exception Type: KeyError
Exception Value:

u'd87e7af3-ad35-409b-b60d-88ebc49f6931'

Exception Location: 
/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/neutron.py
 in list, line 340
Python Executable:  /usr/bin/python
Python Version: 2.7.6
Python Path:

['/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../..',
 '/usr/lib/python2.7',
 '/usr/lib/python2.7/plat-x86_64-linux-gnu',
 '/usr/lib/python2.7/lib-tk',
 '/usr/lib/python2.7/lib-old',
 '/usr/lib/python2.7/lib-dynload',
 '/usr/local/lib/python2.7/dist-packages',
 '/usr/lib/python2.7/dist-packages',
 '/usr/lib/pymodules/python2.7',
 '/usr/share/openstack-dashboard/',
 '/usr/share/openstack-dashboard/openstack_dashboard']
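
The traceback points at a dict lookup keyed by port id that assumes every
port referenced by a floating IP appears in the retrieved port list; a port
owned by another project can be absent. Below is a minimal sketch of the
failure mode and a defensive fix -- the names (`attach_port_info`,
`port_dict`, `fixed_ip`) are illustrative assumptions, not Horizon's actual
code:

```python
def attach_port_info(floating_ips, ports):
    """Annotate each floating IP with its port's fixed IP, if known."""
    # Index ports by id; ports from other projects may simply be missing.
    port_dict = {p["id"]: p for p in ports}
    result = []
    for fip in floating_ips:
        # dict.get() returns None for a missing key instead of raising
        # the KeyError seen on the Admin overview page.
        port = port_dict.get(fip["port_id"])
        fip = dict(fip)
        fip["fixed_ip"] = port["fixed_ip"] if port else None
        result.append(fip)
    return result

fips = [{"id": "fip-1", "port_id": "port-a"},
        {"id": "fip-2", "port_id": "port-in-other-project"}]
ports = [{"id": "port-a", "fixed_ip": "10.100.0.253"}]
annotated = attach_port_info(fips, ports)
print([f["fixed_ip"] for f in annotated])
```

With this pattern the cross-project floating IP simply renders without a
fixed IP instead of taking down the whole page.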

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1367976

Title:
  Horizon failure when floating IP and port are in different projects

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The Admin -> Overview screen returns a 500 error page if a floating IP
  is assigned to a different project than one of the ports associated
  with the floating IP.

  Example floating IP and port:

  $ neutron floatingip-show 9ebcde9f-9d9c-497d-bace-799a5fb8de56
  +-+--+
  | Field   | Value|
  +-+--+
  | fixed_ip_address| 10.100.0.253 |
  | floating_ip_address | 

[Yahoo-eng-team] [Bug 1367982] [NEW] ERROR [tempest.scenario.test_volume_boot_pattern] ssh to server failed

2014-09-10 Thread John Griffith
Public bug reported:

Failure encountered in gate testing dsvm-full

http://logs.openstack.org/98/120298/2/check/check-tempest-dsvm-
full/a739161/console.html#_2014-09-10_15_53_23_821

It appears that the volume was created and Nova reported the instance as
booted successfully; however, the SSH connection timed out. I haven't
looked closely enough to tell whether this is an issue in the test itself.

There are some issues related to instance state listed in the n-cpu logs
of this run, but they don't appear to be related to this specific test.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367982

Title:
  ERROR [tempest.scenario.test_volume_boot_pattern] ssh to server
  failed

Status in OpenStack Compute (Nova):
  New

Bug description:
  Failure encountered in gate testing dsvm-full

  http://logs.openstack.org/98/120298/2/check/check-tempest-dsvm-
  full/a739161/console.html#_2014-09-10_15_53_23_821

  It appears that the volume was created and Nova reported the instance
  as booted successfully; however, the SSH connection timed out. I
  haven't looked closely enough to tell whether this is an issue in the
  test itself.

  There are some issues related to instance state listed in the n-cpu
  logs of this run, but they don't appear to be related to this specific
  test.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1367982/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367981] [NEW] Nova instance config drive Metadata Definition

2014-09-10 Thread Travis Tripp
Public bug reported:

A nova Juno FFE landed to support setting the img_config_drive property
on images to require images to be booted with a config drive.  The
Glance Metadata Definitions should include this property.

See Nova Blueprint: https://blueprints.launchpad.net/nova/+spec/config-
drive-image-property
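
As a rough sketch, the new metadata definition could be a JSON namespace
file along these lines. The namespace name and schema fields here are
assumptions based on the general glance metadefs format, and the enum
values follow the nova spec ('optional' vs. 'mandatory'):

```json
{
    "namespace": "OS::Compute::ConfigDrive",
    "display_name": "Config Drive",
    "description": "Properties controlling config drive behaviour for an image.",
    "properties": {
        "img_config_drive": {
            "title": "Config drive",
            "description": "Whether the image requires a config drive.",
            "type": "string",
            "enum": ["optional", "mandatory"]
        }
    }
}
```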

** Affects: glance
 Importance: Undecided
 Assignee: Travis Tripp (travis-tripp)
 Status: New


** Tags: low-hanging-fruit

** Changed in: glance
 Assignee: (unassigned) => Travis Tripp (travis-tripp)

** Description changed:

- A nova FFE landed to support setting the img_config_drive property on
- images to require images to be booted with a config drive.  The Glance
- Metadata Definitions should include this property.
+ A nova Juno FFE landed to support setting the img_config_drive property
+ on images to require images to be booted with a config drive.  The
+ Glance Metadata Definitions should include this property.
  
  See Nova Blueprint: https://blueprints.launchpad.net/nova/+spec/config-
  drive-image-property

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1367981

Title:
  Nova instance config drive Metadata Definition

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  A nova Juno FFE landed to support setting the img_config_drive
  property on images to require images to be booted with a config drive.
  The Glance Metadata Definitions should include this property.

  See Nova Blueprint: https://blueprints.launchpad.net/nova/+spec
  /config-drive-image-property

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1367981/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367999] [NEW] live-migration causes VM network disconnected forever

2014-09-10 Thread Li Ma
Public bug reported:

OS: RHEL 6.5
OpenStack: RDO icehouse and master
Neutron: Linuxbridge + VxLAN + L2pop
Testbed: 1 controller node + 2 compute nodes + 1 network node

Reproduction procedure:

1. Start to ping VM from qrouter namespace using fixed IP
Start to ping VM from outside using floating IP

2. Live-migrate one VM from compute1 to compute2

3. VM Network disconnects after several seconds

4. Even after Nova reports that the migration has finished,
ping is still not working.

Debug Info on network node:

Command: ['sudo', 'bridge', 'fdb', 'add', 'fa:16:3e:b3:fd:27', 'dev', 
'vxlan-1', 'dst', '192.168.2.103']
Exit code: 2
Stdout: ''
Stderr: 'RTNETLINK answers: File exists\n'

Cause:
Before migration, the original fdb entry is already present. After migration, 
l2pop updates the VM's fdb entry by adding a new one, which causes the error 
above.

The right operation should be 'replace', not 'add'.

Note that 'replace' will safely add the new entry even if the old entry does 
not exist.

I think this bug can be marked as High.
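
A minimal sketch of the proposed fix. `bridge fdb replace` is idempotent:
it updates the entry if one exists and creates it otherwise, so it cannot
fail with "File exists". The `_fdb_cmd` helper below is illustrative, not
the actual l2pop agent code:

```python
def _fdb_cmd(action, mac, dev, dst):
    """Build the argument list for a `bridge fdb` call."""
    return ['bridge', 'fdb', action, mac, 'dev', dev, 'dst', dst]

# Before: fails with RTNETLINK "File exists" when the entry already exists,
# which is exactly what happens after a live migration.
add_cmd = _fdb_cmd('add', 'fa:16:3e:b3:fd:27', 'vxlan-1', '192.168.2.103')

# After: safe whether or not the entry exists.
replace_cmd = _fdb_cmd('replace', 'fa:16:3e:b3:fd:27', 'vxlan-1',
                       '192.168.2.103')
print(replace_cmd)
```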

** Affects: neutron
 Importance: Undecided
 Assignee: Li Ma (nick-ma-z)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Li Ma (nick-ma-z)

** Description changed:

  OS: RHEL 6.5
  OpenStack: RDO icehouse and master
  Neutron: Linuxbridge + VxLAN + L2pop
  Testbed: 1 controller node + 2 compute nodes + 1 network node
  
  Reproduction procedure:
  
  1. Start to ping VM from qrouter namespace using fixed IP
- Start to ping VM from outside using floating IP
+ Start to ping VM from outside using floating IP
  
  2. Live-migrate one VM from compute1 to computer2
  
  3. VM Network disconnects after several seconds
  
- 4. Even if Nova reports that the migration is finished, 
+ 4. Even if Nova reports that the migration is finished,
  Ping is still not working.
- 
  
  Debug Info on network node:
  
  Command: ['sudo', 'bridge', 'fdb', 'add', 'fa:16:3e:b3:fd:27', 'dev', 
'vxlan-1', 'dst', '192.168.2.103']
  Exit code: 2
  Stdout: ''
  Stderr: 'RTNETLINK answers: File exists\n'
  
- 
  Cause:
  Before migration, the original fdb entry is there. After migration, l2pop 
will updates the fdb entry of the VM.
  It adds the new entry that causes ERROR.
  
  The right operation should be 'replace' not 'add'.
  
+ By the way, 'replace' will safely add the new entry if old entry is not
+ existed.
  
  I think this bug can be marked as High.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1367999

Title:
  live-migration causes VM network disconnected forever

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  OS: RHEL 6.5
  OpenStack: RDO icehouse and master
  Neutron: Linuxbridge + VxLAN + L2pop
  Testbed: 1 controller node + 2 compute nodes + 1 network node

  Reproduction procedure:

  1. Start to ping VM from qrouter namespace using fixed IP
  Start to ping VM from outside using floating IP

  2. Live-migrate one VM from compute1 to compute2

  3. VM Network disconnects after several seconds

  4. Even after Nova reports that the migration has finished,
  ping is still not working.

  Debug Info on network node:

  Command: ['sudo', 'bridge', 'fdb', 'add', 'fa:16:3e:b3:fd:27', 'dev', 
'vxlan-1', 'dst', '192.168.2.103']
  Exit code: 2
  Stdout: ''
  Stderr: 'RTNETLINK answers: File exists\n'

  Cause:
  Before migration, the original fdb entry is already present. After
  migration, l2pop updates the VM's fdb entry by adding a new one, which
  causes the error above.

  The right operation should be 'replace', not 'add'.

  Note that 'replace' will safely add the new entry even if the old entry
  does not exist.

  I think this bug can be marked as High.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1367999/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368006] [NEW] ofagent: broken XenAPI support

2014-09-10 Thread YAMAMOTO Takashi
Public bug reported:

ofagent has code for agent-on-DomU support inherited from the OVS agent.
However, it is incomplete and broken.  Because ofagent uses a direct
OpenFlow channel instead of the ovs-ofctl command to program a switch,
the method of using the special rootwrap cannot work.

** Affects: neutron
 Importance: Undecided
 Assignee: YAMAMOTO Takashi (yamamoto)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1368006

Title:
  ofagent: broken XenAPI support

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  ofagent has code for agent-on-DomU support inherited from the OVS agent.
  However, it is incomplete and broken.  Because ofagent uses a direct
  OpenFlow channel instead of the ovs-ofctl command to program a switch,
  the method of using the special rootwrap cannot work.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1368006/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368010] [NEW] Specific roles should be allowed to view or configure quota

2014-09-10 Thread Xu Han Peng
Public bug reported:

Currently in Neutron, only admins are allowed to view or configure
tenant quota:

http://docs.openstack.org/user-guide-admin/content/cli_set_quotas.html

Only users with the admin role can change a quota value.

The admin-context check is hard-coded in
https://github.com/openstack/neutron/blob/master/neutron/extensions/quotasv2.py.

We should allow specifying roles in policy.json to view or configure
quota for more flexible configuration.
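
As a sketch of what that could look like, a policy.json fragment along
these lines would let operators grant quota access to non-admin roles.
The rule and role names here are illustrative assumptions, since
quotasv2.py currently checks the admin context directly and never
consults policy for these operations:

```json
{
    "context_is_admin": "role:admin",
    "get_quota": "rule:context_is_admin or role:quota_viewer",
    "update_quota": "rule:context_is_admin or role:quota_admin",
    "delete_quota": "rule:context_is_admin or role:quota_admin"
}
```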

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1368010

Title:
  Specific roles should be allowed to view or configure quota

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Currently in Neutron, only admins are allowed to view or configure
  tenant quota:

  http://docs.openstack.org/user-guide-admin/content/cli_set_quotas.html

  Only users with the admin role can change a quota value.

  The admin-context check is hard-coded in
  https://github.com/openstack/neutron/blob/master/neutron/extensions/quotasv2.py.

  We should allow specifying roles in policy.json to view or configure
  quota for more flexible configuration.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1368010/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340473] Re: dhcp agent create broken network namespace

2014-09-10 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1340473

Title:
  dhcp agent create broken network namespace

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  Running dhcp agent, get error as below:

  2014-07-10 23:18:41.932 ERROR neutron.agent.dhcp_agent [-] Unable to enable 
dhcp for 72cad723-3ce1-402b-ac4b-746274cbad9d.
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent Traceback (most recent 
call last):
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent   File 
/opt/stack/neutron/neutron/agent/dhcp_agent.py, line 129, in call_driver
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent getattr(driver, 
action)(**action_kwargs)
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent   File 
/opt/stack/neutron/neutron/agent/linux/dhcp.py, line 191, in enable
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent interface_name = 
self.device_manager.setup(self.network)
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent   File 
/opt/stack/neutron/neutron/agent/linux/dhcp.py, line 894, in setup
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent 
namespace=network.namespace)
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent   File 
/opt/stack/neutron/neutron/agent/linux/interface.py, line 368, in plug
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent 
namespace2=namespace)
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent   File 
/opt/stack/neutron/neutron/agent/linux/ip_lib.py, line 125, in add_veth
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent 
self.ensure_namespace(namespace2)
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent   File 
/opt/stack/neutron/neutron/agent/linux/ip_lib.py, line 137, in 
ensure_namespace
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent lo.link.set_up()
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent   File 
/opt/stack/neutron/neutron/agent/linux/ip_lib.py, line 248, in set_up
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent 
self._as_root('set', self.name, 'up')
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent   File 
/opt/stack/neutron/neutron/agent/linux/ip_lib.py, line 229, in _as_root
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent 
kwargs.get('use_root_namespace', False))
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent   File 
/opt/stack/neutron/neutron/agent/linux/ip_lib.py, line 69, in _as_root
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent namespace)
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent   File 
/opt/stack/neutron/neutron/agent/linux/ip_lib.py, line 80, in _execute
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent 
root_helper=root_helper)
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent   File 
/opt/stack/neutron/neutron/agent/linux/utils.py, line 76, in execute
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent raise 
RuntimeError(m)
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent RuntimeError:
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent Command: ['sudo', 
'/usr/local/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 
'exec', 'qdhcp-72cad723-3ce1-402b-ac4b-746274cbad9d', 'ip', 'link', 'set', 
'lo', 'up']
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent Exit code: 1
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent Stdout: ''
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent Stderr: 'seting the 
network namespace qdhcp-72cad723-3ce1-402b-ac4b-746274cbad9d failed: Invalid 
argument\n'
  2014-07-10 23:18:41.932 TRACE neutron.agent.dhcp_agent

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1340473/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340194] Re: Removed security group rules are still persistent on instances

2014-09-10 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1340194

Title:
  Removed security group rules  are still persistent on instances

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  Even after removing the security group rules, we are still able to
  perform operations like ssh/ping on the VMs.

  Earlier, we had added rules to allow ssh and ping, and then removed
  those rules.

  Below is log

   nova list
  
+--+-+++-+-+
  | ID   | Name| Status | Task State | 
Power State | Networks|
  
+--+-+++-+-+
  | a1426d0a-07df-40c8-b883-3f5fb34bbec2 | testvm1-az1 | ACTIVE | None   | 
Running | Net1=2.2.2.2, 10.233.53.105 |
  | 329b0493-e1f9-4baa-bfc9-5ecf9c2d4687 | testvm1-az2 | ACTIVE | None   | 
Running | Net1=2.2.2.4|
  
+--+-+++-+-+
  root@controller:~# nova show a1426d0a-07df-40c8-b883-3f5fb34bbec2
  
+--+--+
  | Property | Value
|
  
+--+--+
  | status   | ACTIVE   
|
  | updated  | 2014-07-03T06:34:31Z 
|
  | OS-EXT-STS:task_state| None 
|
  | OS-EXT-SRV-ATTR:host | compute1 
|
  | key_name | None 
|
  | image| CirrOS 0.3.1 
(ea93e47e-558e-4baf-bea1-777b4814ca5d)  |
  | hostId   | 
64a50db012ab0b483697b85be03d02d66535ff2656170b6c8fb9a8f8 |
  | Net1 network | 2.2.2.2, 10.233.53.105   
|
  | OS-EXT-STS:vm_state  | active   
|
  | OS-EXT-SRV-ATTR:instance_name| instance-0018
|
  | OS-SRV-USG:launched_at   | 2014-07-03T06:34:31.00   
|
  | OS-EXT-SRV-ATTR:hypervisor_hostname  | compute1 
|
  | flavor   | myF1 (6) 
|
  | id   | a1426d0a-07df-40c8-b883-3f5fb34bbec2 
|
  | security_groups  | [{u'name': u'default'}]  
| -- using default secgroup.
  | OS-SRV-USG:terminated_at | None 
|
  | user_id  | 0dc64e9cfb07442b8d6ce7d518200d06 
|
  | name | testvm1-az1  
|
  | created  | 2014-07-03T06:33:54Z 
|
  | tenant_id| 8a5dee0f17204539a73987d6a8f255cd 
|
  | OS-DCF:diskConfig| MANUAL   
|
  | metadata | {}   
|
  | os-extended-volumes:volumes_attached | []   
|
  | accessIPv4   |  
|
  | accessIPv6   |  
|
  | progress | 0
|
  | OS-EXT-STS:power_state   | 1
|
  | OS-EXT-AZ:availability_zone  | azhyd1   
|
  | config_drive |  
|
  
+--+--+
  root@controller:~# nova secgroup-list-rules default
  

[Yahoo-eng-team] [Bug 1368030] [NEW] nova-manage command when executed by non-root user, should give authorization error instead of low level database error

2014-09-10 Thread vishal yadav
Public bug reported:

Version of nova-compute and distribution/package (1:2014.1.2-0ubuntu1.1)

1) Execute below command using non-root user.
ubuntu@mc1:~$ nova-manage flavor list

It gives below error:

Command failed, please check log for more info
2014-09-11 13:43:17.501 12857 CRITICAL nova 
[req-07bc6065-3ece-4fd5-b478-48d37c63a2c6 None None] OperationalError: 
(OperationalError) unable to open database file None None

2) Execute above command using root user:
ubuntu@mc1:~$ sudo su -
root@mc1:~# nova-manage flavor list
m1.medium: Memory: 4096MB, VCPUS: 2, Root: 40GB, Ephemeral: 0Gb, FlavorID: 3, 
Swap: 0MB, RXTX Factor: 1.0, public, ExtraSpecs {}
m1.tiny: Memory: 512MB, VCPUS: 1, Root: 1GB, Ephemeral: 0Gb, FlavorID: 1, Swap: 
0MB, RXTX Factor: 1.0, public, ExtraSpecs {}
m1.large: Memory: 8192MB, VCPUS: 4, Root: 80GB, Ephemeral: 0Gb, FlavorID: 4, 
Swap: 0MB, RXTX Factor: 1.0, public, ExtraSpecs {}
m1.xlarge: Memory: 16384MB, VCPUS: 8, Root: 160GB, Ephemeral: 0Gb, FlavorID: 5, 
Swap: 0MB, RXTX Factor: 1.0, public, ExtraSpecs {}
m1.small: Memory: 2048MB, VCPUS: 1, Root: 20GB, Ephemeral: 0Gb, FlavorID: 2, 
Swap: 0MB, RXTX Factor: 1.0, public, ExtraSpecs {}

So instead of a low-level database error, it should give some kind of
authorization error to the operator or end user of the nova-manage CLI.
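
A minimal sketch of the idea, under the assumption that the CLI can test
access to its database file up front before handing it to SQLAlchemy. The
function name, path, and message wording are illustrative, not actual
nova-manage code:

```python
import os
import tempfile

def check_db_access(path):
    """Return None if `path` is readable, else a friendly hint.

    Instead of letting sqlite raise a bare OperationalError, translate
    the permission failure into an actionable message for the operator.
    """
    if os.access(path, os.R_OK):
        return None
    return ("ERROR: cannot read %s; insufficient permissions. "
            "Try re-running as root (sudo)." % path)

# A path the user cannot read (here, one that does not exist) produces
# the authorization hint rather than a database traceback.
print(check_db_access("/nonexistent/nova.sqlite"))
```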

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: ubuntu
 Importance: Undecided
 Status: New


** Tags: nova-manage

** Summary changed:

- nova-manage command when executed by non-root user should give authorization 
error instead of low level database error
+ nova-manage command when executed by non-root user, should give 
authorization error instead of low level database error

** Also affects: ubuntu
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368030

Title:
  nova-manage command when executed by non-root user, should give
  authorization error instead of low level database error

Status in OpenStack Compute (Nova):
  New
Status in Ubuntu:
  New

Bug description:
  Version of nova-compute and distribution/package
  (1:2014.1.2-0ubuntu1.1)

  1) Execute below command using non-root user.
  ubuntu@mc1:~$ nova-manage flavor list

  It gives below error:

  Command failed, please check log for more info
  2014-09-11 13:43:17.501 12857 CRITICAL nova 
[req-07bc6065-3ece-4fd5-b478-48d37c63a2c6 None None] OperationalError: 
(OperationalError) unable to open database file None None

  2) Execute above command using root user:
  ubuntu@mc1:~$ sudo su -
  root@mc1:~# nova-manage flavor list
  m1.medium: Memory: 4096MB, VCPUS: 2, Root: 40GB, Ephemeral: 0Gb, FlavorID: 3, 
Swap: 0MB, RXTX Factor: 1.0, public, ExtraSpecs {}
  m1.tiny: Memory: 512MB, VCPUS: 1, Root: 1GB, Ephemeral: 0Gb, FlavorID: 1, 
Swap: 0MB, RXTX Factor: 1.0, public, ExtraSpecs {}
  m1.large: Memory: 8192MB, VCPUS: 4, Root: 80GB, Ephemeral: 0Gb, FlavorID: 4, 
Swap: 0MB, RXTX Factor: 1.0, public, ExtraSpecs {}
  m1.xlarge: Memory: 16384MB, VCPUS: 8, Root: 160GB, Ephemeral: 0Gb, FlavorID: 
5, Swap: 0MB, RXTX Factor: 1.0, public, ExtraSpecs {}
  m1.small: Memory: 2048MB, VCPUS: 1, Root: 20GB, Ephemeral: 0Gb, FlavorID: 2, 
Swap: 0MB, RXTX Factor: 1.0, public, ExtraSpecs {}

  So instead of a low-level database error, it should give some kind of
  authorization error to the operator or end user of the nova-manage CLI.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1368030/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368032] [NEW] Add metadata definitions for Aggregate filters added in Juno

2014-09-10 Thread Travis Tripp
Public bug reported:

The below spec implemented in Juno added numerous properties that can be
set on host aggregates.  The Metadata Definitions catalog should include
these properties.

https://github.com/openstack/nova-specs/blob/master/specs/juno/per-
aggregate-filters.rst

** Affects: horizon
 Importance: Undecided
 Assignee: Travis Tripp (travis-tripp)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Travis Tripp (travis-tripp)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1368032

Title:
  Add metadata definitions for Aggregate filters added in Juno

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The below spec implemented in Juno added numerous properties that can
  be set on host aggregates.  The Metadata Definitions catalog should
  include these properties.

  https://github.com/openstack/nova-specs/blob/master/specs/juno/per-
  aggregate-filters.rst

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1368032/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368033] [NEW] Separate Configuration from Freescale SDN ML2 mechanism Driver

2014-09-10 Thread Trinath Somanchi
Public bug reported:

In the current implementation, the CRD configuration lives within the code of 
the ML2 mechanism driver.
Any other plugin/driver that needs this configuration has to duplicate it 
completely.
The CRD configuration options should therefore be moved to a separate file that 
can be shared with other plugins/drivers.

** Affects: neutron
 Importance: Undecided
 Assignee: Trinath Somanchi (trinath-somanchi)
 Status: New


** Tags: freescale ml2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1368033

Title:
  Separate Configuration from Freescale SDN ML2 mechanism Driver

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In the current implementation, the CRD configuration lives within the code
  of the ML2 mechanism driver.
  Any other plugin/driver that needs this configuration has to duplicate it
  completely.
  The CRD configuration options should therefore be moved to a separate file
  that can be shared with other plugins/drivers.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1368033/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368037] [NEW] tempest-dsvm-postgres-full fail with 'Error. Unable to associate floating ip'

2014-09-10 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Jenkins job failed with 'Error. Unable to associate floating ip'.
Logs can be found here:
http://logs.openstack.org/67/120067/2/check/check-tempest-dsvm-postgres-full/1a45f89/console.html

Log snippet:
2014-09-10 18:55:16.527 | 2014-09-10 18:30:55,125 24275 INFO 
[tempest.common.rest_client] Request (TestVolumeBootPattern:_run_cleanups): 404 
GET 
http://127.0.0.1:8776/v1/5e4676bdfb7548b3b4dd4b084cee1752/volumes/651d8b59-df65-442c-9165-1b993374b24a
 0.025s
2014-09-10 18:55:16.527 | 2014-09-10 18:30:55,157 24275 INFO 
[tempest.common.rest_client] Request (TestVolumeBootPattern:_run_cleanups): 404 
GET 
http://127.0.0.1:8774/v2/5e4676bdfb7548b3b4dd4b084cee1752/servers/25b9a6b8-dc48-47d3-9569-620e47ff0495
 0.032s
2014-09-10 18:55:16.527 | 2014-09-10 18:30:55,191 24275 INFO 
[tempest.common.rest_client] Request (TestVolumeBootPattern:_run_cleanups): 404 
GET 
http://127.0.0.1:8774/v2/5e4676bdfb7548b3b4dd4b084cee1752/servers/bbf6f5c4-7f2f-48ec-9d86-50c89d636a6d
 0.032s
2014-09-10 18:55:16.527 | }}}
2014-09-10 18:55:16.527 | 
2014-09-10 18:55:16.527 | Traceback (most recent call last):
2014-09-10 18:55:16.527 |   File tempest/test.py, line 128, in wrapper
2014-09-10 18:55:16.527 | return f(self, *func_args, **func_kwargs)
2014-09-10 18:55:16.528 |   File 
tempest/scenario/test_volume_boot_pattern.py, line 164, in 
test_volume_boot_pattern
2014-09-10 18:55:16.528 | keypair)
2014-09-10 18:55:16.528 |   File 
tempest/scenario/test_volume_boot_pattern.py, line 108, in _ssh_to_server
2014-09-10 18:55:16.528 | floating_ip['ip'], server['id'])
2014-09-10 18:55:16.528 |   File 
tempest/services/compute/json/floating_ips_client.py, line 80, in 
associate_floating_ip_to_server
2014-09-10 18:55:16.528 | resp, body = self.post(url, post_body)
2014-09-10 18:55:16.528 |   File tempest/common/rest_client.py, line 219, 
in post
2014-09-10 18:55:16.528 | return self.request('POST', url, 
extra_headers, headers, body)
2014-09-10 18:55:16.528 |   File tempest/common/rest_client.py, line 435, 
in request
2014-09-10 18:55:16.528 | resp, resp_body)
2014-09-10 18:55:16.529 |   File tempest/common/rest_client.py, line 484, 
in _error_checker
2014-09-10 18:55:16.529 | raise exceptions.BadRequest(resp_body)
2014-09-10 18:55:16.529 | BadRequest: Bad request
2014-09-10 18:55:16.529 | Details: {u'message': u'Error. Unable to 
associate floating ip', u'code': 400}

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: dsvm-postgres jenkins
-- 
tempest-dsvm-postgres-full fail with 'Error. Unable to associate floating ip'
https://bugs.launchpad.net/bugs/1368037
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp