[Yahoo-eng-team] [Bug 1263628] [NEW] retrieve network related quota usage info from neutron instead of nova db

2013-12-23 Thread Yaguang Tang
Public bug reported:

nova quota-show conflicts with neutron quota-show


# neutron quota-show
+---------------------+-------+
| Field               | Value |
+---------------------+-------+
| Floatingip          | 50    | ←
| Network             | 10    |
| Port                | 50    |
| Router              | 10    |
| Security_group      | 10    |
| Security_group_rule | 100   |
| Subnet              | 10    |
+---------------------+-------+


# nova quota-show
+-----------------------------+-------+
| Property                    | Value |
+-----------------------------+-------+
| Metadata_items              | 128   |
| Injected_file_content_bytes | 10240 |
| Ram                         | 51200 |
| Floating_ips                | 10    | ←
| Key_pairs                   | 100   |
| Instances                   | 10    |
| Security_group_rules        | 20    |
| Injected_files              | 5     |
| Cores                       | 20    |
| Fixed_ips                   | -1    |
| Injected_file_path_bytes    | 255   |
| Security_groups             | 10    |
+-----------------------------+-------+

security_groups, floating_ips, and security_group_rules should be retrieved
from Neutron when Neutron is used as the network service.
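
A minimal sketch of the proposed direction, using python-neutronclient (the
wiring into nova's quota code is illustrative, not the actual patch):

    from neutronclient.v2_0 import client as neutron_client

    def network_quotas_from_neutron(endpoint_url, token, tenant_id):
        # Ask Neutron for the network-related quotas that Nova would
        # otherwise read from its own DB.
        neutron = neutron_client.Client(endpoint_url=endpoint_url, token=token)
        quota = neutron.show_quota(tenant_id)['quota']
        return {
            'floating_ips': quota['floatingip'],
            'security_groups': quota['security_group'],
            'security_group_rules': quota['security_group_rule'],
        }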

** Affects: nova
 Importance: Undecided
 Assignee: Yaguang Tang (heut2008)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Yaguang Tang (heut2008)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1263628

Title:
  retrieve network related quota usage info from neutron instead of nova
  db

Status in OpenStack Compute (Nova):
  New

Bug description:
  nova quota-show conflicts with neutron quota-show


  # neutron quota-show
  +---------------------+-------+
  | Field               | Value |
  +---------------------+-------+
  | Floatingip          | 50    | ←
  | Network             | 10    |
  | Port                | 50    |
  | Router              | 10    |
  | Security_group      | 10    |
  | Security_group_rule | 100   |
  | Subnet              | 10    |
  +---------------------+-------+

  # nova quota-show
  +-----------------------------+-------+
  | Property                    | Value |
  +-----------------------------+-------+
  | Metadata_items              | 128   |
  | Injected_file_content_bytes | 10240 |
  | Ram                         | 51200 |
  | Floating_ips                | 10    | ←
  | Key_pairs                   | 100   |
  | Instances                   | 10    |
  | Security_group_rules        | 20    |
  | Injected_files              | 5     |
  | Cores                       | 20    |
  | Fixed_ips                   | -1    |
  | Injected_file_path_bytes    | 255   |
  | Security_groups             | 10    |
  +-----------------------------+-------+

  security_groups, floating_ips, and security_group_rules should be retrieved
  from Neutron when Neutron is used as the network service.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1263628/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1263637] [NEW] can't enable ml2

2013-12-23 Thread li,chen
Public bug reported:

Hi list,
I'm working under CentOS 6.4 + Havana:
openstack-neutron.noarch                 2013.2.1-1.el6  @openstack-havana
openstack-neutron-linuxbridge.noarch     2013.2.1-1.el6  @openstack-havana
openstack-neutron-ml2.noarch             2013.2.1-1.el6  @openstack-havana
openstack-neutron-openvswitch.noarch     2013.2.1-1.el6  @openstack-havana
python-neutron.noarch                    2013.2.1-1.el6  @openstack-havana
python-neutronclient.noarch              2.3.1-1.el6     @openstack-havana


Everything worked when I was running with core_plugin =
neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2.

But after I set core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin,
I can't start neutron-server.

The log in /var/log/neutron/server.log is:

2013-12-23 08:35:00.660 37948 INFO neutron.common.config [-] Logging enabled!
2013-12-23 08:35:00.661 37948 ERROR neutron.common.legacy [-] Skipping unknown group key: firewall_driver
2013-12-23 08:35:00.664 37948 INFO neutron.common.config [-] Config paste file: /etc/neutron/api-paste.ini
2013-12-23 08:35:00.693 37948 INFO neutron.manager [-] Loading Plugin: neutron.plugins.ml2.plugin.Ml2Plugin
2013-12-23 08:35:00.735 37948 INFO neutron.plugins.ml2.managers [-] Configured type driver names: ['local', 'flat', 'vlan', 'gre', 'vxlan']
2013-12-23 08:35:00.750 37948 INFO neutron.plugins.ml2.drivers.type_flat [-] Allowable flat physical_network names: []
2013-12-23 08:35:00.752 37948 INFO neutron.plugins.ml2.drivers.type_vlan [-] Network VLAN ranges: {}
2013-12-23 08:35:00.752 37948 INFO neutron.plugins.ml2.drivers.type_local [-] ML2 LocalTypeDriver initialization complete
2013-12-23 08:35:00.758 37948 INFO neutron.plugins.ml2.managers [-] Loaded type driver names: ['flat', 'vlan', 'local', 'gre', 'vxlan']
2013-12-23 08:35:00.758 37948 INFO neutron.plugins.ml2.managers [-] Registered types: ['flat', 'vlan', 'local', 'gre', 'vxlan']
2013-12-23 08:35:00.758 37948 INFO neutron.plugins.ml2.managers [-] Tenant network_types: ['local', 'flat', 'vlan', 'gre', 'vxlan']
2013-12-23 08:35:00.758 37948 INFO neutron.plugins.ml2.managers [-] Configured mechanism driver names: ['openvswitch', 'linuxbridge']
2013-12-23 08:35:00.759 37948 ERROR neutron.service [-] Unrecoverable error: please check log for details.
2013-12-23 08:35:00.759 37948 TRACE neutron.service Traceback (most recent call last):
2013-12-23 08:35:00.759 37948 TRACE neutron.service   File "/usr/lib/python2.6/site-packages/neutron/service.py", line 99, in serve_wsgi
2013-12-23 08:35:00.759 37948 TRACE neutron.service     service.start()
2013-12-23 08:35:00.759 37948 TRACE neutron.service   File "/usr/lib/python2.6/site-packages/neutron/service.py", line 68, in start
2013-12-23 08:35:00.759 37948 TRACE neutron.service     self.wsgi_app = _run_wsgi(self.app_name)
2013-12-23 08:35:00.759 37948 TRACE neutron.service   File "/usr/lib/python2.6/site-packages/neutron/service.py", line 112, in _run_wsgi
2013-12-23 08:35:00.759 37948 TRACE neutron.service     app = config.load_paste_app(app_name)
2013-12-23 08:35:00.759 37948 TRACE neutron.service   File "/usr/lib/python2.6/site-packages/neutron/common/config.py", line 144, in load_paste_app
2013-12-23 08:35:00.759 37948 TRACE neutron.service     app = deploy.loadapp("config:%s" % config_path, name=app_name)
2013-12-23 08:35:00.759 37948 TRACE neutron.service   File "/usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/loadwsgi.py", line 247, in loadapp
2013-12-23 08:35:00.759 37948 TRACE neutron.service     return loadobj(APP, uri, name=name, **kw)
2013-12-23 08:35:00.759 37948 TRACE neutron.service   File "/usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/loadwsgi.py", line 272, in loadobj
2013-12-23 08:35:00.759 37948 TRACE neutron.service     return context.create()
2013-12-23 08:35:00.759 37948 TRACE neutron.service   File "/usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/loadwsgi.py", line 710, in create
2013-12-23 08:35:00.759 37948 TRACE neutron.service     return self.object_type.invoke(self)
2013-12-23 08:35:00.759 37948 TRACE neutron.service   File "/usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/loadwsgi.py", line 144, in invoke
2013-12-23 08:35:00.759 37948 TRACE neutron.service     **context.local_conf)
2013-12-23 08:35:00.759 37948 TRACE neutron.service   File "/usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/util.py", line 59, in fix_call
2013-12-23 08:35:00.759 37948 TRACE neutron.service     reraise(*exc_info)
2013-12-23 08:35:00.759 37948 TRACE neutron.service   File "/usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/compat.py", line 22, in reraise
2013-12-23 08:35:00.759 37948 TRACE 

[Yahoo-eng-team] [Bug 1263638] [NEW] use vlan tenant network type for InfiniBand Fabric

2013-12-23 Thread Irena Berezovsky
Public bug reported:

Change the tenant network type usage for InfiniBand Fabric to the ‘vlan’ type.
The Mellanox neutron plugin supports embedded switch functionality as part of
VPI, i.e. Ethernet and InfiniBand HCA.
Tenant network isolation is achieved via Layer-2 network partitioning. While
Ethernet packets are tagged with a VLAN-ID, InfiniBand packets are tagged with
a P-Key; the two have essentially the same meaning.
An additional incentive to fix the IB tenant network type is compatibility
with a Network Node running the LinuxBridge agent as the L2 agent, to enable
DHCP and L3 functions.

** Affects: neutron
 Importance: Undecided
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1263638

Title:
  use vlan tenant network type for InfiniBand Fabric

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  Change the tenant network type usage for InfiniBand Fabric to the ‘vlan’ type.
  The Mellanox neutron plugin supports embedded switch functionality as part of
  VPI, i.e. Ethernet and InfiniBand HCA.
  Tenant network isolation is achieved via Layer-2 network partitioning. While
  Ethernet packets are tagged with a VLAN-ID, InfiniBand packets are tagged with
  a P-Key; the two have essentially the same meaning.
  An additional incentive to fix the IB tenant network type is compatibility
  with a Network Node running the LinuxBridge agent as the L2 agent, to enable
  DHCP and L3 functions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1263638/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1263639] [NEW] glance/tests/unit/test_migrations.py does not calculate the tmp db path properly for sqlite

2013-12-23 Thread David Koo
Public bug reported:

The current method of resetting temp test databases in the
TestMigrations._reset_databases() method is:
1) use urlparse.urlparse to first parse the connection string
2) strip the path component of the result of step (1) of all "/" prefixes
and use the result as the path to the database (to clear the database for the
test).

For a sqlite connection, this works fine for the default connection string
("sqlite:///test_migrations.db" - i.e. create the test database in the
current directory):
- step 1 yields: ParseResult(scheme='sqlite', netloc='',
path='/test_migrations.db', params='', query='', fragment='')
- step 2 yields: 'test_migrations.db'

But if the test database is specified to be in another directory (e.g.
/tmp/test_migrations.db) then the "/" prefix stripping leads to incorrect
path calculation:
- Note: the connection string would then be "sqlite:////tmp/test_migrations.db"
- step 1 yields: ParseResult(scheme='sqlite', netloc='',
path='//tmp/test_migrations.db', params='', query='', fragment='')
- step 2 yields: 'tmp/test_migrations.db'
This path doesn't exist in the current directory and so the subsequent
resetting code does nothing. But when sqlalchemy actually connects to the DB
(which is in /tmp/test_migrations.db) the test finds that the DB hasn't been
reset and so fails.

The fix for this should be not to strip all leading "/"s from the path
component of the result of step (1) but rather to just strip the first
leading "/".
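
A minimal sketch of that fix (helper name hypothetical):

    import urlparse  # Python 2, as used by the tests

    def db_path_from_connection(conn_str):
        # Strip only the first leading "/" so absolute paths survive:
        #   'sqlite:///test_migrations.db'      -> 'test_migrations.db'
        #   'sqlite:////tmp/test_migrations.db' -> '/tmp/test_migrations.db'
        path = urlparse.urlparse(conn_str).path
        return path[1:] if path.startswith('/') else path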

** Affects: glance
 Importance: Undecided
 Assignee: David Koo (kpublicmail)
 Status: In Progress

** Changed in: glance
 Assignee: (unassigned) => David Koo (kpublicmail)

** Changed in: glance
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1263639

Title:
  glance/tests/unit/test_migrations.py does not calculate the tmp db
  path properly for sqlite

Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress

Bug description:
  The current method of resetting temp test databases in the
  TestMigrations._reset_databases() method is:
  1) use urlparse.urlparse to first parse the connection string
  2) strip the path component of the result of step (1) of all "/"
  prefixes and use the result as the path to the database (to clear the
  database for the test).

  For a sqlite connection, this works fine for the default connection string
  ("sqlite:///test_migrations.db" - i.e. create the test database in the
  current directory):
  - step 1 yields: ParseResult(scheme='sqlite', netloc='',
  path='/test_migrations.db', params='', query='', fragment='')
  - step 2 yields: 'test_migrations.db'

  But if the test database is specified to be in another directory (e.g.
  /tmp/test_migrations.db) then the "/" prefix stripping leads to incorrect
  path calculation:
  - Note: the connection string would then be
  "sqlite:////tmp/test_migrations.db"
  - step 1 yields: ParseResult(scheme='sqlite', netloc='',
  path='//tmp/test_migrations.db', params='', query='', fragment='')
  - step 2 yields: 'tmp/test_migrations.db'
  This path doesn't exist in the current directory and so the subsequent
  resetting code does nothing. But when sqlalchemy actually connects to the
  DB (which is in /tmp/test_migrations.db) the test finds that the DB hasn't
  been reset and so fails.

  The fix for this should be not to strip all leading "/"s from the
  path component of the result of step (1) but rather to just strip
  the first leading "/".

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1263639/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1262616] Re: 'Multiple plugins for service %s were configured', 'L3_ROUTER_NAT'

2013-12-23 Thread Oleg Bondarev
The L3RouterPlugin service plugin should be configured only for core plugins
that delegate away L3 routing functionality. OVSNeutronPluginV2 implements L3
functionality itself, so clearing the service_plugins option should solve
the issue. Marking as invalid.
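
For reference, the workaround amounts to something like this in neutron.conf
(a sketch, not verbatim from the reporter's setup):

    [DEFAULT]
    core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
    # leave empty: OVSNeutronPluginV2 already provides L3_ROUTER_NAT itself
    service_plugins =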

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1262616

Title:
  'Multiple plugins for service %s were configured', 'L3_ROUTER_NAT'

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  When installing havana on Debian 64-bit Wheezy, using the repositories:

  deb http://havana.pkgs.enovance.com/debian havana main
  deb http://archive.gplhost.com/debian havana-backports main

  The neutron-server fails to start because of this check:
  https://github.com/openstack/neutron/blob/master/neutron/manager.py#L174

  Why raise an exception and not simply do the following?
  if plugin_inst.get_plugin_type() not in self.service_plugins:
      self.service_plugins[plugin_inst.get_plugin_type()] = plugin_inst

  With the existing code I cannot make neutron-server run, no matter what I do.
  The error is:

  2013-12-19 12:00:07.896 13128 TRACE neutron.service Traceback (most recent call last):
  2013-12-19 12:00:07.896 13128 TRACE neutron.service   File "/usr/lib/python2.7/dist-packages/neutron/service.py", line 99, in serve_wsgi
  2013-12-19 12:00:07.896 13128 TRACE neutron.service     service.start()
  2013-12-19 12:00:07.896 13128 TRACE neutron.service   File "/usr/lib/python2.7/dist-packages/neutron/service.py", line 68, in start
  2013-12-19 12:00:07.896 13128 TRACE neutron.service     self.wsgi_app = _run_wsgi(self.app_name)
  2013-12-19 12:00:07.896 13128 TRACE neutron.service   File "/usr/lib/python2.7/dist-packages/neutron/service.py", line 112, in _run_wsgi
  2013-12-19 12:00:07.896 13128 TRACE neutron.service     app = config.load_paste_app(app_name)
  2013-12-19 12:00:07.896 13128 TRACE neutron.service   File "/usr/lib/python2.7/dist-packages/neutron/common/config.py", line 144, in load_paste_app
  2013-12-19 12:00:07.896 13128 TRACE neutron.service     app = deploy.loadapp("config:%s" % config_path, name=app_name)
  2013-12-19 12:00:07.896 13128 TRACE neutron.service   File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 247, in loadapp
  2013-12-19 12:00:07.896 13128 TRACE neutron.service     return loadobj(APP, uri, name=name, **kw)
  2013-12-19 12:00:07.896 13128 TRACE neutron.service   File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 272, in loadobj
  2013-12-19 12:00:07.896 13128 TRACE neutron.service     return context.create()
  2013-12-19 12:00:07.896 13128 TRACE neutron.service   File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 710, in create
  2013-12-19 12:00:07.896 13128 TRACE neutron.service     return self.object_type.invoke(self)
  2013-12-19 12:00:07.896 13128 TRACE neutron.service   File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 144, in invoke
  2013-12-19 12:00:07.896 13128 TRACE neutron.service     **context.local_conf)
  2013-12-19 12:00:07.896 13128 TRACE neutron.service   File "/usr/lib/python2.7/dist-packages/paste/deploy/util.py", line 56, in fix_call
  2013-12-19 12:00:07.896 13128 TRACE neutron.service     val = callable(*args, **kw)
  2013-12-19 12:00:07.896 13128 TRACE neutron.service   File "/usr/lib/python2.7/dist-packages/paste/urlmap.py", line 25, in urlmap_factory
  2013-12-19 12:00:07.896 13128 TRACE neutron.service     app = loader.get_app(app_name, global_conf=global_conf)
  2013-12-19 12:00:07.896 13128 TRACE neutron.service   File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 350, in get_app
  2013-12-19 12:00:07.896 13128 TRACE neutron.service     name=name, global_conf=global_conf).create()
  2013-12-19 12:00:07.896 13128 TRACE neutron.service   File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 710, in create
  2013-12-19 12:00:07.896 13128 TRACE neutron.service     return self.object_type.invoke(self)
  2013-12-19 12:00:07.896 13128 TRACE neutron.service   File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 144, in invoke
  2013-12-19 12:00:07.896 13128 TRACE neutron.service     **context.local_conf)
  2013-12-19 12:00:07.896 13128 TRACE neutron.service   File "/usr/lib/python2.7/dist-packages/paste/deploy/util.py", line 56, in fix_call
  2013-12-19 12:00:07.896 13128 TRACE neutron.service     val = callable(*args, **kw)
  2013-12-19 12:00:07.896 13128 TRACE neutron.service   File "/usr/lib/python2.7/dist-packages/neutron/auth.py", line 59, in pipeline_factory
  2013-12-19 12:00:07.896 13128 TRACE neutron.service     app = loader.get_app(pipeline[-1])
  2013-12-19 12:00:07.896 13128 TRACE neutron.service   File 

[Yahoo-eng-team] [Bug 1263647] [NEW] log image id incorrectly when creating image

2013-12-23 Thread Liusheng
Public bug reported:

when I use "nova image-create my_server_id name" to create an image,
the log /var/log/glance/api.log records the text below:

<glance.common.wsgi.Resource object at 0x38a7150>} __call__ /usr/lib/python2.7/dist-packages/routes/middleware.py:103
2013-12-23 20:24:19.168 20884 INFO glance.registry.api.v1.images [0f0f1d19-9d63-4f2a-88d1-f48c0e28f6f4 e0f87f6761614850b064f9717ac83a36 3b520afd03d94532b64fef4d863230f6] Successfully created image None

but the image id in the DB is not None but a uuid.
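
A minimal sketch of the kind of fix this implies (names illustrative, not
the actual patch): log the id of the image row that was actually created,
rather than the id field of the id-less request body.

    def create_image_and_log(db_api, context, image_data, log):
        # Hypothetical helper: image_data has no 'id' yet, so log the id
        # assigned by the DB layer instead.
        image = db_api.image_create(context, image_data)
        log.info("Successfully created image %s", image['id'])
        return image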

** Affects: glance
 Importance: Undecided
 Assignee: Liusheng (liusheng)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Liusheng (liusheng)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1263647

Title:
  log image id incorrectly when creating image

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  when I use "nova image-create my_server_id name" to create an image,
  the log /var/log/glance/api.log records the text below:

  <glance.common.wsgi.Resource object at 0x38a7150>} __call__ /usr/lib/python2.7/dist-packages/routes/middleware.py:103
  2013-12-23 20:24:19.168 20884 INFO glance.registry.api.v1.images [0f0f1d19-9d63-4f2a-88d1-f48c0e28f6f4 e0f87f6761614850b064f9717ac83a36 3b520afd03d94532b64fef4d863230f6] Successfully created image None

  but the image id in the DB is not None but a uuid.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1263647/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1263636] Re: no correct message prompt when migrate instance without enough cpu by Vcenter driver

2013-12-23 Thread dingxy
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1263636

Title:
  no correct message prompt when migrate instance without enough cpu by
  Vcenter driver

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  1. Boot two instances with flavor 13; then there are only 5 vCPUs available.
  [root@10-1-0-71 nova]# nova flavor-list
  +----+-------------+-----------+------+-----------+------+-------+-------------+-----------+
  | ID | Name        | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
  +----+-------------+-----------+------+-----------+------+-------+-------------+-----------+
  | 1  | m1.tiny     | 512       | 1    | 0         |      | 1     | 1.0         | True      |
  | 13 | migrate     | 512       | 5    | 0         |      | 20    | 1.0         | True      |
  | 14 | migrate_ram | 51200     | 5    | 0         |      | 1     | 1.0         | True      |
  | 2  | m1.small    | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
  | 3  | m1.medium   | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
  | 4  | m1.large    | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
  | 5  | m1.xlarge   | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
  +----+-------------+-----------+------+-----------+------+-------+-------------+-----------+

  2. Then migrate test_1 to the same host with the command "nova migrate test_1", but instance test_1 finally went to ERROR status.
  [root@10-1-0-71 nova]# nova list
  +--------------------------------------+--------+--------+------------------+-------------+-------------------+
  | ID                                   | Name   | Status | Task State       | Power State | Networks          |
  +--------------------------------------+--------+--------+------------------+-------------+-------------------+
  | 2bffa204-b5d2-4ddf-abd5-15a5db32884d | test_1 | RESIZE | resize_migrating | Running     | network1=10.0.1.6 |
  +--------------------------------------+--------+--------+------------------+-------------+-------------------+
  [root@10-1-0-71 nova]# nova list
  +--------------------------------------+--------+--------+------------+-------------+-------------------+
  | ID                                   | Name   | Status | Task State | Power State | Networks          |
  +--------------------------------------+--------+--------+------------+-------------+-------------------+
  | 2bffa204-b5d2-4ddf-abd5-15a5db32884d | test_1 | ERROR  | None       | Running     | network1=10.0.1.6 |
  +--------------------------------------+--------+--------+------------+-------------+-------------------+

  3. nova show the instance; see the following message:

  | fault | {u'message': u'Error caused by file /vmfs/volumes/52664587-e6a52545-4f65-3440b5e539d0/2bffa204-b5d2-4ddf-abd5-15a5db32884d/2bffa204-b5d2-4ddf-abd5-15a5db32884d.vmdk', u'code': 500, u'details': u'  File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 270, in decorated_function |
  |       |     return function(self, context, *args, **kwargs) |
  |       |   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 3117, in resize_instance |
  |       |     block_device_info) |
  |       |   File "/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py", line 443, in migrate_disk_and_power_off |
  |       |     dest, flavor) |
[Yahoo-eng-team] [Bug 1263657] [NEW] documentation uses incorrect urls for glance endpoint

2013-12-23 Thread Kirill Zaborsky
Public bug reported:

In docs.openstack.org/havana/install-guide/install/apt/openstack-install-guide-apt-havana.pdf I see the command:

# keystone endpoint-create \
--service-id=the_service_id_above \
--publicurl=http://controller:9292 \
--internalurl=http://controller:9292 \
--adminurl=http://controller:9292

I tried to use it but received a 404 error. After adding a trailing slash to
the glance URLs I got it working.
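
The corrected command, as I understand the report, just adds the trailing
slashes:

# keystone endpoint-create \
--service-id=the_service_id_above \
--publicurl=http://controller:9292/ \
--internalurl=http://controller:9292/ \
--adminurl=http://controller:9292/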

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: documentation

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1263657

Title:
  documentation uses incorrect urls for glance endpoint

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  In docs.openstack.org/havana/install-guide/install/apt/openstack-install-guide-apt-havana.pdf I see the command:

  # keystone endpoint-create \
  --service-id=the_service_id_above \
  --publicurl=http://controller:9292 \
  --internalurl=http://controller:9292 \
  --adminurl=http://controller:9292

  I tried to use it but received a 404 error. After adding a trailing slash
  to the glance URLs I got it working.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1263657/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1263663] [NEW] IPv6 tests are not skipped when IPv6 is disabled

2013-12-23 Thread mouadino
Public bug reported:

A platform can still have IPv6 disabled, for example by doing "sudo
sysctl -n net.ipv6.conf.all.disable_ipv6=1", but the IPv6-related tests
will not be skipped, because of the naive way of detecting IPv6 support
in the system.

I am proposing in this patch a better way to detect whether IPv6 is
enabled on a machine.
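
A sketch of a less naive check (helper name hypothetical): socket.has_ipv6
only says the interpreter was built with IPv6 support, while actually binding
a socket also catches the sysctl-disabled case.

    import socket

    def ipv6_enabled():
        if not socket.has_ipv6:
            return False
        try:
            sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
            sock.bind(('::1', 0))
            sock.close()
            return True
        except socket.error:
            return False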

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1263663

Title:
  IPv6 tests are not skipped when IPv6 is disabled

Status in OpenStack Identity (Keystone):
  New

Bug description:
  A platform can still have IPv6 disabled, for example by doing "sudo
  sysctl -n net.ipv6.conf.all.disable_ipv6=1", but the IPv6-related tests
  will not be skipped, because of the naive way of detecting IPv6 support
  in the system.

  I am proposing in this patch a better way to detect whether IPv6 is
  enabled on a machine.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1263663/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1263665] [NEW] Number of GET requests grows exponentially when multiple rows are being updated in the table

2013-12-23 Thread Anastasia Karpinska
Public bug reported:

1. In the "Launch instance" dialog select number of instances 10.
2. Create 10 instances.
3. While instances are being created and table rows are being updated, the
number of row update requests grows exponentially, and a queue of pending
requests still exists after all rows have been updated.

There is a request of this type:
Request URL: http://donkey017/project/instances/?action=row_update&table=instances&obj_id=7c4eaf35-ebc0-4ea3-a702-7554c8c36cf2
Request Method: GET

** Affects: horizon
 Importance: Undecided
 Assignee: Anastasia Karpinska (anastasia-karpinska)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Anastasia Karpinska (anastasia-karpinska)

** Changed in: horizon
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1263665

Title:
  Number of GET requests grows exponentially when multiple rows are
  being updated in the table

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  1. In the "Launch instance" dialog select number of instances 10.
  2. Create 10 instances.
  3. While instances are being created and table rows are being updated, the
  number of row update requests grows exponentially, and a queue of pending
  requests still exists after all rows have been updated.

  There is a request of this type:
  Request URL: http://donkey017/project/instances/?action=row_update&table=instances&obj_id=7c4eaf35-ebc0-4ea3-a702-7554c8c36cf2
  Request Method: GET

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1263665/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1263673] [NEW] Reboot and terminate instances buttons left enabled after instance has been deleted

2013-12-23 Thread Anastasia Karpinska
Public bug reported:

Current behavior:
1. Select some instance and terminate it.
2. While the instance is being terminated, select this instance with the
checkbox. The "Soft reboot instances" and "Terminate instances" buttons
become enabled.
3. After the instance has been deleted from the table the buttons remain enabled.

Expected behavior:
"Soft reboot instances" and "Terminate instances" must be disabled after the
instance has been deleted.

** Affects: horizon
 Importance: Undecided
 Assignee: Anastasia Karpinska (anastasia-karpinska)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Anastasia Karpinska (anastasia-karpinska)

** Changed in: horizon
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1263673

Title:
  Reboot and terminate instances buttons left enabled after instance has
  been deleted

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Current behavior:
  1. Select some instance and terminate it.
  2. While the instance is being terminated, select this instance with the
  checkbox. The "Soft reboot instances" and "Terminate instances" buttons
  become enabled.
  3. After the instance has been deleted from the table the buttons remain enabled.

  Expected behavior:
  "Soft reboot instances" and "Terminate instances" must be disabled after the
  instance has been deleted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1263673/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1263684] [NEW] DB2: GET image with long ID (non-existed) Glance would return 500, but not 404

2013-12-23 Thread Fei Long Wang
Public bug reported:

GET:
http://9.12.27.148:9292/v1/images/82ff46a0-f49b-4279-bdad-e665c203777w232323232eer34r3r3r3qer3r3r3r
would return 500 instead of 404.

The root cause is as below:
2013-12-23 07:43:15.387 27806 INFO glance.wsgi.server [687bccf6-ed9e-4efa-988e-a05faafb6c4b 51336fdd98e449ef911f57ad3d03c818 d9f056c22dac4022bf2ebb1ed70fd25e] Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/eventlet/wsgi.py", line 384, in handle_one_response
    result = self.application(self.environ, start_response)
  File "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", line 571, in __call__
    return self.app(env, start_response)
  File "/usr/lib/python2.6/site-packages/webob/dec.py", line 130, in __call__
    resp = self.call_func(req, *args, **self.kwargs)
  File "/usr/lib/python2.6/site-packages/webob/dec.py", line 195, in call_func
    return self.func(req, *args, **kwargs)
  File "/usr/lib/python2.6/site-packages/glance/common/wsgi.py", line 368, in __call__
    response = req.get_response(self.application)
  File "/usr/lib/python2.6/site-packages/webob/request.py", line 1296, in send
    application, catch_exc_info=False)
  File "/usr/lib/python2.6/site-packages/webob/request.py", line 1260, in call_application
    app_iter = application(self.environ, start_response)
  File "/usr/lib/python2.6/site-packages/webob/dec.py", line 144, in __call__
    return resp(environ, start_response)
  File "/usr/lib/python2.6/site-packages/routes/middleware.py", line 131, in __call__
    response = self.app(environ, start_response)
  File "/usr/lib/python2.6/site-packages/webob/dec.py", line 144, in __call__
    return resp(environ, start_response)
  File "/usr/lib/python2.6/site-packages/webob/dec.py", line 130, in __call__
    resp = self.call_func(req, *args, **self.kwargs)
  File "/usr/lib/python2.6/site-packages/webob/dec.py", line 195, in call_func
    return self.func(req, *args, **kwargs)
  File "/usr/lib/python2.6/site-packages/glance/common/wsgi.py", line 620, in __call__
    request, **action_args)
  File "/usr/lib/python2.6/site-packages/glance/common/wsgi.py", line 646, in dispatch
    return method(*args, **kwargs)
  File "/usr/lib/python2.6/site-packages/glance/registry/api/v1/images.py", line 312, in show
    image = self.db_api.image_get(req.context, id)
  File "/usr/lib/python2.6/site-packages/glance/db/sqlalchemy/api.py", line 315, in image_get
    force_show_deleted=force_show_deleted)
  File "/usr/lib/python2.6/site-packages/glance/db/sqlalchemy/api.py", line 343, in _image_get
    raise e
DataError: (DataError) ibm_db_dbi::DataError: Statement Execute Failed: 
[IBM][CLI Driver] CLI0109E  String data right truncation. SQLSTATE=22001 
SQLCODE=-9 'SELECT images.created_at AS images_created_at, 
images.updated_at AS images_updated_at, images.deleted_at AS images_deleted_at, 
images.deleted AS images_deleted, images.id AS images_id, images.name AS 
images_name, images.disk_format AS images_disk_format, images.container_format 
AS images_container_format, images.size AS images_size, images.status AS 
images_status, images.is_public AS images_is_public, images.checksum AS 
images_checksum, images.min_disk AS images_min_disk, images.min_ram AS 
images_min_ram, images.owner AS images_owner, images.protected AS 
images_protected, image_properties_1.created_at AS 
image_properties_1_created_at, image_properties_1.updated_at AS 
image_properties_1_updated_at, image_properties_1.deleted_at AS 
image_properties_1_deleted_at, image_properties_1.deleted AS 
image_properties_1_deleted, image_pr
 operties_1.id AS image_properties_1_id, image_properties_1.image_id AS 
image_properties_1_image_id, image_properties_1.name AS 
image_properties_1_name, image_properties_1.value AS 
image_properties_1_value, image_locations_1.created_at AS 
image_locations_1_created_at, image_locations_1.updated_at AS 
image_locations_1_updated_at, image_locations_1.deleted_at AS 
image_locations_1_deleted_at, image_locations_1.deleted AS 
image_locations_1_deleted, image_locations_1.id AS image_locations_1_id, 
image_locations_1.image_id AS image_locations_1_image_id, 
image_locations_1.value AS image_locations_1_value, 
image_locations_1.meta_data AS image_locations_1_meta_data \nFROM images LEFT 
OUTER JOIN image_properties AS image_properties_1 ON images.id = 
image_properties_1.image_id LEFT OUTER JOIN image_locations AS 
image_locations_1 ON images.id = image_locations_1.image_id \nWHERE images.id = 
?' 
('82ff46a0-f49b-4279-bdad-e665c203777w232323232eer34r3r3r3qer3r3r3r',)
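
A hedged sketch of one way to avoid the 500 (illustrative, not necessarily
the adopted fix): reject ids that cannot possibly fit the images.id column
before the query reaches the DB2 driver.

    import webob.exc

    IMAGE_ID_MAX_LEN = 36  # assumed width of the images.id column

    def validate_image_id(image_id):
        # An id longer than the column can hold can never match a row,
        # so return 404 up front instead of letting ibm_db_dbi raise
        # DataError deep inside sqlalchemy.
        if len(image_id) > IMAGE_ID_MAX_LEN:
            raise webob.exc.HTTPNotFound()
        return image_id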

** Affects: glance
 Importance: Low
 Assignee: Fei Long Wang (flwang)
 Status: Confirmed


** Tags: db2 error-code

** Changed in: glance
 Assignee: (unassigned) => Fei Long Wang (flwang)

** Changed in: glance
   Importance: Undecided => Low

** Changed in: glance
   Status: New => Confirmed

** Tags added: db2

** Tags added: error-code

-- 
You received this bug notification because 

[Yahoo-eng-team] [Bug 1263686] [NEW] Race between list_networks and delete_network in OVS plugin

2013-12-23 Thread Eugene Nikanorov
Public bug reported:

When listing networks, the OVS plugin fetches network bindings via an
additional per-network DB query.

If some network (and its binding) from the list gets deleted while the
network dicts are being extended, _extend_network_dict_provider throws an
exception:

TRACE neutron.api.v2.resource Traceback (most recent call last):
TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/api/v2/resource.py", line 84, in resource
TRACE neutron.api.v2.resource     result = method(request=request, **args)
TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/api/v2/base.py", line 279, in index
TRACE neutron.api.v2.resource     return self._items(request, True, parent_id)
TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/api/v2/base.py", line 233, in _items
TRACE neutron.api.v2.resource     obj_list = obj_getter(request.context, **kwargs)
TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/plugins/openvswitch/ovs_neutron_plugin.py", line 567, in get_networks
TRACE neutron.api.v2.resource     self._extend_network_dict_provider(context, net)
TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/plugins/openvswitch/ovs_neutron_plugin.py", line 381, in _extend_network_dict_provider
TRACE neutron.api.v2.resource     network[provider.NETWORK_TYPE] = binding.network_type
TRACE neutron.api.v2.resource AttributeError: 'NoneType' object has no attribute 'network_type'

This issue is easily reproducible with
tempest.api.network.test_networks.BulkNetworkOps
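
A hedged sketch of a defensive fix (assuming the ovs_db_v2 helper the plugin
already uses; illustrative, not the merged patch):

    def _extend_network_dict_provider(self, context, network):
        binding = ovs_db_v2.get_network_binding(context.session, network['id'])
        if binding is None:
            # The network (and its binding) was deleted concurrently;
            # skip it instead of dereferencing None.
            return
        network[provider.NETWORK_TYPE] = binding.network_type
        network[provider.PHYSICAL_NETWORK] = binding.physical_network
        network[provider.SEGMENTATION_ID] = binding.segmentation_id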

** Affects: neutron
 Importance: Medium
 Assignee: Eugene Nikanorov (enikanorov)
 Status: New


** Tags: db ovs

** Tags added: db ovs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1263686

Title:
  Race between list_networks and delete_network in OVS plugin

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When listing networks, the OVS plugin fetches network bindings via an
  additional per-network DB query.

  If some network (and its binding) from the list gets deleted while the
  network dicts are being extended, _extend_network_dict_provider throws
  an exception:

  TRACE neutron.api.v2.resource Traceback (most recent call last):
  TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/api/v2/resource.py", line 84, in resource
  TRACE neutron.api.v2.resource     result = method(request=request, **args)
  TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/api/v2/base.py", line 279, in index
  TRACE neutron.api.v2.resource     return self._items(request, True, parent_id)
  TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/api/v2/base.py", line 233, in _items
  TRACE neutron.api.v2.resource     obj_list = obj_getter(request.context, **kwargs)
  TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/plugins/openvswitch/ovs_neutron_plugin.py", line 567, in get_networks
  TRACE neutron.api.v2.resource     self._extend_network_dict_provider(context, net)
  TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/plugins/openvswitch/ovs_neutron_plugin.py", line 381, in _extend_network_dict_provider
  TRACE neutron.api.v2.resource     network[provider.NETWORK_TYPE] = binding.network_type
  TRACE neutron.api.v2.resource AttributeError: 'NoneType' object has no attribute 'network_type'

  This issue is easily reproducible with
  tempest.api.network.test_networks.BulkNetworkOps

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1263686/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1255609] Re: VMware: possible collision of VNC ports

2013-12-23 Thread Gary Kotton
** Changed in: nova
   Importance: Medium => High

** Also affects: openstack-vmwareapi-team
   Importance: Undecided
   Status: New

** Changed in: openstack-vmwareapi-team
   Importance: Undecided => High

** Tags added: havana-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1255609

Title:
  VMware: possible collision of VNC ports

Status in OpenStack Compute (Nova):
  In Progress
Status in The OpenStack VMwareAPI subTeam:
  New

Bug description:
  We assign VNC ports to VM instances with the following method:

  def _get_vnc_port(vm_ref):
      """Return VNC port for a VM."""
      vm_id = int(vm_ref.value.replace('vm-', ''))
      port = CONF.vmware.vnc_port + vm_id % CONF.vmware.vnc_port_total
      return port

  the vm_id is a simple counter in vSphere which increments quickly, and
  there is a chance of getting the same port number if two vm_ids are
  equal modulo vnc_port_total (1 by default).
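
  For illustration (both config values assumed here: vnc_port = 5900,
  vnc_port_total = 10000), two vm_ids that are equal modulo vnc_port_total
  collide on the same port:

      vnc_port, vnc_port_total = 5900, 10000   # assumed config values
      for vm_id in (105, 10105):               # 10105 % 10000 == 105
          print(vm_id, vnc_port + vm_id % vnc_port_total)  # both yield 6005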

  A report was received that if the port number is reused you may get
  access to the VNC console of another tenant. We need to fix the
  implementation to always choose a port number which is not taken or
  report an error if there are no free ports available.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1255609/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1263729] [NEW] Image size won't be updated if locations are updated to empty

2013-12-23 Thread Fei Long Wang
Public bug reported:

See
https://github.com/openstack/glance/blob/master/glance/api/v2/images.py#L225

Based on the current design, the image status will be updated to 'queued'
if all locations are removed from the target image, but currently the image
size is not updated. That is inconsistent and will confuse the end user.
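
A sketch of the behavior the report argues for, loosely following the v2
images code linked above (illustrative, not the merged change):

    def on_locations_emptied(image):
        # Hypothetical hook: when the last location is removed, reset the
        # status (as the code already does) and also clear the stale size.
        image.status = 'queued'
        image.size = None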

** Affects: glance
 Importance: Medium
 Assignee: Fei Long Wang (flwang)
 Status: Confirmed

** Changed in: glance
 Assignee: (unassigned) => Fei Long Wang (flwang)

** Changed in: glance
   Importance: Undecided => Medium

** Changed in: glance
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1263729

Title:
  Image size won't be updated if locations are updated to empty

Status in OpenStack Image Registry and Delivery Service (Glance):
  Confirmed

Bug description:
  See
  https://github.com/openstack/glance/blob/master/glance/api/v2/images.py#L225

  Based on the current design, the image status will be updated to
  'queued' if all locations are removed from the target image, but
  currently the image size is not updated. That is inconsistent and will
  confuse the end user.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1263729/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1263657] Re: documentation uses incorrect urls for glance endpoint

2013-12-23 Thread Fei Long Wang
If so, I don't think it's a Glance bug, since there is nothing wrong
with Glance. Maybe you should route it to the doc team or the Keystone team.

** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1263657

Title:
  documentation uses incorrect urls for glance endpoint

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  In docs.openstack.org/havana/install-guide/install/apt/openstack-install-guide-apt-havana.pdf I see the command:

  # keystone endpoint-create \
  --service-id=the_service_id_above \
  --publicurl=http://controller:9292 \
  --internalurl=http://controller:9292 \
  --adminurl=http://controller:9292

  I tried to use it but received a 404 error. After adding a trailing slash
  to the glance URLs I got it working.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1263657/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1263735] [NEW] NVP plugin does not support VIF ports error

2013-12-23 Thread Bhuvan Arumugam
Public bug reported:

Unable to delete a server using the nova api. No errors logged on the nova
side.

On the neutron side, it complains that it doesn't support regular VIF ports
on external networks:

2013-12-23 17:19:11,516 (NeutronPlugin): ERROR NeutronPlugin _nvp_delete_port NVP plugin does not support regular VIF ports on external networks. Port 6d382dfd-f826-45d3-8818-48c34f8a8908 will be down.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1263735

Title:
  NVP plugin does not support VIF ports error

Status in OpenStack Compute (Nova):
  New

Bug description:
  Unable to delete a server using nova api. No errors logged at nova
  side.

  On neutron side, it complain that it doesn't support regular VIF ports
  on external networks.

  2013-12-23 17:19:11,516 (NeutronPlugin): ERROR NeutronPlugin _nvp_delete_port 
NVP plugin does not support regular V
  IF ports on external networks. Port 6d382dfd-f826-45d3-8818-48c34f8a8908 will 
be down.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1263735/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1263745] [NEW] lbaas-haproxy default config causing long delay with downed member

2013-12-23 Thread Simon
Public bug reported:

The "timeout check 5s" option is causing a 5 second delay, with a chance
of 1/N, when one of the N backend nodes is down.

backend 36279594-bd3f-47dd-b2fa-96e0bfc02ac6
    mode tcp
    balance roundrobin
    timeout check 5s
    server 7ad07521-3034-4e1c-9b3c-33ac42ac50d8 172.20.144.74:80 weight 1 check inter 3s fall 5
    server b87715a5-853d-412d-9d74-ed48f83e5fc4 172.20.144.75:80 weight 1 check inter 3s fall 5

Removing this option gives the correct behavior: the downed node is simply
ignored, instead of traffic still being directed to it and causing the 5s
delay.
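
For reference, the reported workaround is the same backend with the check
timeout dropped (a sketch of the resulting config, not verbatim from the
reporter):

backend 36279594-bd3f-47dd-b2fa-96e0bfc02ac6
    mode tcp
    balance roundrobin
    server 7ad07521-3034-4e1c-9b3c-33ac42ac50d8 172.20.144.74:80 weight 1 check inter 3s fall 5
    server b87715a5-853d-412d-9d74-ed48f83e5fc4 172.20.144.75:80 weight 1 check inter 3s fall 5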

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1263745

Title:
  lbaas-haproxy default config causing long delay with downed member

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The "timeout check 5s" option is causing a 5 second delay, with a chance
  of 1/N, when one of the N backend nodes is down.

  backend 36279594-bd3f-47dd-b2fa-96e0bfc02ac6
      mode tcp
      balance roundrobin
      timeout check 5s
      server 7ad07521-3034-4e1c-9b3c-33ac42ac50d8 172.20.144.74:80 weight 1 check inter 3s fall 5
      server b87715a5-853d-412d-9d74-ed48f83e5fc4 172.20.144.75:80 weight 1 check inter 3s fall 5

  Removing this option gives the correct behavior: the downed node is simply
  ignored, instead of traffic still being directed to it and causing the 5s
  delay.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1263745/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1263766] [NEW] Hyper-V agent fails when ceilometer ACLs are already applied

2013-12-23 Thread Alessandro Pilotti
Public bug reported:

The Hyper-V agent fails when restarted after the ceilometer ACLs have
already been applied.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: havana-backport-potential hyper-v

** Tags added: hyper-v

** Tags added: havana-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1263766

Title:
  Hyper-V agent fails when ceilometer ACLs are already applied

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The Hyper-V agent fails when restarted after the ceilometer ACLs have
  already been applied.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1263766/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251784] Re: nova+neutron scheduling error: Connection to neutron failed: Maximum attempts reached

2013-12-23 Thread Mark McClain
Removing Neutron since this bug has not shown up for 30 days.

http://logstash.openstack.org/#eyJmaWVsZHMiOltdLCJzZWFyY2giOiJtZXNzYWdlOlwiQ29ubmVjdGlvbiB0byBuZXV0cm9uIGZhaWxlZDogTWF4aW11bSBhdHRlbXB0cyByZWFjaGVkXCIgQU5EIGZpbGVuYW1lOlwibG9ncy9zY3JlZW4tbi1zY2gudHh0XCIiLCJ0aW1lZnJhbWUiOiJjdXN0b20iLCJncmFwaG1vZGUiOiJjb3VudCIsIm9mZnNldCI6MCwidGltZSI6eyJmcm9tIjoiMjAxMy0xMS0yM1QxOTo0ODoxNyswMDowMCIsInRvIjoiMjAxMy0xMi0yM1QxOTo0ODoxNyswMDowMCIsInVzZXJfaW50ZXJ2YWwiOiIwIn0sInN0YW1wIjoxMzg3ODMxMzA5NDQ1fQ==

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1251784

Title:
  nova+neutron scheduling error: Connection to neutron failed: Maximum
  attempts reached

Status in OpenStack Compute (Nova):
  Fix Released
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  VMs are failing to schedule with the following error

  2013-11-15 20:50:21.405 ERROR nova.scheduler.filter_scheduler [req-
  d2c26348-53e6-448a-8975-4f22f4e89782 demo demo] [instance: c8069c13
  -593f-48fb-aae9-198961097eb2] Error from last host: devstack-precise-
  hpcloud-az3-662002 (node devstack-precise-hpcloud-az3-662002):
  [u'Traceback (most recent call last):\n', u'  File
  /opt/stack/new/nova/nova/compute/manager.py, line 1030, in
  _build_instance\nset_access_ip=set_access_ip)\n', u'  File
  /opt/stack/new/nova/nova/compute/manager.py, line 1439, in _spawn\n
  LOG.exception(_(\'Instance failed to spawn\'), instance=instance)\n',
  u'  File /opt/stack/new/nova/nova/compute/manager.py, line 1436, in
  _spawn\nblock_device_info)\n', u'  File
  /opt/stack/new/nova/nova/virt/libvirt/driver.py, line 2100, in
  spawn\nadmin_pass=admin_password)\n', u'  File
  /opt/stack/new/nova/nova/virt/libvirt/driver.py, line 2451, in
  _create_image\ncontent=files, extra_md=extra_md,
  network_info=network_info)\n', u'  File
  /opt/stack/new/nova/nova/api/metadata/base.py, line 165, in
  __init__\n
  ec2utils.get_ip_info_for_instance_from_nw_info(network_info)\n', u'
  File /opt/stack/new/nova/nova/api/ec2/ec2utils.py, line 149, in
  get_ip_info_for_instance_from_nw_info\nfixed_ips =
  nw_info.fixed_ips()\n', u'  File
  /opt/stack/new/nova/nova/network/model.py, line 368, in
  _sync_wrapper\nself.wait()\n', u'  File
  /opt/stack/new/nova/nova/network/model.py, line 400, in wait\n
  self[:] = self._gt.wait()\n', u'  File /usr/local/lib/python2.7/dist-
  packages/eventlet/greenthread.py, line 168, in wait\nreturn
  self._exit_event.wait()\n', u'  File /usr/local/lib/python2.7/dist-
  packages/eventlet/event.py, line 120, in wait\n
  current.throw(*self._exc)\n', u'  File /usr/local/lib/python2.7/dist-
  packages/eventlet/greenthread.py, line 194, in main\nresult =
  function(*args, **kwargs)\n', u'  File
  /opt/stack/new/nova/nova/compute/manager.py, line 1220, in
  _allocate_network_async\ndhcp_options=dhcp_options)\n', u'  File
  /opt/stack/new/nova/nova/network/neutronv2/api.py, line 359, in
  allocate_for_instance\nnw_info =
  self._get_instance_nw_info(context, instance, networks=nets)\n', u'
  File /opt/stack/new/nova/nova/network/api.py, line 49, in wrapper\n
  res = f(self, context, *args, **kwargs)\n', u'  File
  /opt/stack/new/nova/nova/network/neutronv2/api.py, line 458, in
  _get_instance_nw_info\nnw_info =
  self._build_network_info_model(context, instance, networks)\n', u'
  File /opt/stack/new/nova/nova/network/neutronv2/api.py, line 1022,
  in _build_network_info_model\nsubnets =
  self._nw_info_get_subnets(context, port, network_IPs)\n', u'  File
  /opt/stack/new/nova/nova/network/neutronv2/api.py, line 924, in
  _nw_info_get_subnets\nsubnets =
  self._get_subnets_from_port(context, port)\n', u'  File
  /opt/stack/new/nova/nova/network/neutronv2/api.py, line 1066, in
  _get_subnets_from_port\ndata =
  neutronv2.get_client(context).list_ports(**search_opts)\n', u'  File
  /opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py,
  line 111, in with_params\nret = self.function(instance, *args,
  **kwargs)\n', u'  File /opt/stack/new/python-
  neutronclient/neutronclient/v2_0/client.py, line 306, in list_ports\n
  **_params)\n', u'  File /opt/stack/new/python-
  neutronclient/neutronclient/v2_0/client.py, line 1250, in list\n
  for r in self._pagination(collection, path, **params):\n', u'  File
  /opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py,
  line 1263, in _pagination\nres = self.get(path, params=params)\n',
  u'  File /opt/stack/new/python-
  neutronclient/neutronclient/v2_0/client.py, line 1236, in get\n
  headers=headers, params=params)\n', u'  File /opt/stack/new/python-
  neutronclient/neutronclient/v2_0/client.py, line 1228, in
  retry_request\nraise exceptions.ConnectionFailed(reason=_(Maximum
  attempts reached))\n', u'ConnectionFailed: Connection to neutron
  failed: Maximum attempts 

[Yahoo-eng-team] [Bug 1263790] [NEW] ipmitool does not support OPERATOR priv level

2013-12-23 Thread Devananda van der Veen
Public bug reported:

If the BMC / IPMI credentials being used for management of hardware were
only granted OPERATOR privileges, there is no way to inform Nova's
baremetal driver or Ironic's ipmitool driver to use this non-default
privilege level. These will issue ipmitool commands with no "-L"
parameter, resulting in privilege errors, because the default ipmitool
privlvl is ADMINISTRATOR.

This could be fixed by adding an optional field to store the privilege
level.
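
For illustration, this is the kind of invocation the drivers would need to
emit once such a field exists (host and credentials are placeholders):

    ipmitool -I lanplus -H 10.0.0.5 -U oper_user -P oper_pass -L OPERATOR power status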

** Affects: ironic
 Importance: Medium
 Status: Triaged

** Affects: nova
 Importance: Medium
 Status: Triaged


** Tags: baremetal

** Changed in: ironic
   Status: New => Triaged

** Changed in: ironic
   Importance: Undecided => Medium

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Triaged

** Changed in: nova
   Importance: Undecided => Medium

** Tags added: baremetal

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1263790

Title:
  ipmitool does not support OPERATOR priv level

Status in Ironic (Bare Metal Provisioning):
  Triaged
Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  If the BMC / IPMI credentials being used for management of hardware
  were only granted OPERATOR privileges, there is no way to inform
  Nova's baremetal driver or Ironic's ipmitool driver to use this non-
  default privilege level. These will issue ipmitool commands with no
  "-L" parameter, resulting in privilege errors, because the default
  ipmitool privlvl is ADMINISTRATOR.

  This could be fixed by adding an optional field to store the privilege
  level.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1263790/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1263804] [NEW] EC2 token creation no longer possible for admin

2013-12-23 Thread Dirk Mueller
Public bug reported:

with the security fix for [OSSA 2013-032] "Keystone trust circumvention
through EC2-style tokens" (CVE-2013-6391), bug #1242597, it is no longer
possible to create EC2 credentials with just the admin service token +
service endpoint.

Log file gives:

2013-12-23 22:13:12.197 22611 WARNING keystone.common.wsgi [-]
Authorization failed. Could not find token, 120726431688. from
192.168.226.81


where 120... is the admin token.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1263804

Title:
  EC2 token creation no longer possible for admin

Status in OpenStack Identity (Keystone):
  New

Bug description:
  with the security fix for [OSSA 2013-032] "Keystone trust circumvention
  through EC2-style tokens" (CVE-2013-6391), bug #1242597, it is no longer
  possible to create EC2 credentials with just the admin service token +
  service endpoint.

  Log file gives:

  2013-12-23 22:13:12.197 22611 WARNING keystone.common.wsgi [-]
  Authorization failed. Could not find token, 120726431688. from
  192.168.226.81

  
  where 120... is the admin token.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1263804/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1263813] [NEW] TypeError - not enough args for format string in i18n_cfg.py

2013-12-23 Thread Jay Pipes
Public bug reported:

jpipes@uberbox:~/repos/openstack/neutron$ git log --oneline | head -1
84aeb9a Merge Imported Translations from Transifex

Running tox -eALL resulted in:

i18n runtests: commands[0] | python ./tools/check_i18n.py ./neutron ./tools/i18n_cfg.py
Traceback (most recent call last):
  File "./tools/check_i18n.py", line 151, in <module>
    debug):
  File "./tools/check_i18n.py", line 112, in check_i18n
    ASTWalker())
  File "/usr/lib/python2.7/compiler/visitor.py", line 106, in walk
    walker.preorder(tree, visitor)
  File "/usr/lib/python2.7/compiler/visitor.py", line 63, in preorder
    self.dispatch(tree, *args) # XXX *args make sense?
  File "/usr/lib/python2.7/compiler/visitor.py", line 57, in dispatch
    return meth(node, *args)
  File "./tools/check_i18n.py", line 38, in default
    compiler.visitor.ASTVisitor.default(self, node, *args)
  File "/usr/lib/python2.7/compiler/visitor.py", line 40, in default
    self.dispatch(child, *args)
  File "/usr/lib/python2.7/compiler/visitor.py", line 57, in dispatch
    return meth(node, *args)
  File "./tools/check_i18n.py", line 38, in default
    compiler.visitor.ASTVisitor.default(self, node, *args)
  File "/usr/lib/python2.7/compiler/visitor.py", line 40, in default
    self.dispatch(child, *args)
  File "/usr/lib/python2.7/compiler/visitor.py", line 57, in dispatch
    return meth(node, *args)
  File "./tools/check_i18n.py", line 38, in default
    compiler.visitor.ASTVisitor.default(self, node, *args)
  File "/usr/lib/python2.7/compiler/visitor.py", line 40, in default
    self.dispatch(child, *args)
  File "/usr/lib/python2.7/compiler/visitor.py", line 57, in dispatch
    return meth(node, *args)
  File "./tools/check_i18n.py", line 38, in default
    compiler.visitor.ASTVisitor.default(self, node, *args)
  File "/usr/lib/python2.7/compiler/visitor.py", line 40, in default
    self.dispatch(child, *args)
  File "/usr/lib/python2.7/compiler/visitor.py", line 57, in dispatch
    return meth(node, *args)
  File "./tools/check_i18n.py", line 38, in default
    compiler.visitor.ASTVisitor.default(self, node, *args)
  File "/usr/lib/python2.7/compiler/visitor.py", line 40, in default
    self.dispatch(child, *args)
  File "/usr/lib/python2.7/compiler/visitor.py", line 57, in dispatch
    return meth(node, *args)
  File "./tools/check_i18n.py", line 38, in default
    compiler.visitor.ASTVisitor.default(self, node, *args)
  File "/usr/lib/python2.7/compiler/visitor.py", line 40, in default
    self.dispatch(child, *args)
  File "/usr/lib/python2.7/compiler/visitor.py", line 57, in dispatch
    return meth(node, *args)
  File "./tools/check_i18n.py", line 38, in default
    compiler.visitor.ASTVisitor.default(self, node, *args)
  File "/usr/lib/python2.7/compiler/visitor.py", line 40, in default
    self.dispatch(child, *args)
  File "/usr/lib/python2.7/compiler/visitor.py", line 57, in dispatch
    return meth(node, *args)
  File "./tools/check_i18n.py", line 38, in default
    compiler.visitor.ASTVisitor.default(self, node, *args)
  File "/usr/lib/python2.7/compiler/visitor.py", line 40, in default
    self.dispatch(child, *args)
  File "/usr/lib/python2.7/compiler/visitor.py", line 57, in dispatch
    return meth(node, *args)
  File "./tools/check_i18n.py", line 38, in default
    compiler.visitor.ASTVisitor.default(self, node, *args)
  File "/usr/lib/python2.7/compiler/visitor.py", line 40, in default
    self.dispatch(child, *args)
  File "/usr/lib/python2.7/compiler/visitor.py", line 57, in dispatch
    return meth(node, *args)
  File "./tools/check_i18n.py", line 38, in default
    compiler.visitor.ASTVisitor.default(self, node, *args)
  File "/usr/lib/python2.7/compiler/visitor.py", line 40, in default
    self.dispatch(child, *args)
  File "/usr/lib/python2.7/compiler/visitor.py", line 57, in dispatch
    return meth(node, *args)
  File "./tools/check_i18n.py", line 38, in default
    compiler.visitor.ASTVisitor.default(self, node, *args)
  File "/usr/lib/python2.7/compiler/visitor.py", line 40, in default
    self.dispatch(child, *args)
  File "/usr/lib/python2.7/compiler/visitor.py", line 57, in dispatch
    return meth(node, *args)
  File "./tools/check_i18n.py", line 38, in default
    compiler.visitor.ASTVisitor.default(self, node, *args)
  File "/usr/lib/python2.7/compiler/visitor.py", line 40, in default
    self.dispatch(child, *args)
  File "/usr/lib/python2.7/compiler/visitor.py", line 57, in dispatch
    return meth(node, *args)
  File "./tools/check_i18n.py", line 62, in visitConst
    self.lines[node.lineno - 1][:-1], msg),
TypeError: not enough arguments for format string
ERROR: InvocationError: '/home/jpipes/repos/openstack/neutron/.tox/i18n/bin/python ./tools/check_i18n.py ./neutron ./tools/i18n_cfg.py'
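
For context, this TypeError is the generic failure mode of applying the
'%' operator to a format string containing more conversion specifiers
than supplied arguments; a minimal, hypothetical reproduction (not the
actual check_i18n.py code):

    # A '%s' left unmatched in the template triggers the same error.
    template = 'offending line: %s (%s)'    # two specifiers...
    print(template % 'only one argument')   # TypeError: not enough
                                            # arguments for format string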

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1263813

Title:
  TypeError - not enough args for format string in i18n_cfg.py

[Yahoo-eng-team] [Bug 1263823] [NEW] Some select options have periods at the end

2013-12-23 Thread Ana Krivokapić
Public bug reported:

Some of the select boxes (e.g. in Launch Instance, Create Volume, etc)
have options which end with a period. That makes the UI inconsistent,
since the period is sometimes there and sometimes not. Moreover, it is
unusual to have a period placed at the end of select options. The
strings should be amended so that these periods are removed.

See attached screenshot for an example.
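
A hypothetical illustration of the kind of string change involved (not
Horizon's actual option strings):

    # Select-box choices should omit trailing periods for consistency.
    from django.utils.translation import ugettext_lazy as _

    SOURCE_CHOICES = [
        ('image_id', _('Boot from image')),   # rather than 'Boot from image.'
        ('volume_id', _('Boot from volume')),
    ]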

** Affects: horizon
 Importance: Undecided
 Assignee: Ana Krivokapić (akrivoka)
 Status: In Progress

** Attachment added: Screenshot from 2013-12-24 04:33:53.png
   
https://bugs.launchpad.net/bugs/1263823/+attachment/3934923/+files/Screenshot%20from%202013-12-24%2004%3A33%3A53.png

** Changed in: horizon
   Status: New => In Progress

** Changed in: horizon
 Assignee: (unassigned) => Ana Krivokapić (akrivoka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1263823

Title:
  Some select options have periods at the end

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Some of the select boxes (e.g. in Launch Instance, Create Volume, etc)
  have options which end with a period. That makes the UI inconsistent,
  since the period is sometimes there and sometimes not. Moreover, it is
  unusual to have a period placed at the end of select options. The
  strings should be amended so that these periods are removed.

  See attached screenshot for an example.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1263823/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1263824] [NEW] grenade is failing on euca and bundle tests

2013-12-23 Thread Joe Gordon
Public bug reported:

During gate runs we are seeing the euca and bundle devstack tests fail
on the havana side of the upgrade.

http://logs.openstack.org/51/63551/2/check/check-grenade-
dsvm/6ad42b5/console.html


 2013-12-24 03:06:45.831 | + BUCKET=testbucket
 2013-12-24 03:06:45.831 | + IMAGE=bundle.img
 2013-12-24 03:06:45.832 | + truncate -s 5M /tmp/bundle.img
 2013-12-24 03:06:45.833 | + euca-bundle-image -i /tmp/bundle.img
 2013-12-24 03:06:46.007 | image should be of type file
 2013-12-24 03:06:46.018 | + die 52 'Failure bundling image bundle.img'
 2013-12-24 03:06:46.018 | + local exitcode=1
 2013-12-24 03:06:46.018 | + set +o xtrace
 2013-12-24 03:06:46.019 | [Call Trace]
 2013-12-24 03:06:46.019 | /opt/stack/old/devstack/exercises/bundle.sh:52:die
 2013-12-24 03:06:46.024 | [ERROR] /opt/stack/old/devstack/exercises/bundle.sh:52 Failure bundling image bundle.img

** Affects: grenade
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New

** Also affects: grenade
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1263824

Title:
  grenade is failing on euca and bundle tests

Status in Grenade - OpenStack upgrade testing:
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  During gate runs we are seeing the euca and bundle devstack tests
  fail on the havana side of the upgrade.

  http://logs.openstack.org/51/63551/2/check/check-grenade-
  dsvm/6ad42b5/console.html

  
   2013-12-24 03:06:45.831 | + BUCKET=testbucket
   2013-12-24 03:06:45.831 | + IMAGE=bundle.img
   2013-12-24 03:06:45.832 | + truncate -s 5M /tmp/bundle.img
   2013-12-24 03:06:45.833 | + euca-bundle-image -i /tmp/bundle.img
   2013-12-24 03:06:46.007 | image should be of type file
   2013-12-24 03:06:46.018 | + die 52 'Failure bundling image bundle.img'
   2013-12-24 03:06:46.018 | + local exitcode=1
   2013-12-24 03:06:46.018 | + set +o xtrace
   2013-12-24 03:06:46.019 | [Call Trace]
   2013-12-24 03:06:46.019 | /opt/stack/old/devstack/exercises/bundle.sh:52:die
   2013-12-24 03:06:46.024 | [ERROR] /opt/stack/old/devstack/exercises/bundle.sh:52 Failure bundling image bundle.img

To manage notifications about this bug go to:
https://bugs.launchpad.net/grenade/+bug/1263824/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1232166] Re: run_tests.sh misleading, broken, and might be unused

2013-12-23 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1232166

Title:
  run_tests.sh misleading, broken, and might be unused

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  As per conversation in #openstack-neutron

  12:43:37 #openstack-neutron mestery | roaet: How are you running the tests?
  12:43:58 #openstack-neutron   roaet | mestery: I am using ./run_tests.sh I used to use tox... maybe I should use tox
  12:44:33 #openstack-neutron mestery | I don't think run_tests.sh is supported or works anymore.
  12:44:36 #openstack-neutron mestery | I don't know why it's still there in fact.
  12:44:40 #openstack-neutron mestery | But someone else will know for sure.
  12:44:43 #openstack-neutron mestery | I would recommend using tox
  12:44:48 #openstack-neutron mestery | something like this should work: tox -v -epy27
  12:45:07 #openstack-neutron   roaet | Yeah I keep on making the same mistake, using run-tests
  12:45:11 #openstack-neutron   roaet | I'll just use tox.
  12:45:11 #openstack-neutron mestery | :)
  12:45:13 #openstack-neutron   roaet | my apologies
  12:45:13 #openstack-neutron mestery | Cool
  12:45:17 #openstack-neutron mestery | I think you'll have better luck
  12:45:32 #openstack-neutron mestery | And someone needs to explain to me why run_tests.sh is still there, as you're not the first person to ask this question. :)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1232166/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1263824] Re: grenade is failing on euca and bundle tests

2013-12-23 Thread Joe Gordon
** Also affects: openstack-ci
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1263824

Title:
  grenade is failing on euca and bundle tests

Status in Grenade - OpenStack upgrade testing:
  New
Status in OpenStack Compute (Nova):
  New
Status in OpenStack Core Infrastructure:
  New

Bug description:
  During gate runs we are seeing the euca and bundle devstack tests
  fail on the havana side of the upgrade.

  http://logs.openstack.org/51/63551/2/check/check-grenade-
  dsvm/6ad42b5/console.html

  
   2013-12-24 03:06:45.831 | + BUCKET=testbucket
   2013-12-24 03:06:45.831 | + IMAGE=bundle.img
   2013-12-24 03:06:45.832 | + truncate -s 5M /tmp/bundle.img
   2013-12-24 03:06:45.833 | + euca-bundle-image -i /tmp/bundle.img
   2013-12-24 03:06:46.007 | image should be of type file
   2013-12-24 03:06:46.018 | + die 52 'Failure bundling image bundle.img'
   2013-12-24 03:06:46.018 | + local exitcode=1
   2013-12-24 03:06:46.018 | + set +o xtrace
   2013-12-24 03:06:46.019 | [Call Trace]
   2013-12-24 03:06:46.019 | /opt/stack/old/devstack/exercises/bundle.sh:52:die
   2013-12-24 03:06:46.024 | [ERROR] /opt/stack/old/devstack/exercises/bundle.sh:52 Failure bundling image bundle.img

To manage notifications about this bug go to:
https://bugs.launchpad.net/grenade/+bug/1263824/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1263824] Re: grenade is failing on euca and bundle tests

2013-12-23 Thread Joe Gordon
Fix https://review.openstack.org/#/c/63865/

** No longer affects: nova

** No longer affects: grenade

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1263824

Title:
  grenade is failing on euca and bundle tests

Status in OpenStack Core Infrastructure:
  New

Bug description:
  During gate runs we are seeing the euca and bundle devstack tests
  fail on the havana side of the upgrade.

  http://logs.openstack.org/51/63551/2/check/check-grenade-
  dsvm/6ad42b5/console.html

  
   2013-12-24 03:06:45.831 | + BUCKET=testbucket
   2013-12-24 03:06:45.831 | + IMAGE=bundle.img
   2013-12-24 03:06:45.832 | + truncate -s 5M /tmp/bundle.img
   2013-12-24 03:06:45.833 | + euca-bundle-image -i /tmp/bundle.img
   2013-12-24 03:06:46.007 | image should be of type file
   2013-12-24 03:06:46.018 | + die 52 'Failure bundling image bundle.img'
   2013-12-24 03:06:46.018 | + local exitcode=1
   2013-12-24 03:06:46.018 | + set +o xtrace
   2013-12-24 03:06:46.019 | [Call Trace]
   2013-12-24 03:06:46.019 | /opt/stack/old/devstack/exercises/bundle.sh:52:die
   2013-12-24 03:06:46.024 | [ERROR] /opt/stack/old/devstack/exercises/bundle.sh:52 Failure bundling image bundle.img

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-ci/+bug/1263824/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp