[Yahoo-eng-team] [Bug 1476732] [NEW] large pages support in flavor metadefs

2015-07-21 Thread Waldemar Znoinski
Public bug reported:

flavor namespace in glance metadefs should be extended to support
hw:mem_page_size flavor-key
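For context, a minimal sketch of the values hw:mem_page_size is documented to accept in nova ("small", "large", "any", or an explicit page size); the helper below is purely illustrative and is not glance or nova code:

```python
# Illustrative only: valid forms of the hw:mem_page_size flavor key as
# documented for nova -- "small", "large", "any", or an explicit size
# such as "2048" (KiB) or "1GB".
def is_valid_mem_page_size(value):
    if value in ("small", "large", "any"):
        return True
    size = value
    for suffix in ("KiB", "MiB", "GiB", "KB", "MB", "GB"):
        if size.endswith(suffix):
            size = size[:-len(suffix)]
            break
    return size.isdigit()
```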

** Affects: glance
 Importance: Undecided
 Assignee: Waldemar Znoinski (wznoinsk)
 Status: New


** Tags: metadef

** Changed in: glance
 Assignee: (unassigned) => Waldemar Znoinski (wznoinsk)

** Tags added: metadef

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1476732

Title:
  large pages support in flavor metadefs

Status in Glance:
  New

Bug description:
  flavor namespace in glance metadefs should be extended to support
  hw:mem_page_size flavor-key

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1476732/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1476720] [NEW] Openstack endpoint create with invalid url should be suppressed

2015-07-21 Thread jiaxi
Public bug reported:

stack@openstack:~$ openstack endpoint create cinder public 
'http://127.0.0.1:8774 /v1.1/\$(tenant_i d)s'
+--------------+---------------------------------------------+
| Field        | Value                                       |
+--------------+---------------------------------------------+
| enabled      | True                                        |
| id           | 59b2c8b05aff41eab4c5c92209ac6602            |
| interface    | public                                      |
| region       | None                                        |
| region_id    | None                                        |
| service_id   | d6a73e95a522431aa416bf32744b8d32            |
| service_name | cinder                                      |
| service_type | volume                                      |
| url          | http://127.0.0.1:8774 /v1.1/\$(tenant_i d)s |
+--------------+---------------------------------------------+


It's different from bug #1471034 (that one is entirely about Identity API
v2).

This bug is for Identity API v3, so it can be fixed in a separate patch set.
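A hedged sketch of the validation the report asks for: reject endpoint URLs that contain whitespace (as in the example above) or lack a scheme and host. This is illustrative, not keystone's actual validator; the function name is made up.

```python
from urllib.parse import urlparse

def looks_like_valid_endpoint_url(url):
    # Reject URLs containing whitespace, then require an http(s) scheme
    # and a network location.
    if any(ch.isspace() for ch in url):
        return False
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)
```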

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1476720

Title:
  Openstack endpoint create with invalid url should be suppressed

Status in Keystone:
  New

Bug description:
  stack@openstack:~$ openstack endpoint create cinder public 
'http://127.0.0.1:8774 /v1.1/\$(tenant_i d)s'
  +--------------+---------------------------------------------+
  | Field        | Value                                       |
  +--------------+---------------------------------------------+
  | enabled      | True                                        |
  | id           | 59b2c8b05aff41eab4c5c92209ac6602            |
  | interface    | public                                      |
  | region       | None                                        |
  | region_id    | None                                        |
  | service_id   | d6a73e95a522431aa416bf32744b8d32            |
  | service_name | cinder                                      |
  | service_type | volume                                      |
  | url          | http://127.0.0.1:8774 /v1.1/\$(tenant_i d)s |
  +--------------+---------------------------------------------+

  
  It's different from bug #1471034 (that one is entirely about Identity
  API v2).

  This bug is for Identity API v3, so it can be fixed in a separate patch
  set.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1476720/+subscriptions



[Yahoo-eng-team] [Bug 1476714] Re: Invalid test condition in test_ha_router_failover

2015-07-21 Thread venkata anil
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1476714

Title:
  Invalid test condition in test_ha_router_failover

Status in neutron:
  Invalid

Bug description:
  In the test_ha_router_failover function, we have the following test conditions:

  
  utils.wait_until_true(lambda: router1.ha_state == 'master')
  utils.wait_until_true(lambda: router2.ha_state == 'backup')

  self.fail_ha_router(router1)

  utils.wait_until_true(lambda: router2.ha_state == 'master')
  utils.wait_until_true(lambda: router1.ha_state != 'backup')

  
  I think the last test condition is incorrect and it should be

  utils.wait_until_true(lambda: router1.ha_state == 'backup')

  instead of

  utils.wait_until_true(lambda: router1.ha_state != 'backup')

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1476714/+subscriptions



[Yahoo-eng-team] [Bug 1476734] [NEW] numa placement and topology in flavor metadef

2015-07-21 Thread Waldemar Znoinski
Public bug reported:

numa placement and topology flavor-keys should be supported in glance
flavor metadef :

hw:numa_mempolicy
hw:numa_cpus.X
hw:numa_mem.X
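For context, a hedged sketch of how these keys appear as flavor extra specs (key names as documented for nova; the concrete values here are made up for illustration):

```python
# Illustrative NUMA flavor extra specs; values are examples, not from the bug.
numa_extra_specs = {
    "hw:numa_nodes": "2",           # number of NUMA nodes to expose to the guest
    "hw:numa_mempolicy": "strict",  # or "preferred"
    "hw:numa_cpus.0": "0,1",        # guest vCPUs placed on NUMA node 0
    "hw:numa_cpus.1": "2,3",
    "hw:numa_mem.0": "2048",        # MiB of RAM on NUMA node 0
    "hw:numa_mem.1": "2048",
}
```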

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: metadef

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1476734

Title:
  numa placement and topology in flavor metadef

Status in Glance:
  New

Bug description:
  numa placement and topology flavor-keys should be supported in glance
  flavor metadef :

  hw:numa_mempolicy
  hw:numa_cpus.X
  hw:numa_mem.X

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1476734/+subscriptions



[Yahoo-eng-team] [Bug 1476714] [NEW] Invalid test condition in test_ha_router_failover

2015-07-21 Thread venkata anil
Public bug reported:

In the test_ha_router_failover function, we have the following test conditions:


utils.wait_until_true(lambda: router1.ha_state == 'master')
utils.wait_until_true(lambda: router2.ha_state == 'backup')

self.fail_ha_router(router1)

utils.wait_until_true(lambda: router2.ha_state == 'master')
utils.wait_until_true(lambda: router1.ha_state != 'backup')


I think the last test condition is incorrect and it should be

utils.wait_until_true(lambda: router1.ha_state == 'backup')

instead of

utils.wait_until_true(lambda: router1.ha_state != 'backup')
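To see why the `!=` condition verifies nothing, here is a minimal stand-in for neutron's utils.wait_until_true (a sketch, not the real helper):

```python
import time

def wait_until_true(predicate, timeout=5, sleep=0.1):
    # Poll until predicate() is truthy or the timeout expires.
    deadline = time.time() + timeout
    while not predicate():
        if time.time() > deadline:
            raise RuntimeError("timed out")
        time.sleep(sleep)

# Immediately after the failure is injected, router1 is still 'master',
# so "ha_state != 'backup'" is already true and the wait returns at once,
# asserting nothing about the failover actually completing.
ha_state = "master"
wait_until_true(lambda: ha_state != "backup")  # returns immediately
```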

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-ha low-hanging-fruit

** Tags added: low-hanging-fruit

** Tags added: l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1476714

Title:
  Invalid test condition in test_ha_router_failover

Status in neutron:
  New

Bug description:
  In the test_ha_router_failover function, we have the following test conditions:

  
  utils.wait_until_true(lambda: router1.ha_state == 'master')
  utils.wait_until_true(lambda: router2.ha_state == 'backup')

  self.fail_ha_router(router1)

  utils.wait_until_true(lambda: router2.ha_state == 'master')
  utils.wait_until_true(lambda: router1.ha_state != 'backup')

  
  I think the last test condition is incorrect and it should be

  utils.wait_until_true(lambda: router1.ha_state == 'backup')

  instead of

  utils.wait_until_true(lambda: router1.ha_state != 'backup')

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1476714/+subscriptions



[Yahoo-eng-team] [Bug 1476701] [NEW] vendor-data is not functional in NoCloud datasource

2015-07-21 Thread Scott Moser
Public bug reported:

vendor-data is not working with the NoCloud datasource.
The issue is that when read from the seed/source, it is assigned to
vendordata, not vendordata_raw.
As a result, it never gets processed.
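A minimal sketch of the described fix (the class below is a stand-in that mirrors the attribute names from the report; it is not the actual cloud-init code):

```python
class DataSourceNoCloudSketch:
    """Illustrates the reported bug: vendor-data read from the seed must be
    stored in vendordata_raw, which cloud-init later processes, rather than
    in vendordata, which it does not."""

    def __init__(self):
        self.vendordata_raw = None

    def read_seed(self, seed):
        # The buggy version effectively did: self.vendordata = seed.get(...)
        self.vendordata_raw = seed.get("vendor-data")
```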

ProblemType: Bug
DistroRelease: Ubuntu 14.04
Package: cloud-init 0.7.5-0ubuntu1.6
ProcVersionSignature: Ubuntu 3.19.0-22.22-generic 3.19.8-ckt1
Uname: Linux 3.19.0-22-generic x86_64
NonfreeKernelModules: veth overlay xt_CHECKSUM iptable_mangle ipt_MASQUERADE 
nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 
nf_nat nf_conntrack xt_tcpudp bridge stp llc iptable_filter ip_tables x_tables 
ppdev kvm_intel kvm crct10dif_pclmul crc32_pclmul ghash_clmulni_intel serio_raw 
parport_pc parport autofs4 aesni_intel aes_x86_64 glue_helper lrw gf128mul 
ablk_helper cryptd psmouse floppy
ApportVersion: 2.14.1-0ubuntu3.11
Architecture: amd64
Date: Tue Jul 21 14:24:06 2015
PackageArchitecture: all
ProcEnviron:
 TERM=screen
 PATH=(custom, no user)
 LANG=en_US.UTF-8
 SHELL=/bin/bash
SourcePackage: cloud-init
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: cloud-init
 Importance: Medium
 Status: Confirmed

** Affects: cloud-init (Ubuntu)
 Importance: Medium
 Status: Confirmed


** Tags: amd64 apport-bug trusty uec-images

** Also affects: cloud-init
   Importance: Undecided
   Status: New

** Changed in: cloud-init
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu)
   Status: New => Confirmed

** Changed in: cloud-init
   Importance: Undecided => Medium

** Changed in: cloud-init (Ubuntu)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1476701

Title:
  vendor-data is not functional in NoCloud datasource

Status in cloud-init:
  Confirmed
Status in cloud-init package in Ubuntu:
  Confirmed

Bug description:
  vendor-data is not working with the NoCloud datasource.
  The issue is that when read from the seed/source, it is assigned to
  vendordata, not vendordata_raw.
  As a result, it never gets processed.

  ProblemType: Bug
  DistroRelease: Ubuntu 14.04
  Package: cloud-init 0.7.5-0ubuntu1.6
  ProcVersionSignature: Ubuntu 3.19.0-22.22-generic 3.19.8-ckt1
  Uname: Linux 3.19.0-22-generic x86_64
  NonfreeKernelModules: veth overlay xt_CHECKSUM iptable_mangle ipt_MASQUERADE 
nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 
nf_nat nf_conntrack xt_tcpudp bridge stp llc iptable_filter ip_tables x_tables 
ppdev kvm_intel kvm crct10dif_pclmul crc32_pclmul ghash_clmulni_intel serio_raw 
parport_pc parport autofs4 aesni_intel aes_x86_64 glue_helper lrw gf128mul 
ablk_helper cryptd psmouse floppy
  ApportVersion: 2.14.1-0ubuntu3.11
  Architecture: amd64
  Date: Tue Jul 21 14:24:06 2015
  PackageArchitecture: all
  ProcEnviron:
   TERM=screen
   PATH=(custom, no user)
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  SourcePackage: cloud-init
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1476701/+subscriptions



[Yahoo-eng-team] [Bug 1476770] Re: _translate_from_glance fails with AttributeError: id in grenade

2015-07-21 Thread Davanum Srinivas (DIMS)
** Changed in: oslo.vmware
 Assignee: (unassigned) => Davanum Srinivas (DIMS) (dims-v)

** Changed in: oslo.vmware
   Importance: Undecided => High

** Changed in: oslo.vmware
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1476770

Title:
  _translate_from_glance fails with AttributeError: id in grenade

Status in Glance:
  Invalid
Status in OpenStack-Gate:
  In Progress
Status in oslo.vmware:
  Fix Released

Bug description:
  http://logs.openstack.org/28/204128/2/check/gate-grenade-
  dsvm/80607dc/logs/old/screen-n-api.txt.gz?level=TRACE

  2015-07-21 17:05:37.447 ERROR nova.api.openstack 
[req-9854210d-b9fc-47ff-9f00-1a0270266e2a tempest-ServersTestJSON-34270062 
tempest-ServersTestJSON-745803609] Caught error: id
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack Traceback (most recent 
call last):
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/opt/stack/old/nova/nova/api/openstack/__init__.py, line 125, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
req.get_response(self.application)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/request.py, line 1317, in send
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/request.py, line 1281, in 
call_application
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py,
 line 634, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
self._call_app(env, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py,
 line 554, in _call_app
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
self._app(env, _fake_start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/routes/middleware.py, line 136, in 
__call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack response = 
self.app(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 130, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 195, in call_func
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
self.func(req, *args, **kwargs)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/opt/stack/old/nova/nova/api/openstack/wsgi.py, line 756, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack content_type, 
body, accept)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/opt/stack/old/nova/nova/api/openstack/wsgi.py, line 821, in _process_stack
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/opt/stack/old/nova/nova/api/openstack/wsgi.py, line 911, in dispatch
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
method(req=request, **action_args)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/opt/stack/old/nova/nova/api/openstack/compute/servers.py, line 636, in create
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack 
self._handle_create_exception(*sys.exc_info())
  2015-07-21 17:05:37.447 21251 TRACE 

[Yahoo-eng-team] [Bug 1476912] [NEW] No nova.conf.sample, and no details procedures to describe how to generate a full list of nova.conf.sample

2015-07-21 Thread frozen.lcl
Public bug reported:

1. Version of Nova:  1:2015.1.0
There is no nova.conf.sample in the source code on github.com,
and there are no detailed documents telling users (not developers) how to
generate a full nova.conf.sample.

Options described in the deployment guide on docs.openstack.org are very
different from the nova.conf.sample generated by the command tox
-egenconfig.

2. No logs needed, just check the source code on github.com

3. Reproduce steps:
step 1. check the options listed in the deployment guide on docs.openstack.org
step 2. generate nova.conf.sample with the command listed in
https://github.com/openstack/nova/blob/stable/kilo/etc/nova/README-nova.conf.txt
step 3. try to find the options listed in the deployment guide.

Suggestion from a user (not a developer): why not generate a full
nova.conf.sample like other projects such as neutron and glance?

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1476912

Title:
  No nova.conf.sample, and no details procedures to describe how to
  generate a full list of nova.conf.sample

Status in OpenStack Compute (nova):
  New

Bug description:
  1. Version of Nova:  1:2015.1.0
  There is no nova.conf.sample in the source code on github.com,
  and there are no detailed documents telling users (not developers) how to
  generate a full nova.conf.sample.

  Options described in the deployment guide on docs.openstack.org are very
  different from the nova.conf.sample generated by the command tox
  -egenconfig.

  2. No logs needed, just check the source code on github.com

  3. Reproduce steps:
  step 1. check the options listed in the deployment guide on docs.openstack.org
  step 2. generate nova.conf.sample with the command listed in
  https://github.com/openstack/nova/blob/stable/kilo/etc/nova/README-nova.conf.txt
  step 3. try to find the options listed in the deployment guide.

  Suggestion from a user (not a developer): why not generate a full
  nova.conf.sample like other projects such as neutron and glance?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1476912/+subscriptions



[Yahoo-eng-team] [Bug 1430300] Re: nova does not allow instances to be paused during live migration

2015-07-21 Thread Pawel Koniszewski
This will be implemented as part of
https://blueprints.launchpad.net/nova/+spec/refresh-abort-live-migration
similarly to https://review.openstack.org/#/c/179149/

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1430300

Title:
  nova does not allow instances to be paused during live migration

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  QEMU/KVM is able to pause instance during live migration. This
  operation does not abort live migration process - VM is paused on
  source host and after successful live migration VM is back online on
  destination host (hypervisor automatically starts VM).

  In current approach in nova there is no way to do anything with VM
  that is being live migrated, even when the process will take ages (or
  in the worst case scenario - will never end). To address this issue
  nova should be able to pause instances during live migration
  operation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1430300/+subscriptions



[Yahoo-eng-team] [Bug 1476752] [NEW] ZVM Agent ids for Neutron

2015-07-21 Thread Dirk Mueller
Public bug reported:

The networking-zvm neutron agent (https://wiki.openstack.org/wiki
/Networking-zvm) uses this code to patch neutron code on the fly for
defining VIF_TYPE and the AGENT_TYPE:

   /bin/sed -i "$a\VIF_TYPE_ZVM = 'zvm'" {toxinidir}/\
 .tox/py27/src/neutron/neutron/extensions/portbindings.py
   /bin/sed -i "$a\AGENT_TYPE_ZVM = 'zVM agent'" {toxinidir}/\
 .tox/py27/src/neutron/neutron/common/constants.py

Since those definitions must not ever change, it makes a lot of sense to
have them defined in the neutron code even though the actual driver is
still in stackforge:

https://git.openstack.org/cgit/stackforge/networking-zvm/

This enhancement bug report is about tracking these identifiers to be
merged into Neutron core.
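The identifiers in question, as they would look once landed in neutron (values taken from the sed lines above; file placement per the report):

```python
# Would live in neutron/extensions/portbindings.py:
VIF_TYPE_ZVM = 'zvm'

# Would live in neutron/common/constants.py:
AGENT_TYPE_ZVM = 'zVM agent'
```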

** Affects: neutron
 Importance: Wishlist
 Assignee: Dirk Mueller (dmllr)
 Status: In Progress

** Description changed:

  The networking-zvm neutron agent (https://wiki.openstack.org/wiki
  /Networking-zvm) uses this code to patch neutron code on the fly for
  defining VIF_TYPE and the AGENT_TYPE:
  
-/bin/sed -i $a\VIF_TYPE_ZVM = 'zvm' {toxinidir}/\
-  .tox/py27/src/neutron/neutron/extensions/portbindings.py
-/bin/sed -i $a\AGENT_TYPE_ZVM = 'zVM agent' {toxinidir}/\
-  .tox/py27/src/neutron/neutron/common/constants.py
+    /bin/sed -i $a\VIF_TYPE_ZVM = 'zvm' {toxinidir}/\
+  .tox/py27/src/neutron/neutron/extensions/portbindings.py
+    /bin/sed -i $a\AGENT_TYPE_ZVM = 'zVM agent' {toxinidir}/\
+  .tox/py27/src/neutron/neutron/common/constants.py
  
- 
- Since those definitions must not ever change it makes a lot of sense to have 
them defined in the neutron code even though the actual driver is still in 
stackforge:
+ Since those definitions must not ever change it makes a lot of sense to
+ have them defined in the neutron code even though the actual driver is
+ still in stackforge:
  
  https://git.openstack.org/cgit/stackforge/networking-zvm/
+ 
+ This enhancement bug report is about tracking these identifiers to be
+ merged into Neutron core.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1476752

Title:
  ZVM Agent ids for Neutron

Status in neutron:
  In Progress

Bug description:
  The networking-zvm neutron agent (https://wiki.openstack.org/wiki
  /Networking-zvm) uses this code to patch neutron code on the fly for
  defining VIF_TYPE and the AGENT_TYPE:

     /bin/sed -i "$a\VIF_TYPE_ZVM = 'zvm'" {toxinidir}/\
   .tox/py27/src/neutron/neutron/extensions/portbindings.py
     /bin/sed -i "$a\AGENT_TYPE_ZVM = 'zVM agent'" {toxinidir}/\
   .tox/py27/src/neutron/neutron/common/constants.py

  Since those definitions must not ever change, it makes a lot of sense
  to have them defined in the neutron code even though the actual driver
  is still in stackforge:

  https://git.openstack.org/cgit/stackforge/networking-zvm/

  This enhancement bug report is about tracking these identifiers to be
  merged into Neutron core.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1476752/+subscriptions



[Yahoo-eng-team] [Bug 1476771] [NEW] No need to convert angular based messages to use python based interpolation

2015-07-21 Thread Doug Fish
Public bug reported:

While reviewing translation code at the mid-cycle we discussed that
there is a troublesome set of code that is converting javascript/angular
style interpolation to python style interpolation and back.

I don't think we need to do that. Have a look at the existing files:
https://github.com/openstack/horizon/blob/master/openstack_dashboard/locale/django.pot#L3420
msgid "Namespace Details: {{ namespace.namespace }}"

https://github.com/openstack/horizon/blob/master/openstack_dashboard/locale/djangojs.pot#L285
"The selected image source requires a flavor with at least %(minDisk)s GB"

We already use different-style interpolation strings in our pot files, and I
don't think that's going to be a problem. Let's remove the code that does that
conversion and avoid the complexity. I'm thinking of code such as:
https://github.com/openstack/horizon/blob/17342c2d42a508090fa7a77c1bf291a3829531aa/horizon/utils/babel_extract_angular.py#L58

maybe there is more?
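The two interpolation styles at issue, side by side (the strings are taken from the pot files quoted above; the substitution value is made up):

```python
# Angular-style placeholders are interpolated client-side and pass through
# translation untouched:
angular_msg = "Namespace Details: {{ namespace.namespace }}"

# Python-style placeholders use %-formatting on the server side:
python_msg = ("The selected image source requires a flavor with at least "
              "%(minDisk)s GB" % {"minDisk": 20})
```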

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1476771

Title:
  No need to convert angular based messages to use python based
  interpolation

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  While reviewing translation code at the mid-cycle we discussed that
  there is a troublesome set of code that is converting
  javascript/angular style interpolation to python style interpolation
  and back.

  I don't think we need to do that. Have a look at the existing files:
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/locale/django.pot#L3420
  msgid "Namespace Details: {{ namespace.namespace }}"

  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/locale/djangojs.pot#L285
  "The selected image source requires a flavor with at least %(minDisk)s GB"

  We already use different-style interpolation strings in our pot files, and
  I don't think that's going to be a problem. Let's remove the code that does
  that conversion and avoid the complexity. I'm thinking of code such as:
  
https://github.com/openstack/horizon/blob/17342c2d42a508090fa7a77c1bf291a3829531aa/horizon/utils/babel_extract_angular.py#L58

  maybe there is more?

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1476771/+subscriptions



[Yahoo-eng-team] [Bug 1398997] Re: cloud-init does not have the SmartOS data source as a configuration option

2015-07-21 Thread Scott Moser
** Also affects: cloud-init
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1398997

Title:
  cloud-init does not have the SmartOS data source as a configuration
  option

Status in cloud-init:
  New
Status in cloud-init package in Ubuntu:
  Confirmed
Status in cloud-init source package in Trusty:
  Confirmed
Status in cloud-init source package in Vivid:
  Confirmed
Status in cloud-init source package in Wily:
  Confirmed

Bug description:
  The generic Ubuntu *-server-cloudimg-amd64-disk1.img images
  available here...

http://cloud-images.ubuntu.com/utopic/current/

  ... do not work on a SmartOS hypervisor: they try to detect a
  datasource, but ultimately fail.  It appears that this is because the
  SmartOS datasource is not in the list of datasources to try.  This
  appears to be an oversight, as the cloud-init project source
  includes a fallback configuration for when no configuration is
  provided by the image:

  ---
  - cloudinit/settings.py
  ---
  CFG_BUILTIN = {
  'datasource_list': [
  'NoCloud',
  'ConfigDrive',
  'OpenNebula',
  'Azure',
  'AltCloud',
  'OVF',
  'MAAS',
  'GCE',
  'OpenStack',
  'Ec2',
  'CloudSigma',
  'CloudStack',
  'SmartOS',
  # At the end to act as a 'catch' when none of the above work...
  'None',
  ],
  ...
  ---

  This list seems to be overridden in the generic images as shipped on
  ubuntu.com:

  ---
  - etc/cloud/cloud.cfg.d/90_dpkg.cfg
  ---
  # to update this file, run dpkg-reconfigure cloud-init
  datasource_list: [ NoCloud, ConfigDrive, OpenNebula, Azure, AltCloud,OVF, 
MAAS, GCE, OpenStack, CloudSigma, Ec2, CloudStack, None ]
  ---

  SmartOS is the only datasource type that appears in the default
  CFG_BUILTIN list but is missing from the overridden list as shipped in
  the images.  Can this list please be updated for at least the 14.04
  and 14.10 generic cloud images to include SmartOS?

  Thanks.
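The mismatch is easy to check mechanically; both lists below are copied from the report:

```python
# Default list from cloudinit/settings.py (CFG_BUILTIN):
builtin = ['NoCloud', 'ConfigDrive', 'OpenNebula', 'Azure', 'AltCloud', 'OVF',
           'MAAS', 'GCE', 'OpenStack', 'Ec2', 'CloudSigma', 'CloudStack',
           'SmartOS', 'None']
# Overridden list shipped in etc/cloud/cloud.cfg.d/90_dpkg.cfg:
shipped = ['NoCloud', 'ConfigDrive', 'OpenNebula', 'Azure', 'AltCloud', 'OVF',
           'MAAS', 'GCE', 'OpenStack', 'CloudSigma', 'Ec2', 'CloudStack',
           'None']
missing = [ds for ds in builtin if ds not in shipped]
print(missing)  # ['SmartOS']
```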

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1398997/+subscriptions



[Yahoo-eng-team] [Bug 1476775] [NEW] --compilemessages CommandError: Can't find msgfmt

2015-07-21 Thread Tyr Johanson
Public bug reported:

Attempting to rebuild the message catalog for Horizon, I hit the following:

$ ./run_tests.sh --compilemessages
Checking environment.
Environment is up to date.
CommandError: Can't find msgfmt. Make sure you have GNU gettext tools 0.15 or 
newer installed.

Ideally this utility would be satisfied from the virtual environment,
otherwise perhaps some documentation is necessary to describe how to
setup a development environment.
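A quick way to check the precondition before running the command. Django's compilemessages shells out to GNU gettext's msgfmt; on Debian/Ubuntu that binary usually comes from the gettext package (the package name is general knowledge, not something this report states):

```python
import shutil

# If msgfmt is not on PATH, compilemessages fails with the error quoted above.
msgfmt_path = shutil.which("msgfmt")
if msgfmt_path is None:
    print("msgfmt not found; install GNU gettext (>= 0.15)")
else:
    print("msgfmt found at", msgfmt_path)
```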

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1476775

Title:
  --compilemessages CommandError: Can't find msgfmt

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Attempting to rebuild the message catalog for Horizon, I hit the
  following:

  $ ./run_tests.sh --compilemessages
  Checking environment.
  Environment is up to date.
  CommandError: Can't find msgfmt. Make sure you have GNU gettext tools 0.15 or 
newer installed.

  Ideally this utility would be satisfied from the virtual environment,
  otherwise perhaps some documentation is necessary to describe how to
  setup a development environment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1476775/+subscriptions



[Yahoo-eng-team] [Bug 1476770] [NEW] _translate_from_glance fails with AttributeError: id in grenade

2015-07-21 Thread Matt Riedemann
Public bug reported:

http://logs.openstack.org/28/204128/2/check/gate-grenade-
dsvm/80607dc/logs/old/screen-n-api.txt.gz?level=TRACE

2015-07-21 17:05:37.447 ERROR nova.api.openstack 
[req-9854210d-b9fc-47ff-9f00-1a0270266e2a tempest-ServersTestJSON-34270062 
tempest-ServersTestJSON-745803609] Caught error: id
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack Traceback (most recent 
call last):
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/opt/stack/old/nova/nova/api/openstack/__init__.py, line 125, in __call__
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
req.get_response(self.application)
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/request.py, line 1317, in send
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack application, 
catch_exc_info=False)
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/request.py, line 1281, in 
call_application
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return resp(environ, 
start_response)
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py,
 line 634, in __call__
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
self._call_app(env, start_response)
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py,
 line 554, in _call_app
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
self._app(env, _fake_start_response)
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return resp(environ, 
start_response)
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return resp(environ, 
start_response)
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/routes/middleware.py, line 136, in 
__call__
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack response = 
self.app(environ, start_response)
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return resp(environ, 
start_response)
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 130, in __call__
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 195, in call_func
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
self.func(req, *args, **kwargs)
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/opt/stack/old/nova/nova/api/openstack/wsgi.py, line 756, in __call__
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack content_type, body, 
accept)
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/opt/stack/old/nova/nova/api/openstack/wsgi.py, line 821, in _process_stack
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/opt/stack/old/nova/nova/api/openstack/wsgi.py, line 911, in dispatch
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
method(req=request, **action_args)
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/opt/stack/old/nova/nova/api/openstack/compute/servers.py, line 636, in create
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack 
self._handle_create_exception(*sys.exc_info())
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/opt/stack/old/nova/nova/api/openstack/compute/servers.py, line 465, in 
_handle_create_exception
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack 
six.reraise(*exc_info)
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/opt/stack/old/nova/nova/api/openstack/compute/servers.py, line 621, in create
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack 
check_server_group_quota=check_server_group_quota)
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/opt/stack/old/nova/nova/hooks.py, line 149, in inner
2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack 

[Yahoo-eng-team] [Bug 1476497] Re: Not able to access internet on cirros vm in openstack

2015-07-21 Thread Markus Zoeller
This sounds more like a configuration problem than an issue within
OpenStack, which is why I am closing this bug. You could try the
resources below to solve your problem. If you still think this is an
issue within OpenStack, feel free to reopen it by setting its status
to New.

* 
http://thenewstack.io/solving-a-common-beginners-problem-when-pinging-from-an-openstack-instance/
* https://ask.openstack.org/en/question/5241/no-internet-connection-in-vms/
* 
https://ask.openstack.org/en/question/5341/instances-cant-reach-external-network-and-internet/


** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1476497

Title:
  Not able to access internet on cirros vm in openstack

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  I have a Juno multinode setup (Controller node, Network node, 2
  Compute nodes) and I am trying to access the internet on a cirros VM,
  but I am unable to. I want to install an app on the cirros VM, for
  which I need internet access. However, I am able to ping this VM
  through ext-net.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1476497/+subscriptions



[Yahoo-eng-team] [Bug 1463067] Re: SSH module to generate ed25519 host keys

2015-07-21 Thread Scott Moser
*** This bug is a duplicate of bug 1461242 ***
https://bugs.launchpad.net/bugs/1461242

** This bug is no longer a duplicate of bug 1382118
   Cloud-init doesn't support SSH ed25519 keys
** This bug has been marked a duplicate of bug 1461242
   cloud-init does not generate ed25519 keys

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1463067

Title:
  SSH module to generate ed25519 host keys

Status in cloud-init:
  New

Bug description:
  Hi,
  Since openssh-server 6.5 there is support for a new host key type, ed25519. It 
appears that cloud-init 0.7.6 doesn't generate this; ed25519 is not listed as a 
valid key type in ssh_util.py. Is it correct that cloud-init doesn't generate 
this host key, and if so, should it be supported? Generation is easy on new 
versions of openssh-server:

  ssh-keygen -f /etc/ssh/ssh_host_ed25519_key -N '' -t ed25519
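For reference, cloud-init's host key generation is driven by the ssh_genkeytypes setting. A cloud-config sketch that would request an ed25519 key, assuming a cloud-init version whose ssh_util.py accepts the type (which is exactly what this bug asks for):

```yaml
#cloud-config
# Host key types cloud-init should generate at first boot.
# "ed25519" is the type this bug requests support for.
ssh_genkeytypes: [rsa, ecdsa, ed25519]
```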

  Thanks.
James

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1463067/+subscriptions



[Yahoo-eng-team] [Bug 1476919] [NEW] Cannot use multi-byte role name in policy.json

2015-07-21 Thread Dixon Siu
Public bug reported:

A Japanese (multi-byte) role name can be used when creating a new role in
Horizon. However, there is no way to define the following rule (replace
japanese_word_for_admin with a Japanese word) in policy.json:

{
    "admin_required": "role:admin or role:japanese_word_for_admin"
}

At a minimum, support for role_id:xxx might fix the above issue.
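As an illustration of the suggested workaround, a rule keyed on the role's ID rather than its (multi-byte) name might look like the fragment below. Note that role_id matching is the feature being requested here, not something the policy engine is confirmed to support, and xxx stands in for an actual role ID:

```json
{
    "admin_required": "role:admin or role_id:xxx"
}
```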

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: multi-byte policy.json role

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1476919

Title:
  Cannot use multi-byte role name in policy.json

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  A Japanese (multi-byte) role name can be used when creating a new role
  in Horizon. However, there is no way to define the following rule
  (replace japanese_word_for_admin with a Japanese word) in policy.json:

  {
      "admin_required": "role:admin or role:japanese_word_for_admin"
  }

  At a minimum, support for role_id:xxx might fix the above issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1476919/+subscriptions



[Yahoo-eng-team] [Bug 1420731] Re: Icehouse nova quota-delete also erases currently used absolute limits

2015-07-21 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1420731

Title:
  Icehouse nova quota-delete also erases currently used absolute limits

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Example :

  nova absolute-limits --tenant test_fil
  +-+---+
  | Name| Value |
  +-+---+
  | maxServerMeta   | 128   |
  | maxPersonality  | 5 |
  | maxImageMeta| 128   |
  | maxPersonalitySize  | 10240 |
  | maxTotalRAMSize | 8192  |
  | maxSecurityGroupRules   | 20|
  | maxTotalKeypairs| 4 |
  | totalRAMUsed| 2048  |
  | maxSecurityGroups   | 10|
  | totalFloatingIpsUsed| 0 |
  | totalInstancesUsed  | 1 |
  | totalSecurityGroupsUsed | 1 |
  | maxTotalFloatingIps | 4 |
  | maxTotalInstances   | 4 |
  | totalCoresUsed  | 1 |
  | maxTotalCores   | 4 |
  +-+---+

  nova --debug quota-delete --tenant test_fil
  REQ: curl -i 'http://controller1:8774/v2/admin/os-quota-sets/test_fil' -X 
DELETE -H X-Auth-Project-Id: admin -H User-Agent: python-novaclient -H 
Accept: application/json -H X-Auth-Token: ...

  nova absolute-limits --tenant test_fil
  +-+---+
  | Name| Value |
  +-+---+
  | maxServerMeta   | 128   |
  | maxPersonality  | 5 |
  | maxImageMeta| 128   |
  | maxPersonalitySize  | 10240 |
  | maxTotalRAMSize | 8192  |
  | maxSecurityGroupRules   | 20|
  | maxTotalKeypairs| 4 |
  | totalRAMUsed| 0 |
  | maxSecurityGroups   | 10|
  | totalFloatingIpsUsed| 0 |
  | totalInstancesUsed  | 0 |
  | totalSecurityGroupsUsed | 0 |
  | maxTotalFloatingIps | 4 |
  | maxTotalInstances   | 4 |
  | totalCoresUsed  | 0 |
  | maxTotalCores   | 4 |
  +-+---+

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1420731/+subscriptions



[Yahoo-eng-team] [Bug 1476806] [NEW] Unable to delete instance with attached volumes which failed to boot

2015-07-21 Thread Maxim Nestratov
Public bug reported:

I ran devstack deployment on this git nova version:

commit 35375133398d862a61334783c1e7a90b95f34cdb
Merge: 83623dd b2c5542
Author: Jenkins jenk...@review.openstack.org
Date:   Thu Jul 16 02:01:05 2015 +

Merge Port crypto to Python 3

If you try to start an instance with the following config and end up
with the following error:

 Error defining a domain with XML: <domain type="parallels">
  <uuid>f81e862a-644b-4145-ab44-86d5c468106f</uuid>
  <name>instance-0001</name>
  <memory>2097152</memory>
  <vcpu>1</vcpu>
  <metadata>
    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
      <nova:package version="12.0.0"/>
      <nova:name>ct-volume</nova:name>
      <nova:creationTime>2015-07-21 17:46:34</nova:creationTime>
      <nova:flavor name="m1.small">
        <nova:memory>2048</nova:memory>
        <nova:disk>20</nova:disk>
        <nova:swap>0</nova:swap>
        <nova:ephemeral>0</nova:ephemeral>
        <nova:vcpus>1</nova:vcpus>
      </nova:flavor>
      <nova:owner>
        <nova:user uuid="5ff3594c1b8b4694acf2cf2ee13a27ac">admin</nova:user>
        <nova:project uuid="ee1f664443ef4f1e8056b45baa1e83a5">demo</nova:project>
      </nova:owner>
    </nova:instance>
  </metadata>
  <os>
    <type>hvm</type>
    <boot dev="hd"/>
  </os>
  <clock offset="utc"/>
  <devices>
    <disk type="block" device="disk">
      <driver type="raw" cache="none"/>
      <source dev="/dev/disk/by-path/ip-10.27.68.210:3260-iscsi-iqn.2010-10.org.openstack:volume-b147e00f-000f-4fbc-8141-afeb44e92549-lun-1"/>
      <target bus="sata" dev="sda"/>
      <serial>b147e00f-000f-4fbc-8141-afeb44e92549</serial>
    </disk>
    <interface type="bridge">
      <mac address="fa:16:3e:3f:f4:1a"/>
      <source bridge="qbr5a84792b-d8"/>
      <target dev="tap5a84792b-d8"/>
    </interface>
    <graphics type="vnc" autoport="yes" listen="10.27.68.210"/>
    <video>
      <model type="vga"/>
    </video>
  </devices>
</domain>

Then you can't terminate the instance with the following error:

2015-07-21 13:54:15.418 ERROR nova.compute.manager 
[req-8184e7f2-5cec-4c51-9f24-f39a17d8b6eb admin demo] [instance: 
f81e862a-644b-4145-ab44-86d5c468106f] Setting instance vm_state to ERROR
2015-07-21 13:54:15.418 17566 ERROR nova.compute.manager [instance: 
f81e862a-644b-4145-ab44-86d5c468106f] Traceback (most recent call last):
2015-07-21 13:54:15.418 17566 ERROR nova.compute.manager [instance: 
f81e862a-644b-4145-ab44-86d5c468106f]   File 
/vz/stack/nova/nova/compute/manager.py, line 2361, in do_terminate_instance
2015-07-21 13:54:15.418 17566 ERROR nova.compute.manager [instance: 
f81e862a-644b-4145-ab44-86d5c468106f] self._delete_instance(context, 
instance, bdms, quotas)
2015-07-21 13:54:15.418 17566 ERROR nova.compute.manager [instance: 
f81e862a-644b-4145-ab44-86d5c468106f]   File /vz/stack/nova/nova/hooks.py, 
line 149, in inner
2015-07-21 13:54:15.418 17566 ERROR nova.compute.manager [instance: 
f81e862a-644b-4145-ab44-86d5c468106f] rv = f(*args, **kwargs)
2015-07-21 13:54:15.418 17566 ERROR nova.compute.manager [instance: 
f81e862a-644b-4145-ab44-86d5c468106f]   File 
/vz/stack/nova/nova/compute/manager.py, line 2340, in _delete_instance
2015-07-21 13:54:15.418 17566 ERROR nova.compute.manager [instance: 
f81e862a-644b-4145-ab44-86d5c468106f] quotas.rollback()
2015-07-21 13:54:15.418 17566 ERROR nova.compute.manager [instance: 
f81e862a-644b-4145-ab44-86d5c468106f]   File 
/usr/lib/python2.7/site-packages/oslo_utils/excutils.py, line 119, in __exit__
2015-07-21 13:54:15.418 17566 ERROR nova.compute.manager [instance: 
f81e862a-644b-4145-ab44-86d5c468106f] six.reraise(self.type_, self.value, 
self.tb)
2015-07-21 13:54:15.418 17566 ERROR nova.compute.manager [instance: 
f81e862a-644b-4145-ab44-86d5c468106f]   File 
/vz/stack/nova/nova/compute/manager.py, line 2310, in _delete_instance
2015-07-21 13:54:15.418 17566 ERROR nova.compute.manager [instance: 
f81e862a-644b-4145-ab44-86d5c468106f] self._shutdown_instance(context, 
instance, bdms)
2015-07-21 13:54:15.418 17566 ERROR nova.compute.manager [instance: 
f81e862a-644b-4145-ab44-86d5c468106f]   File 
/vz/stack/nova/nova/compute/manager.py, line 2246, in _shutdown_instance
2015-07-21 13:54:15.418 17566 ERROR nova.compute.manager [instance: 
f81e862a-644b-4145-ab44-86d5c468106f] self.volume_api.detach(context, 
bdm.volume_id)
2015-07-21 13:54:15.418 17566 ERROR nova.compute.manager [instance: 
f81e862a-644b-4145-ab44-86d5c468106f]   File 
/vz/stack/nova/nova/volume/cinder.py, line 224, in wrapper
2015-07-21 13:54:15.418 17566 ERROR nova.compute.manager [instance: 
f81e862a-644b-4145-ab44-86d5c468106f] six.reraise(exc_value, None, 
exc_trace)
2015-07-21 13:54:15.418 17566 ERROR nova.compute.manager [instance: 
f81e862a-644b-4145-ab44-86d5c468106f]   File 
/vz/stack/nova/nova/volume/cinder.py, line 213, in wrapper
2015-07-21 13:54:15.418 17566 ERROR nova.compute.manager [instance: 
f81e862a-644b-4145-ab44-86d5c468106f] res = method(self, ctx, volume_id, 
*args, **kwargs)
2015-07-21 13:54:15.418 17566 ERROR nova.compute.manager [instance: 

[Yahoo-eng-team] [Bug 1364656] Re: nova start always fails with ephemeral disk on Icehouse

2015-07-21 Thread Michael Still
Icehouse is no longer supported.

** Changed in: nova
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1364656

Title:
  nova start always fails with ephemeral disk on Icehouse

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  The nova start command always fails for an instance which has an ephemeral 
disk attached.
  The details are below.

  compute.log: 'No such file or directory: */disk.local'

  2014-09-02 12:22:24.157 3360 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py, line 4766, in 
get_instance_disk_info
  2014-09-02 12:22:24.157 3360 TRACE oslo.messaging.rpc.dispatcher dk_size 
= int(os.path.getsize(path))
  2014-09-02 12:22:24.157 3360 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib64/python2.6/genericpath.py, line 49, in getsize
  2014-09-02 12:22:24.157 3360 TRACE oslo.messaging.rpc.dispatcher return 
os.stat(filename).st_size
  2014-09-02 12:22:24.157 3360 TRACE oslo.messaging.rpc.dispatcher OSError: 
[Errno 2] No such file or directory: 
'/var/lib/nova/instances/deb5467b-3bfb-48ee-bb1b-730c527153bf/disk.local'
  2014-09-02 12:22:24.157 3360 TRACE oslo.messaging.rpc.dispatcher
  2014-09-02 12:22:24.177 3360 ERROR oslo.messaging._drivers.common [-] 
Returning exception [Errno 2] No such file or directory: 
'/var/lib/nova/instances/deb5467b-3bfb-48ee-bb1b-730c527153bf/disk.local' to 
caller

  I examined the actual ephemeral disk name. As shown below, it is
  '*/disk.eph0', so the two names are inconsistent.

  [root@localhost ~(keystone_demo)]# find 
/var/lib/nova/instances/deb5467b-3bfb-48ee-bb1b-730c527153bf/
  /var/lib/nova/instances/deb5467b-3bfb-48ee-bb1b-730c527153bf/
  /var/lib/nova/instances/deb5467b-3bfb-48ee-bb1b-730c527153bf/disk
  /var/lib/nova/instances/deb5467b-3bfb-48ee-bb1b-730c527153bf/disk.eph0
  /var/lib/nova/instances/deb5467b-3bfb-48ee-bb1b-730c527153bf/libvirt.xml
  /var/lib/nova/instances/deb5467b-3bfb-48ee-bb1b-730c527153bf/console.log
  /var/lib/nova/instances/deb5467b-3bfb-48ee-bb1b-730c527153bf/disk.info
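The traceback comes from os.path.getsize being called on a path (disk.local) that was never created, because the ephemeral disk actually exists as disk.eph0. A minimal sketch of the kind of guard that avoids the crash (the helper name is hypothetical, not Nova's actual API; the real fix is the naming inconsistency itself):

```python
import os

def disk_size_or_zero(path):
    """Return the on-disk size of an instance disk file, or 0 if the
    file does not exist (e.g. Nova probes disk.local while the
    ephemeral disk was actually created as disk.eph0)."""
    # os.path.getsize raises OSError for a missing file, which is
    # exactly the failure in this report; checking first avoids it.
    if not os.path.exists(path):
        return 0
    return os.path.getsize(path)
```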

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1364656/+subscriptions



[Yahoo-eng-team] [Bug 1476770] Re: _translate_from_glance fails with AttributeError: id in grenade

2015-07-21 Thread Matt Riedemann
Confirmed:

(1:34:41 PM) mriedem: could there be a thing where urllib3 1.11 is overwriting 
what requests uses?
(1:35:41 PM) sigmavirus24: mriedem: unless we're using system packages (apt-get 
install requests ; pip install urllib3) no
(1:38:00 PM) sigmavirus24: I'm all for being wrong, but I'm failing to see how 
urllib3 would be causing this beyond a correlation in time (which wouldn't 
imply causation)
(1:39:35 PM) sigmavirus24: What about backports around images in Nova
(1:39:41 PM) mriedem: sigmavirus24: that's exactly what's happening
(1:39:57 PM) mriedem: python-requests is apt-get installed (it's in the image)
(1:39:58 PM) mriedem: ii  python-requests  2.2.1-1ubuntu0.3 
 all  elegant and simple HTTP library for Python, 
built for human beings
(1:40:21 PM) kragniz: ah, and the debian packages strip out the vendoring, 
right?
(1:40:21 PM) mriedem: urllib3 is pip installed via oslo.vmware dependency:
(1:40:22 PM) mriedem: 
http://logs.openstack.org/28/204128/2/check/gate-grenade-dsvm/80607dc/logs/grenade.sh.txt.gz#_2015-07-21_16_56_35_728
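The mixed install the log describes (a system-packaged python-requests alongside a pip-installed urllib3) can be spotted by printing where each module is imported from. A small diagnostic sketch, independent of any OpenStack code:

```python
import importlib

def module_origin(name):
    """Return (version, file) for an importable module, or None if it
    is not installed; "?" stands in for a missing attribute."""
    try:
        mod = importlib.import_module(name)
    except ImportError:
        return None
    return (getattr(mod, "__version__", "?"), getattr(mod, "__file__", "?"))

# A system-packaged module typically lives under /usr/lib/python*/dist-packages,
# while a pip-installed one lands under /usr/local/lib/...; differing prefixes
# for requests and urllib3 would confirm the situation described above.
for name in ("requests", "urllib3"):
    print(name, module_origin(name))
```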

** Also affects: openstack-gate
   Importance: Undecided
   Status: New

** Changed in: openstack-gate
   Status: New => In Progress

** Changed in: openstack-gate
 Assignee: (unassigned) => Matt Riedemann (mriedem)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1476770

Title:
  _translate_from_glance fails with AttributeError: id in grenade

Status in Glance:
  Confirmed
Status in OpenStack-Gate:
  In Progress

Bug description:
  http://logs.openstack.org/28/204128/2/check/gate-grenade-
  dsvm/80607dc/logs/old/screen-n-api.txt.gz?level=TRACE

  2015-07-21 17:05:37.447 ERROR nova.api.openstack 
[req-9854210d-b9fc-47ff-9f00-1a0270266e2a tempest-ServersTestJSON-34270062 
tempest-ServersTestJSON-745803609] Caught error: id
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack Traceback (most recent 
call last):
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/opt/stack/old/nova/nova/api/openstack/__init__.py, line 125, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
req.get_response(self.application)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/request.py, line 1317, in send
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/request.py, line 1281, in 
call_application
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py,
 line 634, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
self._call_app(env, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py,
 line 554, in _call_app
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
self._app(env, _fake_start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/routes/middleware.py, line 136, in 
__call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack response = 
self.app(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 130, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 195, in call_func
  2015-07-21 17:05:37.447 21251 TRACE 

[Yahoo-eng-team] [Bug 1476770] Re: _translate_from_glance fails with AttributeError: id in grenade

2015-07-21 Thread Matt Riedemann
http://lists.openstack.org/pipermail/openstack-dev/2015-July/070132.html

** Changed in: glance
   Status: Confirmed => Invalid

** Also affects: oslo.vmware
   Importance: Undecided
   Status: New

** Changed in: oslo.vmware
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1476770

Title:
  _translate_from_glance fails with AttributeError: id in grenade

Status in Glance:
  Invalid
Status in OpenStack-Gate:
  In Progress
Status in oslo.vmware:
  In Progress

Bug description:
  http://logs.openstack.org/28/204128/2/check/gate-grenade-
  dsvm/80607dc/logs/old/screen-n-api.txt.gz?level=TRACE

  2015-07-21 17:05:37.447 ERROR nova.api.openstack 
[req-9854210d-b9fc-47ff-9f00-1a0270266e2a tempest-ServersTestJSON-34270062 
tempest-ServersTestJSON-745803609] Caught error: id
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack Traceback (most recent 
call last):
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/opt/stack/old/nova/nova/api/openstack/__init__.py, line 125, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
req.get_response(self.application)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/request.py, line 1317, in send
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/request.py, line 1281, in 
call_application
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py,
 line 634, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
self._call_app(env, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py,
 line 554, in _call_app
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
self._app(env, _fake_start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/routes/middleware.py, line 136, in 
__call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack response = 
self.app(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 130, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 195, in call_func
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
self.func(req, *args, **kwargs)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/opt/stack/old/nova/nova/api/openstack/wsgi.py, line 756, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack content_type, 
body, accept)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/opt/stack/old/nova/nova/api/openstack/wsgi.py, line 821, in _process_stack
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/opt/stack/old/nova/nova/api/openstack/wsgi.py, line 911, in dispatch
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
method(req=request, **action_args)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
/opt/stack/old/nova/nova/api/openstack/compute/servers.py, line 636, in create
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack 

[Yahoo-eng-team] [Bug 1476846] [NEW] Project field empty after uploading image from file

2015-07-21 Thread Justin Pomeroy
Public bug reported:

From the Admin -> Images page, use Create Image to upload an image from
a file.  After the image is uploaded, the project field is initially not
populated.  Refreshing the page makes the project show up.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: project_empty.png
   
https://bugs.launchpad.net/bugs/1476846/+attachment/4432207/+files/project_empty.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1476846

Title:
  Project field empty after uploading image from file

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  From the Admin -> Images page, use Create Image to upload an image
  from a file.  After the image is uploaded, the project field is
  initially not populated.  Refreshing the page makes the project show
  up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1476846/+subscriptions



[Yahoo-eng-team] [Bug 1476847] [NEW] document static_url and static_root

2015-07-21 Thread David Lyle
Public bug reported:

STATIC_URL and STATIC_ROOT are configurable settings in Django that
should remain configurable in Horizon. This documents their existence
and default values.

** Affects: horizon
 Importance: Low
 Assignee: David Lyle (david-lyle)
 Status: In Progress


** Tags: documentation

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1476847

Title:
  document static_url and static_root

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  STATIC_URL and STATIC_ROOT are configurable settings in Django that
  should remain configurable in Horizon. This documents their existence
  and default values.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1476847/+subscriptions



[Yahoo-eng-team] [Bug 1353939] Re: Rescue fails with 'Failed to terminate process: Device or resource busy' in the n-cpu log

2015-07-21 Thread Liang Chen
** Description changed:

+ [Impact]
+ 
+  * Users may sometimes fail to shut down an instance if the associated qemu
+process is in uninterruptible sleep (typically IO).
+ 
+ [Test Case]
+ 
+  * 1. create some IO load in a VM
+2. look at the associated qemu, make sure it has STAT D in ps output
+3. shutdown the instance
+4. with the patch in place, nova will retry calling libvirt to shut down
+   the instance 3 times to wait for the signal to be delivered to the 
+   qemu process.
+ 
+ [Regression Potential]
+ 
+  * None
+ 
+ 
  message: Failed to terminate process AND
  message:'InstanceNotRescuable' AND message: 'Exception during message
  handling' AND tags:screen-n-cpu.txt
  
  The above log stash-query reports back only the failed jobs, the 'Failed to 
terminate process' close other failed rescue tests,
  but tempest does not always reports them as an error at the end.
  
  message: Failed to terminate process AND tags:screen-n-cpu.txt
  
  Usual console log:
  Details: (ServerRescueTestJSON:test_rescue_unrescue_instance) Server 
0573094d-53da-40a5-948a-747d181462f5 failed to reach RESCUE status and task 
state None within the required time (196 s). Current status: SHUTOFF. Current 
task state: None.
  
  http://logs.openstack.org/82/107982/2/gate/gate-tempest-dsvm-postgres-
  full/90726cb/console.html#_2014-08-07_03_50_26_520
  
  Usual n-cpu exception:
  
http://logs.openstack.org/82/107982/2/gate/gate-tempest-dsvm-postgres-full/90726cb/logs/screen-n-cpu.txt.gz#_2014-08-07_03_32_02_855
  
  2014-08-07 03:32:02.855 ERROR oslo.messaging.rpc.dispatcher 
[req-39ce7a3d-5ceb-41f5-8f9f-face7e608bd1 ServerRescueTestJSON-2035684545 
ServerRescueTestJSON-1017508309] Exception during message handling: Instance 
0573094d-53da-40a5-948a-747d181462f5 cannot be rescued: Driver Error: Failed to 
terminate process 26425 with SIGKILL: Device or resource busy
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
134, in _dispatch_and_reply
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
177, in _dispatch
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
123, in _do_dispatch
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/nova/nova/compute/manager.py, line 408, in decorated_function
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/nova/nova/exception.py, line 88, in wrapped
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher payload)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/nova/nova/openstack/common/excutils.py, line 82, in __exit__
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/nova/nova/exception.py, line 71, in wrapped
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/nova/nova/compute/manager.py, line 292, in decorated_function
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher pass
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/nova/nova/openstack/common/excutils.py, line 82, in __exit__
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/nova/nova/compute/manager.py, line 278, in decorated_function
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/nova/nova/compute/manager.py, line 342, in decorated_function
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2014-08-07 03:32:02.855 22829 TRACE 

[Yahoo-eng-team] [Bug 1443970] Re: nova-manage create networks with wrong dhcp_server in DB(nova)

2015-07-21 Thread Alan Pevec
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
   Importance: Undecided => Medium

** Changed in: nova/kilo
   Status: New => In Progress

** Changed in: nova/kilo
 Assignee: (unassigned) => Sergey Nikitin (snikitin)

** Changed in: nova/kilo
Milestone: None => 2015.1.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1443970

Title:
  nova-manage create networks with wrong dhcp_server in DB(nova)

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  In Progress

Bug description:
  When using nova-network and creating a new network, the 'dhcp_server'
  values are wrong.

  command (example)
  /usr/bin/nova-manage network create novanetwork 10.0.0.0/16 3 8 --vlan_start 
103 --dns1 8.8.8.8 --dns2 8.8.4.4

  This happens because in file nova/network/manager.py in method
  _do_create_networks() the variable 'enable_dhcp' is incorrectly used
  in the loop.
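
  The bug pattern described above can be sketched as follows (all names
  are illustrative, not nova's actual code): a flag mutated inside the
  loop leaks into later iterations, so networks get a dhcp_server value
  computed from stale state.

  ```python
  def gateway_of(cidr):
      # Illustrative helper: first host address of a /24 network.
      return cidr.rsplit('.', 1)[0] + '.1'

  def create_networks_buggy(cidrs, enable_dhcp=True):
      """Bug pattern: the flag is clobbered inside the loop, so the
      stale value decides every network after the first."""
      nets = []
      for cidr in cidrs:
          nets.append({'cidr': cidr,
                       'dhcp_server': gateway_of(cidr) if enable_dhcp else None})
          enable_dhcp = False  # clobbers the setting for later networks
      return nets

  def create_networks_fixed(cidrs, enable_dhcp=True):
      """Fix: evaluate the flag independently for each network."""
      return [{'cidr': cidr,
               'dhcp_server': gateway_of(cidr) if enable_dhcp else None}
              for cidr in cidrs]
  ```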

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1443970/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475190] Re: Network Profile is not supported in Kilo

2015-07-21 Thread Alan Pevec
What about other drivers?

** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

** Changed in: horizon/kilo
   Status: New => In Progress

** Changed in: horizon/kilo
 Assignee: (unassigned) => Shiv Prasad Rao (shivrao)

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1475190

Title:
  Network Profile is not supported in Kilo

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in OpenStack Dashboard (Horizon) kilo series:
  In Progress

Bug description:
  Cisco N1kv ML2 driver does not support network profiles extension in
  kilo. The network profile support in horizon needs to be removed for
  stable/kilo.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1475190/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1476490] [NEW] wrong expected code for os-resetState action

2015-07-21 Thread Eli Qiao
Public bug reported:

The os-resetState action never returns 400 (Bad Request), so we need to
remove it from the list of expected error codes.

** Affects: nova
 Importance: Undecided
 Assignee: Eli Qiao (taget-9)
 Status: In Progress


** Tags: api

** Changed in: nova
 Assignee: (unassigned) => Eli Qiao (taget-9)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1476490

Title:
  wrong expected code for os-resetState  action

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The os-resetState action never returns 400 (Bad Request), so we need
  to remove it from the list of expected error codes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1476490/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1476530] [NEW] Iscsi session still connect after detach volume from paused instance

2015-07-21 Thread lyanchih
Public bug reported:

My test environment is LVM/iSCSI, libvirt/QEMU

How to reproduce:
1. create instance
2. create volume
3. attach volume to instance
4. pause instance
5. detach volume from instance

Nova will not disconnect the volume. You can run the following command to verify:
 sudo iscsiadm -m node --rescan
It will display the session that was built in the previous steps.
Of course, you will also find that the device still exists in /sys/block.

Nova searches all block devices that are defined in the XML of every guest.
It then disconnects the iSCSI devices that exist in /dev/disk/by-path but are
not defined in any guest.
But a paused instance's XML definition still contains the device we want to
remove.
Therefore nova does not disconnect this volume.

There are two workarounds:
1. Log out of the iSCSI session manually. (sudo iscsiadm -m node -T Target --logout)
2. Reattach the same volume. (lol)

But we still need to handle this bug for paused instances.
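
The disconnect logic described above amounts to simple set arithmetic
(sketched below with illustrative names, not nova's actual
implementation): only devices referenced by no guest definition are
disconnected, so a paused guest whose XML still lists the detached
volume keeps its session alive.

```python
def devices_to_disconnect(host_devices, guest_definitions):
    """Return host block devices that no guest XML still references.

    guest_definitions maps guest name -> set of device paths found in
    its domain XML; host_devices is what /dev/disk/by-path exposes.
    """
    in_use = set()
    for devices in guest_definitions.values():
        in_use |= devices
    return set(host_devices) - in_use
```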

** Affects: nova
 Importance: Undecided
 Assignee: lyanchih (lyanchih)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => lyanchih (lyanchih)

** Changed in: nova
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1476530

Title:
  Iscsi session still connect after detach volume from paused instance

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  My test environment is LVM/iSCSI, libvirt/QEMU

  How to reproduce:
  1. create instance
  2. create volume
  3. attach volume to instance
  4. pause instance
  5. detach volume from instance

  Nova will not disconnect the volume. You can run the following command to 
verify:
   sudo iscsiadm -m node --rescan
  It will display the session that was built in the previous steps.
  Of course, you will also find that the device still exists in /sys/block.

  Nova searches all block devices that are defined in the XML of every 
guest.
  It then disconnects the iSCSI devices that exist in /dev/disk/by-path but 
are not defined in any guest.
  But a paused instance's XML definition still contains the device we want 
to remove.
  Therefore nova does not disconnect this volume.

  There are two workarounds:
  1. Log out of the iSCSI session manually. (sudo iscsiadm -m node -T Target 
--logout)
  2. Reattach the same volume. (lol)

  But we still need to handle this bug for paused instances.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1476530/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1476497] [NEW] Not able to access internet on cirros vm in openstack

2015-07-21 Thread Amandeep
Public bug reported:

I have a Juno multinode setup (controller node, network node, 2 compute
nodes) and I am trying to access the internet from a cirros VM, but I am
unable to. I want to install an app on the cirros VM, for which I need
internet access. However, I am able to ping this VM through the ext-net.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1476497

Title:
  Not able to access internet on cirros vm in openstack

Status in OpenStack Compute (nova):
  New

Bug description:
  I have a Juno multinode setup (controller node, network node, 2
  compute nodes) and I am trying to access the internet from a cirros VM,
  but I am unable to. I want to install an app on the cirros VM, for
  which I need internet access. However, I am able to ping this VM
  through the ext-net.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1476497/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1476591] [NEW] volume in_use after VM has terminated

2015-07-21 Thread Silvan Kaiser
Public bug reported:

Running the Tempest test
tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_volume_boot_pattern
with our CI system runs into a timing issue in the interaction between
Nova and Cinder.

Essentially, on the Cinder side the status of the volume used in the
test is 'in_use' for several seconds _after_ the test's VM has already
been terminated and cleaned up, including the unmounting of the given
volume on the Nova side. As Cinder sees the volume is still 'in_use',
it calls the Nova API to perform the snapshot deletion, but since the
volume has already been unmounted on the Nova side, the resulting
qemu-img operation triggered by Nova runs into a 'no such file or
directory' error.
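
One hypothetical mitigation for the race described above is a bounded
poll on the volume status before acting on it (a sketch only; the
function name and defaults are illustrative, and this is not how Cinder
currently behaves):

```python
import time

def wait_for_detach(get_status, timeout=30.0, interval=1.0,
                    clock=time.monotonic, sleep=time.sleep):
    """Poll get_status() until the volume leaves 'in-use'.

    The status stays 'in-use' for a few seconds after Nova has
    unmounted the volume, so a bounded wait avoids acting on the
    stale value.
    """
    deadline = clock() + timeout
    while True:
        status = get_status()
        if status != 'in-use':
            return status
        if clock() >= deadline:
            raise TimeoutError("volume stayed 'in-use' for %ss" % timeout)
        sleep(interval)
```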

The following log excerpts show this issue. Please refer to the given
time stamps (all components run in a devstack in the same machine so
times should be comparable):

1) Cinder log (please note i added two debug lines printing the volume status 
data in this snippet that are not part of the upstream driver):
2015-07-21 08:14:10.163 DEBUG cinder.volume.drivers.remotefs 
[req-6dae7608-e1f4-43af-ac8e-22d6aa7d72c9 
tempest-TestVolumeBootPatternV2-966547300] Deleting snapshot 
ca09f072-c736-41ac-b6f3-7a137b0ee072: _delete_s
napshot /opt/stack/cinder/cinder/volume/drivers/remotefs.py:915
2015-07-21 08:14:10.164 DEBUG cinder.volume.drivers.remotefs 
[req-6dae7608-e1f4-43af-ac8e-22d6aa7d72c9 
tempest-TestVolumeBootPatternV2-966547300] Initial snapshot deletion volume 
status is in-use _delete_snapsho
t /opt/stack/cinder/cinder/volume/drivers/remotefs.py:918
2015-07-21 08:14:10.420 DEBUG cinder.volume.drivers.remotefs 
[req-6dae7608-e1f4-43af-ac8e-22d6aa7d72c9 
tempest-TestVolumeBootPatternV2-966547300] snapshot_file for this snap is: 
volume-7f4a5711-9f2c-4aa7-a6af-c6
ccf476702d.ca09f072-c736-41ac-b6f3-7a137b0ee072 _delete_snapshot 
/opt/stack/cinder/cinder/volume/drivers/remotefs.py:941
2015-07-21 08:14:10.420 DEBUG oslo_concurrency.processutils 
[req-6dae7608-e1f4-43af-ac8e-22d6aa7d72c9 
tempest-TestVolumeBootPatternV2-966547300] Running cmd (subprocess): sudo 
cinder-rootwrap /etc/cinder/rootwra
p.conf env LC_ALL=C qemu-img info 
/opt/stack/data/cinder/mnt/38effaab97f9ee27ecb78121f511fa07/volume-7f4a5711-9f2c-4aa7-a6af-c6ccf476702d.ca09f072-c736-41ac-b6f3-7a137b0ee072
 execute /usr/local/lib/python2.7/dis
t-packages/oslo_concurrency/processutils.py:230
2015-07-21 08:14:11.549 DEBUG oslo_concurrency.processutils 
[req-6dae7608-e1f4-43af-ac8e-22d6aa7d72c9 
tempest-TestVolumeBootPatternV2-966547300] CMD sudo cinder-rootwrap 
/etc/cinder/rootwrap.conf env LC_ALL=C q
emu-img info 
/opt/stack/data/cinder/mnt/38effaab97f9ee27ecb78121f511fa07/volume-7f4a5711-9f2c-4aa7-a6af-c6ccf476702d.ca09f072-c736-41ac-b6f3-7a137b0ee072
 returned: 0 in 1.129s execute /usr/local/lib/python2.7/d
ist-packages/oslo_concurrency/processutils.py:260
2015-07-21 08:14:11.700 DEBUG cinder.volume.drivers.remotefs 
[req-6dae7608-e1f4-43af-ac8e-22d6aa7d72c9 
tempest-TestVolumeBootPatternV2-966547300] In delete_snapshot, volume status is 
in-use _delete_snapshot /opt
/stack/cinder/cinder/volume/drivers/remotefs.py:956

2) Nova log:
2015-07-21 08:14:05.153 INFO nova.compute.manager 
[req-e9bea12a-88ef-4f08-a320-d6e8e3b21228 
tempest-TestVolumeBootPatternV2-1465284168 
tempest-TestVolumeBootPatternV2-966547300] [instance: adbe1fa4-7136-4d29-91d
0-5772dbf4900d] Terminating instance  
2015-07-21 08:14:05.167 6183 INFO nova.virt.libvirt.driver [-] [instance: 
adbe1fa4-7136-4d29-91d0-5772dbf4900d] Instance destroyed successfully.  
2015-07-21 08:14:05.167 DEBUG nova.virt.libvirt.vif 
[req-e9bea12a-88ef-4f08-a320-d6e8e3b21228 
tempest-TestVolumeBootPatternV2-1465284168 
tempest-TestVolumeBootPatternV2-966547300] vif_type=bridge instance=Instan
ce(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone=None,cell_name=None,cleaned=False,config_drive='True',created_at=2015-07-21T08:13:32Z,default_ephemeral_device=No
ne,default_swap_device=None,deleted=False,deleted_at=None,disable_terminate=False,display_description='tempest-TestVolumeBootPatternV2-147549174',display_name='tempest-TestVolumeBootPatternV2-147549174',ec2_ids=
?,ephemeral_gb=0,ephemeral_key_uuid=None,fault=?,flavor=Flavor(6),host='manualvm',hostname='tempest-testvolumebootpatternv2-147549174',id=140,image_ref='',info_cache=InstanceInfoCache,instance_type_id=6,kernel_id='f203cd87-b5ed-491a-b7a5-46ad2e63bcbf',key_data='ssh-rsa
 
B3NzaC1yc2EDAQABAAABAQDJ2yTWvjXK/IwK8xlmSjTn4drDwH5nMr65gYD4j3mkZBfBuk5Zuf10mLwVUFVDTgacJVcG4AE96poXofHD64pbGEcP9NqYGbLV7JSJzgNznUO/RkMdBBmmmfJiACvtWYpigAMIPPddgmpQZVaG+q8VelEWfDiErOBiaajz/NjL8/mmP78Lq0xvuP44vy/EElaSob8iiqFeg6rTlMpzhkFUV1a52CSQYjmD80mSf932ljXgNU9A4pE8NkO4lDOkflQfLO2F6wxV9H94VKxTBJgwnH8BANP7V8rjeIOwwUmpF1F6D4LceA66j1ONHk4ssyU3iimWt+UMwGCFWF6aa58n
 

[Yahoo-eng-team] [Bug 1475738] [NEW] Cinder VMDK driver : uuid of VMDK disk file needs to be the same as of corresponding Cinder volume ID

2015-07-21 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

When we create a Cinder volume of VMDK type, the uuid of the virtual disk is 
auto-generated. Because of this it's difficult for an end user to identify the 
OpenStack Cinder created/attached volumes from within the guest OS.
Change proposed: use the uuid of the Cinder volume as the uuid for the virtual 
disk as well when the shadow VM template is updated in the Cinder VMDK driver.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
Cinder VMDK driver : uuid of VMDK disk file needs to be the same as of 
corresponding Cinder volume ID
https://bugs.launchpad.net/bugs/1475738
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475738] Re: Cinder VMDK driver : uuid of VMDK disk file needs to be the same as of corresponding Cinder volume ID

2015-07-21 Thread Sunitha K
Related defect report in Nova 
- https://bugs.launchpad.net/nova/+bug/1475740


** Project changed: nova => cinder

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1475738

Title:
  Cinder VMDK driver : uuid of VMDK disk file needs to be the same as of
  corresponding Cinder volume ID

Status in Cinder:
  New

Bug description:
  When we create a Cinder volume of VMDK type, the uuid of the virtual disk is 
auto-generated. Because of this it's difficult for an end user to identify the 
OpenStack Cinder created/attached volumes from within the guest OS.
  Change proposed: use the uuid of the Cinder volume as the uuid for the virtual 
disk as well when the shadow VM template is updated in the Cinder VMDK driver.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1475738/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475738] Re: Cinder VMDK driver : uuid of VMDK disk file needs to be the same as of corresponding Cinder volume ID

2015-07-21 Thread Johnson koil raj
To view the volume uuid inside the guest VM, the vmx file needs to be
updated with disk.EnableUUID = True

** Project changed: cinder => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1475738

Title:
  Cinder VMDK driver : uuid of VMDK disk file needs to be the same as of
  corresponding Cinder volume ID

Status in Cinder:
  New

Bug description:
  When we create a Cinder volume of VMDK type, the uuid of the virtual disk is 
auto-generated. Because of this it's difficult for an end user to identify the 
OpenStack Cinder created/attached volumes from within the guest OS.
  Change proposed: use the uuid of the Cinder volume as the uuid for the virtual 
disk as well when the shadow VM template is updated in the Cinder VMDK driver.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1475738/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1476527] [NEW] RFE - Add common classifier resource

2015-07-21 Thread YujiAzama
Public bug reported:

[Existing problem]
   This spec introduces a common classifier resource to Neutron API extension.
   This new resource can be leveraged to better realize other Neutron services
   needing traffic classification.

   Traffic classification is commonly needed by many Neutron services (e.g.
   Service Function Chaining [1]_, QoS [2]_, Tap as a Service [3]_, FWaaS [4]_,
   Group-Based Policy [5]_, and Forwarding Rule [6]_), but each service uses its
   own classifier resource similar to others. A common traffic classifier 
resource
   should exist in Neutron so that it could be leveraged across network 
services.

[What is the enhancement?]
   Add traffic classifiers to Neutron API as extension.

[Related information]
   Spec: https://review.openstack.org/#/c/190463/

[1] API for Service Chaining,
   https://review.openstack.org/#/c/192933/
[2] QoS,
   https://review.openstack.org/#/c/88599/ 
[3] Tap as a Service,
   https://review.openstack.org/#/c/96149/ 
[4] Firewall as a Service,
   
http://specs.openstack.org/openstack/neutron-specs/specs/api/firewall_as_a_service__fwaas_.html
[5] Group-based Policy,
   https://review.openstack.org/#/c/168444/ 
[6] Forwarding Rule,
   https://review.openstack.org/#/c/186663/
[7] IANA Protocol Numbers,
   http://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml

** Affects: neutron
 Importance: Undecided
 Assignee: YujiAzama (azama-yuji)
 Status: New


** Tags: rfe

** Description changed:

  [Existing problem]
-This spec introduces a common classifier resource to Neutron API extension.
-This new resource can be leveraged to better realize other Neutron services
-needing traffic classification.
+    This spec introduces a common classifier resource to Neutron API extension.
+    This new resource can be leveraged to better realize other Neutron services
+    needing traffic classification.
  
-Traffic classification is commonly needed by many Neutron services (e.g.
-Service Function Chaining [1]_, QoS [2]_, Tap as a Service [3]_, FWaaS 
[4]_,   
-Group-Based Policy [5]_, and Forwarding Rule [6]_), but each service uses 
its
-own classifier resource similar to others. A common traffic classifier 
resource
-should exist in Neutron so that it could be leveraged across network 
services.
+    Traffic classification is commonly needed by many Neutron services (e.g.
+    Service Function Chaining [1]_, QoS [2]_, Tap as a Service [3]_, FWaaS 
[4]_,
+    Group-Based Policy [5]_, and Forwarding Rule [6]_), but each service uses 
its
+    own classifier resource similar to others. A common traffic classifier 
resource
+    should exist in Neutron so that it could be leveraged across network 
services.
  
  [What is the enhancement?]
-Add traffic classifiers to Neutron API as extension.
+    Add traffic classifiers to Neutron API as extension.
  
  [Related information]
-Spec: https://review.openstack.org/#/c/190463/
+    Spec: https://review.openstack.org/#/c/190463/
+ 
+ [1] API for Service Chaining,
+https://review.openstack.org/#/c/192933/
+ [2] QoS,
+https://review.openstack.org/#/c/88599/ 
+ [3] Tap as a Service,
+https://review.openstack.org/#/c/96149/ 
+ [4] Firewall as a Service,
+
http://specs.openstack.org/openstack/neutron-specs/specs/api/firewall_as_a_service__fwaas_.html
+ [5] Group-based Policy,
+https://review.openstack.org/#/c/168444/ 
+ [6] Forwarding Rule,
+https://review.openstack.org/#/c/186663/
+ [7] IANA Protocol Numbers,
+http://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml

** Changed in: neutron
 Assignee: (unassigned) => YujiAzama (azama-yuji)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1476527

Title:
  RFE - Add common classifier resource

Status in neutron:
  New

Bug description:
  [Existing problem]
     This spec introduces a common classifier resource to Neutron API extension.
     This new resource can be leveraged to better realize other Neutron services
     needing traffic classification.

     Traffic classification is commonly needed by many Neutron services (e.g.
     Service Function Chaining [1]_, QoS [2]_, Tap as a Service [3]_, FWaaS 
[4]_,
     Group-Based Policy [5]_, and Forwarding Rule [6]_), but each service uses 
its
     own classifier resource similar to others. A common traffic classifier 
resource
     should exist in Neutron so that it could be leveraged across network 
services.

  [What is the enhancement?]
     Add traffic classifiers to Neutron API as extension.

  [Related information]
     Spec: https://review.openstack.org/#/c/190463/

  [1] API for Service Chaining,
 https://review.openstack.org/#/c/192933/
  [2] QoS,
 https://review.openstack.org/#/c/88599/ 
  [3] Tap as a Service,
 https://review.openstack.org/#/c/96149/ 
  [4] Firewall as a Service,
 

[Yahoo-eng-team] [Bug 1476674] [NEW] During image update metadata Danger: There was an error submitting the form. Please try again.

2015-07-21 Thread Eran Kuris
Public bug reported:

Log in to horizon as an admin user, navigate to Images in the Admin tab,
create an image, and then click on Update Metadata.

We get the error: Danger: There was an error submitting the form. Please
try again.

In log file  of horizon : 
2015-07-21 13:35:14,607 28006 ERROR django.request Internal Server Error: 
/dashboard/admin/images/06b7c500-768c-4c17-abbe-7a5258674cd2/update_metadata/
Traceback (most recent call last):
  File /usr/lib/python2.7/site-packages/django/core/handlers/base.py, line 
132, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File /usr/lib/python2.7/site-packages/horizon/decorators.py, line 36, in dec
return view_func(request, *args, **kwargs)
  File /usr/lib/python2.7/site-packages/horizon/decorators.py, line 84, in dec
return view_func(request, *args, **kwargs)
  File /usr/lib/python2.7/site-packages/horizon/decorators.py, line 52, in dec
return view_func(request, *args, **kwargs)
  File /usr/lib/python2.7/site-packages/horizon/decorators.py, line 36, in dec
return view_func(request, *args, **kwargs)
  File /usr/lib/python2.7/site-packages/horizon/decorators.py, line 84, in dec
return view_func(request, *args, **kwargs)
  File /usr/lib/python2.7/site-packages/django/views/generic/base.py, line 
71, in view
return self.dispatch(request, *args, **kwargs)
  File /usr/lib/python2.7/site-packages/django/views/generic/base.py, line 
89, in dispatch
return handler(request, *args, **kwargs)
  File /usr/lib/python2.7/site-packages/django/views/generic/edit.py, line 
215, in post
return self.form_valid(form)
  File /usr/lib/python2.7/site-packages/horizon/forms/views.py, line 173, in 
form_valid
exceptions.handle(self.request)
  File /usr/lib/python2.7/site-packages/horizon/exceptions.py, line 364, in 
handle
six.reraise(exc_type, exc_value, exc_traceback)
  File /usr/lib/python2.7/site-packages/horizon/forms/views.py, line 170, in 
form_valid
handled = form.handle(self.request, form.cleaned_data)
  File 
/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/admin/images/forms.py,
 line 65, in handle
_('Unable to update the image metadata.'))
  File /usr/lib/python2.7/site-packages/horizon/exceptions.py, line 364, in 
handle
six.reraise(exc_type, exc_value, exc_traceback)
  File 
/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/admin/images/forms.py,
 line 48, in handle
new_metadata = json.loads(self.data['metadata'])
  File /usr/lib64/python2.7/json/__init__.py, line 338, in loads
return _default_decoder.decode(s)
  File /usr/lib64/python2.7/json/decoder.py, line 365, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File /usr/lib64/python2.7/json/decoder.py, line 383, in raw_decode
raise ValueError(No JSON object could be decoded)
ValueError: No JSON object could be decoded


version : 
[root@puma16 ~(keystone_admin)]# rpm -qa |grep horizon 
python-django-horizon-2015.1.0-10.el7ost.noarch
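
The traceback shows json.loads being handed a string that is not valid
JSON (likely an empty metadata field). A sketch of the kind of guard
that would turn the 500 into a form validation error (the function name
and error message are illustrative, not Horizon's actual code):

```python
import json

def parse_metadata(raw):
    """Decode a metadata form field, returning (metadata, error).

    An empty or malformed string makes json.loads raise ValueError,
    which is what surfaced above as an internal server error.
    """
    if raw is None or not raw.strip():
        return {}, None  # treat an empty field as "no metadata"
    try:
        return json.loads(raw), None
    except ValueError:
        return None, 'Metadata must be a valid JSON object.'
```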

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1476674

Title:
  During image update metadata Danger: There was an error submitting the
  form. Please try again.

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Log in to horizon as an admin user, navigate to Images in the Admin
  tab, create an image, and then click on Update Metadata.

  We get the error: Danger: There was an error submitting the form.
  Please try again.

  In log file  of horizon : 
  2015-07-21 13:35:14,607 28006 ERROR django.request Internal Server Error: 
/dashboard/admin/images/06b7c500-768c-4c17-abbe-7a5258674cd2/update_metadata/
  Traceback (most recent call last):
File /usr/lib/python2.7/site-packages/django/core/handlers/base.py, line 
132, in get_response
  response = wrapped_callback(request, *callback_args, **callback_kwargs)
File /usr/lib/python2.7/site-packages/horizon/decorators.py, line 36, in 
dec
  return view_func(request, *args, **kwargs)
File /usr/lib/python2.7/site-packages/horizon/decorators.py, line 84, in 
dec
  return view_func(request, *args, **kwargs)
File /usr/lib/python2.7/site-packages/horizon/decorators.py, line 52, in 
dec
  return view_func(request, *args, **kwargs)
File /usr/lib/python2.7/site-packages/horizon/decorators.py, line 36, in 
dec
  return view_func(request, *args, **kwargs)
File /usr/lib/python2.7/site-packages/horizon/decorators.py, line 84, in 
dec
  return view_func(request, *args, **kwargs)
File /usr/lib/python2.7/site-packages/django/views/generic/base.py, line 
71, in view
  return self.dispatch(request, *args, **kwargs)
File /usr/lib/python2.7/site-packages/django/views/generic/base.py, line 
89, in dispatch
 

[Yahoo-eng-team] [Bug 1476681] [NEW] VPNaaS: Fix phase2alg for AH in ipsec.conf.template

2015-07-21 Thread Elena Ezhova
Public bug reported:

Any attempt to create an IPSec site connection with a policy that
specifies the AH protocol instead of ESP leads to the following error:

2015-07-21 13:41:28.948 ERROR neutron.agent.linux.utils 
[req-e3237103-fb81-4c23-8a69-98c755c177c9 admin 
4733f78a7cd749f38b20bc134b9675a0] 
Command: ['ip', 'netns', 'exec', 
u'qrouter-3d68e902-ce44-411a-bd4e-6ff9a33d8a85', 'ipsec', 'addconn', 
'--ctlbase', 
u'/opt/stack/data/neutron/ipsec/3d68e902-ce44-411a-bd4e-6ff9a33d8a85/var/run/pluto.ctl',
 '--defaultroutenexthop', u'172.24.4.3', '--config', 
u'/opt/stack/data/neutron/ipsec/3d68e902-ce44-411a-bd4e-6ff9a33d8a85/etc/ipsec.conf',
 u'd0fa8bac-b8eb-4c33-a023-8c96560c99ec']
Exit code: 34
Stdin: 
Stdout: 034 esp string error: Non initial digit found for auth keylen, just 
after aes128- (old_state=ST_AA_END)

Stderr: 
2015-07-21 13:41:28.949 ERROR neutron_vpnaas.services.vpn.device_drivers.ipsec 
[req-e3237103-fb81-4c23-8a69-98c755c177c9 admin 
4733f78a7cd749f38b20bc134b9675a0] Failed to enable vpn process on router 
3d68e902-ce44-411a-bd4e-6ff9a33d8a85
2015-07-21 13:41:28.949 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec 
Traceback (most recent call last):
2015-07-21 13:41:28.949 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec  
 File 
/opt/stack/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/ipsec.py,
 line 255, in enable
2015-07-21 13:41:28.949 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec  
   self.start()
2015-07-21 13:41:28.949 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec  
 File 
/opt/stack/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/ipsec.py,
 line 437, in start
2015-07-21 13:41:28.949 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec  
   ipsec_site_conn['id']
2015-07-21 13:41:28.949 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec  
 File 
/opt/stack/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/ipsec.py,
 line 336, in _execute
2015-07-21 13:41:28.949 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec  
   extra_ok_codes=extra_ok_codes)
2015-07-21 13:41:28.949 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec  
 File /opt/stack/neutron/neutron/agent/linux/ip_lib.py, line 701, in execute
2015-07-21 13:41:28.949 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec  
   extra_ok_codes=extra_ok_codes, **kwargs)
2015-07-21 13:41:28.949 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec  
 File /opt/stack/neutron/neutron/agent/linux/utils.py, line 138, in execute
2015-07-21 13:41:28.949 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec  
   raise RuntimeError(m)
2015-07-21 13:41:28.949 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec 
RuntimeError:
2015-07-21 13:41:28.949 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec 
Command: ['ip', 'netns', 'exec', 
u'qrouter-3d68e902-ce44-411a-bd4e-6ff9a33d8a85', 'ipsec', 'addconn', 
'--ctlbase', 
u'/opt/stack/data/neutron/ipsec/3d68e902-ce44-411a-bd4e-6ff9a33d8a85/var/run/pluto.ctl',
 '--defaultroutenexthop', u'172.24.4.3', '--config', 
u'/opt/stack/data/neutron/ipsec/3d68e902-ce44-411a-bd4e-6ff9a33d8a85/etc/ipsec.conf',
 u'd0fa8bac-b8eb-4c33-a023-8c96560c99ec']
2015-07-21 13:41:28.949 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec 
Exit code: 34
2015-07-21 13:41:28.949 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec 
Stdin:
2015-07-21 13:41:28.949 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec 
Stdout: 034 esp string error: Non initial digit found for auth keylen, just 
after aes128- (old_state=ST_AA_END)
2015-07-21 13:41:28.949 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec 
2015-07-21 13:41:28.949 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec 
Stderr:
2015-07-21 13:41:28.949 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec 


The reason is that the AH protocol does not provide encryption, so the
phase2alg value in the ipsec.conf template should be modified to exclude the
encryption algorithm for AH.
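The intended behavior can be sketched as follows; the function and argument
names are illustrative assumptions, not the actual neutron-vpnaas template
variables, but the phase2alg syntax matches what Openswan/Libreswan parses:

```python
# Minimal sketch (assumed names, not the actual neutron-vpnaas code) of how
# the phase2alg value would be built so that AH omits the encryption part.
def build_phase2alg(transform_protocol, encryption_algorithm, auth_algorithm):
    # AH provides authentication only; prepending an encryption algorithm
    # (e.g. 'aes128-sha1') is what triggers the parse error
    # "Non initial digit found for auth keylen" shown above.
    if transform_protocol == 'ah':
        return auth_algorithm
    return '%s-%s' % (encryption_algorithm, auth_algorithm)

print(build_phase2alg('ah', 'aes128', 'sha1'))   # sha1
print(build_phase2alg('esp', 'aes128', 'sha1'))  # aes128-sha1
```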

** Affects: neutron
 Importance: Undecided
 Assignee: Elena Ezhova (eezhova)
 Status: New


** Tags: vpnaas

** Tags added: vpnaas

** Changed in: neutron
 Assignee: (unassigned) => Elena Ezhova (eezhova)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1476681

Title:
  VPNaaS: Fix phase2alg for AH in ipsec.conf.template

Status in neutron:
  New

Bug description:
  Any attempt to create an IPSec site connection with a policy that specifies
  the AH protocol instead of ESP leads to the following error:

  2015-07-21 13:41:28.948 ERROR neutron.agent.linux.utils 
[req-e3237103-fb81-4c23-8a69-98c755c177c9 admin 
4733f78a7cd749f38b20bc134b9675a0] 
  Command: ['ip', 'netns', 'exec', 
u'qrouter-3d68e902-ce44-411a-bd4e-6ff9a33d8a85', 'ipsec', 'addconn', 
'--ctlbase', 
u'/opt/stack/data/neutron/ipsec/3d68e902-ce44-411a-bd4e-6ff9a33d8a85/var/run/pluto.ctl',
 '--defaultroutenexthop', u'172.24.4.3', '--config', 

[Yahoo-eng-team] [Bug 1469029] Re: Migrations fail going from juno -> kilo

2015-07-21 Thread Roman Podoliaka
Sam's last comment means there is a problem with the Keystone migration
scripts, or, I should really say, they simply do not override the
horrible MySQL default collation, so new tables are not created in
utf8.
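As a rough illustration (the table and column definitions below are
hypothetical, not Keystone's actual schema), a migration would need to pin
both the engine and the charset in its DDL rather than relying on the MySQL
server default:

```python
# Hedged sketch: build DDL that pins ENGINE and CHARSET explicitly, so new
# tables are InnoDB/utf8 regardless of the MySQL server default collation.
TABLE_OPTIONS = "ENGINE=InnoDB DEFAULT CHARSET=utf8"

def create_table_sql(name, columns):
    return "CREATE TABLE %s (%s) %s" % (name, ", ".join(columns), TABLE_OPTIONS)

# Illustrative columns only; identity_provider's real schema differs.
ddl = create_table_sql(
    "identity_provider",
    ["id VARCHAR(64) NOT NULL", "enabled TINYINT(1)", "PRIMARY KEY (id)"],
)
print(ddl)
```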

** Changed in: oslo.db
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1469029

Title:
  Migrations fail going from juno -> kilo

Status in Keystone:
  In Progress
Status in oslo.db:
  Invalid

Bug description:
  Trying to upgrade from Juno -> Kilo

  keystone-manage db_version
  55

  keystone-manage db_sync
  2015-06-26 16:52:47.494 6169 CRITICAL keystone [-] ProgrammingError: 
(ProgrammingError) (1146, Table 'keystone_k.identity_provider' doesn't exist) 
'ALTER TABLE identity_provider Engine=InnoDB' ()
  2015-06-26 16:52:47.494 6169 TRACE keystone Traceback (most recent call last):
  2015-06-26 16:52:47.494 6169 TRACE keystone   File 
/opt/kilo/bin/keystone-manage, line 10, in <module>
  2015-06-26 16:52:47.494 6169 TRACE keystone execfile(__file__)
  2015-06-26 16:52:47.494 6169 TRACE keystone   File 
/opt/keystone/bin/keystone-manage, line 44, in <module>
  2015-06-26 16:52:47.494 6169 TRACE keystone cli.main(argv=sys.argv, 
config_files=config_files)
  2015-06-26 16:52:47.494 6169 TRACE keystone   File 
/opt/keystone/keystone/cli.py, line 585, in main
  2015-06-26 16:52:47.494 6169 TRACE keystone CONF.command.cmd_class.main()
  2015-06-26 16:52:47.494 6169 TRACE keystone   File 
/opt/keystone/keystone/cli.py, line 76, in main
  2015-06-26 16:52:47.494 6169 TRACE keystone 
migration_helpers.sync_database_to_version(extension, version)
  2015-06-26 16:52:47.494 6169 TRACE keystone   File 
/opt/keystone/keystone/common/sql/migration_helpers.py, line 247, in 
sync_database_to_version
  2015-06-26 16:52:47.494 6169 TRACE keystone 
_sync_extension_repo(default_extension, version)
  2015-06-26 16:52:47.494 6169 TRACE keystone   File 
/opt/keystone/keystone/common/sql/migration_helpers.py, line 232, in 
_sync_extension_repo
  2015-06-26 16:52:47.494 6169 TRACE keystone _fix_federation_tables(engine)
  2015-06-26 16:52:47.494 6169 TRACE keystone   File 
/opt/keystone/keystone/common/sql/migration_helpers.py, line 167, in 
_fix_federation_tables
  2015-06-26 16:52:47.494 6169 TRACE keystone engine.execute("ALTER TABLE 
identity_provider Engine=InnoDB")
  2015-06-26 16:52:47.494 6169 TRACE keystone   File 
/opt/kilo/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py, line 
1863, in execute
  2015-06-26 16:52:47.494 6169 TRACE keystone return 
connection.execute(statement, *multiparams, **params)
  2015-06-26 16:52:47.494 6169 TRACE keystone   File 
/opt/kilo/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py, line 
833, in execute
  2015-06-26 16:52:47.494 6169 TRACE keystone return 
self._execute_text(object, multiparams, params)
  2015-06-26 16:52:47.494 6169 TRACE keystone   File 
/opt/kilo/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py, line 
982, in _execute_text
  2015-06-26 16:52:47.494 6169 TRACE keystone statement, parameters
  2015-06-26 16:52:47.494 6169 TRACE keystone   File 
/opt/kilo/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py, line 
1070, in _execute_context
  2015-06-26 16:52:47.494 6169 TRACE keystone context)
  2015-06-26 16:52:47.494 6169 TRACE keystone   File 
/opt/kilo/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/compat/handle_error.py,
 line 261, in _handle_dbapi_exception
  2015-06-26 16:52:47.494 6169 TRACE keystone e, statement, parameters, 
cursor, context)
  2015-06-26 16:52:47.494 6169 TRACE keystone   File 
/opt/kilo/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py, line 
1267, in _handle_dbapi_exception
  2015-06-26 16:52:47.494 6169 TRACE keystone 
util.raise_from_cause(newraise, exc_info)
  2015-06-26 16:52:47.494 6169 TRACE keystone   File 
/opt/kilo/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py, line 
199, in raise_from_cause
  2015-06-26 16:52:47.494 6169 TRACE keystone reraise(type(exception), 
exception, tb=exc_tb)
  2015-06-26 16:52:47.494 6169 TRACE keystone   File 
/opt/kilo/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py, line 
1063, in _execute_context
  2015-06-26 16:52:47.494 6169 TRACE keystone context)
  2015-06-26 16:52:47.494 6169 TRACE keystone   File 
/opt/kilo/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py, 
line 442, in do_execute
  2015-06-26 16:52:47.494 6169 TRACE keystone cursor.execute(statement, 
parameters)
  2015-06-26 16:52:47.494 6169 TRACE keystone   File 
/opt/kilo/local/lib/python2.7/site-packages/MySQLdb/cursors.py, line 205, in 
execute
  2015-06-26 16:52:47.494 6169 TRACE keystone self.errorhandler(self, exc, 
value)
  2015-06-26 16:52:47.494 6169 TRACE keystone   File 

[Yahoo-eng-team] [Bug 1476657] [NEW] Attempt to upload image via ftp makes queued image

2015-07-21 Thread Vlad Okhrimenko
Public bug reported:

Attempt to upload image via ftp creates queued image.

Steps:
1. Login to Horizon.
2. Go to Images tab.
3. Click Create Image.
4. Specify Name and type FTP location instead of HTTP.

Actual:
Image with permanent queued status is created.

Expected to get a warning about the unsupported protocol.
For example, the glance CLI does not allow uploading via FTP and warns the user with:
400 Bad Request
...
External source are not supported
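One way the dashboard could catch this up front (a sketch only;
SUPPORTED_SCHEMES and clean_copy_from are assumed names, not Horizon's actual
form code) is to validate the location's URL scheme before submitting:

```python
from urllib.parse import urlparse

# Assumption: the schemes glance will actually fetch from as an external source.
SUPPORTED_SCHEMES = ('http', 'https')

def clean_copy_from(url):
    # Reject ftp:// (and anything else unsupported) in form validation,
    # instead of letting the image sit in 'queued' status forever.
    scheme = urlparse(url).scheme.lower()
    if scheme not in SUPPORTED_SCHEMES:
        raise ValueError("Unsupported protocol %r: only %s URLs are allowed"
                         % (scheme, ' and '.join(SUPPORTED_SCHEMES)))
    return url
```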

** Affects: horizon
 Importance: Undecided
 Assignee: Vlad Okhrimenko (vokhrimenko)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Vlad Okhrimenko (vokhrimenko)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1476657

Title:
  Attempt to upload image via ftp makes queued image

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Attempt to upload image via ftp creates queued image.

  Steps:
  1. Login to Horizon.
  2. Go to Images tab.
  3. Click Create Image.
  4. Specify Name and type FTP location instead of HTTP.

  Actual:
  Image with permanent queued status is created.

  Expected to get a warning about the unsupported protocol.
  For example, the glance CLI does not allow uploading via FTP and warns the
user with:
  400 Bad Request
  ...
  External source are not supported

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1476657/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1476695] [NEW] Invalid body in image creation causes 500

2015-07-21 Thread Niall Bunting
Public bug reported:

Overview:
When creating an image, a request body that is a bare number or certain kinds of 
string causes the server to fall over due to a type error.

How to produce:
curl -X POST http://your-ip:9292/v2/images/ -H "X-Auth-Token: your-auth-token" 
-d 123

Besides a bare number, many other inputs such as '[]' or '[some text]'
can cause the type error to be thrown.

Actual:
500

Expected:
400 with body invalid message.
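The expected validation can be sketched like this (parse_image_body is a
hypothetical helper, not Glance's actual deserializer): reject any body that
is not a JSON object before touching its keys, so malformed input maps to a
400 instead of surfacing as a TypeError and a 500.

```python
import json

def parse_image_body(raw):
    try:
        body = json.loads(raw)
    except ValueError:
        raise ValueError("Malformed JSON in request body")  # maps to HTTP 400
    if not isinstance(body, dict):
        # '123', '[]', and '[some text]' all land here instead of raising
        # a TypeError deeper in the stack (which surfaces as a 500).
        raise ValueError("Request body must be a JSON object")  # HTTP 400
    return body
```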

** Affects: glance
 Importance: Undecided
 Assignee: Niall Bunting (niall-bunting)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Niall Bunting (niall-bunting)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1476695

Title:
  Invalid body in image creation causes 500

Status in Glance:
  New

Bug description:
  Overview:
  When creating an image, a request body that is a bare number or certain kinds 
of string causes the server to fall over due to a type error.

  How to produce:
  curl -X POST http://your-ip:9292/v2/images/ -H "X-Auth-Token: 
your-auth-token" -d 123

  Besides a bare number, many other inputs such as '[]' or '[some text]'
  can cause the type error to be thrown.

  Actual:
  500

  Expected:
  400 with body invalid message.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1476695/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1476696] [NEW] cpu pinning flavor metadef

2015-07-21 Thread Waldemar Znoinski
Public bug reported:

The flavor namespace in glance metadefs should be extended with the
hw:cpu_policy property so that flavors can be configured using that
property.
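The addition might look roughly like this; the field names follow the glance
metadefs property schema, but treat the exact title and description strings as
placeholders. The 'shared'/'dedicated' values are the ones nova's
hw:cpu_policy extra spec accepts.

```python
# Hedged sketch of the property entry, expressed as a Python dict mirroring
# the JSON used in etc/metadefs for the OS::Compute::Flavor namespace.
cpu_policy_property = {
    "hw:cpu_policy": {
        "title": "CPU Pinning Policy",
        "description": "Whether guest vCPUs are pinned to dedicated host "
                       "pCPUs ('dedicated') or float freely ('shared').",
        "type": "string",
        "enum": ["shared", "dedicated"],
    }
}
```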

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1476696

Title:
  cpu pinning flavor metadef

Status in Glance:
  New

Bug description:
  The flavor namespace in glance metadefs should be extended with the
  hw:cpu_policy property so that flavors can be configured using that
  property.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1476696/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp