[Yahoo-eng-team] [Bug 1454567] [NEW] service list updated at giving wrong value

2015-05-13 Thread Masco Kaliyamoorthy
Public bug reported:

The nova service-list command returns the list of services; in the
output, the 'updated_at' field always shows the current time.

output:
ubuntu@develop:~/devstack$ nova service-list
+----+----------------+---------+----------+---------+-------+------------------------+-----------------+
| Id | Binary         | Host    | Zone     | Status  | State | Updated_at             | Disabled Reason |
+----+----------------+---------+----------+---------+-------+------------------------+-----------------+
| 1  | nova-conductor | develop | internal | enabled | up    | 2015-05-13T06:21:50.00 | -               |
| 3  | nova-cert      | develop | internal | enabled | up    | 2015-05-13T06:21:48.00 | -               |
| 4  | nova-scheduler | develop | internal | enabled | up    | 2015-05-13T06:21:55.00 | -               |
| 5  | nova-compute   | develop | nova     | enabled | up    | 2015-05-13T06:21:55.00 | -               |
+----+----------------+---------+----------+---------+-------+------------------------+-----------------+
ubuntu@develop:~/devstack$ nova service-list
+----+----------------+---------+----------+---------+-------+------------------------+-----------------+
| Id | Binary         | Host    | Zone     | Status  | State | Updated_at             | Disabled Reason |
+----+----------------+---------+----------+---------+-------+------------------------+-----------------+
| 1  | nova-conductor | develop | internal | enabled | up    | 2015-05-13T06:22:00.00 | -               |
| 3  | nova-cert      | develop | internal | enabled | up    | 2015-05-13T06:21:58.00 | -               |
| 4  | nova-scheduler | develop | internal | enabled | up    | 2015-05-13T06:21:55.00 | -               |
| 5  | nova-compute   | develop | nova     | enabled | up    | 2015-05-13T06:21:55.00 | -               |
+----+----------------+---------+----------+---------+-------+------------------------+-----------------+
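
A quick check on the two runs above (timestamps copied from the output;
a rough illustration only, not nova code) shows the value simply
advancing with the wall clock between calls:

    from datetime import datetime

    fmt = '%Y-%m-%dT%H:%M:%S.%f'
    # nova-conductor's Updated_at from the first and second listing above
    t1 = datetime.strptime('2015-05-13T06:21:50.00', fmt)
    t2 = datetime.strptime('2015-05-13T06:22:00.00', fmt)
    print((t2 - t1).total_seconds())  # 10.0 -- moves on every call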

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1454567

Title:
  service list updated at giving wrong value

Status in OpenStack Compute (Nova):
  New

Bug description:
  The nova service-list command returns the list of services; in the
  output, the 'updated_at' field always shows the current time.

  output:
  ubuntu@develop:~/devstack$ nova service-list
  +----+----------------+---------+----------+---------+-------+------------------------+-----------------+
  | Id | Binary         | Host    | Zone     | Status  | State | Updated_at             | Disabled Reason |
  +----+----------------+---------+----------+---------+-------+------------------------+-----------------+
  | 1  | nova-conductor | develop | internal | enabled | up    | 2015-05-13T06:21:50.00 | -               |
  | 3  | nova-cert      | develop | internal | enabled | up    | 2015-05-13T06:21:48.00 | -               |
  | 4  | nova-scheduler | develop | internal | enabled | up    | 2015-05-13T06:21:55.00 | -               |
  | 5  | nova-compute   | develop | nova     | enabled | up    | 2015-05-13T06:21:55.00 | -               |
  +----+----------------+---------+----------+---------+-------+------------------------+-----------------+
  ubuntu@develop:~/devstack$ nova service-list
  +----+----------------+---------+----------+---------+-------+------------------------+-----------------+
  | Id | Binary         | Host    | Zone     | Status  | State | Updated_at             | Disabled Reason |
  +----+----------------+---------+----------+---------+-------+------------------------+-----------------+
  | 1  | nova-conductor | develop | internal | enabled | up    | 2015-05-13T06:22:00.00 | -               |
  | 3  | nova-cert      | develop | internal | enabled | up    | 2015-05-13T06:21:58.00 | -               |
  | 4  | nova-scheduler | develop | internal | enabled | up    | 2015-05-13T06:21:55.00 | -               |
  | 5  | nova-compute   | develop | nova     | enabled | up    | 2015-05-13T06:21:55.00 | -               |
  +----+----------------+---------+----------+---------+-------+------------------------+-----------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1454567/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1454566] [NEW] Neutron DB migration failed in the script 2b801560a332_remove_hypervneutronplugin_tables.py when upgrade

2015-05-13 Thread Yang Yu
Public bug reported:

  File "/usr/local/bin/neutron-db-manage", line 10, in <module>
    sys.exit(main())
  File "/opt/stack/neutron/neutron/db/migration/cli.py", line 238, in main
    CONF.command.func(config, CONF.command.name)
  File "/opt/stack/neutron/neutron/db/migration/cli.py", line 106, in do_upgrade
    do_alembic_command(config, cmd, revision, sql=CONF.command.sql)
  File "/opt/stack/neutron/neutron/db/migration/cli.py", line 72, in do_alembic_command
    getattr(alembic_command, cmd)(config, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/alembic/command.py", line 165, in upgrade
    script.run_env()
  File "/usr/local/lib/python2.7/dist-packages/alembic/script.py", line 390, in run_env
    util.load_python_file(self.dir, 'env.py')
  File "/usr/local/lib/python2.7/dist-packages/alembic/util.py", line 243, in load_python_file
    module = load_module_py(module_id, path)
  File "/usr/local/lib/python2.7/dist-packages/alembic/compat.py", line 79, in load_module_py
    mod = imp.load_source(module_id, path, fp)
  File "/opt/stack/neutron/neutron/db/migration/alembic_migrations/env.py", line 109, in <module>
    run_migrations_online()
  File "/opt/stack/neutron/neutron/db/migration/alembic_migrations/env.py", line 100, in run_migrations_online
    context.run_migrations()
  File "<string>", line 7, in run_migrations
  File "/usr/local/lib/python2.7/dist-packages/alembic/environment.py", line 742, in run_migrations
    self.get_context().run_migrations(**kw)
  File "/usr/local/lib/python2.7/dist-packages/alembic/migration.py", line 309, in run_migrations
    step.migration_fn(**kw)
  File "/opt/stack/neutron/neutron/db/migration/alembic_migrations/versions/2b801560a332_remove_hypervneutronplugin_tables.py", line 132, in upgrade
    _migrate_port_bindings(bind)
  File "/opt/stack/neutron/neutron/db/migration/alembic_migrations/versions/2b801560a332_remove_hypervneutronplugin_tables.py", line 123, in _migrate_port_bindings
    op.execute(ml2_port_bindings.insert(), ml2_bindings)
  File "<string>", line 7, in execute
  File "/usr/local/lib/python2.7/dist-packages/alembic/operations.py", line 1270, in execute
    execution_options=execution_options)
  File "/usr/local/lib/python2.7/dist-packages/alembic/ddl/impl.py", line 108, in execute
    self._exec(sql, execution_options)
  File "/usr/local/lib/python2.7/dist-packages/alembic/ddl/impl.py", line 104, in _exec
    conn = conn.execution_options(**execution_options)
TypeError: execution_options() argument after ** must be a mapping, not list

** Affects: neutron
 Importance: Undecided
 Assignee: Yang Yu (yuyangbj)
 Status: New

** Changed in: neutron
     Assignee: (unassigned) => Yang Yu (yuyangbj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1454566

Title:
  Neutron DB migration failed in the script
  2b801560a332_remove_hypervneutronplugin_tables.py when upgrade

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  File "/usr/local/bin/neutron-db-manage", line 10, in <module>
    sys.exit(main())
  File "/opt/stack/neutron/neutron/db/migration/cli.py", line 238, in main
    CONF.command.func(config, CONF.command.name)
  File "/opt/stack/neutron/neutron/db/migration/cli.py", line 106, in do_upgrade
    do_alembic_command(config, cmd, revision, sql=CONF.command.sql)
  File "/opt/stack/neutron/neutron/db/migration/cli.py", line 72, in do_alembic_command
    getattr(alembic_command, cmd)(config, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/alembic/command.py", line 165, in upgrade
    script.run_env()
  File "/usr/local/lib/python2.7/dist-packages/alembic/script.py", line 390, in run_env
    util.load_python_file(self.dir, 'env.py')
  File "/usr/local/lib/python2.7/dist-packages/alembic/util.py", line 243, in load_python_file
    module = load_module_py(module_id, path)
  File "/usr/local/lib/python2.7/dist-packages/alembic/compat.py", line 79, in load_module_py
    mod = imp.load_source(module_id, path, fp)
  File "/opt/stack/neutron/neutron/db/migration/alembic_migrations/env.py", line 109, in <module>
    run_migrations_online()
  File "/opt/stack/neutron/neutron/db/migration/alembic_migrations/env.py", line 100, in run_migrations_online
    context.run_migrations()
  File "<string>", line 7, in run_migrations
  File "/usr/local/lib/python2.7/dist-packages/alembic/environment.py", line 742, in run_migrations
    self.get_context().run_migrations(**kw)
  File "/usr/local/lib/python2.7/dist-packages/alembic/migration.py", line 309, in run_migrations
    step.migration_fn(**kw)
  File "/opt/stack/neutron/neutron/db/migration/alembic_migrations/versions/2b801560a332_remove_hypervneutronplugin_tables.py", line 132, in upgrade
    _migrate_port_bindings(bind)
  File

[Yahoo-eng-team] [Bug 1454621] [NEW] Multipath device descriptor is not deleted while the device paths are removed after detaching the last volume from a VM on a host

2015-05-13 Thread Tina Tang
Public bug reported:

Multipath descriptor is not removed on volume detachment

In an iSCSI multipath environment, after I detach the last volume from
an instance on the host, the devices under /dev/disk/by-path are
removed, but the multipath descriptor is not. I am using VNX as the
cinder backend.
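
A stale map like this can normally be flushed by hand with the
multipath tools; as a workaround only (the wwid below is the one from
step 4):

stack@ubuntu-server7:~$ sudo multipath -f 3600601601bd03200d8f0c1714bf9e411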

How I reproduced this issue:
1. Before I do a detachment, only LUN 86 is attached to the host.
stack@ubuntu-server7:/dev/disk/by-path$ ls
ip-192.168.4.52:3260-iscsi-iqn.1992-04.com.emc:cx.fnm00130200235.a6-lun-0   pci-:04:00.0-scsi-0:1:1:0-part1  pci-:05:00.1-fc-0x500601610860080f-lun-0
ip-192.168.4.52:3260-iscsi-iqn.1992-04.com.emc:cx.fnm00130200235.a6-lun-86  pci-:04:00.0-scsi-0:1:1:0-part2  pci-:05:00.1-fc-0x50060169086003ba-lun-0
ip-192.168.4.53:3260-iscsi-iqn.1992-04.com.emc:cx.fnm00130200235.b6-lun-0   pci-:04:00.0-scsi-0:1:1:0-part5  pci-:05:00.1-fc-0x50060169086003ba-lun-10
pci-:04:00.0-scsi-0:1:0:0   pci-:05:00.1-fc-0x50060161086003ba-lun-0
pci-:04:00.0-scsi-0:1:1:0   pci-:05:00.1-fc-0x50060161086003ba-lun-10
stack@ubuntu-server7:/dev/disk/by-path$ ls -l
total 0
lrwxrwxrwx 1 root root  9 May 12 23:26 ip-192.168.4.52:3260-iscsi-iqn.1992-04.com.emc:cx.fnm00130200235.a6-lun-0 -> ../../sdh
lrwxrwxrwx 1 root root  9 May 12 23:26 ip-192.168.4.52:3260-iscsi-iqn.1992-04.com.emc:cx.fnm00130200235.a6-lun-86 -> ../../sdi
lrwxrwxrwx 1 root root  9 May 12 23:28 ip-192.168.4.53:3260-iscsi-iqn.1992-04.com.emc:cx.fnm00130200235.b6-lun-0 -> ../../sdj
lrwxrwxrwx 1 root root  9 May 12 18:07 pci-:04:00.0-scsi-0:1:0:0 -> ../../sdb
lrwxrwxrwx 1 root root  9 May 12 18:07 pci-:04:00.0-scsi-0:1:1:0 -> ../../sda
lrwxrwxrwx 1 root root 10 May 12 18:07 pci-:04:00.0-scsi-0:1:1:0-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 May 12 18:07 pci-:04:00.0-scsi-0:1:1:0-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 May 12 18:07 pci-:04:00.0-scsi-0:1:1:0-part5 -> ../../sda5
lrwxrwxrwx 1 root root  9 May 12 18:07 pci-:05:00.1-fc-0x50060161086003ba-lun-0 -> ../../sdc
lrwxrwxrwx 1 root root  9 May 12 18:07 pci-:05:00.1-fc-0x50060161086003ba-lun-10 -> ../../sdd
lrwxrwxrwx 1 root root  9 May 12 18:07 pci-:05:00.1-fc-0x500601610860080f-lun-0 -> ../../sde
lrwxrwxrwx 1 root root  9 May 12 18:07 pci-:05:00.1-fc-0x50060169086003ba-lun-0 -> ../../sdf
lrwxrwxrwx 1 root root  9 May 12 18:07 pci-:05:00.1-fc-0x50060169086003ba-lun-10 -> ../../sdg
stack@ubuntu-server7:/dev/disk/by-path$ sudo multipath -l /dev/sdi
3600601601bd03200d8f0c1714bf9e411 dm-6 DGC,VRAID
size=3.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=-1 status=active
  `- 12:0:0:86 sdi 8:128 active undef running
  
2. Trigger the detachment
stack@ubuntu-server7:/dev/disk/by-path$ nova volume-detach 818eb5fb-66fe-4179-9466-04ef1ec09a8d 837bd1e1-8d82-472e-bd42-f8edd4fbcc42

3. After the detachment, the devices under /dev/disk/by-path went away; this is as expected.
stack@ubuntu-server7:/dev/disk/by-path$ ls
ip-192.168.4.53:3260-iscsi-iqn.1992-04.com.emc:cx.fnm00130200235.b6-lun-0  pci-:04:00.0-scsi-0:1:1:0-part2  pci-:05:00.1-fc-0x500601610860080f-lun-0
pci-:04:00.0-scsi-0:1:0:0  pci-:04:00.0-scsi-0:1:1:0-part5  pci-:05:00.1-fc-0x50060169086003ba-lun-0
pci-:04:00.0-scsi-0:1:1:0  pci-:05:00.1-fc-0x50060161086003ba-lun-0  pci-:05:00.1-fc-0x50060169086003ba-lun-10
pci-:04:00.0-scsi-0:1:1:0-part1  pci-:05:00.1-fc-0x50060161086003ba-lun-10

4. But the multipath descriptor is not removed: 3600601601bd03200d8f0c1714bf9e411
stack@ubuntu-server7:/dev/disk/by-path$ sudo multipath -ll
3600601602ba034002c160db00960e411 dm-1 DGC,VRAID
size=1.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=130 status=active
| `- 9:0:2:10 sdg 8:96  active ready  running
`-+- policy='round-robin 0' prio=10 status=enabled
  `- 9:0:0:10 sdd 8:48  active ready  running
3600508e01952e401650d0b0a dm-2 LSI,Logical Volume
size=929G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 6:1:1:0  sda 8:0   active ready  running
3600601601bd03200d8f0c1714bf9e411 dm-6 ,
size=3.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  `- #:#:#:#  -   #:#   active faulty running
3600508e09551404ce9efae0d dm-0 LSI,Logical Volume
size=222G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 6:1:0:0  sdb 8:16  active ready  running

stack@ubuntu-server7:/dev/disk/by-path$ sudo multipath -l 3600601601bd03200d8f0c1714bf9e411
3600601601bd03200d8f0c1714bf9e411 dm-6 ,
size=3.0G features='1

[Yahoo-eng-team] [Bug 1449850] Re: Join multiple criteria together

2015-05-13 Thread Kamil Rykowski
** Also affects: glance
   Importance: Undecided
   Status: New

** Changed in: glance
   Status: New => In Progress

** Changed in: glance
 Assignee: (unassigned) => Kamil Rykowski (kamil-rykowski)

** Changed in: glance
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1449850

Title:
  Join multiple criteria together

Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  SQLAlchemy supports joining multiple criteria together. This can be
  used to build a query statement when there are multiple filtering
  criteria, instead of constructing the query one criterion at a time; I
  just *assume* SQLAlchemy prefers to be used this way, and the code
  looks cleaner after the refactoring.
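
  For illustration, a minimal self-contained sketch of the two styles
  (the Image model and values here are hypothetical, not glance's or
  keystone's real schema):

    from sqlalchemy import Column, String, and_, create_engine
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import sessionmaker

    Base = declarative_base()

    class Image(Base):
        __tablename__ = 'images'
        id = Column(String(36), primary_key=True)
        status = Column(String(30))
        owner = Column(String(255))

    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)
    session = sessionmaker(bind=engine)()

    # Several criteria passed to one filter() call are joined with AND...
    q1 = session.query(Image).filter(Image.status == 'active',
                                     Image.owner == 'demo')
    # ...which is equivalent to the explicit and_() form, and to
    # chaining one filter() call per criterion.
    q2 = session.query(Image).filter(and_(Image.status == 'active',
                                          Image.owner == 'demo'))
    assert str(q1) == str(q2)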

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1449850/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296075] Re: Needless duplication in strings

2015-05-13 Thread Łukasz Jernaś
** Changed in: horizon
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1296075

Title:
  Needless duplication in strings

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  There are a lot of strings that create additional work for
  translators, even though it could be avoided.
  For example in
  openstack_dashboard/dashboards/project/images/templates/images/images/detail.html
  we have the same string in 3 different versions:
   * Image Details
   * Image Details:  (note the trailing space)
   * Image Details:

  Things like trailing colons or whitespace add additional points of
  failure for translators and usually shouldn't be included in a
  translatable string.
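
  For illustration, the punctuation can usually live outside the
  translated string (Django template syntax only, not an actual horizon
  patch), so a single translatable string covers all three variants:

    {% load i18n %}
    {# one string for translators... #}
    <h3>{% trans "Image Details" %}</h3>
    {# ...reused where the layout wants a trailing colon #}
    <h3>{% trans "Image Details" %}:</h3>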

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1296075/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1454640] [NEW] ml2.test_rpc cannot be run with testtools

2015-05-13 Thread Rossella Sblendido
Public bug reported:

When running ml2.test_rpc with testtools, the following error occurs:

./run_tests.sh -d neutron.tests.unit.plugins.ml2.test_rpc.RpcCallbacksTestCase.test_update_device_list_no_failure
Tests running...
======================================================================
ERROR: neutron.tests.unit.plugins.ml2.test_rpc.RpcCallbacksTestCase.test_update_device_list_no_failure
----------------------------------------------------------------------
Empty attachments:
  pythonlogging:''
  pythonlogging:'neutron.api.extensions'

Traceback (most recent call last):
  File "neutron/tests/unit/plugins/ml2/test_rpc.py", line 41, in setUp
    self.type_manager = managers.TypeManager()
  File "neutron/plugins/ml2/managers.py", line 46, in __init__
    cfg.CONF.ml2.type_drivers)
  File "/opt/stack/neutron/.venv/local/lib/python2.7/site-packages/oslo_config/cfg.py", line 1867, in __getattr__
    raise NoSuchOptError(name)
oslo_config.cfg.NoSuchOptError: no such option: ml2
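
The failure is just oslo.config raising because nothing registered the
'ml2' option group before the test touched it; a self-contained
illustration of that behaviour (the option name and default below are
made up):

    from oslo_config import cfg

    conf = cfg.ConfigOpts()
    try:
        conf.ml2.type_drivers              # group not registered yet
    except cfg.NoSuchOptError as e:
        print(e)                           # no such option: ml2
    conf.register_opts([cfg.ListOpt('type_drivers', default=['local'])],
                       group='ml2')
    print(conf.ml2.type_drivers)           # ['local']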

** Affects: neutron
 Importance: Undecided
 Assignee: Rossella Sblendido (rossella-o)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Rossella Sblendido (rossella-o)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1454640

Title:
  ml2.test_rpc cannot be run with testtools

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When running ml2.test_rpc with testtools, the following error occurs:

  ./run_tests.sh -d neutron.tests.unit.plugins.ml2.test_rpc.RpcCallbacksTestCase.test_update_device_list_no_failure
  Tests running...
  ======================================================================
  ERROR: neutron.tests.unit.plugins.ml2.test_rpc.RpcCallbacksTestCase.test_update_device_list_no_failure
  ----------------------------------------------------------------------
  Empty attachments:
    pythonlogging:''
    pythonlogging:'neutron.api.extensions'

  Traceback (most recent call last):
    File "neutron/tests/unit/plugins/ml2/test_rpc.py", line 41, in setUp
      self.type_manager = managers.TypeManager()
    File "neutron/plugins/ml2/managers.py", line 46, in __init__
      cfg.CONF.ml2.type_drivers)
    File "/opt/stack/neutron/.venv/local/lib/python2.7/site-packages/oslo_config/cfg.py", line 1867, in __getattr__
      raise NoSuchOptError(name)
  oslo_config.cfg.NoSuchOptError: no such option: ml2

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1454640/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1454610] [NEW] use filter to get compute services

2015-05-13 Thread Masco Kaliyamoorthy
Public bug reported:

To get the list of compute services, we can use the 'binary' filter
instead of fetching all services and filtering for compute ones on the
client side.
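
A minimal sketch with python-novaclient (the endpoint and credentials
are placeholders):

    from novaclient import client

    nova = client.Client('2', 'admin', 'password', 'admin',
                         'http://controller:5000/v2.0')

    # let the API do the filtering...
    compute_services = nova.services.list(binary='nova-compute')

    # ...instead of fetching everything and filtering client side
    compute_services = [s for s in nova.services.list()
                        if s.binary == 'nova-compute']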

** Affects: horizon
 Importance: Undecided
 Assignee: Masco Kaliyamoorthy (masco)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Masco Kaliyamoorthy (masco)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1454610

Title:
  use filter to get compute services

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  To get the list of compute services, we can use the 'binary' filter
  instead of fetching all services and filtering for compute ones on the
  client side.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1454610/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1454041] Re: misunderstanding caused by uuid token and pki token in install guide

2015-05-13 Thread Dolph Mathews
Keystone switched to UUID by default in Juno due to longstanding issues
with PKI that will likely never be resolved. At least in the stable/juno
or stable/kilo install guides, there is no token setup to do beyond
scheduling a cron job to run keystone-manage token_flush.
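
For reference, that cron job is typically just a single entry along
these lines (the schedule and log path are site choices, not mandated by
the guide):

# /etc/cron.d/keystone -- example only
@hourly keystone /usr/bin/keystone-manage token_flush > /var/log/keystone/token-flush.log 2>&1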

Setting the keystone token provider is unnecessary, as it's already UUID
in juno and kilo.

keystone-manage pki_setup is not useful if the token provider is not
PKI.

The install guide should not suggest all users switch to PKI tokens. If
they're mentioned at all, they should at least come with the caveat that
they do not improve security and that they will potentially exceed
header size limits in many pieces of software.

As of stable/kilo, the install guide could discuss switching to Fernet
tokens, but I think that's out of scope for this issue.

** Changed in: keystone
   Importance: Undecided => Medium

** Changed in: keystone
   Status: New => Confirmed

** Project changed: keystone => openstack-manuals

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1454041

Title:
  misunderstanding caused by uuid token and pki token in install guide

Status in OpenStack Manuals:
  Confirmed

Bug description:
  In the released install guide, we can see the step to set the token
  provider to UUID, as follows:
  [token]
  provider = keystone.token.providers.uuid.Provider

  but there are further steps to set up PKI tokens, as follows:
  # keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
  # chown -R keystone:keystone /var/log/keystone
  # chown -R keystone:keystone /etc/keystone/ssl
  # chmod -R o-rwx /etc/keystone/ssl

  I think PKI tokens have been available since Grizzly, and the
  installation guide should use the PKI token provider, like below:
  [token]
  provider = keystone.token.providers.pki.Provider

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-manuals/+bug/1454041/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1454292] Re: User can gain full access to another user's image by image_id

2015-05-13 Thread Inessa Vasilevskaya
Sorry, my environment was not in the original devstack configuration - I
had glance-api/glance-registry launched with the noauth flavor, and that
seemed to cause the described behaviour, as all requests were executed
with 'is_admin=True' in the context, even for the demo user.

I propose closing the issue.

** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1454292

Title:
  User can gain full access to another user's image by image_id

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  If the image is created by a user for another tenant (with --owner
  option), the image won't be seen by the first user in glance image-
  list output, but will be accessible by image_id.

  Steps to reproduce (I used kilo devstack):

  1. Create the image as demo user with --owner admin

  glance image-create --name created_by_demo --container-format bare
  --disk-format raw --file MANIFEST.in --owner admin

  Remember the id of the created image
  (8d72dbb2-70f9-4618-aee2-187d5c3f296a in my case)

  2. Make sure any list/update/delete operation performed by the demo
  user on the admin image succeeds.

  (Image Update)
  glance image-update 8d72dbb2-70f9-4618-aee2-187d5c3f296a --name updated-by-non-admin2
  +------------------+--------------------------------------+
  | Property         | Value                                |
  +------------------+--------------------------------------+
  | checksum         | c00d6a5ed8b04bb14b4760baf2804f24     |
  | container_format | bare                                 |
  | created_at       | 2015-05-12T14:33:38.481116           |
  | deleted          | False                                |
  | deleted_at       | None                                 |
  | disk_format      | raw                                  |
  | id               | 8d72dbb2-70f9-4618-aee2-187d5c3f296a |
  | is_public        | False                                |
  | min_disk         | 0                                    |
  | min_ram          | 0                                    |
  | name             | updated-by-non-admin2                |
  | owner            | admin                                |
  | protected        | False                                |
  | size             | 529                                  |
  | status           | active                               |
  | updated_at       | 2015-05-12T14:40:33.162878           |
  | virtual_size     | None                                 |
  +------------------+--------------------------------------+

  (Image List)
  glance image-show 8d72dbb2-70f9-4618-aee2-187d5c3f296a
  +------------------+--------------------------------------+
  | Property         | Value                                |
  +------------------+--------------------------------------+
  | checksum         | c00d6a5ed8b04bb14b4760baf2804f24     |
  | container_format | bare                                 |
  | created_at       | 2015-05-12T14:33:38.481116           |
  | deleted          | False                                |
  | disk_format      | raw                                  |
  | id               | 8d72dbb2-70f9-4618-aee2-187d5c3f296a |
  | is_public        | False                                |
  | min_disk         | 0                                    |
  | min_ram          | 0                                    |
  | name             | updated-by-non-admin2                |
  | owner            | admin                                |
  | protected        | False                                |
  | size             | 529                                  |
  | status           | active                               |
  | updated_at       | 2015-05-12T14:40:33.162878           |
  +------------------+--------------------------------------+

  (Image Delete)
  glance image-delete 8d72dbb2-70f9-4618-aee2-187d5c3f296a
  glance image-show 8d72dbb2-70f9-4618-aee2-187d5c3f296a
  +------------------+--------------------------------------+
  | Property         | Value                                |
  +------------------+--------------------------------------+
  | checksum         | c00d6a5ed8b04bb14b4760baf2804f24     |
  | container_format | bare                                 |
  | created_at       | 2015-05-12T14:33:38.481116           |
  | deleted          | True                                 |
  | deleted_at       | 2015-05-12T14:43:52.995393           |
  | disk_format      | raw                                  |
  | id               | 8d72dbb2-70f9-4618-aee2-187d5c3f296a |
  | is_public        | False                                |
  | min_disk         | 0                                    |
  | min_ram          | 0                                    |
  | name             | updated-by-non-admin2                |
  | owner            | admin                                |
  | protected        | False                                |
  | size             | 529                                  |
  | status           | deleted                              |
  | updated_at       | 2015-05-12T14:43:52.996843           |
  +------------------+--------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1454292/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1454730] [NEW] Glance v1 registry returns 500 when passing --checksum over 32 characters long on image-create

2015-05-13 Thread Inessa Vasilevskaya
Public bug reported:

glance --os-image-api-version 1 image-create --name created_by_demo --container-format bare --disk-format raw --file MANIFEST.in --checksum 2

Raises a 500 InternalServerError due to a DBError on save in the db.

  File "/home/ina/projects/glance/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1063, in _execute_context
    context)
  File "/home/ina/projects/glance/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 442, in do_execute
    cursor.execute(statement, parameters)
  File "/home/ina/projects/glance/.venv/local/lib/python2.7/site-packages/MySQLdb/cursors.py", line 205, in execute
    self.errorhandler(self, exc, value)
  File "/home/ina/projects/glance/.venv/local/lib/python2.7/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
    raise errorclass, errorvalue
DBError: (DataError) (1406, "Data too long for column 'checksum' at row 1") 'INSERT INTO images (created_at, updated_at, deleted_at, deleted, id, name, disk_format, container_format, size, virtual_size, status, is_public, checksum, min_disk, min_ram, owner, protected) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)' (datetime.datetime(2015, 5, 13, 14, 24, 22, 621502), datetime.datetime(2015, 5, 13, 14, 24, 22, 621515), None, 0, '6ea28b08-0131-431c-add3-9278cfb424c7', 'created_by_demo', 'raw', 'bare', 529, None, 'queued', 0, '2', 0, 0, '0e12fbc7a63c44f1b078d96e0979be8e', 0)
2015-05-13 17:24:22.630 17212 INFO eventlet.wsgi.server [req-163e1c08-3ea1-47a1-ae5b-e27f522c0453 8773a0d6190d4190808c7669d0d7adc6 0e12fbc7a63c44f1b078d96e0979be8e - - -] 127.0.0.1 - - [13/May/2015 17:24:22] "POST /images HTTP/1.1" 500 139 0.076708

The v2 API is not affected; there the 32-character maxLength constraint
is validated by jsonschema.
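
For reference, the kind of check the v2 API performs can be reproduced
with the jsonschema library (the schema fragment below is illustrative,
not glance's full image schema):

    import jsonschema

    schema = {
        'type': 'object',
        'properties': {'checksum': {'type': 'string', 'maxLength': 32}},
    }

    jsonschema.validate({'checksum': '2' * 32}, schema)    # passes
    jsonschema.validate({'checksum': '2' * 4000}, schema)  # ValidationError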

** Affects: glance
 Importance: Undecided
 Assignee: Inessa Vasilevskaya (ivasilevskaya)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Inessa Vasilevskaya (ivasilevskaya)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1454730

Title:
  Glance v1 registry returns 500 when passing --checksum over 32
  characters long on image-create

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  glance --os-image-api-version 1 image-create --name created_by_demo --container-format bare --disk-format raw --file MANIFEST.in --checksum 2

  Raises a 500 InternalServerError due to a DBError on save in the db.

    File "/home/ina/projects/glance/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1063, in _execute_context
      context)
    File "/home/ina/projects/glance/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 442, in do_execute
      cursor.execute(statement, parameters)
    File "/home/ina/projects/glance/.venv/local/lib/python2.7/site-packages/MySQLdb/cursors.py", line 205, in execute
      self.errorhandler(self, exc, value)
    File "/home/ina/projects/glance/.venv/local/lib/python2.7/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
      raise errorclass, errorvalue
  DBError: (DataError) (1406, "Data too long for column 'checksum' at row 1") 'INSERT INTO images (created_at, updated_at, deleted_at, deleted, id, name, disk_format, container_format, size, virtual_size, status, is_public, checksum, min_disk, min_ram, owner, protected) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)' (datetime.datetime(2015, 5, 13, 14, 24, 22, 621502), datetime.datetime(2015, 5, 13, 14, 24, 22, 621515), None, 0, '6ea28b08-0131-431c-add3-9278cfb424c7', 'created_by_demo', 'raw', 'bare', 529, None, 'queued', 0, '2', 0, 0, '0e12fbc7a63c44f1b078d96e0979be8e', 0)
  2015-05-13 17:24:22.630 17212 INFO eventlet.wsgi.server [req-163e1c08-3ea1-47a1-ae5b-e27f522c0453 8773a0d6190d4190808c7669d0d7adc6 0e12fbc7a63c44f1b078d96e0979be8e - - -] 127.0.0.1 - - [13/May/2015 17:24:22] "POST /images HTTP/1.1" 500 139 0.076708

  The v2 API is not affected; there the 32-character maxLength
  constraint is validated by jsonschema.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1454730/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : 

[Yahoo-eng-team] [Bug 1454804] [NEW] Dashboard jasmine_tests.py file is unnecessary

2015-05-13 Thread Matt Borland
Public bug reported:

openstack_dashboard/test/jasmine/jasmine_tests.py is unnecessary as the
JS source, spec, and template files are all specified in the
_10_project.py file.

The file may be completely removed.

** Affects: horizon
 Importance: Undecided
 Assignee: Matt Borland (palecrow)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1454804

Title:
  Dashboard jasmine_tests.py file is unnecessary

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  openstack_dashboard/test/jasmine/jasmine_tests.py is unnecessary as
  the JS source, spec, and template files are all specified in the
  _10_project.py file.

  The file may be completely removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1454804/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1454839] [NEW] cells: error if deleting an instance while host is being set

2015-05-13 Thread Andrew Laski
Public bug reported:

If a delete request gets past some checks to perform a local delete, but
then has a host set on it before the local delete finishes, it will fail
a db constraint check and spew this to the logs:

2015-05-13 11:14:58.666 ERROR nova.api.openstack [req-cb52db9a-d313-49c4-acce-b627b778ccd1 ListServersNegativeTestJSON-1942377486 ListServersNegativeTestJSON-625281866] Caught error: Object action destroy failed because: host changed
2015-05-13 11:14:58.666 15399 TRACE nova.api.openstack Traceback (most recent call last):
2015-05-13 11:14:58.666 15399 TRACE nova.api.openstack   File "/opt/stack/new/nova/nova/api/openstack/__init__.py", line 125, in __call__
2015-05-13 11:14:58.666 15399 TRACE nova.api.openstack     return req.get_response(self.application)
2015-05-13 11:14:58.666 15399 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1317, in send
2015-05-13 11:14:58.666 15399 TRACE nova.api.openstack     application, catch_exc_info=False)
2015-05-13 11:14:58.666 15399 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1281, in call_application
2015-05-13 11:14:58.666 15399 TRACE nova.api.openstack     app_iter = application(self.environ, start_response)
2015-05-13 11:14:58.666 15399 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2015-05-13 11:14:58.666 15399 TRACE nova.api.openstack     return resp(environ, start_response)
2015-05-13 11:14:58.666 15399 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py", line 639, in __call__
2015-05-13 11:14:58.666 15399 TRACE nova.api.openstack     return self._call_app(env, start_response)
2015-05-13 11:14:58.666 15399 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py", line 559, in _call_app
2015-05-13 11:14:58.666 15399 TRACE nova.api.openstack     return self._app(env, _fake_start_response)
2015-05-13 11:14:58.666 15399 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2015-05-13 11:14:58.666 15399 TRACE nova.api.openstack     return resp(environ, start_response)
2015-05-13 11:14:58.666 15399 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2015-05-13 11:14:58.666 15399 TRACE nova.api.openstack     return resp(environ, start_response)
2015-05-13 11:14:58.666 15399 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/routes/middleware.py", line 136, in __call__
2015-05-13 11:14:58.666 15399 TRACE nova.api.openstack     response = self.app(environ, start_response)
2015-05-13 11:14:58.666 15399 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2015-05-13 11:14:58.666 15399 TRACE nova.api.openstack     return resp(environ, start_response)
2015-05-13 11:14:58.666 15399 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
2015-05-13 11:14:58.666 15399 TRACE nova.api.openstack     resp = self.call_func(req, *args, **self.kwargs)
2015-05-13 11:14:58.666 15399 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
2015-05-13 11:14:58.666 15399 TRACE nova.api.openstack     return self.func(req, *args, **kwargs)
2015-05-13 11:14:58.666 15399 TRACE nova.api.openstack   File "/opt/stack/new/nova/nova/api/openstack/wsgi.py", line 756, in __call__
2015-05-13 11:14:58.666 15399 TRACE nova.api.openstack     content_type, body, accept)
2015-05-13 11:14:58.666 15399 TRACE nova.api.openstack   File "/opt/stack/new/nova/nova/api/openstack/wsgi.py", line 821, in _process_stack
2015-05-13 11:14:58.666 15399 TRACE nova.api.openstack     action_result = self.dispatch(meth, request, action_args)
2015-05-13 11:14:58.666 15399 TRACE nova.api.openstack   File "/opt/stack/new/nova/nova/api/openstack/wsgi.py", line 911, in dispatch
2015-05-13 11:14:58.666 15399 TRACE nova.api.openstack     return method(req=request, **action_args)
2015-05-13 11:14:58.666 15399 TRACE nova.api.openstack   File "/opt/stack/new/nova/nova/api/openstack/compute/servers.py", line 833, in delete
2015-05-13 11:14:58.666 15399 TRACE nova.api.openstack     self._delete(req.environ['nova.context'], req, id)
2015-05-13 11:14:58.666 15399 TRACE nova.api.openstack   File "/opt/stack/new/nova/nova/api/openstack/compute/servers.py", line 668, in _delete
2015-05-13 11:14:58.666 15399 TRACE nova.api.openstack     self.compute_api.delete(context, instance)
2015-05-13 11:14:58.666 15399 TRACE nova.api.openstack   File "/opt/stack/new/nova/nova/compute/cells_api.py", line 212, in delete
2015-05-13 11:14:58.666 15399 TRACE nova.api.openstack     self._handle_cell_delete(context, instance, 'delete')
2015-05-13 11:14:58.666 15399 TRACE

[Yahoo-eng-team] [Bug 1454823] [NEW] Error on compress: AttributeError: 'FileSystemStorage' object has no attribute 'prefix'

2015-05-13 Thread Doug Fish
Public bug reported:

I have found that in some stable/kilo environments it is not possible to
compress js/css.

drf@drf-VirtualBox:~/horizon$ .venv/bin/python manage.py compress --force
RemovedInDjango18Warning: 'The `firstof` template tag is changing to escape its arguments; the non-autoescaping version is deprecated. Load it from the `future` tag library to start using the new behavior.
WARNING:py.warnings:RemovedInDjango18Warning: 'The `firstof` template tag is changing to escape its arguments; the non-autoescaping version is deprecated. Load it from the `future` tag library to start using the new behavior.
RemovedInDjango18Warning: 'The `cycle` template tag is changing to escape its arguments; the non-autoescaping version is deprecated. Load it from the `future` tag library to start using the new behavior.
WARNING:py.warnings:RemovedInDjango18Warning: 'The `cycle` template tag is changing to escape its arguments; the non-autoescaping version is deprecated. Load it from the `future` tag library to start using the new behavior.
Found 'compress' tags in:
/home/drf/horizon/horizon/templates/horizon/_scripts.html
/home/drf/horizon/horizon/templates/horizon/_conf.html
/home/drf/horizon/openstack_dashboard/templates/_stylesheets.html
Compressing... CommandError: An error occured during rendering /home/drf/horizon/openstack_dashboard/templates/_stylesheets.html: Error parsing block:

[the entire output of the file openstack_dashboard/static/dashboard/scss/horizon.scss is dumped here, I've removed it]

From string u'// Pure CSS Vendor\n@import "/horizon/lib/bootstrap'...:0
Traceback:
  File "/home/drf/horizon/.venv/local/lib/python2.7/site-packages/scss/__init__.py", line 498, in manage_children
    self._manage_children_impl(rule, scope)
  File "/home/drf/horizon/.venv/local/lib/python2.7/site-packages/scss/__init__.py", line 548, in _manage_children_impl
    self._do_import(rule, scope, block)
  File "/home/drf/horizon/.venv/local/lib/python2.7/site-packages/django_pyscss/scss.py", line 118, in _do_import
    source_file = self._find_source_file(name, relative_to)
  File "/home/drf/horizon/.venv/local/lib/python2.7/site-packages/django_pyscss/scss.py", line 86, in _find_source_file
    full_filename, storage = self.get_file_and_storage(name)
  File "/home/drf/horizon/.venv/local/lib/python2.7/site-packages/django_pyscss/scss.py", line 53, in get_file_and_storage
    return self.get_file_from_finders(filename)
  File "/home/drf/horizon/.venv/local/lib/python2.7/site-packages/django_pyscss/scss.py", line 46, in get_file_from_finders
    for file_and_storage in find_all_files(filename):
  File "/home/drf/horizon/.venv/local/lib/python2.7/site-packages/django_pyscss/utils.py", line 16, in find_all_files
    if fnmatch.fnmatchcase(os.path.join(storage.prefix or '', path),
AttributeError: 'FileSystemStorage' object has no attribute 'prefix'


I was able to force this to happen in my development environment by putting 
this update in requirements.txt
 Babel>=1.3
-Django>=1.4.2,<1.8
+#Django>=1.4.2,<1.8
+Django==1.7.7
 Pint>=0.5  # BSD
 django_compressor>=1.4
 django_openstack_auth>=1.1.7,!=1.1.8,<1.3.0
-django-pyscss>=1.0.3,<2.0.0  # BSD License (2 clause)
+#django-pyscss>=1.0.3,<2.0.0  # BSD License (2 clause)
+django-pyscss==1.0.3
 eventlet>=0.16.1,!=0.17.0

Note that these are in bounds (but not the latest) requirements.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1454823

Title:
  Error on compress:  AttributeError: 'FileSystemStorage' object has no
  attribute 'prefix'

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I have found that in some stable/kilo environments it is not possible
  to compress js/css.

  drf@drf-VirtualBox:~/horizon$ .venv/bin/python manage.py compress --force
  RemovedInDjango18Warning: 'The `firstof` template tag is changing to escape its arguments; the non-autoescaping version is deprecated. Load it from the `future` tag library to start using the new behavior.
  WARNING:py.warnings:RemovedInDjango18Warning: 'The `firstof` template tag is changing to escape its arguments; the non-autoescaping version is deprecated. Load it from the `future` tag library to start using the new behavior.
  RemovedInDjango18Warning: 'The `cycle` template tag is changing to escape its arguments; the non-autoescaping version is deprecated. Load it from the `future` tag library to start using the new behavior.
  WARNING:py.warnings:RemovedInDjango18Warning: 'The `cycle` template tag is changing to escape its arguments; the non-autoescaping version is deprecated. Load it from the `future` tag library to start using the new behavior.
  Found 'compress' tags in:
  /home/drf/horizon/horizon/templates/horizon/_scripts.html

[Yahoo-eng-team] [Bug 1453779] Re: Performing rescue operation on a volume backed instance fails.

2015-05-13 Thread Sylvain Bauza
It's not a bug, as explained there: there is currently no way to rescue
a volume-backed instance, even with the latest code.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1453779

Title:
  Performing rescue operation on a volume backed instance fails.

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When performing a rescue operation on an instance booted from volume,
  it gives the error "Cannot rescue a volume-backed instance", code 400.
  Steps to reproduce
  1. Boot a VM from volume
  curl -g -i -X POST https://10.0.0.5:8774/v2/ee61323896a34bea9c9a5623fbb6f239/os-volumes_boot -H "X-Auth-Token: omitted" -d '{"server": {"name": "TestVm", "imageRef": "", "block_device_mapping_v2": [{"boot_index": 0, "uuid": "5d246189-a666-470c-8cee-36ee489cbd9e", "volume_size": 6, "source_type": "image", "destination_type": "volume", "delete_on_termination": 1}], "flavorRef": "da9ba7b5-be67-4a62-bb35-a362e05ba2f2", "max_count": 1, "min_count": 1, "networks": [{"uuid": "b5220eb2-e105-4ae0-8fc7-75a7cd468a40"}]}}'

  {"server": {"security_groups": [{"name": "default"}], "OS-DCF:diskConfig": "MANUAL", "id": "e436453d-5164-4f36-a7b0-617b63718759", "links": [{"href": "http://127.0.0.1:18774/v2/ee61323896a34bea9c9a5623fbb6f239/servers/e436453d-5164-4f36-a7b0-617b63718759", "rel": "self"}, {"href": "http://127.0.0.1:18774/ee61323896a34bea9c9a5623fbb6f239/servers/e436453d-5164-4f36-a7b0-617b63718759", "rel": "bookmark"}], "adminPass": "6zGefA3nzNiv"}}

  
  2. Run rescue operation on this instance.
  curl -i 'https://10.0.0.5:8774/v2/ee61323896a34bea9c9a5623fbb6f239/servers/e436453d-5164-4f36-a7b0-617b63718759/action' -X POST -H 'X-Auth-Token: omitted' -d '{"rescue": {"adminPass": "p8uQwFZ8qQan"}}'
  HTTP/1.1 400 Bad Request
  Date: Mon, 11 May 2015 05:20:57 GMT
  Server: Apache/2.4.7 (Ubuntu)
  Access-Control-Allow-Origin: *
  Access-Control-Allow-Headers: Accept, Content-Type, X-Auth-Token, 
X-Subject-Token
  Access-Control-Expose-Headers: Accept, Content-Type, X-Auth-Token, 
X-Subject-Token
  Access-Control-Allow-Methods: GET POST OPTIONS PUT DELETE PATCH
  Content-Length: 147
  Content-Type: application/json; charset=UTF-8
  X-Compute-Request-Id: req-6d671d9d-475c-41a3-894e-3e72676e1144
  Via: 1.1 10.0.05:8774
  Connection: close

  {"badRequest": {"message": "Instance e436453d-5164-4f36-a7b0-617b63718759 cannot be rescued: Cannot rescue a volume-backed instance", "code": 400}}

  The above issue is observed in Icehouse.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1453779/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1454153] Re: nova.openstack.common.loopingcall run outlasted interval

2015-05-13 Thread Sylvain Bauza
I don't think it's a problem; it just means the method took more time
than the periodic interval. Do you see it repeatedly?
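
For illustration, the warning only reports that the callback ran longer
than the configured interval; the check is essentially the following
(plain Python sketch, not nova's actual looping-call helper):

    import time

    interval = 1.0

    def task():
        time.sleep(1.5)  # stand-in for slow periodic work

    start = time.time()
    task()
    elapsed = time.time() - start
    if elapsed > interval:
        # the condition behind 'run outlasted interval by N sec'
        print('task run outlasted interval by %.2f sec'
              % (elapsed - interval))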

** Changed in: nova
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1454153

Title:
  nova.openstack.common.loopingcall run outlasted interval

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  In the latest nova code, the looping call seems to behave abnormally
  in a way that breaks our code; the log is:
  2015-05-12 03:52:26.905 2594 WARNING nova.openstack.common.loopingcall [req-5d2fe3c6-6ee5-4dad-9e40-2a3ba3f33434 - - - - -] task <function __swallowed at 0x59635f0> run outlasted interval by 5.72 sec

  That is to say, the looping call function doesn't seem to work; is
  there any problem here?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1454153/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1454846] [NEW] When prep_resize() issues a retry it calls conductor resize_instance() with flavor as a primitive

2015-05-13 Thread Hans Lindgren
Public bug reported:

Since the server method this ends up calling, migrate_server() has been
changed to take a flavor object, this only works for as long as the
compat code in migrate_server() is still in place. When conductor
compute task rpcapi version is major bumped and the compat code is
removed, retries will start to fail if this is not fixed first.

** Affects: nova
 Importance: Undecided
 Assignee: Hans Lindgren (hanlind)
 Status: New

** Summary changed:

- When prep_resize() issues a retry it calls conductor migrate_server() with 
flavor as a primitive
+ When prep_resize() issues a retry it calls conductor resize_instance() with 
flavor as a primitive

** Description changed:

- Since conductor migrate_server() has been changed to take a flavor
- object, this only works for as long as the compat code in
- migrate_server() is still in place. When conductor compute task rpcapi
- version is major bumped and the compat code is removed, retries will
- start to fail if this is not fixed first.
+ Since the server method this ends up calling, migrate_server() has been
+ changed to take a flavor object, this only works for as long as the
+ compat code in migrate_server() is still in place. When conductor
+ compute task rpcapi version is major bumped and the compat code is
+ removed, retries will start to fail if this is not fixed first.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1454846

Title:
  When prep_resize() issues a retry it calls conductor resize_instance()
  with flavor as a primitive

Status in OpenStack Compute (Nova):
  New

Bug description:
  Since the server method this ends up calling, migrate_server() has
  been changed to take a flavor object, this only works for as long as
  the compat code in migrate_server() is still in place. When conductor
  compute task rpcapi version is major bumped and the compat code is
  removed, retries will start to fail if this is not fixed first.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1454846/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453675] Re: Live migration fails

2015-05-13 Thread Sylvain Bauza
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1453675

Title:
  Live migration fails

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  1: Exact Version (Latest apt-get dist-upgrade with Kilo repositories for 
ubuntu 14.04.02)
  ii  nova-api1:2015.1~rc1-0ubuntu1~cloud0  
all  OpenStack Compute - API frontend
  ii  nova-cert   1:2015.1~rc1-0ubuntu1~cloud0  
all  OpenStack Compute - certificate management
  ii  nova-common 1:2015.1~rc1-0ubuntu1~cloud0  
all  OpenStack Compute - common files
  ii  nova-conductor  1:2015.1~rc1-0ubuntu1~cloud0  
all  OpenStack Compute - conductor service
  ii  nova-consoleauth1:2015.1~rc1-0ubuntu1~cloud0  
all  OpenStack Compute - Console Authenticator
  ii  nova-novncproxy 1:2015.1~rc1-0ubuntu1~cloud0  
all  OpenStack Compute - NoVNC proxy
  ii  nova-scheduler  1:2015.1~rc1-0ubuntu1~cloud0  
all  OpenStack Compute - virtual machine scheduler
  ii  python-nova 1:2015.1~rc1-0ubuntu1~cloud0  
all  OpenStack Compute Python libraries
  ii  python-novaclient   1:2.22.0-0ubuntu1~cloud0  
all  client library for OpenStack Compute API

  2: Log files
  2015-05-11 09:26:05.515 25372 DEBUG nova.compute.api [req-d3f3807f-3aa9-472e-b076-3987372f8943 fb2aaa72c412443fafe9d483ecb396c5 3d5adc9afa334a2097fc4374fe3c96e1 - - -] [instance: 9cf946cf-8e0a-4e4b-8651-514251f7c2de] Going to try to live migrate instance to compute2 live_migrate /usr/lib/python2.7/dist-packages/nova/compute/api.py:3224
  2015-05-11 09:26:05.607 25372 INFO oslo_messaging._drivers.impl_rabbit [req-d3f3807f-3aa9-472e-b076-3987372f8943 fb2aaa72c412443fafe9d483ecb396c5 3d5adc9afa334a2097fc4374fe3c96e1 - - -] Connecting to AMQP server on controller:5672
  2015-05-11 09:26:05.619 25372 INFO oslo_messaging._drivers.impl_rabbit [req-d3f3807f-3aa9-472e-b076-3987372f8943 fb2aaa72c412443fafe9d483ecb396c5 3d5adc9afa334a2097fc4374fe3c96e1 - - -] Connected to AMQP server on controller:5672
  2015-05-11 09:26:05.623 25372 INFO oslo_messaging._drivers.impl_rabbit [req-d3f3807f-3aa9-472e-b076-3987372f8943 fb2aaa72c412443fafe9d483ecb396c5 3d5adc9afa334a2097fc4374fe3c96e1 - - -] Connecting to AMQP server on controller:5672
  2015-05-11 09:26:05.636 25372 INFO oslo_messaging._drivers.impl_rabbit [req-d3f3807f-3aa9-472e-b076-3987372f8943 fb2aaa72c412443fafe9d483ecb396c5 3d5adc9afa334a2097fc4374fe3c96e1 - - -] Connected to AMQP server on controller:5672
  2015-05-11 09:26:05.776 25372 ERROR nova.api.openstack.compute.contrib.admin_actions [req-d3f3807f-3aa9-472e-b076-3987372f8943 fb2aaa72c412443fafe9d483ecb396c5 3d5adc9afa334a2097fc4374fe3c96e1 - - -] Live migration of instance 9cf946cf-8e0a-4e4b-8651-514251f7c2de to host compute2 failed
  2015-05-11 09:26:05.776 25372 TRACE nova.api.openstack.compute.contrib.admin_actions Traceback (most recent call last):
  2015-05-11 09:26:05.776 25372 TRACE nova.api.openstack.compute.contrib.admin_actions   File "/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/contrib/admin_actions.py", line 331, in _migrate_live
  2015-05-11 09:26:05.776 25372 TRACE nova.api.openstack.compute.contrib.admin_actions     disk_over_commit, host)
  2015-05-11 09:26:05.776 25372 TRACE nova.api.openstack.compute.contrib.admin_actions   File "/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 219, in inner
  2015-05-11 09:26:05.776 25372 TRACE nova.api.openstack.compute.contrib.admin_actions     return function(self, context, instance, *args, **kwargs)
  2015-05-11 09:26:05.776 25372 TRACE nova.api.openstack.compute.contrib.admin_actions   File "/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 247, in _wrapped
  2015-05-11 09:26:05.776 25372 TRACE nova.api.openstack.compute.contrib.admin_actions     return fn(self, context, instance, *args, **kwargs)
  2015-05-11 09:26:05.776 25372 TRACE nova.api.openstack.compute.contrib.admin_actions   File "/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 200, in inner
  2015-05-11 09:26:05.776 25372 TRACE nova.api.openstack.compute.contrib.admin_actions     return f(self, context, instance, *args, **kw)
  2015-05-11 09:26:05.776 25372 TRACE nova.api.openstack.compute.contrib.admin_actions   File "/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 3234, in live_migrate
  2015-05-11 09:26:05.776 25372 TRACE nova.api.openstack.compute.contrib.admin_actions     disk_over_commit=disk_over_commit)
  2015-05-11 09:26:05.776 25372 TRACE nova.api.openstack.compute.contrib.admin_actions   File

[Yahoo-eng-team] [Bug 1454880] [NEW] Angular source re-organization

2015-05-13 Thread Tyr Johanson
Public bug reported:

The Angular source will benefit from re-organization to
1) align module names with their directory structure
2) make it clear what is framework code (reusable utilities), what is core 
business logic of the horizon UI, and what is code that bootstraps the 
application.

See https://review.openstack.org/#/c/176152/ for an example of the full
set of proposed changes.  The ideas in that patch are well supported by
cores and the PTL, however that patch is too large to be easily reviewed
and merged.  Instead, create a series of smaller, dependent patches that
incrementally make the desired improvements.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1454880

Title:
  Angular source re-organization

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The Angular source will benefit from re-organization to
  1) align module names with their directory structure
  2) make it clear what is framework code (reusable utilities), what is core 
business logic of the horizon UI, and what is code that bootstraps the 
application.

  See https://review.openstack.org/#/c/176152/ for an example of the
  full set of proposed changes.  The ideas in that patch are well
  supported by cores and the PTL, however that patch is too large to be
  easily reviewed and merged.  Instead, create a series of smaller,
  dependent patches that incrementally make the desired improvements.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1454880/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1454772] [NEW] VPNaaS: tox -ecover fails

2015-05-13 Thread German Eichberger
Public bug reported:

error: option --coverage-package-name not recognized
ERROR: InvocationError: '/tmp/neutron-vpnaas/.tox/cover/bin/python setup.py testr --coverage --coverage-package-name=neutron_vpnaas --testr-args='
___________________________________ summary ____________________________________
ERROR:   cover: commands failed

There is a proposed fix in https://review.openstack.org/#/c/182370/

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: vpnaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1454772

Title:
  VPNaaS: tox -ecover fails

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  error: option --coverage-package-name not recognized
  ERROR: InvocationError: '/tmp/neutron-vpnaas/.tox/cover/bin/python setup.py testr --coverage --coverage-package-name=neutron_vpnaas --testr-args='
  ___________________________________ summary ____________________________________
  ERROR:   cover: commands failed

  There is a proposed fix in https://review.openstack.org/#/c/182370/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1454772/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1454901] [NEW] OS install failed in the VM created using an ISO image

2015-05-13 Thread junxu
Public bug reported:

When we install an OS in a VM created from an ISO image, the install fails.

Steps to reproduce:
1.  create an ISO image,   glance image-create --name ubuntu.iso --disk-format iso --container-format bare  --file ubuntu-14.04.2-server-amd64.iso --progress
2. create a vm using this ISO image
3. install OS in this vm.

We think nova generates a wrong libvirt config; below are two examples:
1.  For a VM with local storage, it can't detect a disk when installing the OS. The partial libvirt.xml is as follows:

<devices>
  <disk type="file" device="cdrom">
    <driver name="qemu" type="qcow2" cache="none"/>
    <source file="/opt/stack/data/nova/instances/9b3c730a-8391-4b11-8e07-dcd0981fbc56/disk"/>
    <target bus="ide" dev="hda"/>
  </disk>
  <disk type="file" device="cdrom">
    <driver name="qemu" type="raw" cache="none"/>
    <source file="/opt/stack/data/nova/instances/9b3c730a-8391-4b11-8e07-dcd0981fbc56/disk.config"/>
    <target bus="ide" dev="hdd"/>
  </disk>

2. For a volume-backed VM, the installer can't detect the CD-ROM during OS
installation. The relevant part of libvirt.xml is as follows:

<devices>
  <disk type="file" device="disk">
    <driver name="qemu" type="qcow2" cache="none"/>
    <source file="/var/lib/nova/instances/95a38caf-9f12-4516-8166-6c5b572b4734/disk"/>
    <target bus="virtio" dev="vda"/>
  </disk>
  <disk type="file" device="disk">
    <driver name="qemu" type="raw" cache="none"/>
    <source file="/var/lib/nova/instances/95a38caf-9f12-4516-8166-6c5b572b4734/disk.config"/>
    <target bus="virtio" dev="vdz"/>
  </disk>

** Affects: nova
 Importance: Undecided
 Status: New


[Yahoo-eng-team] [Bug 1454921] [NEW] OVS DVR: KeyError: 'gateway_mac'

2015-05-13 Thread YAMAMOTO Takashi
Public bug reported:

get_subnet_for_dvr RPC returns {} on error.
OVS agent, namely _bind_centralized_snat_port_on_dvr_subnet, doesn't handle the 
case gracefully.

eyJzZWFyY2giOiJtZXNzYWdlOiBcIktleUVycm9yOiAnZ2F0ZXdheV9tYWMnXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6Ijg2NDAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQzMTU3MzExODcxNH0

[req-4c481831-bcb8-47db-9487-e29eb396e871 None None] Error while processing VIF 
ports
2015-05-13 15:46:44.026 5536 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Traceback (most recent call 
last):
2015-05-13 15:46:44.026 5536 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/opt/stack/new/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py,
 line 1641, in rpc_loop
2015-05-13 15:46:44.026 5536 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent ovs_restarted)
2015-05-13 15:46:44.026 5536 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/opt/stack/new/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py,
 line 1411, in process_network_ports
2015-05-13 15:46:44.026 5536 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent devices_added_updated, 
ovs_restarted))
2015-05-13 15:46:44.026 5536 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/opt/stack/new/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py,
 line 1318, in treat_devices_added_or_updated
2015-05-13 15:46:44.026 5536 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/opt/stack/new/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py,
 line 1220, in treat_vif_port
2015-05-13 15:46:44.026 5536 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent fixed_ips, 
device_owner, ovs_restarted)
2015-05-13 15:46:44.026 5536 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/opt/stack/new/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py,
 line 723, in port_bound
2015-05-13 15:46:44.026 5536 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent device_owner)
2015-05-13 15:46:44.026 5536 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/opt/stack/new/neutron/neutron/plugins/openvswitch/agent/ovs_dvr_neutron_agent.py,
 line 671, in bind_port_to_dvr
2015-05-13 15:46:44.026 5536 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/opt/stack/new/neutron/neutron/plugins/openvswitch/agent/ovs_dvr_neutron_agent.py,
 line 641, in _bind_centralized_snat_port_on_dvr_subnet
2015-05-13 15:46:44.026 5536 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
(subnet_info['gateway_mac'],
2015-05-13 15:46:44.026 5536 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent KeyError: 'gateway_mac'
2015-05-13 15:46:44.026 5536 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent
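
A minimal sketch of the kind of guard that would avoid this traceback,
assuming only what the report states (the RPC returns {} on error); the
function and key names follow the traceback above and are otherwise
illustrative:

import logging

LOG = logging.getLogger(__name__)

def get_dvr_gateway_mac(plugin_rpc, context, subnet_uuid):
    # Treat an empty get_subnet_for_dvr reply as "subnet info unavailable"
    # instead of indexing into it and raising KeyError: 'gateway_mac'.
    subnet_info = plugin_rpc.get_subnet_for_dvr(context, subnet_uuid)
    if not subnet_info or 'gateway_mac' not in subnet_info:
        LOG.error("DVR: no subnet info for subnet %s, skipping csnat "
                  "port binding", subnet_uuid)
        return None
    return subnet_info['gateway_mac']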

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1454921

Title:
  OVS DVR: KeyError: 'gateway_mac'

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  get_subnet_for_dvr RPC returns {} on error.
  OVS agent, namely _bind_centralized_snat_port_on_dvr_subnet, doesn't handle 
the case gracefully.

  
eyJzZWFyY2giOiJtZXNzYWdlOiBcIktleUVycm9yOiAnZ2F0ZXdheV9tYWMnXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6Ijg2NDAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQzMTU3MzExODcxNH0

  [req-4c481831-bcb8-47db-9487-e29eb396e871 None None] Error while processing 
VIF ports
  2015-05-13 15:46:44.026 5536 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Traceback (most recent call 
last):
  2015-05-13 15:46:44.026 5536 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/opt/stack/new/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py,
 line 1641, in rpc_loop
  2015-05-13 15:46:44.026 5536 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent ovs_restarted)
  2015-05-13 15:46:44.026 5536 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/opt/stack/new/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py,
 line 1411, in process_network_ports
  2015-05-13 15:46:44.026 5536 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent devices_added_updated, 
ovs_restarted))
  2015-05-13 15:46:44.026 5536 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/opt/stack/new/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py,
 line 1318, in treat_devices_added_or_updated
  2015-05-13 15:46:44.026 5536 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/opt/stack/new/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py,
 line 1220, in treat_vif_port
  2015-05-13 15:46:44.026 5536 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 

[Yahoo-eng-team] [Bug 1454455] Re: nova doesn't log to syslog

2015-05-13 Thread Davanum Srinivas (DIMS)
** Also affects: oslo.log
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1454455

Title:
  nova doesn't log to syslog

Status in OpenStack Compute (Nova):
  New
Status in Logging configuration library for OpenStack:
  New

Bug description:
  Logs from nova are not recorded when using syslog. Neutron logging
  works fine using the same rsyslog service. I've tried with debug and
  verbose enabled and disabled.

  
  1) Nova version:
   1:2014.2.2-0ubuntu1~cloud0 on Ubuntu 14.04

  2) Relevant log files:
  No relevant log files, as that is the problem

  3) Reproduction steps:
a) Set the following in nova.conf 
 logdir=/var/log/nova
b) Restart nova services
c) Confirm that logs are created in /var/log/nova
d) Remove logdir and add the following to nova.conf
use_syslog=true
syslog_log_facility=LOG_LOCAL0
e) Restart nova services
    f) Nova's logs do not show up in /var/log/syslog (a syslog sanity check 
is sketched below)
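
  A quick sanity check (a hedged sketch using only the Python standard
  library; it does not involve nova or oslo.log) to confirm rsyslog itself
  accepts LOG_LOCAL0 messages on this host:

    import logging
    import logging.handlers

    # Send one test message to the local syslog socket on the LOCAL0
    # facility. If this does not reach /var/log/syslog either, the problem
    # is in the rsyslog configuration rather than in nova.
    handler = logging.handlers.SysLogHandler(
        address='/dev/log',
        facility=logging.handlers.SysLogHandler.LOG_LOCAL0)
    logger = logging.getLogger('syslog-check')
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    logger.info('test message from syslog-check')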

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1454455/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1454792] [NEW] Inconsistency with authorization in functional test environment

2015-05-13 Thread Inessa Vasilevskaya
Public bug reported:

While writing a functional test I stumbled on the following
inconsistency:

When glance-api is launched with the default flavor (no authentication) and
glance-registry with the fakeauth flavor (or any other flavor requiring a
user token), any CRUD operation via the api without a valid token should
return 401, since the user receives a 401 from the glance registry.

But the expected behaviour is not observed with the glance v2 api. The user
can still perform any operation without supplying a token in the headers.

I covered the issue in a test: https://review.openstack.org/#/c/180615/
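
A hedged reproduction sketch (the endpoint and port are assumptions for a
typical devstack setup; the test linked above is the authoritative
reproduction):

import requests

# With glance-api at the "default" flavor and glance-registry at
# "fakeauth", listing images with no X-Auth-Token header should be
# rejected with 401, but the v2 API still answers.
resp = requests.get('http://127.0.0.1:9292/v2/images')
print(resp.status_code)  # expected: 401, observed here: 200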

** Affects: glance
 Importance: Undecided
 Status: New


-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1454792

Title:
  Inconsistency with authorization in functional test environment

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  While writing a functional test I stumbled on the following
  inconsistency:

  When glance-api is launched with the default flavor (no authentication)
  and glance-registry with the fakeauth flavor (or any other flavor
  requiring a user token), any CRUD operation via the api without a valid
  token should return 401, since the user receives a 401 from the glance
  registry.

  But the expected behaviour is not observed with the glance v2 api. The
  user can still perform any operation without supplying a token in the
  headers.

  I covered the issue in a test:
  https://review.openstack.org/#/c/180615/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1454792/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1454515] [NEW] Property instance.disable_terminate is always False and never actually used.

2015-05-13 Thread Zhenyu Zheng
Public bug reported:

The property instance.disable_terminate is initialized using:

disable_terminate = Column(Boolean(), default=False)
in \nova\db\sqlalchemy\models.py

This property is then used in

1) compute\api:

def _delete(self, context, instance, delete_type, cb, **instance_attrs):
if instance.disable_terminate:
LOG.info(_LI('instance termination disabled'),
 instance=instance)
return

2) nova\api\ec2.py:

def _format_attr_disable_api_termination(instance, result):
result['disableApiTermination'] = instance.disable_terminate

Since no API is provided to modify this property, it is always False.

There are two ways to fix this:

1) Add support for modifying this property in the servers/create and
servers/update APIs, to make it actually functional.

2) Remove this property and the whole logic in _delete(), and modify
nova\api\ec2.py to set result['disableApiTermination'] = False (a minimal
sketch of this option follows).
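
A minimal sketch of what option 2) would look like on the ec2 side (this
follows directly from the text above; it is an illustration, not a merged
change):

def _format_attr_disable_api_termination(instance, result):
    # With disable_terminate removed, termination is always reported as
    # enabled via the EC2 attribute.
    result['disableApiTermination'] = False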

** Affects: nova
 Importance: Undecided
 Status: New


-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1454515

Title:
  Property instance.disable_terminate is always False and never
  actually used.

Status in OpenStack Compute (Nova):
  New

Bug description:
  The property instance.disable_terminate is initialized using:

  disable_terminate = Column(Boolean(), default=False)
  in \nova\db\sqlalchemy\models.py

  This property is then used in

  1) compute\api:

  def _delete(self, context, instance, delete_type, cb, **instance_attrs):
  if instance.disable_terminate:
  LOG.info(_LI('instance termination disabled'),
   instance=instance)
  return

  2) nova\api\ec2.py:

  def _format_attr_disable_api_termination(instance, result):
  result['disableApiTermination'] = instance.disable_terminate

  Since no API is provided to modify this property, it is always False.

  There are two ways to fix this:

  1) Add support for modifying this property in the servers/create and
  servers/update APIs, to make it actually functional.

  2) Remove this property and the whole logic in _delete(), and modify
  nova\api\ec2.py to set result['disableApiTermination'] = False.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1454515/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1454519] [NEW] iSCSI Multipath, multipath id is incorrect parsed when io_setup failed error appears in multipath -ll output

2015-05-13 Thread Tina Tang
Public bug reported:

We detected this during our testing. Sometimes an error string appears at
the beginning of the output of multipath -ll <device>. See the log below:

2015-05-11 05:59:53.005 DEBUG oslo_concurrency.processutils 
[req-30af6607-6e51-487e-beb2-3dbea8ee9fac admin admin] CMD sudo nova-rootwrap 
/etc/nova/rootwrap.conf multipath -ll /dev/sdfb returned: 0 in 0.912s execute 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:225
2015-05-11 05:59:53.005 DEBUG nova.virt.libvirt.volume 
[req-30af6607-6e51-487e-beb2-3dbea8ee9fac admin admin] multipath ['-ll', 
u'/dev/sdfb']: stdout=May 11 05:59:52 | io_setup failed
3600601602ba0340035339627c3f7e411 dm-44 DGC,VRAID
size=5.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=130 status=active
| `- 34:0:0:193 sdfb 129:208 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 35:0:0:193 sdfc 129:224 active ready running
  `- 36:0:0:193 sdfd 129:240 active ready running

When this error appears, the multipath id of a device is incorrectly
parsed. In the above instance, the multipath device name was set to
/dev/mapper/May, which is incorrect.

 nova/virt/libvirt/volume.py
 def _get_multipath_device_name(self, single_path_device):
     device = os.path.realpath(single_path_device)

     out = self._run_multipath(['-ll',
                                device],
                               check_exit_code=[0, 1])[0]
     mpath_line = [line for line in out.splitlines()
                   if "scsi_id" not in line]  # ignore udev errors
     if len(mpath_line) > 0 and len(mpath_line[0]) > 0:
         return "/dev/mapper/%s" % mpath_line[0].split(" ")[0]

     return None
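
One hedged way to make the parsing stricter, assuming multipath map lines
always begin with a WWID-like hex token (as in the
3600601602ba0340035339627c3f7e411 line above) while error lines such as
"May 11 05:59:52 | io_setup failed" do not:

import re

# Match only lines that begin with a WWID-style identifier; timestamped
# multipathd error lines no longer qualify.
WWID_LINE = re.compile(r'^[0-9a-fA-F]{16,}\b')

def parse_multipath_device_name(multipath_output):
    for line in multipath_output.splitlines():
        if WWID_LINE.match(line):
            return '/dev/mapper/%s' % line.split()[0]
    return None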

stack@openstack-performance:~/tina/nova_iscsi_mp/nova$ git log -1
commit f4504f3575b35ec14390b4b678e441fcf953f47b
Merge: 3f21f60 5fbd852
Author: Jenkins jenk...@review.openstack.org
Date: Tue May 12 22:46:43 2015 +

Merge Remove db layer hard-code permission checks for
network_get_all_by_host

** Affects: nova
 Importance: Undecided
 Status: New


[Yahoo-eng-team] [Bug 1454512] [NEW] Device for other volume is deleted unexpected during volume detach when iscsi multipath is used

2015-05-13 Thread Tina Tang
Public bug reported:

We found this issue while testing volume detachment when iSCSI multipath
is used. When the same iSCSI portal and IQN are shared by multiple LUNs,
devices belonging to other volumes may be deleted unexpectedly. This is
found both in Kilo and in the latest code.

For example, the devices under /dev/disk/by-path may look like below when
LUNs 23 and 231 come from the same storage system and the same iSCSI portal
and IQN are used (output of ls /dev/disk/by-path):
ip-192.168.3.50:3260-iscsi-iqna-lun-23
ip-192.168.3.50:3260-iscsi-iqna-lun-231
ip-192.168.3.51:3260-iscsi-iqnb-lun-23
ip-192.168.3.51:3260-iscsi-iqnb-lun-231

When we try to detach the volume corresponding to LUN 23 from the host, we
noticed that the devices for LUN 231 are also deleted, which may make its
data unavailable.

Why does this happen? After digging into the nova code, below is the clue:

nova/virt/libvirt/volume.py
770 def _delete_mpath(self, iscsi_properties, multipath_device, ips_iqns):
771     entries = self._get_iscsi_devices()
772     # Loop through ips_iqns to construct all paths
773     iqn_luns = []
774     for ip, iqn in ips_iqns:
775         iqn_lun = '%s-lun-%s' % (iqn,
776                                  iscsi_properties.get('target_lun', 0))
777         iqn_luns.append(iqn_lun)
778     for dev in ['/dev/disk/by-path/%s' % dev for dev in entries]:
779         for iqn_lun in iqn_luns:
780             if iqn_lun in dev:  # <== incorrect: a device for LUN 231 also makes this True
781                 self._delete_device(dev)
782
783     self._rescan_multipath()

Due to the incorrect check on line 780, detaching LUN xx also deletes devices
for other LUNs whose numbers start with xx, such as xxy and xxz. We could use
dev.endswith(iqn_lun) to avoid it, as sketched below.
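
A sketch of the suggested endswith() fix in isolation (names mirror the
snippet above):

def devices_to_delete(entries, iqn_luns):
    # Keep only devices whose path ends with the exact -lun-N suffix, so
    # detaching lun-23 no longer matches lun-231.
    devs = ['/dev/disk/by-path/%s' % entry for entry in entries]
    return [dev for dev in devs
            if any(dev.endswith(iqn_lun) for iqn_lun in iqn_luns)]

With the listing above, iqn_luns ending in -lun-23 would select only the two
lun-23 paths and leave both lun-231 devices untouched.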
===
stack@openstack-performance:~/tina/nova_iscsi_mp/nova$ git log -1
commit f4504f3575b35ec14390b4b678e441fcf953f47b
Merge: 3f21f60 5fbd852
Author: Jenkins jenk...@review.openstack.org
Date: Tue May 12 22:46:43 2015 +

Merge Remove db layer hard-code permission checks for
network_get_all_by_host

** Affects: nova
 Importance: Undecided
 Assignee: Tina Tang (tina-tang)
 Status: New


[Yahoo-eng-team] [Bug 1454531] [NEW] list_user_projects() can't get filtered by 'domain_id'.

2015-05-13 Thread DWang
Public bug reported:

Here is our use case: we want our tenant domain admin (e.g., Bob) to have
this capability: Bob (domain-scoped) can list the projects that a given
user has roles on, and the projects Bob gets should belong only to Bob's
scoping domain.

When we read the rule in policy.v3cloudsample.json for
"identity:list_user_projects", we were happy to see it matches what we want:
{...
"admin_and_matching_domain_id": "rule:admin_required and domain_id:%(domain_id)s",
"identity:list_user_projects": "rule:owner or rule:admin_and_matching_domain_id",
...}

I thought we could use this API with the query string 'domain_id', so that
Bob can query projects only within his scoping domain, but it doesn't work:
the @controller.filterprotected('enabled', 'name') decorator on
list_user_projects() excludes the possibility of taking 'domain_id' as a
query string, even though it is useful to us and referenced in the policy
file.
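
If 'domain_id' is indeed a legitimate filter here, the change could be as
small as extending the decorator's filter list. A hedged sketch against the
decorator quoted above (the handler body is elided; the exact signature in
the tree may differ):

@controller.filterprotected('domain_id', 'enabled', 'name')
def list_user_projects(self, context, filters, user_id):
    ...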

** Affects: keystone
 Importance: Undecided
 Assignee: DWang (darren-wang)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => DWang (darren-wang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1454531

Title:
  list_user_projects() can't get filtered by 'domain_id'.

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Here is our use case: we want our tenant domain admin (e.g., Bob) to
  have this capability: Bob (domain-scoped) can list the projects that a
  given user has roles on, and the projects Bob gets should belong only to
  Bob's scoping domain.

  When we read the rule in policy.v3cloudsample.json for
  "identity:list_user_projects", we were happy to see it matches what we
  want:
  {...
  "admin_and_matching_domain_id": "rule:admin_required and domain_id:%(domain_id)s",
  "identity:list_user_projects": "rule:owner or rule:admin_and_matching_domain_id",
  ...}

  I thought we could use this API with the query string 'domain_id', so
  that Bob can query projects only within his scoping domain, but it
  doesn't work: the @controller.filterprotected('enabled', 'name')
  decorator on list_user_projects() excludes the possibility of taking
  'domain_id' as a query string, even though it is useful to us and
  referenced in the policy file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1454531/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp