[Yahoo-eng-team] [Bug 1687871] [NEW] update the description of hypervisor statistics response

2017-05-03 Thread LiChunlin
Public bug reported:

https://developer.openstack.org/api-ref/compute/?expanded=show-
hypervisor-statistics-detail#list-servers

The descriptions in the hypervisor statistics response read as if the values were
for a single hypervisor, but they actually cover all hypervisors. I will therefore
update the descriptions of the following fields (a sample response is sketched
after the list):
disk_available_least
free_disk_gb
free_ram_mb
local_gb
local_gb_used
memory_mb
memory_mb_used
vcpus
vcpus_used
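
For reference, a sample of what the statistics response looks like (field values
are illustrative only; the point is that each value is aggregated over all
hypervisors):

  GET /os-hypervisors/statistics
  {
      "hypervisor_statistics": {
          "count": 2,
          "current_workload": 0,
          "disk_available_least": 0,
          "free_disk_gb": 1028,
          "free_ram_mb": 7680,
          "local_gb": 1028,
          "local_gb_used": 0,
          "memory_mb": 8192,
          "memory_mb_used": 512,
          "running_vms": 0,
          "vcpus": 2,
          "vcpus_used": 0
      }
  }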

** Affects: nova
 Importance: Undecided
 Assignee: LiChunlin (lichl)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => LiChunlin (lichl)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1687871

Title:
  update the description of hypervisor statistics response

Status in OpenStack Compute (nova):
  New

Bug description:
  https://developer.openstack.org/api-ref/compute/?expanded=show-
  hypervisor-statistics-detail#list-servers

  The descriptions in the hypervisor statistics response read as if the values
  were for a single hypervisor, but they actually cover all hypervisors. I will
  therefore update the descriptions of the following fields:
  disk_available_least
  free_disk_gb
  free_ram_mb
  local_gb
  local_gb_used
  memory_mb
  memory_mb_used
  vcpus
  vcpus_used

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1687871/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1643569] Re: RFE: Choose VMware datastore in dependence of the provisioned space

2017-05-03 Thread ChangBo Guo(gcb)
** Changed in: oslo.vmware
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1643569

Title:
  RFE: Choose VMware datastore in dependence of the provisioned space

Status in OpenStack Compute (nova):
  In Progress
Status in oslo.vmware:
  Fix Released

Bug description:
  At the moment the _select_datastore method in the VMware driver chooses the
  datastore based on free space: the datastore with the most free space is
  selected for a new instance.

  One of our customers wants to place new instances on the datastore with the
  least provisioned space rather than the one with the most free space.

  The amount of provisioned space is not directly provided by the VMware
  API and has to be calculated by using the capacity, the free space and
  the uncommitted bytes of the datastore. provisionedSpace = Capacity -
  freeSpace - uncommitted.
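
  A minimal sketch of that calculation (illustrative Python only; the attribute
  names are assumptions, not the actual oslo.vmware code):

    from collections import namedtuple

    Summary = namedtuple('Summary', 'capacity freeSpace uncommitted')

    def provisioned_space(summary):
        # capacity, freeSpace and uncommitted as reported by the datastore
        # summary; uncommitted may be unset, so treat it as 0 in that case.
        return summary.capacity - summary.freeSpace - (summary.uncommitted or 0)

    print(provisioned_space(Summary(capacity=100, freeSpace=40, uncommitted=25)))  # 35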

  Following changes are necessary to support the selection of a
  datastore in dependence of the provisioned space:

  * The uncommitted property is not requested by the get_datastore
  method; get_datastore has to request it.

  * The Datastore class does not provide a keyword argument for the
  amount of provisioned space. A provisioned space keyword argument has
  to be added to the Datastore class.

  * The _select_datastore method has to calculate the provisioned space
  of a datastore.

  * A new configuration parameter datastore_allocation_type with
  'provisionedSpace' and 'freeSpace' as choices has to be introduced.
  The default value should be 'freeSpace' to not change the default
  behaviour of the _select_datastore method.

  * When  datastore_allocation_type is set to 'provisionedSpace' then
  the provisionedSpace will be compared instead of the freeSpace and the
  datastore with the lowest value for provisionedSpace will be chosen.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1643569/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1687888] [NEW] keystone federation protocol

2017-05-03 Thread yangweiwei
Public bug reported:

Do the following:
1. PUT /v3/OS-FEDERATION/identity_providers/keystone-idp/protocols/saml2
   result: ok

2. PUT /v3/OS-FEDERATION/identity_providers/keystone-idp/protocols/saml2
   result: string indices must be integers (HTTP 400)

The second response should instead be something like 'Conflict occurred...' (HTTP 409).
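
A rough Python sketch of the two calls (the host, token and mapping id are
placeholders, not taken from the report; this is an illustration, not the exact
reproduction steps):

  import json
  import requests

  url = ('http://<keystone-host>/v3/OS-FEDERATION/identity_providers/'
         'keystone-idp/protocols/saml2')
  headers = {'X-Auth-Token': '<admin token>', 'Content-Type': 'application/json'}
  body = {'protocol': {'mapping_id': 'saml2_mapping'}}

  for attempt in (1, 2):
      resp = requests.put(url, headers=headers, data=json.dumps(body))
      print(attempt, resp.status_code, resp.text)
  # attempt 1: 201 Created; attempt 2 should be 409 Conflict, but currently
  # returns 'string indices must be integers (HTTP 400)'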

** Affects: keystone
 Importance: Undecided
 Assignee: yangweiwei (496176919-6)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => yangweiwei (496176919-6)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1687888

Title:
  keystone federation protocol

Status in OpenStack Identity (keystone):
  New

Bug description:
  Do as the following:
  1 PUT /v3/OS-FEDERATION/identity_providers/keystone-idp/protocols/saml2   
result:ok

  2.PUT /v3/OS-FEDERATION/identity_providers/keystone-idp/protocols/saml2
result:string indices must be integers (HTTP 400)

  But actually, the response should like 'Conflict occurred...'

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1687888/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1687896] [NEW] neutron-rpc-server fails to start on configuration that works under neutron-server

2017-05-03 Thread Ebbex
Public bug reported:

I'm running neutron-api under uwsgi, but noticed it doesn't handle any RPC work,
so I figure that's what "neutron-rpc-server" is for. However, it fails to start
with a configuration that works under the normal "neutron-server".

Traceback (most recent call last):
  File "/usr/local/bin/neutron-rpc-server", line 10, in 
    sys.exit(main_rpc_eventlet())
  File "/opt/stack/neutron/neutron/cmd/eventlet/server/__init__.py", line 23, in main_rpc_eventlet
    server.boot_server(rpc_eventlet.eventlet_rpc_server)
  File "/opt/stack/neutron/neutron/server/__init__.py", line 42, in boot_server
    server_func()
  File "/opt/stack/neutron/neutron/server/rpc_eventlet.py", line 33, in eventlet_rpc_server
    rpc_workers_launcher = service.start_rpc_workers()
  File "/opt/stack/neutron/neutron/service.py", line 269, in start_rpc_workers
    rpc_workers = _get_rpc_workers()
  File "/opt/stack/neutron/neutron/service.py", line 163, in _get_rpc_workers
    if not plugin.rpc_workers_supported():
AttributeError: 'NoneType' object has no attribute 'rpc_workers_supported'

In neutron/neutron/service.py:
from neutron_lib.plugins import directory

def _get_rpc_workers():
plugin = directory.get_plugin()
service_plugins = directory.get_plugins().values()

I'm not sure what directory.get_plugin() does or how it works, but it doesn't
return anything of use apparently.
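
A rough sketch of what seems to be going on (based on how neutron_lib's plugin
directory API behaves; this is an assumption, not something verified against the
code paths above):

  from neutron_lib.plugins import directory

  plugin = directory.get_plugin()   # returns the registered core plugin, or None
  if plugin is None:
      # neutron-rpc-server apparently reaches _get_rpc_workers() before anything
      # has registered the core plugin, hence the AttributeError above.
      raise RuntimeError("core plugin is not registered in the plugin directory")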


You should be able to reproduce quite easily by running

neutron-server --config-file neutron.conf
neutron-rpc-server --config-file neutron.conf

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1687896

Title:
  neutron-rpc-server fails to start on configuration that works under
  neutron-server

Status in neutron:
  New

Bug description:
  I'm running neutron-api under uwsgi, but noticed it doesn't handle any RPC work,
  so I figure that's what "neutron-rpc-server" is for. However, it fails to start
  with a configuration that works under the normal "neutron-server".

  Traceback (most recent call last):
    File "/usr/local/bin/neutron-rpc-server", line 10, in 
      sys.exit(main_rpc_eventlet())
    File "/opt/stack/neutron/neutron/cmd/eventlet/server/__init__.py", line 23, in main_rpc_eventlet
      server.boot_server(rpc_eventlet.eventlet_rpc_server)
    File "/opt/stack/neutron/neutron/server/__init__.py", line 42, in boot_server
      server_func()
    File "/opt/stack/neutron/neutron/server/rpc_eventlet.py", line 33, in eventlet_rpc_server
      rpc_workers_launcher = service.start_rpc_workers()
    File "/opt/stack/neutron/neutron/service.py", line 269, in start_rpc_workers
      rpc_workers = _get_rpc_workers()
    File "/opt/stack/neutron/neutron/service.py", line 163, in _get_rpc_workers
      if not plugin.rpc_workers_supported():
  AttributeError: 'NoneType' object has no attribute 'rpc_workers_supported'

  In neutron/neutron/service.py:
  from neutron_lib.plugins import directory

  def _get_rpc_workers():
  plugin = directory.get_plugin()
  service_plugins = directory.get_plugins().values()

  I'm not sure what directory.get_plugin() does or how it works, but it doesn't
  return anything of use apparently.

  
  You should be able to reproduce quite easily by running

  neutron-server --config-file neutron.conf
  neutron-rpc-server --config-file neutron.conf

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1687896/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1687913] [NEW] db retry not triggered when fail happened in after_create notify

2017-05-03 Thread Wim De Clercq
Public bug reported:

Note:
- The specific use case can no longer happen on master (due to a couple of
commits), so the below applies to a pre-Ocata context.
- The bug was seen on a Newton setup.

During high concurrency testing (with router:external networks) the following 
deadlock may occur
http://paste.openstack.org/show/608690/

Deadlocks are normally 'okay', because the db retry mechanism will retry
the request. But in this specific case it did not.

The issue happens here:
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L769

- It's inside of a transaction
- the external_net_db code does a notify with AFTER_CREATE.
- in the AFTER_CREATE event processing, the deadlock happens

The problem is that an AFTER_CREATE event will not raise exceptions. It just 
logs. 
But it IS inside of a transaction, and it did make the session invalid.

So the code continues, it tries to commit the invalid session. And the
resulting exception of this is a

sqlalchemy.exc.InvalidRequestError  - This Session's transaction has
been rolled back due to a previous exception during flush. To begin a
new transaction with this Session, first issue Session.rollback().
Original exception was: ...

Since this exception type is not part of the db_retry exceptions, no
retry happens and the request fails.


While this use case is a very specific one, some action may be needed to avoid
something like this happening in other places, because any database error which
occurs inside an event notification that is not BEFORE_x or PRECOMMIT will have
this behaviour: the session object is corrupted, nothing raises, and the
subsequent error is not retriable.


(to easily reproduce on a test setup: add

if event == events.AFTER_CREATE:
try:
context.session.add(models_v2.Network(name=256*'g'))
context.session.flush() # this makes the session invalid
except:
raise db_exc.DBDeadlock()


to _ensure_external_network_default_value_callback in 
neutron.services.auto_allocate.db.py
and create a router:external network.

This should trigger the retry mechanism at first sight, but it won't.)

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1687913

Title:
  db retry not triggered when fail happened in after_create notify

Status in neutron:
  New

Bug description:
  Note:
  - The specific use case can no longer happen on master (due to a couple of
  commits), so the below applies to a pre-Ocata context.
  - The bug was seen on a Newton setup.

  During high concurrency testing (with router:external networks) the following 
deadlock may occur
  http://paste.openstack.org/show/608690/

  Deadlocks are normally 'okay', because the db retry mechanism will
  retry the request. But in this specific case it did not.

  The issue happens here:
  
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L769

  - It's inside of a transaction
  - the external_net_db code does a notify with AFTER_CREATE.
  - in the AFTER_CREATE event processing, the deadlock happens

  The problem is that an AFTER_CREATE event will not raise exceptions. It just 
logs. 
  But it IS inside of a transaction, and it did make the session invalid.

  So the code continues, it tries to commit the invalid session. And the
  resulting exception of this is a

  sqlalchemy.exc.InvalidRequestError  - This Session's transaction has
  been rolled back due to a previous exception during flush. To begin a
  new transaction with this Session, first issue Session.rollback().
  Original exception was: ...

  Since this exception type is not part of the db_retry exceptions, no
  retry happens and the request fails.

  
  While this use case is a very specific one, some action may be needed to avoid
  something like this happening in other places, because any database error which
  occurs inside an event notification that is not BEFORE_x or PRECOMMIT will have
  this behaviour: the session object is corrupted, nothing raises, and the
  subsequent error is not retriable.


  (to easily reproduce on a test setup: add

  if event == events.AFTER_CREATE:
  try:
  context.session.add(models_v2.Network(name=256*'g'))
  context.session.flush() # this makes the session invalid
  except:
  raise db_exc.DBDeadlock()

  
  to _ensure_external_network_default_value_callback in 
neutron.services.auto_allocate.db.py
  and create a router:external network.

  This should trigger the retry mechanism at first sight, but it won't.)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1687913/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.

[Yahoo-eng-team] [Bug 1687712] Re: cc_disk_setup: fs_setup with cmd doesn't work

2017-05-03 Thread Scott Moser
** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1687712

Title:
  cc_disk_setup: fs_setup with cmd doesn't work

Status in cloud-init:
  Confirmed
Status in cloud-init package in Ubuntu:
  New

Bug description:
  This reproduces on Azure, but it should fail similarly elsewhere.
  Consider repro.yml:

  #cloud-config
  fs_setup:
  - special:
    cmd: mkfs -t %(filesystem)s -L %(label)s %(device)s
    filesystem: ext4
    device: /dev/sdb1
    label: repro

  Create a VM with this cloud config:
  $ az vm create -g $rg -l westus2 --custom-data @repro.yml --image UbuntuLTS -n repro2

  Then cloud-init will fail with:
  Failed to exec of 'mkfs -t ext4 -L repro /dev/sdb1':
  Unexpected error while running command.
  Command: mkfs -t ext4 -L repro /dev/sdb1
  Exit code: -
  Reason: [Errno 2] No such file or directory: 'mkfs -t ext4 -L repro /dev/sdb1'

  $ dpkg-query -W -f='${Version}' cloud-init
  0.7.9-48-g1c795b9-0ubuntu1~16.04.1

  The bug is in mkfs() in cc_disk_setup.py, which creates a shell-like
  string in the case that cmd was specified and an exec-like array in the
  other case (around line 913).
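
  A minimal illustration of the failure mode (plain Python, not cloud-init's
  actual code): passing the full command line as a single string to an
  exec-style call makes the OS look for a binary literally named
  "mkfs -t ext4 ...", which is the [Errno 2] above.

    import subprocess

    try:
        subprocess.Popen("mkfs -t ext4 -L repro /dev/sdb1")
    except OSError as exc:
        print(exc)   # [Errno 2] No such file or directory
    # The two working variants are an exec-like argv list, or shell=True with
    # the shell-like string:
    #   subprocess.Popen(["mkfs", "-t", "ext4", "-L", "repro", "/dev/sdb1"])
    #   subprocess.Popen("mkfs -t ext4 -L repro /dev/sdb1", shell=True)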

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1687712/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1687942] [NEW] OPERATION_LOG_OPTIONS setting ignored

2017-05-03 Thread Mateusz Kowalski
*** This bug is a duplicate of bug 1675176 ***
https://bugs.launchpad.net/bugs/1675176

Public bug reported:

Because OPERATION_LOG_OPTIONS is a Python dictionary, looking its options up
with getattr() does not work (dict keys are not attributes), so the configured
options are ignored. The correct way to read them is dict.get(), and this
change fixes the issue.

Current behaviour ignores both "mask_fields" and "format" from
local_settings file and always uses default values which are
['password'] and "[%(domain_name)s] ... [%(param)s]"

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1687942

Title:
  OPERATION_LOG_OPTIONS setting ignored

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Because OPERATION_LOG_OPTIONS is a Python dictionary, looking its options up
  with getattr() does not work (dict keys are not attributes), so the configured
  options are ignored. The correct way to read them is dict.get(), and this
  change fixes the issue.

  Current behaviour ignores both "mask_fields" and "format" from
  local_settings file and always uses default values which are
  ['password'] and "[%(domain_name)s] ... [%(param)s]"

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1687942/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1687942] Re: OPERATION_LOG_OPTIONS setting ignored

2017-05-03 Thread Mateusz Kowalski
*** This bug is a duplicate of bug 1675176 ***
https://bugs.launchpad.net/bugs/1675176

Appears on stable/newton and stable/ocata

** This bug has been marked a duplicate of bug 1675176
   customization of operational log format does not work

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1687942

Title:
  OPERATION_LOG_OPTIONS setting ignored

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Because OPERATION_LOG_OPTIONS is a Python dictionary, looking its options up
  with getattr() does not work (dict keys are not attributes), so the configured
  options are ignored. The correct way to read them is dict.get(), and this
  change fixes the issue.

  Current behaviour ignores both "mask_fields" and "format" from
  local_settings file and always uses default values which are
  ['password'] and "[%(domain_name)s] ... [%(param)s]"

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1687942/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1685340] Re: compute logs tell me live migration finished successfully when it actually failed

2017-05-03 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/458958
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=746e48efa32fd599817197ffd7ad434a35f96165
Submitter: Jenkins
Branch:master

commit 746e48efa32fd599817197ffd7ad434a35f96165
Author: Matt Riedemann 
Date:   Thu Apr 27 14:44:52 2017 -0400

Do not log live migration success when it actually failed

During post live migration, if post live migration on destination
fails, then we log a stacktrace but continue to perform cleanup
on the source side. However, at the end of the _post_live_migration
method it was logging that things were successful on the destination
host, which they weren't, which is really confusing when you're trying
to debug the failure and seeing this conflict in the logs.

This patch simply sets a flag if we failed post live migration at
the destination host so we don't log the success message later on
the source host, plus tests to show the flag is set and checked.

Change-Id: I16e70912a13c963031397e66a8553b2c199d50bd
Closes-Bug: #1685340
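
A rough sketch of the flag-based approach the commit describes (the names and
the stub function are illustrative, not nova's actual code):

  import logging

  LOG = logging.getLogger(__name__)
  dest_host = "destination-host"

  def post_live_migration_at_destination():
      # stand-in for the destination-side call that failed in the logs below
      raise RuntimeError("simulated destination failure")

  migrate_ok = True
  try:
      post_live_migration_at_destination()
  except Exception:
      LOG.exception("Post live migration at destination %s failed", dest_host)
      migrate_ok = False
  # ... source-side cleanup still runs here ...
  if migrate_ok:
      LOG.info("Migrating instance to %s finished successfully.", dest_host)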


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1685340

Title:
  compute logs tell me live migration finished successfully when it
  actually failed

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  New
Status in OpenStack Compute (nova) ocata series:
  New

Bug description:
  This tells me post live migration at destination failed:

  http://logs.openstack.org/43/458843/1/check/gate-tempest-dsvm-
  multinode-live-migration-ubuntu-
  xenial/697a501/logs/subnode-2/screen-n-cpu.txt.gz#_2017-04-21_13_54_10_281

  2017-04-21 13:54:10.281 10362 ERROR nova.compute.manager [req-
  7ecbf938-9e55-4e4c-b7da-63eef0f8d4a9 tempest-
  LiveBlockMigrationTestJSON-208732686 tempest-
  LiveBlockMigrationTestJSON-208732686] [instance: 9bf9f268-5242-4b1d-
  8fe6-ee348b2b8d3e] Post live migration at destination ubuntu-xenial-2
  -node-osic-cloud1-s3500-8527282 failed

  Later on, the logs tell me it was successful:

  http://logs.openstack.org/43/458843/1/check/gate-tempest-dsvm-
  multinode-live-migration-ubuntu-
  xenial/697a501/logs/subnode-2/screen-n-cpu.txt.gz#_2017-04-21_13_54_11_080

  2017-04-21 13:54:11.080 10362 INFO nova.compute.manager [req-
  7ecbf938-9e55-4e4c-b7da-63eef0f8d4a9 tempest-
  LiveBlockMigrationTestJSON-208732686 tempest-
  LiveBlockMigrationTestJSON-208732686] [instance: 9bf9f268-5242-4b1d-
  8fe6-ee348b2b8d3e] Migrating instance to ubuntu-xenial-2-node-osic-
  cloud1-s3500-8527282 finished successfully.

  That's because we don't stop on the failure because we want to
  continue with cleanup, but we don't check if we failed when emitting
  the success message.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1685340/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1688024] [NEW] quota API missing input validation

2017-05-03 Thread Matthew Edmonds
Public bug reported:

As seen with the following curl command, neutron accepts float values
for quotas that should require ints. It converts them to an int, but it
should have returned HTTP 400 instead. The conversion it's doing may or
may not have the same results in python3 as it does here in python2, so
that's another potential concern.

curl -s -X PUT 
http://localhost:9696/v2.0/quotas/c4d15a1adc0a4cd89006d4db0a2bdfed -H "Accept: 
application/json" -H "X-Auth-Token: " -H "Content-Type: 
application/json" -d '{"quota": {"floatingip": 2.9}}' | python -m json.tool
{
"quota": {
"floatingip": 2,
"network": -1,
"port": -1,
"rbac_policy": 10,
"router": 10,
"security_group": 10,
"security_group_rule": 100,
"subnet": -1,
"subnetpool": -1
}
}

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1688024

Title:
  quota API missing input validation

Status in neutron:
  New

Bug description:
  As seen with the following curl command, neutron accepts float values
  for quotas that should require ints. It coverts them to an int, but it
  should have returned HTTP 400 instead. The conversion it's doing may
  or may not have the same results in python3 as it does here in
  python2, so that's another potential concern.

  curl -s -X PUT 
http://localhost:9696/v2.0/quotas/c4d15a1adc0a4cd89006d4db0a2bdfed -H "Accept: 
application/json" -H "X-Auth-Token: " -H "Content-Type: 
application/json" -d '{"quota": {"floatingip": 2.9}}' | python -m json.tool
  {
  "quota": {
  "floatingip": 2,
  "network": -1,
  "port": -1,
  "rbac_policy": 10,
  "router": 10,
  "security_group": 10,
  "security_group_rule": 100,
  "subnet": -1,
  "subnetpool": -1
  }
  }

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1688024/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1688038] [NEW] test_rescued_vm_add_remove_security_group fails with "InstanceNotRescuable: Instance 4869b462-c3cf-4437-8c94-1d0dcd5fff8b cannot be rescued: Driver Error: failed t

2017-05-03 Thread Matt Riedemann
Public bug reported:

http://logs.openstack.org/73/461473/3/check/gate-tempest-dsvm-neutron-
full-ubuntu-xenial/a277636/console.html#_2017-05-02_18_16_55_064917

http://logs.openstack.org/73/461473/3/check/gate-tempest-dsvm-neutron-
full-ubuntu-
xenial/a277636/logs/screen-n-cpu.txt.gz#_May_02_17_49_07_771248

May 02 17:49:07.771248 ubuntu-xenial-rax-ord-8683720 nova-compute[23706]: ERROR oslo_messaging.rpc.server [req-f281fc56-b69f-4e1a-a6d0-752871138ace tempest-ServerRescueTestJSON-709146246 tempest-ServerRescueTestJSON-709146246] Exception during message handling
ERROR oslo_messaging.rpc.server Traceback (most recent call last):
ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 157, in _process_incoming
ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 213, in dispatch
ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 183, in _do_dispatch
ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
ERROR oslo_messaging.rpc.server   File "/opt/stack/new/nova/nova/exception_wrapper.py", line 77, in wrapped
ERROR oslo_messaging.rpc.server     function_name, call_dict, binary)
ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
ERROR oslo_messaging.rpc.server     self.force_reraise()
ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
ERROR oslo_messaging.rpc.server     six.reraise(self.type_, self.value, self.tb)
ERROR oslo_messaging.rpc.server   File "/opt/stack/new/nova/nova/exception_wrapper.py", line 68, in wrapped
ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
ERROR oslo_messaging.rpc.server   File "/opt/stack/new/nova/nova/compute/manager.py", line 187, in decorated_function
ERROR oslo_messaging.rpc.server     LOG.warning(msg, e, instance=instance)
ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
ERROR oslo_messaging.rpc.server     self.force_reraise()
ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
ERROR oslo_messaging.rpc.server     six.reraise(self.type_, self.value, self.tb)
ERROR oslo_messaging.rpc.server   File "/opt/stack/new/nova/nova/compute/manager.py", line 156, in decorated_function
ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
ERROR oslo_messaging.rpc.server   File "/opt/stack/new/nova/nova/compute/utils.py", line 660, in decorated_function
ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)

[Yahoo-eng-team] [Bug 1481370] Re: system logging module is still in use in many places

2017-05-03 Thread Ken'ichi Ohmichi
This has been fixed on Tempest side since
https://review.openstack.org/#/c/398019/

** Changed in: tempest
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1481370

Title:
  system logging module is still in use in many places

Status in Murano:
  Fix Released
Status in neutron:
  Fix Released
Status in Stackalytics:
  Fix Released
Status in tempest:
  Fix Released
Status in OpenStack DBaaS (Trove):
  Fix Released
Status in tuskar:
  Won't Fix

Bug description:
  The system logging module is still in use in many places; I suggest using the
  oslo.log library instead. From version 1.8 of oslo.log we can use the log
  level constants (INFO, DEBUG, etc.) directly from the oslo.log log module
  instead of the system logging module.
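
  A sketch of the suggested replacement (per the report above, oslo.log >= 1.8
  re-exports the level constants, so the stdlib logging import is no longer
  needed just for INFO/DEBUG):

    from oslo_log import log as logging

    LOG = logging.getLogger(__name__)
    LOG.log(logging.INFO, "level constant taken from oslo.log, not stdlib logging")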

To manage notifications about this bug go to:
https://bugs.launchpad.net/murano/+bug/1481370/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1687593] Re: Create OAUTH request token gives 401 error when request url is admin endpoint

2017-05-03 Thread Hemanth Nakkina
** Also affects: python-keystoneclient
   Importance: Undecided
   Status: New

** No longer affects: python-keystoneclient (Ubuntu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1687593

Title:
  Create OAUTH request token gives 401 error when request url is admin
  endpoint

Status in OpenStack Identity (keystone):
  In Progress
Status in python-keystoneclient:
  New

Bug description:
  The create request token API returns a 401 error when the request URL is
  the admin endpoint.

  Error scenario:
  URL used to generate OAUTH signature and for POST request is Keystone admin 
endpoint
  http:///identity_admin/v3/OS-OAUTH1/request_token

  Working scenario:
  When the URL used to generate OAUTH signature is public endpoint, then the 
response is 201. 
  http:///identity/v3/OS-OAUTH1/request_token

  Endpoints in devstack for identity:
  ocata@ocata-VirtualBox:~/devstack$ openstack endpoint list | grep identity
  | 549f73e17b0e471e95176bb508561bb3 | RegionOne | keystone | identity  
| True| internal  | http://192.168.56.101/identity|
  | 739cda51666f4ab197241beac5c5c14c | RegionOne | keystone | identity  
| True| admin | http://192.168.56.101/identity_admin  |
  | a0eb39c0ecff46c3b61bc6184c42bc13 | RegionOne | keystone | identity  
| True| public| http://192.168.56.101/identity

  
  Steps to reproduce the problem:

  Run the python script in the below link (by changing the necessary 
credentials and IP address)
  https://pastebin.com/AqL9674n

  If #L38 is modified to public endpoint (http:///identity/v3/OS-OAUTH1/request_token), the status code is 201.

  Seems like Keystone code verifies the OAUTH signature using Public
  endpoint irrespective of the request URL.
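
  A rough sketch of the kind of request the reproduction script makes (using
  requests-oauthlib; the consumer credentials and project id are placeholders,
  and this is an illustration rather than the referenced pastebin script):

    from requests_oauthlib import OAuth1Session

    url = 'http://192.168.56.101/identity_admin/v3/OS-OAUTH1/request_token'
    session = OAuth1Session('CONSUMER_ID', client_secret='CONSUMER_SECRET',
                            callback_uri='oob')
    resp = session.post(url, headers={'Requested-Project-Id': 'PROJECT_ID'})
    print(resp.status_code, resp.text)  # 401 against the admin URL, 201 against the public URL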

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1687593/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1681531] Re: DigitalOcean DS defines multiple gateways via meta-data

2017-05-03 Thread Scott Moser
** Also affects: cloud-init
   Importance: Undecided
   Status: New

** Changed in: cloud-init
   Importance: Undecided => Medium

** Changed in: cloud-init
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1681531

Title:
  DigitalOcean DS defines multiple gateways via meta-data

Status in cloud-init:
  Confirmed
Status in cloud-init package in Ubuntu:
  Confirmed

Bug description:
  [Impact]
  The cloud-init datasource for DigitalOcean allows for multiple gateways on 
any NIC.

  On Ubuntu 16.04, this breaks networking.service. For 17.04 and later,
  Ubuntu _replaces_ the default gateway with the second gateway on
  'ifup' after reboot.

  DigitalOcean is looking at changing the meta-data, however, this will
  result in another version of the meta-data JSON.

  [Regression Potential]

  Low. This change is scoped to DigitalOcean only. DigitalOcean has
  tested this Datasource exhaustively.

  [TEST Cases]
  - provision on DigitalOcean with a private IP
  - reboot
  - confirm that a single route exists in /etc/network/interfaces

  [LOGS]

  
--
  From /var/log/cloud-init.log:

  2017-04-10 17:36:11,608 - util.py[DEBUG]: Running command ['ip', 'link', 
'set', 'ens3', 'down'] with allowed return codes [0] (shell=False, capture=True)
  2017-04-10 17:36:11,615 - util.py[DEBUG]: Running command ['ip', 'link', 
'set', 'ens3', 'name', 'eth0'] with allowed return codes [0] (shell=False, 
capture=True)
  2017-04-10 17:36:11,635 - util.py[DEBUG]: Running command ['ip', 'link', 
'set', 'ens4', 'name', 'eth1'] with allowed return codes [0] (shell=False, 
capture=True)
  2017-04-10 17:36:11,651 - util.py[DEBUG]: Running command ['ip', 'link', 
'set', 'eth0', 'up'] with allowed return codes [0] (shell=False, capture=True)
  2017-04-10 17:36:11,654 - stages.py[INFO]: Applying network configuration 
from ds bringup=False: {'version': 1, 'config': [{'name': 'eth0', 'subnets': 
[{'address': '138.197.88.85', 'netmask': '255.255.240.0', 'gateway': 
'138.197.80.1', 'type': 'static', 'control': 'auto'}, {'address': 
'2604:A880:0800:0010:::2ECE:D001/64', 'gateway': 
'2604:A880:0800:0010::::0001', 'type': 'static', 'control': 
'auto'}, {'address': '10.17.0.10', 'netmask': '255.255.0.0', 'type': 'static', 
'control': 'auto'}], 'mac_address': 'ee:90:f2:c6:dc:db', 'type': 'physical'}, 
{'name': 'eth1', 'subnets': [{'address': '10.132.92.131', 'netmask': 
'255.255.0.0', 'gateway': '10.132.0.1', 'type': 'static', 'control': 'auto'}], 
'mac_address': '1a:b6:7c:24:5e:cd', 'type': 'physical'}, {'address': 
['2001:4860:4860::8844', '2001:4860:4860::', '8.8.8.8'], 'type': 
'nameserver'}]}
  2017-04-10 17:36:11,668 - util.py[DEBUG]: Writing to 
/etc/network/interfaces.d/50-cloud-init.cfg - wb: [420] 868 bytes
  2017-04-10 17:36:11,669 - main.py[DEBUG]: [local] Exiting. datasource 
DataSourceDigitalOcean not in local mode.
  2017-04-10 17:36:11,674 - util.py[DEBUG]: Reading from /proc/uptime 
(quiet=False)

  
--
  From 'dmesg':
  Apr 10 17:36:11 ubuntu systemd[1]: Started Initial cloud-init job 
(pre-networking).
  Apr 10 17:36:12 ubuntu systemd[1]: Started LSB: AppArmor initialization.
  Apr 10 17:36:12 ubuntu systemd[1]: Reached target Network (Pre).
  Apr 10 17:36:12 ubuntu systemd[1]: Starting Raise network interfaces...
  Apr 10 17:36:13 ubuntu ifup[1099]: Waiting for DAD... Done
  Apr 10 17:36:13 ubuntu ifup[1099]: RTNETLINK answers: File exists
  Apr 10 17:36:13 ubuntu ifup[1099]: Failed to bring up eth1.

  
--
  $ sudo journalctl -xe -u networking
  Apr 10 17:36:12 ubuntu systemd[1]: Starting Raise network interfaces...
  -- Subject: Unit networking.service has begun start-up
  -- Defined-By: systemd
  -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
  --
  -- Unit networking.service has begun starting up.
  Apr 10 17:36:13 ubuntu ifup[1099]: Waiting for DAD... Done
  Apr 10 17:36:13 ubuntu ifup[1099]: RTNETLINK answers: File exists
  Apr 10 17:36:13 ubuntu ifup[1099]: Failed to bring up eth1.
  Apr 10 17:36:13 ubuntu systemd[1]: networking.service: Main process exited, 
code=exited, status=1/FAILURE
  Apr 10 17:36:13 ubuntu systemd[1]: Failed to start Raise network interfaces.
  -- Subject: Unit networking.service has failed
  -- Defined-By: systemd
  -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
  --
  -- Unit networking.service has failed.
  --

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1681531/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@li

[Yahoo-eng-team] [Bug 1671694] Re: Softreboot can be done when the instance not in active status

2017-05-03 Thread Akihiro Motoki
** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1671694

Title:
  Softreboot can be done when the instance not in active status

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  When the instance is not in ACTIVE status, the soft reboot action is still
  available, but it will always fail.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1671694/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1664374] Re: Direction path is not correct

2017-05-03 Thread Akihiro Motoki
It is worth adding horizon to the affected projects so we can track this as a
horizon bug as well.
It has been fixed in horizon by https://review.openstack.org/#/c/442276/.

** Also affects: horizon
   Importance: Undecided
   Status: New

** Changed in: horizon
   Status: New => Fix Released

** Changed in: horizon
Milestone: None => pike-2

** Changed in: horizon
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1664374

Title:
  Direction path is not correct

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Zaqar-ui:
  Fix Released

Bug description:
  Currently the navigation path shows Project/Messaging/None.
  We would like to see Project/Messaging/Queues.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1664374/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1688119] [NEW] change_password_after_first_use is not honored

2017-05-03 Thread Samuel de Medeiros Queiroz
Public bug reported:

With change_password_after_first_use set to true, new users or users
whose password was administratively updated should get their
password_expires_at set to the current time, and password_expires_days
should not be honored.

keystone.conf:

[security_compliance]
# Configuring password expiration
password_expires_days = 1
# Force users to immediately change their password upon first use
change_password_after_first_use = true

(demo) samueldmq@workstation:~/workspace$ date -u
Qua Mai  3 21:24:34 UTC 2017
(demo) samueldmq@workstation:~/workspace$ openstack user create demo --password 
demo123 --domain default
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 0d56a461493a43a1aa34b604970800c1 |
| name                | demo                             |
| options             | {}                               |
| password_expires_at | 2017-05-04T21:24:40.00           |
+---------------------+----------------------------------+

(demo) samueldmq@workstation:~/workspace$ date -u
Qua Mai  3 21:27:47 UTC 2017
(demo) samueldmq@workstation:~/workspace$ openstack user set demo --password 
123demo
(demo) samueldmq@workstation:~/workspace$ openstack user show demo
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 0d56a461493a43a1aa34b604970800c1 |
| name                | demo                             |
| options             | {}                               |
| password_expires_at | 2017-05-04T21:27:53.00           |
+---------------------+----------------------------------+

Environment:
- Ubuntu 14.04 LTS
- Using virtualenv-15.0.1 with Python 3.5
- keystone master version
- python-openstackclient master version

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1688119

Title:
  change_password_after_first_use is not honored

Status in OpenStack Identity (keystone):
  New

Bug description:
  With change_password_after_first_use set to true, new users or users
  whose password was administratively updated should get their
  password_expires_at set to the current time, and password_expires_days
  should not be honored.

  keystone.conf:

  [security_compliance]
  # Configuring password expiration
  password_expires_days = 1
  # Force users to immediately change their password upon first use
  change_password_after_first_use = true

  (demo) samueldmq@workstation:~/workspace$ date -u
  Qua Mai  3 21:24:34 UTC 2017
  (demo) samueldmq@workstation:~/workspace$ openstack user create demo 
--password demo123 --domain default
  +---------------------+----------------------------------+
  | Field               | Value                            |
  +---------------------+----------------------------------+
  | domain_id           | default                          |
  | enabled             | True                             |
  | id                  | 0d56a461493a43a1aa34b604970800c1 |
  | name                | demo                             |
  | options             | {}                               |
  | password_expires_at | 2017-05-04T21:24:40.00           |
  +---------------------+----------------------------------+

  (demo) samueldmq@workstation:~/workspace$ date -u
  Qua Mai  3 21:27:47 UTC 2017
  (demo) samueldmq@workstation:~/workspace$ openstack user set demo --password 
123demo
  (demo) samueldmq@workstation:~/workspace$ openstack user show demo
  +---------------------+----------------------------------+
  | Field               | Value                            |
  +---------------------+----------------------------------+
  | domain_id           | default                          |
  | enabled             | True                             |
  | id                  | 0d56a461493a43a1aa34b604970800c1 |
  | name                | demo                             |
  | options             | {}                               |
  | password_expires_at | 2017-05-04T21:27:53.00           |
  +---------------------+----------------------------------+

  Environment:
  - Ubuntu 14.04 LTS
  - Using virtualenv-15.0.1 with Python 3.5
  - keystone master version
  - python-openstackclient master version

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1688119/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to

[Yahoo-eng-team] [Bug 1688123] [NEW] ignore_password_expiry is not honored

2017-05-03 Thread Samuel de Medeiros Queiroz
Public bug reported:

ignore_password_expiry is set for the admin user but is not honored. With it
set to true, the user should not be affected when their password has expired.

keystone.conf:

[cache]
# Global toggle for caching. (boolean value)
enabled = false
[security_compliance]
# Configuring password expiration
password_expires_days = 1

(demo) samueldmq@workstation:~/workspace$ date -u
Qua Mai  3 21:41:29 UTC 2017
(demo) samueldmq@workstation:~/workspace$ openstack token issue
+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| expires    | 2017-05-03T21:41:53+             |
| id         | gABZCk6NvFEKGZuUxYrij80hLxFU3mw0s0qYR8N6ekNZ6vok-Cnto1pDZSSoJ7JJOwDRGUCzNjYCCyHmqx-kllUpcNFDpPU-eC72Ni5PEqlV9ZVFvVjkmnXLp6b2uplacYafyEFbFeHJAfEdOY8hQDgDCqO3zbaOx-FGs4XWDLbVMv5bz8c |
| project_id | 2a642e78f42f43ce8458974e7c6aded4 |
| user_id    | 8cff3292355d4571a7cb7c5165c4cc73 |
+------------+----------------------------------+
(demo) samueldmq@workstation:~/workspace$ openstack user show 
8cff3292355d4571a7cb7c5165c4cc73
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 8cff3292355d4571a7cb7c5165c4cc73 |
| name                | admin                            |
| options             | {'ignore_lockout_failure_attempts': True, 'ignore_password_expiry': True, 'ignore_change_password_upon_first_use': True} |
| password_expires_at | 2017-05-04T21:04:24.00           |
+---------------------+----------------------------------+
(demo) samueldmq@workstation:~/workspace$ date -u
Qua Mai  3 21:41:44 UTC 2017

[[ Manually updated system date +1d ]]

(demo) samueldmq@workstation:~/workspace$ date -u
Qui Mai  4 21:41:55 UTC 2017
(demo) samueldmq@workstation:~/workspace$ openstack token issue
The password is expired and needs to be changed for user: 
8cff3292355d4571a7cb7c5165c4cc73. (HTTP 401) (Request-ID: 
req-278ccb52-582e-426d-a58d-5ba3a297eeaf)

Environment:
- Ubuntu 14.04 LTS
- Using virtualenv-15.0.1 with Python 3.5
- keystone master version
- python-openstackclient master version

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1688123

Title:
  ignore_password_expiry is not honored

Status in OpenStack Identity (keystone):
  New

Bug description:
  ignore_password_expiry is set for the admin user but is not honored. With it
  set to true, the user should not be affected when their password has expired.

  keystone.conf:

  [cache]
  # Global toggle for caching. (boolean value)
  enabled = false
  [security_compliance]
  # Configuring password expiration
  password_expires_days = 1

  (demo) samueldmq@workstation:~/w

[Yahoo-eng-team] [Bug 1656386] Re: OOM issues in the gate

2017-05-03 Thread Matt Riedemann
(5:07:19 PM) clarkb: mriedem: there were a lot of small things we did
(5:07:56 PM) clarkb: mriedem: we enable same page merging or whatever it's
called so libvirt VMs would share memory. We reduced the number of swift
processes. We reduced the number of apache workers
(5:08:11 PM) clarkb: mriedem: I don't think any openstack projects did anything
to reduce their memory consumption though

** Changed in: openstack-gate
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1656386

Title:
  OOM issues in the gate

Status in neutron:
  Confirmed
Status in OpenStack-Gate:
  Fix Released

Bug description:
  A couple of examples of recent leaks in the linuxbridge job: [1], [2]

  [1] 
http://logs.openstack.org/73/373973/13/check/gate-tempest-dsvm-neutron-linuxbridge-ubuntu-xenial/295d92f/logs/syslog.txt.gz#_Jan_11_13_56_32
  [2] 
http://logs.openstack.org/59/382659/26/check/gate-tempest-dsvm-neutron-linuxbridge-ubuntu-xenial/7de01d0/logs/syslog.txt.gz#_Jan_11_15_54_36

  Close to the end of the test run, swap consumption grows pretty quickly,
  exceeding 2 GB.
  I didn't find the root cause of that.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1656386/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1688147] [NEW] The example of disabling floating IPs tab should be removed

2017-05-03 Thread Ying Zuo
Public bug reported:

The floating ip tab on access and security panel was moved to its own
panel in Ocata. The example of disabling floating ips tab should be
removed from the override existing methods section on the customization
file.

https://docs.openstack.org/developer/horizon/topics/customizing.html

** Affects: horizon
 Importance: Undecided
 Assignee: Ying Zuo (yingzuo)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Ying Zuo (yingzuo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1688147

Title:
  The example of disabling floating IPs tab should be removed

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The Floating IPs tab on the Access & Security panel was moved to its own
  panel in Ocata. The example of disabling the Floating IPs tab should be
  removed from the "override existing methods" section of the customization
  documentation.

  https://docs.openstack.org/developer/horizon/topics/customizing.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1688147/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1688166] [NEW] Missing rootwrap filter for cryptsetup

2017-05-03 Thread Jackie Truong
Public bug reported:

Description
===
`cryptsetup` is not authorized to run with root permissions. The rootwrap 
filter for cryptsetup was recently removed from compute.filters [1], but it is 
still needed by dmcrypt [2].

References:
[1] 
https://github.com/openstack/nova/commit/9c23cdc247770830fa288f429ca7231eb431a3b2#diff-b01672c9be31a4fe1dd0921241a7ae15L234

[2]
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/storage/dmcrypt.py
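
A sketch of the kind of rootwrap filter entry that would re-authorize the
command (assuming the standard CommandFilter syntax used elsewhere in
compute.filters; not necessarily the exact line that was removed):

  [Filters]
  cryptsetup: CommandFilter, cryptsetup, root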


Steps to reproduce
==

1. Set up an LVM device:
Create a backing file:
  $ truncate nova-lvm -s 2G

Mount the backing file on a loop device:
  $ sudo losetup /dev/loop1 nova-lvm

Prepare the device for LVM:
  $ sudo pvcreate /dev/loop1

Create the LVM group on the loop device:
  $ sudo vgcreate nova-lvm /dev/loop1

2. Set up a devstack environment with ephemeral storage encryption enabled by 
adding the following lines to `lib/nova`:
  iniset $NOVA_CONF ephemeral_storage_encryption enabled "True"
  iniset $NOVA_CONF ephemeral_storage_encryption cipher "aes-xts-plain64"
  iniset $NOVA_CONF ephemeral_storage_encryption key_size "256"
  iniset $NOVA_CONF libvirt images_type "lvm"
  iniset $NOVA_CONF libvirt images_volume_group "nova-lvm"

3. Stack:
  $ ./stack

4. Use Nova to boot an instance:
  $ nova boot --flavor 1 --image {image_id}

---OR---

4. Run Barbican Tempest tests:

4a. Set up a Tempest environment:
  $ pip install virtualenv
  $ mkdir tempest-env
  $ virtualenv tempest-env
  $ cd tempest-env
  $ source bin/activate

4b. Install Tempest:
  $ git clone http://git.openstack.org/openstack/tempest
  $ bin/pip install tempest/

4c. Install the Barbican Tempest plugin and oslotest:
  $ git clone https://git.openstack.org/openstack/barbican-tempest-plugin
  $ bin/pip install -e barbican-tempest-plugin/
  $ bin/pip install oslotest

4d. Run Barbican Tempest tests
  $ testr run barbican_tempest_plugin


Expected result
===
Instance successfully boots.


Actual result
=
Instance fails to boot.

Example traceback (from `gate-barbican-simple-crypto-dsvm-tempest-
ubuntu-xenial-nv` [1]):

2017-04-30 16:17:41.576206 | Captured traceback:
2017-04-30 16:17:41.576224 | ~~~
2017-04-30 16:17:41.576246 | Traceback (most recent call last):
2017-04-30 16:17:41.576272 |   File "tempest/test.py", line 96, in wrapper
2017-04-30 16:17:41.576298 | return f(self, *func_args, **func_kwargs)
2017-04-30 16:17:41.576362 |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/barbican_tempest_plugin/tests/scenario/test_image_signing.py",
 line 48, in test_signed_image_upload_and_boot
2017-04-30 16:17:41.576382 | wait_until='ACTIVE')
2017-04-30 16:17:41.576437 |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/barbican_tempest_plugin/tests/scenario/manager.py",
 line 201, in create_server
2017-04-30 16:17:41.576460 | image_id=image_id, **kwargs)
2017-04-30 16:17:41.576492 |   File "tempest/common/compute.py", line 206, 
in create_test_server
2017-04-30 16:17:41.576510 | server['id'])
2017-04-30 16:17:41.576557 |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
2017-04-30 16:17:41.576577 | self.force_reraise()
2017-04-30 16:17:41.576626 |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
2017-04-30 16:17:41.576652 | six.reraise(self.type_, self.value, 
self.tb)
2017-04-30 16:17:41.576683 |   File "tempest/common/compute.py", line 188, 
in create_test_server
2017-04-30 16:17:41.576711 | clients.servers_client, server['id'], 
wait_until)
2017-04-30 16:17:41.576742 |   File "tempest/common/waiters.py", line 76, 
in wait_for_server_status
2017-04-30 16:17:41.576762 | server_id=server_id)
2017-04-30 16:17:41.576809 | tempest.exceptions.BuildErrorException: Server 
b2fc6277-1f92-4466-a177-194fa7e8f0c3 failed to build and is in ERROR status
2017-04-30 16:17:41.576926 | Details: {u'message': u"Build of instance 
b2fc6277-1f92-4466-a177-194fa7e8f0c3 aborted: Unexpected error while running 
command.\nCommand: sudo nova-rootwrap /etc/nova/rootwrap.conf cryptsetup remove 
b2fc6277-1f92-4466-a177-194fa7e8f0c3_disk-dmcrypt\nExit code: 99\nStdout: 
u''\nStder", u'created': u'2017-04-30T16:16:50Z', u'code': 500}


n-cpu log [2] indicates that cryptsetup is an unauthorized command:


2017-04-30 16:16:43.530 19568 ERROR nova.virt.libvirt.storage.dmcrypt 
[req-2dcb6768-1edc-4ac7-b00d-7d081478abef tempest-ImageSigningTest-57730532 
tempest-ImageSigningTest-57730532] Could not disconnect encrypted volume 
b2fc6277-1f92-4466-a177-194fa7e8f0c3_disk-dmcrypt. If dm-crypt device is still 
active it will have to be destroyed manually for cleanup to succeed.
2017-04-30 16:16:43.532 19568 ERROR root 
[req-2dcb6768-1edc-4ac7-b00d-7d0814

[Yahoo-eng-team] [Bug 1687727] Re: virtual machine error with \n"| "code": 500

2017-05-03 Thread jichenjc
2017-05-02T11:44:59.355676-04:00 cic-3 nova-api[8656]: 2017-05-02 11:44:59.355 
8656 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": 
{"message": "An unexpected error prevented the server from fulfilling your 
request.", "code": 500, "title": "Internal Server Error"}}
2017-05-02T11:44:59.355962-04:00 cic-3 nova-api[8656]: 2017-05-02 11:44:59.355 
8656 CRITICAL keystonemiddleware.auth_token [-] Unable to validate token: 
Failed to fetch token data from identity server
2017-05-02T11:44:59.356948-04:00 cic-3 nova-api[8656]: 2017-05-02 11:44:59.356 
8656 INFO nova.osapi_compute.wsgi.server [-] 10.88.4.165 "GET /v2.1/extensions 
HTTP/1.1" status: 503 len: 318 time: 0.0228529
2017-05-02T11:45:03.111211-04:00 cic-3 nova-api[8648]: 2017-05-02 11:45:03.110 
8648 INFO nova.osapi_compute.wsgi.server [-] 10.88.4.165 "OPTIONS / HTTP/1.0" 
status: 200 len: 499 time: 0.0004940
2017-05-02T11:45:03.143150-04:00 cic-3 nova-api[8648]: 2017-05-02 11:45:03.142 
8648 INFO nova.osapi_compute.wsgi.server [-] 240.0.0.2 "OPTIONS / HTTP/1.0" 
status: 200 len: 499 time: 0.0003510
2017-05-02T11:45:04.122306-04:00 cic-3 nova-api[8648]: 2017-05-02 11:45:04.122 
8648 INFO nova.osapi_compute.wsgi.server [-] 10.88.4.166 "OPTIONS / HTTP/1.0" 
status: 200 len: 499 time: 0.0005429
2017-05-02T11:45:13.184040-04:00 cic-3 nova-api[8648]: 2017-05-02 11:45:13.183 
8648 INFO nova.osapi_compute.wsgi.server [-] 10.88.4.165 "OPTIONS / HTTP/1.0" 
status: 200 len: 499 time: 0.0005920
2017-05-02T11:45:13.214667-04:00 cic-3 nova-api[8647]: 2017-05-02 11:45:13.214 
8647 INFO nova.osapi_compute.wsgi.server [-] 240.0.0.2 "OPTIONS / HTTP/1.0" 
status: 200 len: 499 time: 0.0005410
2017-05-02T11:45:14.149329-04:00 cic-3 nova-api[8648]: 2017-05-02 11:45:14.148 
8648 INFO nova.osapi_compute.wsgi.server [-] 10.88.4.166 "OPTIONS / HTTP/1.0" 
status: 200 len: 499 time: 0.0005960
2017-05-02T11:45:23.353592-04:00 cic-3 nova-api[8647]: 2017-05-02 11:45:23.353 
8647 INFO nova.osapi_compute.wsgi.server [-] 10.88.4.165 "OPTIONS / HTTP/1.0" 
status: 200 len: 499 time: 0.0005689
2017-05-02T11:45:23.383527-04:00 cic-3 nova-api[8656]: 2017-05-02 11:45:23.383 
8656 INFO nova.osapi_compute.wsgi.server [-] 240.0.0.2 "OPTIONS / HTTP/1.0" 
status: 200 len: 499 time: 0.0004990
2017-05-02T11:45:23.935692-04:00 cic-3 nova-api[8649]: 2017-05-02 11:45:23.935 
8649 INFO nova.osapi_compute.wsgi.server [-] 10.88.4.166 "OPTIONS / HTTP/1.0" 
status: 200 len: 499 time: 0.0004661
2017-05-02T11:45:29.058235-04:00 cic-3 nova-api[8647]: 2017-05-02 11:45:29.057 
8647 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server on 
10.88.4.161:5673 is unreachable: (0, 0): (320) CONNECTION_FORCED - broker 
forced connection closure with reason 'shutdown'. Trying again in 5 seconds.
2017-05-02T11:45:29.066108-04:00 cic-3 nova-api[8648]: 2017-05-02 11:45:29.065 
8648 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server on 
10.88.4.161:5673 is unreachable: (0, 0): (320) CONNECTION_FORCED - broker 
forced connection closure with reason 'shutdown'. Trying again in 5 seconds.
2017-05-02T11:45:30.344488-04:00 cic-3 nova-api[8650]: 2017-05-02 11:45:30.344 
8650 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable 
connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
2017-05-02T11:45:30.351433-04:00 cic-3 nova-api[8650]: 2017-05-02 11:45:30.351 
8650 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server on 
10.88.4.161:5673 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 5 
seconds.
2017-05-02T11:45:31.897745-04:00 cic-3 nova-api[8652]: 2017-05-02 11:45:31.897 
8652 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable 
connection/channel error occurred, trying to reconnect: (0, 0): (320) 
CONNECTION_FO

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1687727

Title:
  virtual machine error with \n"| "code": 500

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  (ECMAC012) PG-NGN: Failed to invoke Create operation for Virtual
  Machine STT_MME1NCB-1.15 due to following error: code = { 50500 }
  message = { server fault } details = { External error. {
  "computeFault": { "message": "Unexpected API Error. Please report this
  at http://bugs.launchpad.net/nova/ and attach the Nova API log if
  possible.\n"| "code": 500
  } }- [Processed by PG Node: sttnfv01-ecmpgngn.viyavi.com]

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1687727/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1652157] Re: privsep configuration is invalid

2017-05-03 Thread ChangBo Guo(gcb)
** Changed in: oslo.rootwrap
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1652157

Title:
  privsep configuration is invalid

Status in neutron:
  Fix Released
Status in oslo.rootwrap:
  Fix Released

Bug description:
  http://logs.openstack.org/76/414176/6/check/gate-devstack-dsvm-py35
  -updown-ubuntu-xenial-
  nv/e100b7f/logs/devstacklog.txt.gz#_2016-12-22_19_44_56_941

  
  2016-12-22 19:44:56.941 | 2016-12-22 19:44:56.940 24861 ERROR 
neutron.agent.ovsdb.impl_vsctl [-] Unable to execute ['ovs-vsctl', 
'--timeout=10', '--oneline', '--format=json', '--', '--id=@manager', 'create', 
'Manager', 'target="ptcp:6640:127.0.0.1"', '--', 'add', 'Open_vSwitch', '.', 
'manager_options', '@manager']. Exception: Failed to spawn rootwrap process.
  2016-12-22 19:44:56.941 | stderr:
  2016-12-22 19:44:56.941 | b'Traceback (most recent call last):\n  File 
"/usr/local/bin/neutron-rootwrap-daemon", line 10, in \n
sys.exit(daemon())\n  File 
"/usr/local/lib/python3.5/dist-packages/oslo_rootwrap/cmd.py", line 57, in 
daemon\nreturn main(run_daemon=True)\n  File 
"/usr/local/lib/python3.5/dist-packages/oslo_rootwrap/cmd.py", line 91, in 
main\nfilters = wrapper.load_filters(config.filters_path)\n  File 
"/usr/local/lib/python3.5/dist-packages/oslo_rootwrap/wrapper.py", line 120, in 
load_filters\nfilterconfig.read(os.path.join(filterdir, filterfile))\n  
File "/usr/lib/python3.5/configparser.py", line 696, in read\n
self._read(fp, filename)\n  File "/usr/lib/python3.5/configparser.py", line 
1089, in _read\nfpname, lineno)\nconfigparser.DuplicateOptionError: While 
reading from \'/etc/neutron/rootwrap.d/privsep.filters\' [line 32]: option 
\'privsep\' in section \'Filters\' already exists\n'

  
  
https://github.com/openstack/neutron/blob/master/etc/neutron/rootwrap.d/privsep.filters#L32-L36
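
  A minimal reproduction of the parse failure outside of rootwrap (illustrative
  filter content, not the actual privsep.filters file):

    import configparser

    cfg = configparser.ConfigParser()   # strict=True by default on python3
    try:
        cfg.read_string("[Filters]\n"
                        "privsep: CommandFilter, privsep-helper, root\n"
                        "privsep: CommandFilter, privsep-helper, root\n")
    except configparser.DuplicateOptionError as exc:
        print(exc)   # same DuplicateOptionError as in the devstack log above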

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1652157/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp