[Yahoo-eng-team] [Bug 1791989] Re: grenade-dvr-multinode job fails

2018-10-12 Thread Slawek Kaplonski
Problem solved by https://review.openstack.org/#/c/595490/ in
stable/rocky. Job is again voting and gating so this bug can be closed
now.

** Changed in: neutron
   Status: Confirmed => Fix Committed

** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1791989

Title:
  grenade-dvr-multinode job fails

Status in neutron:
  Fix Released

Bug description:
  This job has been failing quite often in recent days.
  In all cases I checked there was a similar issue in the grenade log,
  for example:
  http://logs.openstack.org/65/600565/2/gate/neutron-grenade-dvr-multinode/07ef603/logs/grenade.sh.txt.gz#_2018-09-11_16_12_33_959

  It has hit more than 70 times during the last 7 days:
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=build_name%3A%5C%22neutron-grenade-dvr-multinode%5C%22%20AND%20build_status%3AFAILURE%20AND%20message%3A%5C%22die%2067%20'%5BFail%5D%20Couldn'%5C%5C''t%20ping%20server%5C%22

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1791989/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1259292] Re: Some tests use assertEqual(observed, expected), the argument order is wrong

2018-10-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/485050
Committed: 
https://git.openstack.org/cgit/openstack/blazar/commit/?id=3fdac0da7dc2ca752b9dd9de854c16e1d8debf1b
Submitter: Zuul
Branch: master

commit 3fdac0da7dc2ca752b9dd9de854c16e1d8debf1b
Author: Kiran_totad 
Date:   Wed Jul 19 11:33:48 2017 +0530

Fix order of arguments in assertEqual

Some tests incorrectly used the order assertEqual(observed, expected).

The correct order expected by testtools is
assertEqual(expected, observed).

Change-Id: Idbb127147df5acc03287b6a7c1f8d24a37fd663e
Closes-Bug: #1259292


** Changed in: blazar
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1259292

Title:
  Some tests use assertEqual(observed, expected), the argument order is
  wrong

Status in Astara:
  Fix Released
Status in Bandit:
  Fix Released
Status in Barbican:
  Fix Released
Status in Blazar:
  Fix Released
Status in Ceilometer:
  Invalid
Status in Cinder:
  Fix Released
Status in congress:
  Fix Released
Status in daisycloud-core:
  New
Status in Designate:
  Fix Released
Status in OpenStack Backup/Restore and DR (Freezer):
  In Progress
Status in Glance:
  Fix Released
Status in glance_store:
  Fix Released
Status in Higgins:
  New
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in Magnum:
  Fix Released
Status in Manila:
  Fix Released
Status in Mistral:
  Fix Released
Status in Murano:
  Fix Released
Status in networking-calico:
  Fix Released
Status in networking-infoblox:
  In Progress
Status in networking-l2gw:
  Fix Released
Status in networking-sfc:
  Fix Released
Status in quark:
  In Progress
Status in OpenStack Compute (nova):
  Won't Fix
Status in os-brick:
  Fix Released
Status in PBR:
  Fix Released
Status in pycadf:
  Fix Released
Status in python-barbicanclient:
  Fix Released
Status in python-ceilometerclient:
  Invalid
Status in python-cinderclient:
  Fix Released
Status in python-designateclient:
  Fix Committed
Status in Glance Client:
  Fix Released
Status in python-mistralclient:
  Fix Released
Status in python-solumclient:
  Fix Released
Status in Python client library for Zaqar:
  Fix Released
Status in Rally:
  In Progress
Status in Sahara:
  Fix Released
Status in Solum:
  Fix Released
Status in sqlalchemy-migrate:
  In Progress
Status in SWIFT:
  In Progress
Status in tacker:
  Fix Released
Status in tempest:
  Invalid
Status in zaqar:
  Fix Released

Bug description:
  The test cases will produce a confusing error message if the tests
  ever fail, so this is worth fixing.
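
  A minimal illustration of why the order matters (testtools treats the
  first argument as the expected value when building the failure
  message):

  import testtools

  class DemoTest(testtools.TestCase):

      def test_argument_order(self):
          expected = 4
          observed = 2 + 2  # the value produced by the code under test
          # Correct order: a failure message reads "expected != observed".
          # Swapping the arguments would label the actual result as the
          # expected value, which is exactly the confusing message this
          # bug is about.
          self.assertEqual(expected, observed)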

To manage notifications about this bug go to:
https://bugs.launchpad.net/astara/+bug/1259292/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1778771] Re: Backups panel is visible even if enable_backup is False

2018-10-12 Thread Edward Hope-Morley
** Changed in: charm-openstack-dashboard
   Status: In Progress => Invalid

** Changed in: charm-openstack-dashboard
 Assignee: Seyeong Kim (xtrusia) => (unassigned)

** Changed in: charm-openstack-dashboard
Milestone: 18.11 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1778771

Title:
  Backups panel is visible even if enable_backup is False

Status in OpenStack openstack-dashboard charm:
  Invalid
Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive queens series:
  Triaged
Status in Ubuntu Cloud Archive rocky series:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in horizon package in Ubuntu:
  Fix Released
Status in horizon source package in Bionic:
  Triaged
Status in horizon source package in Cosmic:
  Fix Released

Bug description:
  Hi,

  Volumes - Backup panel is visible even if OPENSTACK_CINDER_FEATURES =
  {'enable_backup': False} in local_settings.py

  Meanwhile, setting enable_backup to False removes the option to create
  a backup of a volume from the volume drop-down menu, but the Backups
  panel itself stays visible for both admins and users.

  As a work-around I use the following customization script:

  import horizon
  from django.conf import settings

  if not getattr(settings, 'OPENSTACK_CINDER_FEATURES',
                 {}).get('enable_backup', False):
      project = horizon.get_dashboard("project")
      backup = project.get_panel("backups")
      project.unregister(backup.__class__)

  And for a permanent fix I see the following approach. In
  openstack_dashboard/dashboards/project/backups/panel.py make the
  following changes:
  ...
  +L16: from django.conf import settings
  ...
  +L21: if not getattr(settings, 'OPENSTACK_CINDER_FEATURES',
                       {}).get('enable_backup', False):
  +L22:     return False
  ...
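
  In context, the amended panel.py would look roughly like the sketch
  below (the surrounding class is inferred from Horizon's Panel API;
  this is not a merged patch):

  from django.conf import settings
  from django.utils.translation import ugettext_lazy as _

  import horizon

  class Backups(horizon.Panel):
      name = _("Backups")
      slug = 'backups'

      def allowed(self, context):
          # Hide the panel entirely when cinder volume backups are
          # disabled.
          if not getattr(settings, 'OPENSTACK_CINDER_FEATURES',
                         {}).get('enable_backup', False):
              return False
          return super(Backups, self).allowed(context)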

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-openstack-dashboard/+bug/1778771/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1797571] [NEW] Functional tests related to web-download import method time out or fail intermittently

2018-10-12 Thread Abhishek Kekane
Public bug reported:

In the web-download import method functional tests we try to download a
file from 'https://www.openstack.org/assets/openstack-logo/2016R/OpenStack-Logo-Horizontal.eps.zip'
into glance. The tests assume the image will be downloaded and become
active within 20 seconds; if not, it is marked as failed. So far these
tests never fail in a local environment, but external networking will
always be unreliable from the CI environment, which sometimes causes
these tests to time out or fail.

The likely solution is to stop relying on pulling something from the
external network in the test and instead use something hosted on the
local Apache httpd of the test node as the URL to import.

** Affects: glance
 Importance: High
 Assignee: Abhishek Kekane (abhishek-kekane)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1797571

Title:
  Functional tests related to web-download import method time out or
  fail intermittently

Status in Glance:
  New

Bug description:
  In the web-download import method functional tests we try to download
  a file from 'https://www.openstack.org/assets/openstack-logo/2016R/OpenStack-Logo-Horizontal.eps.zip'
  into glance. The tests assume the image will be downloaded and become
  active within 20 seconds; if not, it is marked as failed. So far these
  tests never fail in a local environment, but external networking will
  always be unreliable from the CI environment, which sometimes causes
  these tests to time out or fail.

  The likely solution is to stop relying on pulling something from the
  external network in the test and instead use something hosted on the
  local Apache httpd of the test node as the URL to import.
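
  A sketch of the shape such a test helper could take (the client and
  URL names are illustrative assumptions; the real fix is simply to
  point web-download at a file served by the test node's local httpd):

  import time

  LOCAL_IMAGE_URL = 'http://127.0.0.1/static/logo.zip'  # assumed local asset

  def wait_for_image_active(images_client, image_id, timeout=20, interval=1):
      """Poll an image until it becomes active or the timeout expires."""
      deadline = time.time() + timeout
      while time.time() < deadline:
          image = images_client.get(image_id)
          if image.status == 'active':
              return image
          if image.status == 'killed':
              raise RuntimeError('image import failed for %s' % image_id)
          time.sleep(interval)
      raise RuntimeError('image %s not active after %ss' % (image_id, timeout))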

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1797571/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1797580] [NEW] NoValidHost during live migration after cold migrating to a specified host

2018-10-12 Thread Matt Riedemann
Public bug reported:

I recreated this with a 2-node devstack in stein created yesterday.

1. create a server
2. cold migrate the server to the other host and specify the host:
   nova migrate <server> --host <host>
3. confirm the resize
4. live migrate the server w/o specifying a host so the scheduler has to pick 
one

At this point, you get a NoValidHost error because the scheduler is
restricted to the current host on which the instance is running because
of the requested_destination field that is persisted in the request spec
from step 2:

http://paste.openstack.org/show/731972/

The problem is when cold migrating a server with a specified target
host, compute API stores that on the request spec and sends it to the
conductor to tell the scheduler which host to use:

https://github.com/openstack/nova/blob/20bc0136d0665bafdcd379f19389a0a5ea7bf310/nova/compute/api.py#L3565

But that request spec requested_destination field gets persisted, and
when you then live migrate, it is re-used; since the server is already
on that host, we get NoValidHost, because you can't live migrate to the
same host.

This is a regression in Queens: https://review.openstack.org/#/c/408955/

** Affects: nova
 Importance: High
 Assignee: Matt Riedemann (mriedem)
 Status: Triaged

** Affects: nova/queens
 Importance: High
 Status: Confirmed

** Affects: nova/rocky
 Importance: High
 Status: Confirmed


** Tags: live-migration

** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** Changed in: nova/queens
   Status: New => Confirmed

** Changed in: nova/rocky
   Status: New => Confirmed

** Changed in: nova/rocky
   Importance: Undecided => High

** Changed in: nova/queens
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1797580

Title:
  NoValidHost during live migration after cold migrating to a specified
  host

Status in OpenStack Compute (nova):
  Triaged
Status in OpenStack Compute (nova) queens series:
  Confirmed
Status in OpenStack Compute (nova) rocky series:
  Confirmed

Bug description:
  I recreated this with a 2-node devstack in stein created yesterday.

  1. create a server
  2. cold migrate the server to the other host and specify the host:
     nova migrate <server> --host <host>
  3. confirm the resize
  4. live migrate the server w/o specifying a host so the scheduler has to pick 
one

  At this point, you get a NoValidHost error because the scheduler is
  restricted to the current host on which the instance is running
  because of the requested_destination field that is persisted in the
  request spec from step 2:

  http://paste.openstack.org/show/731972/

  The problem is when cold migrating a server with a specified target
  host, compute API stores that on the request spec and sends it to the
  conductor to tell the scheduler which host to use:

  
https://github.com/openstack/nova/blob/20bc0136d0665bafdcd379f19389a0a5ea7bf310/nova/compute/api.py#L3565

  But that request spec requested_destination field gets persisted, and
  when you then live migrate, it is re-used; since the server is already
  on that host, we get NoValidHost, because you can't live migrate to
  the same host.

  This is a regression in Queens:
  https://review.openstack.org/#/c/408955/
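
  A hypothetical sketch of the kind of fix this implies (method names
  follow oslo.versionedobjects conventions; this is not the merged
  patch): clear the stale destination before the scheduler is asked to
  pick a live migration host.

  def reset_forced_destinations(request_spec):
      """Clear a destination persisted by an earlier targeted migration."""
      if request_spec.obj_attr_is_set('requested_destination'):
          request_spec.requested_destination = None
          # Keep the cleared field from being re-saved with the spec
          # (assumption based on oslo.versionedobjects semantics).
          request_spec.obj_reset_changes(['requested_destination'])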

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1797580/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1797592] [NEW] Progress bar isn't displayed at image upload

2018-10-12 Thread Vadym Markov
Public bug reported:

"Create image" modal windows should render progress bar during image
upload, immediately after "Create image" button pressed. It disappears
when image loaded. Screenshot of expected behavior is attached

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "horizon_image_create_progressbar.png"
   
https://bugs.launchpad.net/bugs/1797592/+attachment/5200459/+files/horizon_image_create_progressbar.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1797592

Title:
  Progress bar isn't displayed at image upload

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  "Create image" modal windows should render progress bar during image
  upload, immediately after "Create image" button pressed. It disappears
  when image loaded. Screenshot of expected behavior is attached

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1797592/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1796593] Re: metadata agent can't use ipv6 addresses for nova_metadata_host

2018-10-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/608468
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=85588ad38e3a08137f4f7b4be98ce271064eb2f0
Submitter: Zuul
Branch: master

commit 85588ad38e3a08137f4f7b4be98ce271064eb2f0
Author: aojeagarcia 
Date:   Sun Oct 7 23:17:08 2018 +0200

Allow Ipv6 addresses for nova_metadata_host

The current logic didn't check whether nova_metadata_host is an IPv6
address, causing the proxy request to fail with an exception because
the resulting URL is not valid.

This patch checks whether nova_metadata_host is an IPv6 address and
builds a valid URL by enclosing the IPv6 address in brackets.

Closes-Bug: #1796593

Change-Id: Ibfebffcec2c8860237a1f151084de978a7863bd8
Signed-off-by: aojeagarcia 


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1796593

Title:
  metadata agent can't use ipv6 addresses for nova_metadata_host

Status in neutron:
  Fix Released

Bug description:
  It's a known issue that metadata services don't work for ipv6-only
  tenant networks [1]

  However, operators may want to use ipv4 and ipv6 tenant networks with
  an underlying ipv6 infrastructure.

  This doesn't work as you can see in the devstack failure [2] because
  the metadata agent, when building the request, doesn't check if the
  nova_metadata_host is an ipv6 address to add the corresponding square
  brackets [3].

  
  [1] https://bugs.launchpad.net/neutron/+bug/1460177
  [2] 
http://logs.openstack.org/68/608168/5/check/tempest-full-py3/d2321db/controller/logs/screen-q-meta.txt.gz#_Oct_07_18_00_37_770322
  [3] 
https://github.com/openstack/neutron/blob/3e579256a36e66960495da2f303b5a6e37f644a6/neutron/agent/metadata/agent.py#L180
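
  The fix amounts to bracket-wrapping the host when it is an IPv6
  literal. A minimal sketch (the function name is an assumption; see the
  commit referenced above for the real change):

  import netaddr

  def build_metadata_url(nova_metadata_host, port, path):
      host = nova_metadata_host
      if netaddr.valid_ipv6(host):
          # RFC 3986: IPv6 literals in URLs must be enclosed in brackets.
          host = '[%s]' % host
      return 'http://%s:%s%s' % (host, port, path)

  # build_metadata_url('fd00::1', 8775, '/latest/meta-data')
  # -> 'http://[fd00::1]:8775/latest/meta-data'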

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1796593/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1788619] Re: disk cachemodes should be restricted with multiattached volumes

2018-10-12 Thread Matt Riedemann
Have you tested this, or are you just guessing that the libvirt driver
in nova isn't doing the right thing? Multiattach disks are always set
to cache mode "none":

https://github.com/openstack/nova/blob/20bc0136d0665bafdcd379f19389a0a5ea7bf310/nova/virt/libvirt/driver.py#L423-L426

# Shareable disks like for a multi-attach volume need to have the
# driver cache disabled.
if getattr(conf, 'shareable', False):
    conf.driver_cache = 'none'
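
For illustration, the same guard run standalone (FakeDiskConf is a
stand-in object for this sketch, not a nova class):

class FakeDiskConf(object):
    shareable = True
    driver_cache = 'writeback'

conf = FakeDiskConf()
if getattr(conf, 'shareable', False):
    conf.driver_cache = 'none'

assert conf.driver_cache == 'none'  # multiattach disks end up uncached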

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1788619

Title:
  disk cachemodes should be restricted with multiattached volumes

Status in Cinder:
  New
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  If using multiattach, the "writeback" and "unsafe" disk_cachemode
  options presumably break the semantics that an application writing to
  a clustered datastore would rely on for data consistency between
  multiple nodes.

  Volumes should not be allowed to attach to multiple instances
  (multiattach) with unsafe cache modes.

  (This may even include writethrough?  I'm not sure.)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1788619/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1788619] Re: disk cachemodes should be restricted with multiattached volumes

2018-10-12 Thread Sean McGinnis
The description only refers to nova settings. I don't see anything here
for Cinder, so I am going to close it as Invalid. If there is
something, please reopen and provide more detail about what Cinder is
doing (or not doing) that it should.

** Changed in: cinder
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1788619

Title:
  disk cachemodes should be restricted with multiattached volumes

Status in Cinder:
  Invalid
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  If using multiattach, the "writeback" and "unsafe" disk_cachemode
  options presumably break the semantics that an application writing to
  a clustered datastore would rely on for data consistency between
  multiple nodes.

  Volumes should not be allowed to attach to multiple instances
  (multiattach) with unsafe cache modes.

  (This may even include writethrough?  I'm not sure.)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1788619/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1789998] Re: ResourceProviderAllocationRetrievalFailed ERROR log message on fresh n-cpu startup

2018-10-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/609552
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=418fc93a10fe18de27c75b522a6afdc15e1c49f2
Submitter: Zuul
Branch: master

commit 418fc93a10fe18de27c75b522a6afdc15e1c49f2
Author: Matt Riedemann 
Date:   Wed Oct 10 17:37:38 2018 -0400

Skip _remove_deleted_instances_allocations if compute is new

If this is the first start of the compute service and the compute node
record does not exist, the resource provider won't exist either. So when
the ResourceTracker._remove_deleted_instances_allocations method is called
it's going to log an ERROR because get_allocations_for_resource_provider
will raise an error since the resource provider doesn't yet exist (that
happens later during RT._update() on the new compute node record).

We can avoid calling _remove_deleted_instances_allocations if we know the
compute node is newly created, so this adds handling for that case.

Tests are updated and an unnecessary mock is removed along the way.

Change-Id: I37e8ad5b14262d801702411c2c87e73550adda70
Closes-Bug: #1789998


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1789998

Title:
  ResourceProviderAllocationRetrievalFailed ERROR log message on fresh
  n-cpu startup

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  As a result of this recent change in stein:

  
https://review.openstack.org/#/c/584598/21/nova/compute/resource_tracker.py@1281

  We now get this error in the n-cpu logs on a fresh startup after the
  compute node record is created in the database but before the resource
  provider is created in placement:

  http://logs.openstack.org/98/584598/21/check/tempest-
  
full/85acbda/controller/logs/screen-n-cpu.txt.gz?level=TRACE#_Aug_29_21_43_10_675029

  Aug 29 21:43:10.675029 ubuntu-xenial-rax-iad-0001643010 nova-
  compute[16853]: ERROR nova.compute.resource_tracker [None req-
  5ee3cf40-9136-42b6-b370-89f6b17ac61a None None] Skipping removal of
  allocations for deleted instances: Failed to retrieve allocations for
  resource provider 6b03ae3f-495d-472a-804b-6cac034f5661: {"errors":
  [{"status": 404, "request_id": "req-
  6ff222c2-be32-471a-8764-d7168e6de73f", "detail": "The resource could
  not be found.\n\n Resource provider '6b03ae3f-495d-472a-804b-
  6cac034f5661' not found: No resource provider with uuid 6b03ae3f-495d-
  472a-804b-6cac034f5661 found  ", "title": "Not Found"}]}:
  ResourceProviderAllocationRetrievalFailed: Failed to retrieve
  allocations for resource provider 6b03ae3f-495d-472a-804b-
  6cac034f5661: {"errors": [{"status": 404, "request_id": "req-
  6ff222c2-be32-471a-8764-d7168e6de73f", "detail": "The resource could
  not be found.\n\n Resource provider '6b03ae3f-495d-472a-804b-
  6cac034f5661' not found: No resource provider with uuid 6b03ae3f-495d-
  472a-804b-6cac034f5661 found  ", "title": "Not Found"}]}

  We could probably pass a flag down to indicate whether the compute
  node is newly created and, if so, ignore that exception when we hit
  it.
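
  A minimal, self-contained sketch (simplified names, not the merged
  patch) of that guard: skip the allocation cleanup when the compute
  node record was created during this startup, because its placement
  resource provider cannot exist yet.

  class ResourceTrackerSketch(object):

      def __init__(self, known_nodes):
          self.known_nodes = set(known_nodes)

      def _init_compute_node(self, nodename):
          """Return True if the compute node record had to be created."""
          if nodename in self.known_nodes:
              return False
          self.known_nodes.add(nodename)
          return True

      def update_available_resource(self, nodename):
          nodename_created = self._init_compute_node(nodename)
          if not nodename_created:
              # Safe: the resource provider already exists in placement.
              self._remove_deleted_instances_allocations(nodename)

      def _remove_deleted_instances_allocations(self, nodename):
          print('reconciling allocations for %s' % nodename)

  rt = ResourceTrackerSketch(known_nodes=[])
  rt.update_available_resource('node1')  # first start: cleanup skipped
  rt.update_available_resource('node1')  # later runs: cleanup happens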

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1789998/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1796959] Re: cloud-init disk_setup creates misaligned partition

2018-10-12 Thread Gregory May
The gdisk package, and therefore sgdisk, was not installed. It seems
this is why the GPT partition was not created correctly. After
installing sgdisk, the partition was created successfully starting from
sector 2048.
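
For reference, a small sketch (assuming the Linux sysfs layout) of how
to check the alignment at issue here:

def partition_start_sector(disk='sdc', part=1):
    """Return the starting sector of a partition via Linux sysfs."""
    path = '/sys/block/%s/%s%d/start' % (disk, disk, part)
    with open(path) as f:
        return int(f.read().strip())

# A start sector of 1 (what sfdisk produced here) is misaligned for
# disks with 4096-byte physical sectors; sgdisk's default of 2048
# (a 1 MiB boundary) is aligned.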

** Changed in: cloud-init
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1796959

Title:
  cloud-init disk_setup creates misaligned partition

Status in cloud-init:
  Invalid

Bug description:
  [Impact]
  Cloud-init's disk_setup module can partition disks on devices. The
  resulting partitions are not usable because sfdisk creates new
  partitions starting at sector 1. It seems either no start-sector
  value is being passed to sfdisk during execution, or an incorrect
  start sector of 1 is being passed.

  [Configuration]
  ### User Data ###
  disk_setup:
    /dev/sdc:
  type: gpt
  layout: auto
  overwrite: True

  [Resulting Logs]
  ### /var/log/cloud-init.log ###
  2018-10-08 12:51:29,732 - cc_disk_setup.py[DEBUG]: Partitioning disks: 
{'/dev/disk/cloud/azure_resource': {'_origname': 'ephemeral0', 'table_type': 
'gpt', 'layout': [100], 'overwrite': True}, '/dev/sdc': {'layout': True, 
'type': 'gpt'}}
  2018-10-08 12:51:29,920 - cc_disk_setup.py[DEBUG]: Checking values for 
/dev/sdc definition
  2018-10-08 12:51:29,921 - cc_disk_setup.py[DEBUG]: Checking if device 
/dev/sdc is a valid device
  2018-10-08 12:51:29,921 - util.py[DEBUG]: Running command ['/usr/bin/lsblk', 
'--pairs', '--output', 'NAME,TYPE,FSTYPE,LABEL', '/dev/sdc', '--nodeps'] with 
allowed return codes [0] (shell=False, capture=True)
  2018-10-08 12:51:29,939 - util.py[DEBUG]: Running command 
['/usr/sbin/blockdev', '--rereadpt', '/dev/sdc'] with allowed return codes [0] 
(shell=False, capture=True)
  2018-10-08 12:51:29,955 - util.py[DEBUG]: Running command 
['/usr/sbin/sfdisk', '-l', '/dev/sdc'] with allowed return codes [0] 
(shell=False, capture=True)
  2018-10-08 12:51:30,040 - util.py[DEBUG]: Running command ['/usr/bin/lsblk', 
'--pairs', '--output', 'NAME,TYPE,FSTYPE,LABEL', '/dev/sdc'] with allowed 
return codes [0] (shell=False, capture=True)
  2018-10-08 12:51:30,045 - util.py[DEBUG]: Running command ['/usr/sbin/blkid', 
'-c', '/dev/null', '/dev/sdc'] with allowed return codes [0, 2] (shell=False, 
capture=True)
  2018-10-08 12:51:30,055 - util.py[DEBUG]: Running command ['/usr/sbin/blkid', 
'-c', '/dev/null', '/dev/sdc'] with allowed return codes [0, 2] (shell=False, 
capture=True)
  2018-10-08 12:51:30,063 - util.py[DEBUG]: Running command 
['/usr/sbin/blockdev', '--getsize64', '/dev/sdc'] with allowed return codes [0] 
(shell=False, capture=True)
  2018-10-08 12:51:30,066 - util.py[DEBUG]: Running command 
['/usr/sbin/blockdev', '--getss', '/dev/sdc'] with allowed return codes [0] 
(shell=False, capture=True)
  2018-10-08 12:51:30,070 - cc_disk_setup.py[DEBUG]: Creating partition table 
on /dev/sdc
  2018-10-08 12:51:30,070 - util.py[DEBUG]: Running command 
['/usr/sbin/sfdisk', '--Linux', '--unit=S', '--force', '/dev/sdc'] with allowed 
return codes [0] (shell=False, capture=True)
  2018-10-08 12:51:30,178 - util.py[DEBUG]: Running command 
['/usr/sbin/blockdev', '--rereadpt', '/dev/sdc'] with allowed return codes [0] 
(shell=False, capture=True)
  2018-10-08 12:51:30,218 - cc_disk_setup.py[DEBUG]: Partition table created 
for /dev/sdc
  2018-10-08 12:51:30,218 - util.py[DEBUG]: Creating partition on /dev/sdc took 
0.298 seconds
  2018-10-08 12:51:30,218 - cc_disk_setup.py[DEBUG]: setting up filesystems: 
[{'device': '/dev/sdc1', 'label': 'data-dsk01', 'filesystem': 'xfs'}]

  2018-10-08 12:51:30,225 - util.py[DEBUG]: Running command 
['/usr/sbin/mkfs.xfs', '/dev/sdc1', '-L', 'data-dsk01'] with allowed return 
codes [0] (shell=Fal$
  2018-10-08 12:51:30,300 - util.py[DEBUG]: Creating fs for /dev/sdc1 took 
0.082 seconds
  2018-10-08 12:51:30,300 - util.py[WARNING]: Failed during filesystem operation
  Failed to exec of '['/usr/sbin/mkfs.xfs', '/dev/sdc1', '-L', 'data-dsk01']':
  Unexpected error while running command.
  Command: ['/usr/sbin/mkfs.xfs', '/dev/sdc1', '-L', 'data-dsk01']
  Exit code: 1
  Reason: -
  Stdout: -
  Stderr: warning: device is not properly aligned /dev/sdc1
  Use -f to force usage of a misaligned device
  2018-10-08 12:51:30,300 - util.py[DEBUG]: Failed during filesystem operation
  Failed to exec of '['/usr/sbin/mkfs.xfs', '/dev/sdc1', '-L', 'data-dsk01']':
  Unexpected error while running command.
  Command: ['/usr/sbin/mkfs.xfs', '/dev/sdc1', '-L', 'data-dsk01']
  Exit code: 1
  Reason: -
  Stdout: -
  Stderr: warning: device is not properly aligned /dev/sdc1
  Use -f to force usage of a misaligned device

  ### fdisk ###
  $fdisk -l /dev/sdc

  Disk /dev/sdc: 10.7 GB, 10737418240 bytes, 20971520 sectors
  Units = sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 4096 bytes
  I/O size (minimum/optimal): 4096 byt

[Yahoo-eng-team] [Bug 1797663] [NEW] refactor def _get_dvr_sync_data from neutron/db/l3_dvr_db.py

2018-10-12 Thread Manjeet Singh Bhatia
Public bug reported:

The function _get_dvr_sync_data in neutron/db/l3_dvr_db.py both fetches
and processes router data, and since it is called for each DVR/HA
router type on update, it becomes very hard to pinpoint issues in such
a massive method. I propose breaking it into two methods,
_get_dvr_sync_data and _process_dvr_sync_data, which will make future
debugging easier.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1797663

Title:
  refactor def _get_dvr_sync_data from neutron/db/l3_dvr_db.py

Status in neutron:
  New

Bug description:
  The function _get_dvr_sync_data in neutron/db/l3_dvr_db.py both
  fetches and processes router data, and since it is called for each
  DVR/HA router type on update, it becomes very hard to pinpoint issues
  in such a massive method. I propose breaking it into two methods,
  _get_dvr_sync_data and _process_dvr_sync_data, which will make future
  debugging easier.
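
  A hypothetical sketch of the proposed split (the method names come
  from this report; the bodies are simplified stand-ins for the real
  l3_dvr_db logic):

  class L3DvrDbSketch(object):

      def _get_dvr_sync_data(self, context, host, agent, router_ids=None):
          # Step 1: only fetch the raw router data.
          routers = self._get_router_info(context, router_ids)
          # Step 2: delegate the heavy per-router post-processing.
          return self._process_dvr_sync_data(context, host, agent, routers)

      def _process_dvr_sync_data(self, context, host, agent, routers):
          # The DVR/HA massaging previously inlined in _get_dvr_sync_data
          # would move here, where it can be debugged in isolation.
          for router in routers:
              router.setdefault('gw_port_host', host)
          return routers

      def _get_router_info(self, context, router_ids):
          # Simplified stand-in for the real DB query.
          return [{'id': rid} for rid in (router_ids or [])]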

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1797663/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1771538] Re: PowerVM config drive path is not secure

2018-10-12 Thread Matthew Edmonds
** Also affects: nova-powervm
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1771538

Title:
  PowerVM config drive path is not secure

Status in OpenStack Compute (nova):
  In Progress
Status in nova-powervm:
  New

Bug description:
  This report is based on the Bandit scanner results and code review.

  1) 
  On 
https://git.openstack.org/cgit/openstack/nova/tree/nova/virt/powervm/media.py?h=refs/heads/master#n44

  43 _VOPT_SIZE_GB = 1
  44 _VOPT_TMPDIR = '/tmp/cfgdrv/'
  45

  We have a hardcoded tmp dir that could be cleaned up after a compute
  node reboot. As mentioned in the TODO, it might be good to use a conf
  option.

  2) 
  On 
https://git.openstack.org/cgit/openstack/nova/tree/nova/virt/powervm/media.py?h=refs/heads/master#n116
  Predictable file name based on a user input is used:
  116    file_name = pvm_util.sanitize_file_name_for_api(
  117        instance.name, prefix='cfg_', suffix='.iso',
  118        max_len=pvm_const.MaxLen.VOPT_NAME)
  Probably we could use instance.uuid for that.
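
  A minimal sketch of the two suggested hardenings together (the
  function and parameter names are illustrative assumptions, not an
  actual patch):

  import tempfile

  def create_cfg_drive_workspace(instance_uuid, base_dir=None):
      # 1) Let a conf option (or the system default) choose the base
      #    directory, and create it securely with mkdtemp instead of
      #    hardcoding '/tmp/cfgdrv/'.
      workdir = tempfile.mkdtemp(prefix='cfgdrv-', dir=base_dir)
      # 2) Name the ISO after the immutable, non-user-controlled uuid
      #    rather than the user-supplied instance name.
      file_name = 'cfg_%s.iso' % instance_uuid
      return workdir, file_name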

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1771538/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1784573] Re: Cannot customize header in login page

2018-10-12 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1784573

Title:
  Cannot customize header in login page

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  In the login page, the footer can be customized using
  _login_form_footer.html [1]. However, the header cannot be
  customized.

  It should also be customizable, in the same way as the footer, via a
  _login_form_header.html template.

  [1]
  
https://github.com/openstack/horizon/blob/master/horizon/templates/auth/_login_page.html#L18

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1784573/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1793756] Re: remote user tests disabled

2018-10-12 Thread Adam Young
After reviewing these tests, I think I can say with confidence that they
are not testing code that we support any longer.  External plugins work
fine, including Kerberos.  These tests were Kerberos specific, but we no
longer support a specific Kerberos plugin, only the External one.  They
were testing that the remote domain was or was not set, but that is
really logic from the mod_auth_gssapi configuration of the apache
server, and not needed for now.  We can revisit if we start seeing
problems with LDAP+Kerberos, but we would probably write new, better
tests if that were ever to happen.

** Changed in: keystone
   Importance: Medium => Wishlist

** Changed in: keystone
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1793756

Title:
  remote user tests disabled

Status in OpenStack Identity (keystone):
  Won't Fix

Bug description:
  In keystone/tests/unit/test_v3_auth.py there are two tests that have
  been commented out because they are unrunnable:

  test_remote_user_with_realm
  and
  test_remote_user_with_default_domain

  These support the External auth mechanism, which should be available
  to people with the LDAP identity backend enabled.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1793756/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1762454] Re: FWaaS: Invalid port error on associating ports (distributed router) to firewall group

2018-10-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/580552
Committed: 
https://git.openstack.org/cgit/openstack/neutron-fwaas/commit/?id=f8e4a193e7930c2e9ef169c6e3be53a3e2a39dbe
Submitter: Zuul
Branch: master

commit f8e4a193e7930c2e9ef169c6e3be53a3e2a39dbe
Author: Yushiro FURUKAWA 
Date:   Fri Jul 6 13:16:40 2018 +0900

Fix associating firewall group with DVR/L3HA port

This commit makes it possible to specify a DVR/L3HA port for a firewall
group. A port with one of the following device_owner values can be
selected when creating/updating a firewall group.

* DVR:  'network:router_interface_distributed'
* L3HA: 'network:ha_router_replicated_interface'

Co-Authored-By: Nguyen Phuong An 
Change-Id: I05f0f652f3e43d5c1ce5ae7933991cf92a418920
Closes-Bug: #1762454


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1762454

Title:
  FWaaS: Invalid port error on associating ports (distributed router) to
  firewall group

Status in neutron:
  Fix Released

Bug description:
  This bug is probably very similar to #1759773.

  Creating a firewall group fails on CentOS 7.4 and OpenStack Ocata with
  fwaas_v2 when using a port of a distributed router. The issue is also
  still present in Queens.
  The validation only accepts "network:router_interface" as
  "device_owner", but not "network:router_interface_distributed".

  The creation of the firewall group itself works, setting a port does
  not:

  # openstack firewall group set --port ff2c03f4-22d9-4d7a-bc7a-9632ba6cd9d8 
oh_noes
  Failed to set firewall group 'oh_noes': Firewall Group Port 
ff2c03f4-22d9-4d7a-bc7a-9632ba6cd9d8 is invalid
  Neutron server returns request_ids: 
['req-8a8a320b-659e-4364-9604-d41e0b04d6ea']

  The port in question:

  # openstack port show ff2c03f4-22d9-4d7a-bc7a-9632ba6cd9d8 -f json
  {
    "allowed_address_pairs": "",
    "extra_dhcp_opts": "",
    "updated_at": "2018-04-09T15:15:07Z",
    "device_owner": "network:router_interface_distributed",
    "revision_number": 9,
    "port_security_enabled": false,
    "fixed_ips": "ip_address='192.168.133.1', 
subnet_id='4d0e4235-a1e8-44c8-9297-e226a65beda6'",
    "id": "ff2c03f4-22d9-4d7a-bc7a-9632ba6cd9d8",
    "security_groups": "",
    "option_value": null,
    "binding_vnic_type": "normal",
    "option_name": null,
    "description": "",
    "qos_policy_id": null,
    "mac_address": "fa:16:3e:75:c8:06",
    "project_id": "4c7effe5f22b4d11ade21982746d650c",
    "status": "ACTIVE",
    "binding_profile": "",
    "binding_vif_type": "distributed",
    "binding_vif_details": "",
    "dns_assignment": "fqdn='host-192-168-133-1.vm.environment.uf0.de.', 
hostname='host-192-168-133-1', ip_address='192.168.133.1'",
    "ip_address": null,
    "device_id": "f305a116-5d6d-4539-883b-117de552d291",
    "name": "",
    "admin_state_up": "UP",
    "network_id": "25b641fb-b104-480c-b347-4b5f66e9bd2b",
    "dns_name": "",
    "created_at": "2018-04-09T15:15:00Z",
    "subnet_id": null,
    "binding_host_id": ""
  }

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1762454/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp