[Yahoo-eng-team] [Bug 1866269] Re: Testcase 'test_encrypted_cinder_volumes_luks' is broken

2020-05-11 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1866269

Title:
  Testcase 'test_encrypted_cinder_volumes_luks' is broken

Status in OpenStack Compute (nova):
  Expired

Bug description:
  CI job: https://zuul.opendev.org/t/openstack/job/nova-next

  ==============================
  Failed 1 tests - output below:
  ==============================

  tempest.scenario.test_encrypted_cinder_volumes.TestEncryptedCinderVolumes.test_encrypted_cinder_volumes_luks[compute,id-79165fb4-5534-4b9d-8429-97ccffb8f86e,image,slow,volume]
  ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------

  Captured traceback:
  ~~~~~~~~~~~~~~~~~~~
  Traceback (most recent call last):
    File "/opt/stack/tempest/tempest/common/utils/__init__.py", line 89, in wrapper
      return f(*func_args, **func_kwargs)
    File "/opt/stack/tempest/tempest/scenario/test_encrypted_cinder_volumes.py", line 63, in test_encrypted_cinder_volumes_luks
      self.attach_detach_volume(server, volume)
    File "/opt/stack/tempest/tempest/scenario/test_encrypted_cinder_volumes.py", line 53, in attach_detach_volume
      attached_volume = self.nova_volume_attach(server, volume)
    File "/opt/stack/tempest/tempest/scenario/manager.py", line 640, in nova_volume_attach
      volume['id'], 'in-use')
    File "/opt/stack/tempest/tempest/common/waiters.py", line 215, in wait_for_volume_resource_status
      raise lib_exc.TimeoutException(message)
  tempest.lib.exceptions.TimeoutException: Request timed out
  Details: volume 201ccef3-07a9-4b5e-b726-e31c922d068d failed to reach in-use status (current available) within the required time (196 s).
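
  For context, the tempest waiter that raises this exception polls the volume
  status until it reaches the target state or the timeout expires. A minimal
  illustrative sketch of that pattern (not the actual tempest code; the 196 s
  budget is taken from the failure above):

    import time

    def wait_for_status(get_status, target, timeout=196, interval=1):
        # Poll until the resource reports the target status or time runs out.
        start = time.time()
        current = get_status()
        while current != target:
            if time.time() - start > timeout:
                raise TimeoutError(
                    "volume failed to reach %s status (current %s) within "
                    "the required time (%s s)." % (target, current, timeout))
            time.sleep(interval)
            current = get_status()
        return current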

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1866269/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1824858] Re: nova instance remnant left behind after cold migration completes

2020-05-11 Thread melanie witt
** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Also affects: nova/train
   Importance: Undecided
   Status: New

** Also affects: nova/stein
   Importance: Undecided
   Status: New

** Also affects: nova/rocky
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1824858

Title:
  nova instance remnant left behind after cold migration completes

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) queens series:
  New
Status in OpenStack Compute (nova) rocky series:
  New
Status in OpenStack Compute (nova) stein series:
  New
Status in OpenStack Compute (nova) train series:
  New
Status in StarlingX:
  Fix Released

Bug description:
  Brief Description
  -----------------
  After cold migration to a new worker node, instance remnants are left behind
  on the source host.

  
  Severity
  --------
  standard

  
  Steps to Reproduce
  ------------------
  Worker nodes compute-1 and compute-2 have the remote-storage label enabled.
  1. Launch an instance on compute-1
  2. Cold migrate it to compute-2
  3. Confirm the cold migration completes
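
  With the legacy client, steps 2 and 3 map roughly to the following commands
  (illustrative; a cold migration is confirmed through the resize-confirm path):

    nova migrate <server>          # step 2: cold migrate to the other host
    nova resize-confirm <server>   # step 3: confirm the cold migration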

  
  Expected Behavior
  -----------------
  Migration completes to compute-2 and the instance files on compute-1 are
  cleaned up.

  
  Actual Behavior
  ---------------
  At 16:35:24 cold migration for instance a416ead6-a17f-4bb9-9a96-3134b426b069
  completed to compute-2, but the following path is left behind on compute-1:
  compute-1:/var/lib/nova/instances/a416ead6-a17f-4bb9-9a96-3134b426b069

  compute-1:/var/lib/nova/instances$ ls
  a416ead6-a17f-4bb9-9a96-3134b426b069 _base  locks
  a416ead6-a17f-4bb9-9a96-3134b426b069_resize  compute_nodes  lost+found

  
  compute-1:/var/lib/nova/instances$ ls
  a416ead6-a17f-4bb9-9a96-3134b426b069  _base  compute_nodes  locks  lost+found

  compute-1:/var/lib/nova/instances$ ls
  a416ead6-a17f-4bb9-9a96-3134b426b069  _base  compute_nodes  locks  lost+found


  2019-04-15T16:35:24.646749  clear  700.010  Instance tenant2-migration_test-1 owned by tenant2 has been cold-migrated to host compute-2 waiting for confirmation  tenant=7f1d4223-3341-428a-9188-55614770e676.instance=a416ead6-a17f-4bb9-9a96-3134b426b069  critical
  2019-04-15T16:35:24.482575  log  700.168  Cold-Migrate-Confirm complete for instance tenant2-migration_test-1 enabled on host compute-2  tenant=7f1d4223-3341-428a-9188-55614770e676.instance=a416ead6-a17f-4bb9-9a96-3134b426b069  critical
  2019-04-15T16:35:16.815223  log  700.163  Cold-Migrate-Confirm issued by tenant2 against instance tenant2-migration_test-1 owned by tenant2 on host compute-2  tenant=7f1d4223-3341-428a-9188-55614770e676.instance=a416ead6-a17f-4bb9-9a96-3134b426b069  critical
  2019-04-15T16:35:10.030068  clear  700.009  Instance tenant2-migration_test-1 owned by tenant2 is cold migrating from host compute-1  tenant=7f1d4223-3341-428a-9188-55614770e676.instance=a416ead6-a17f-4bb9-9a96-3134b426b069  critical
  2019-04-15T16:35:09.971414  set  700.010  Instance tenant2-migration_test-1 owned by tenant2 has been cold-migrated to host compute-2 waiting for confirmation  tenant=7f1d4223-3341-428a-9188-55614770e676.instance=a416ead6-a17f-4bb9-9a96-3134b426b069  critical
  2019-04-15T16:35:09.970212  log  700.162  Cold-Migrate complete for instance tenant2-migration_test-1 now enabled on host compute-2 waiting for confirmation  tenant=7f1d4223-3341-428a-9188-55614770e676.instance=a416ead6-a17f-4bb9-9a96-3134b426b069  critical
  2019-04-15T16:34:51.637687  set  700.009  Instance tenant2-migration_test-1 owned by tenant2 is cold migrating from host compute-1  tenant=7f1d4223-3341-428a-9188-55614770e676.instance=a416ead6-a17f-4bb9-9a96-3134b426b069  critical
  2019-04-15T16:34:51.637636  log  700.158  Cold-Migrate inprogress for instance tenant2-migration_test-1 from host compute-1  tenant=7f1d4223-3341-428a-9188-55614770e676.instance=a416ead6-a17f-4bb9-9a96-3134b426b069  critical
  2019-04-15T16:34:51.478442  log  700.157  Cold-Migrate issued by tenant2 against instance tenant2-migration_test-1 owned by tenant2 from host compute-1  tenant=7f1d4223-3341-428a-9188-55614770e676.instance=a416ead6-a17f-4bb9-9a96-3134b426b069  critical
  2019-04-15T16:34:20.181155  log  700.101  Instance tenant2-migration_test-1 is enabled on host compute-1  tenant=7f1d4223-3341-428a-9188-55614770e676.instance=a416ead6-a17f-4bb9-9a96-3134b426b069  critical

  
  see nova-compute.log (compute-1)
  compute-1 nova-compute log

  [instance: a416ead6-a17f-4bb9-9a96-3134b426b069 claimed and spawned here on compute-1]

  {"log":"2019-04-15 

[Yahoo-eng-team] [Bug 1807030] Re: Deployment fails due to missing EFI directory on system with no EFI support

2020-05-11 Thread Jeff Lane
** Changed in: maas-cert-server
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1807030

Title:
  Deployment fails due to missing EFI directory on system with no EFI
  support

Status in cloud-init:
  Invalid
Status in curtin:
  Invalid
Status in MAAS:
  Invalid
Status in maas-cert-server:
  Fix Released

Bug description:
  Attempting to deploy various versions of Ubuntu via MAAS 2.4.2
  (7034-g2f5deb8b8-0ubuntu1). I've tried 16.04.5 and 18.04.1 and end up
  with messages in the logs like the ones below:

  Looking in the system settings, this system does not use EFI at all.
  It is purely BIOS mode, yet the installer is complaining about a
  missing efi directory when it fails.

  The machine is successfully commissioned, and commissioning does not
  detect EFI and thus does not create a /boot/efi partition as it is not
  necessary. Watching the node boot via console, it is clearly doing a
  BIOS-mode PXE boot from the NICs; it is not loading an EFI environment
  first.

  
  A search and a skim of the manuals for this model (ProLiant SL230s) also
  show that it has no EFI options available:
  https://support.hpe.com/hpsc/doc/public/display?docId=c03239129
  https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-c03239183

  Installation finished. No error reported.


  Running command ['udevadm', 'settle'] with allowed return codes [0] (capture=False)

  Running command ['umount', '/tmp/tmpmg3cwxp7/target/sys'] with allowed return codes [0] (capture=False)

  Running command ['umount', '/tmp/tmpmg3cwxp7/target/proc'] with allowed return codes [0] (capture=False)

  Running command ['umount', '/tmp/tmpmg3cwxp7/target/dev'] with allowed return codes [0] (capture=False)

  finish: cmd-install/stage-curthooks/builtin/cmd-curthooks: SUCCESS: curtin command curthooks

  start: cmd-install/stage-hook/builtin/cmd-hook: curtin command hook

  Finalizing /tmp/tmpmg3cwxp7/target

  finish: cmd-install/stage-hook/builtin/cmd-hook: SUCCESS: curtin command hook

  curtin: Installation failed with exception: Unexpected error while running command.

  Command: ['grep', 'efi', '/proc/mounts']

  Exit code: 1

  Reason: -

  Stdout: ''

  Stderr: ''
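
  For context, grep exits with status 1 when it finds no match, and a BIOS-only
  host has nothing matching "efi" in /proc/mounts, so the strict command runner
  turns a perfectly normal result into a fatal error. A small illustrative
  sketch of a probe that does not depend on grep's exit code (an assumption
  about how the firmware type could be detected, not the curtin fix):

    import os

    def booted_with_efi():
        # /sys/firmware/efi only exists when the kernel was booted via UEFI.
        return os.path.isdir("/sys/firmware/efi")

    print("EFI boot" if booted_with_efi() else "legacy BIOS boot")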

  A full paste of the install log from the MAAS web ui is here:
  https://pastebin.canonical.com/p/6SCncBtHGd/

  And the node's config data from MAAS can be found here:
  https://pastebin.canonical.com/p/dbV7PTVnYw/

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1807030/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1868997] Re: RFE: option to overwrite allocations for instances

2020-05-11 Thread OpenStack Infra
Reviewed: https://review.opendev.org/715395
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=87936baaac3a1b5227ab779398dc26b4546e1af1
Submitter: Zuul
Branch: master

commit 87936baaac3a1b5227ab779398dc26b4546e1af1
Author: jay 
Date:   Mon Apr 6 16:03:17 2020 +0200

Support for --force flag for nova-manage placement heal_allocations command

Use this flag to forcefully heal allocation for a specific instance

Change-Id: I54147d522c86d858f938df509b333b6af3189e52
Closes-Bug: #1868997


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1868997

Title:
  RFE: option to overwrite allocations for instances

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Add an option to overwrite allocations for instances which already have
  allocations (but which the operator thinks might be wrong); this would
  probably only be safe with a specific instance.

  This is mentioned as TO-DO in nova/cmd/manage.py Line 2126
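
  With the merged change above, forcefully healing a single instance would be
  invoked roughly as follows (illustrative; --force comes from the commit
  message, while the --instance option used here to target one instance is an
  assumption):

    nova-manage placement heal_allocations --instance <instance_uuid> --force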

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1868997/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1878042] [NEW] SRIOV agent does not parse correctly "ip link show <device>"

2020-05-11 Thread Rodolfo Alonso
Public bug reported:

In Red Hat 8.2, iproute2-ss190924, the output for "ip link show <device>"
is http://paste.openstack.org/show/793392/

The regex [1] used does not parse the VF information line and can't
extract the MAC address and the state.

[1] https://github.com/openstack/neutron/blob/2ac52607c266e593700be0784ebadc77789070ff/neutron/plugins/ml2/drivers/mech_sriov/agent/pci_lib.py#L38-L44
[2] https://github.com/openstack/neutron/blob/2ac52607c266e593700be0784ebadc77789070ff/neutron/plugins/ml2/drivers/mech_sriov/agent/pci_lib.py#L178
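
For illustration, the parsing problem looks roughly like the sketch below. The
sample VF line and the regex are assumptions for demonstration, not the agent's
actual pattern in pci_lib.py; the point is that a pattern anchored to one exact
output layout stops matching when iproute2 changes its formatting.

  import re

  # Hypothetical VF line in the style of newer iproute2 releases; the real
  # output of "ip link show <device>" differs between versions.
  sample = ("    vf 0     link/ether fa:16:3e:11:22:33 brd ff:ff:ff:ff:ff:ff,"
            " spoof checking on, link-state auto, trust off")

  # Illustrative pattern only -- not the one used by the SR-IOV agent.
  vf_line = re.compile(
      r"vf\s+(?P<index>\d+)\s+link/ether\s+"
      r"(?P<mac>(?:[0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}).*"
      r"link-state\s+(?P<state>\w+)")

  match = vf_line.search(sample)
  if match:
      print(match.group("index"), match.group("mac"), match.group("state"))
  else:
      print("no match -- the failure mode described above")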

** Affects: neutron
 Importance: Undecided
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1878042

Title:
  SRIOV agent does not parse correctly "ip link show <device>"

Status in neutron:
  New

Bug description:
  In Red Hat 8.2, iproute2-ss190924, the output for "ip link show <device>"
  is http://paste.openstack.org/show/793392/

  The regex [1] used does not parse the VF information line and can't
  extract the MAC address and the state.

  
  [1] https://github.com/openstack/neutron/blob/2ac52607c266e593700be0784ebadc77789070ff/neutron/plugins/ml2/drivers/mech_sriov/agent/pci_lib.py#L38-L44
  [2] https://github.com/openstack/neutron/blob/2ac52607c266e593700be0784ebadc77789070ff/neutron/plugins/ml2/drivers/mech_sriov/agent/pci_lib.py#L178

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1878042/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1878031] [NEW] Unable to delete an instance | Conflict: Port [port-id] is currently a parent port for trunk [trunk-id]

2020-05-11 Thread Nate Johnston
Public bug reported:

When you create a trunk in Neutron you create a parent port for the trunk and
attach the trunk to the parent. Then subports can be created on the trunk.
When instances are created on the trunk, first a port is created and then an
instance is associated with a free port. It looks to me that this is the
oversight in the logic.

From the perspective of the code, the parent port looks like any other port
attached to the trunk bridge. It doesn't have an instance attached to it, so it
looks like it's not being used for anything (which is technically correct). So
it becomes an eligible port for an instance to bind to. That is all fine and
dandy until you go to delete the instance and you get the "Port [port-id] is
currently a parent port for trunk [trunk-id]" exception, just as happened here.
Anecdotally, it seems rare that an instance will actually bind to it, but that
is what happened for the user in this case, and I have had several pings over
the past year about people in a similar state.

I propose that when a port is made the parent port for a trunk, the trunk be
established as the owner of the port. That way it will be ineligible for
instances seeking to bind to it.

See also old bug: https://bugs.launchpad.net/neutron/+bug/1700428
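
As an interim check on the operator side, one can verify whether a port is a
trunk parent before handing it to an instance. A minimal sketch assuming
openstacksdk and a clouds.yaml entry named "mycloud" (illustrative, not the
proposed Neutron fix):

  import openstack

  conn = openstack.connect(cloud="mycloud")

  def is_trunk_parent(port_id):
      # A trunk exposes its parent port as port_id; subports are listed separately.
      return any(trunk.port_id == port_id for trunk in conn.network.trunks())

  port_id = "991e4e50-481a-4ca6-9ea6-69f848c4ca9f"
  if is_trunk_parent(port_id):
      print("port %s is a trunk parent; do not attach an instance to it" % port_id)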

Description of problem:

Attempting to delete the instance failed with an error in nova-compute:

~~~
2020-03-04 09:52:46.257 1 WARNING nova.network.neutronv2.api [req-0dd45fe4-861c-46d3-a5ec-7db36352da58 02c6d1bc10fe4ffaa289c786cd09b146 695c417810ac460480055b074bc41817 - default default] [instance: 2f9e3740-b425-4f00-a949-e1aacf2239c4] Failed to delete port 991e4e50-481a-4ca6-9ea6-69f848c4ca9f for instance.: Conflict: Port 991e4e50-481a-4ca6-9ea6-69f848c4ca9f is currently a parent port for trunk 5800ee0f-b558-46cb-bb0b-92799dbe02cf.
~~~

~~~
[stack@migration-host ~]$ openstack network trunk show 5800ee0f-b558-46cb-bb0b-92799dbe02cf
+-----------------+--------------------------------------+
| Field           | Value                                |
+-----------------+--------------------------------------+
| admin_state_up  | UP                                   |
| created_at      | 2020-03-04T09:01:23Z                 |
| description     |                                      |
| id              | 5800ee0f-b558-46cb-bb0b-92799dbe02cf |
| name            | WIN-TRUNK                            |
| port_id         | 991e4e50-481a-4ca6-9ea6-69f848c4ca9f |
| project_id      | 695c417810ac460480055b074bc41817     |
| revision_number | 3                                    |
| status          | ACTIVE                               |
| sub_ports       |                                      |
| tags            | []                                   |
| tenant_id       | 695c417810ac460480055b074bc41817     |
| updated_at      | 2020-03-04T10:20:46Z                 |
+-----------------+--------------------------------------+


[stack@migration-host ~]$ nova interface-list 2f9e3740-b425-4f00-a949-e1aacf2239c4
+------------+--------------------------------------+--------------------------------------+--------------+-------------------+
| Port State | Port ID                              | Net ID                               | IP addresses | MAC Addr          |
+------------+--------------------------------------+--------------------------------------+--------------+-------------------+
| DOWN       | 991e4e50-481a-4ca6-9ea6-69f848c4ca9f | 9be62c82-4274-48b4-bba0-39ccbdd5bb1b | 192.168.0.19 | fa:16:3e:0a:2b:9b |
+------------+--------------------------------------+--------------------------------------+--------------+-------------------+

[stack@migration-host ~]$ openstack port show 991e4e50-481a-4ca6-9ea6-69f848c4ca9f
+-----------------------+----------------------------+
| Field                 | Value                      |
+-----------------------+----------------------------+
| admin_state_up        | UP                         |
| allowed_address_pairs |                            |
| binding_host_id       | cnibydc01cmp1.pl.cni.local |
| binding_profile       |                            |
| binding_vif_details   | port_filter='True'         |
| binding_vif_type      | ovs                        |
| binding_vnic_type     | normal                     |

[Yahoo-eng-team] [Bug 1878024] [NEW] disk usage of the nova image cache is not counted as used disk space

2020-05-11 Thread Balazs Gibizer
Public bug reported:

Description
===========
The nova-compute service keeps a local image cache of glance images used for
nova servers, to avoid downloading the same image from glance multiple times.
The disk usage of that cache is not counted as local disk usage in nova and is
not reported to placement as used DISK_GB. This leads to disk over-allocation.

Also, the size of that cache cannot be limited by nova configuration, so the
deployer cannot reserve disk space for it with the reserved_host_disk_mb
config option.

Steps to reproduce
==================
* Set up a single-node devstack.
* Create and upload an image with a not-too-small physical size, e.g. an image
  with a 1 GB physical size.
* Check the current disk usage of the host OS and configure
  reserved_host_disk_mb in nova-cpu.conf accordingly.
* Boot two servers from that image with a flavor such as d1 (disk=5G).
* Nova downloads the glance image once into the local cache, which results in
  1 GB of disk usage.
* Nova creates two root file systems, one for each VM. Those disks initially
  have a minimal physical size but a 5 GB virtual size.
* At this point nova has allocated 5G + 5G of DISK_GB in placement, but due to
  the image in the cache the total disk usage of the two VMs plus the cache can
  be 5G + 5G + 1G, if both VMs fully overwrite the contents of their own disks.

Expected result
===============
Option A)
Nova maintains a DISK_GB allocation in placement for the images in its cache.
This way the expected DISK_GB allocation in placement is 5G + 5G + 1G at the
end.

Option B)
Nova provides a config option to limit the maximum size of the image cache, so
that the deployer can include the maximum cache size in reserved_host_disk_mb
when dimensioning the disk space of the compute.
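
A hedged illustration of what Option B amounts to for the deployer today: fold
an estimate of the expected cache size into the reserved disk space. This is
imperfect while the cache is unbounded; the 1024 MB value below is an
assumption sized for the ~1 GB image in the reproduction:

  # nova-cpu.conf (illustrative)
  [DEFAULT]
  # Reserve roughly 1 GB of host disk for the image cache so placement does
  # not hand that space out as DISK_GB inventory.
  reserved_host_disk_mb = 1024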

Actual result
=============
Only 5G + 5G was allocated from placement, so disk space is over-allocated by
the size of the image cache.

Environment
===========

Devstack from recent master

stack@aio:/opt/stack/nova$ git log --oneline | head -n 1
4b62c90063 Merge "Remove stale nested backport from InstancePCIRequests"

libvirt driver with file based image backend

Logs & Configs
==============
http://paste.openstack.org/show/793388/

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: placement resource-tracker

** Tags added: placement resource-tracker


[Yahoo-eng-team] [Bug 1876139] Re: Groovy cloud-images failing during growpart

2020-05-11 Thread Scott Moser
** Changed in: cloud-init
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1876139

Title:
  Groovy cloud-images failing during growpart

Status in cloud-images:
  Invalid
Status in cloud-init:
  Invalid
Status in cloud-utils:
  Fix Committed
Status in cloud-utils package in Ubuntu:
  Fix Released

Bug description:
  Was running on Azure, but I expect this happens on all cloud images.
  We did not see our disk grow as expected on first boot.

  Took a look at /var/log/cloud-init and saw the following:

  2020-04-30 16:04:46,837 - util.py[WARNING]: Failed growpart --dry-run for (/dev/sda, 1)
  2020-04-30 16:04:46,837 - util.py[DEBUG]: Failed growpart --dry-run for (/dev/sda, 1)
  Traceback (most recent call last):
    File "/usr/lib/python3/dist-packages/cloudinit/config/cc_growpart.py", line 145, in resize
      util.subp(["growpart", '--dry-run', diskdev, partnum])
    File "/usr/lib/python3/dist-packages/cloudinit/util.py", line 2084, in subp
      raise ProcessExecutionError(stdout=out, stderr=err,
  cloudinit.util.ProcessExecutionError: Unexpected error while running command.
  Command: ['growpart', '--dry-run', '/dev/sda', '1']
  Exit code: 2
  Reason: -
  Stdout: FAILED: sfdisk not found
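
  The root cause is in the last line: growpart needs sfdisk for the partition
  table rewrite, and the binary is missing from the image, so even the dry run
  fails with exit code 2. A quick illustrative check for the required tools
  (not cloud-init code):

    import shutil

    # growpart is a shell script that shells out to sfdisk, so both must be
    # available on PATH inside the image for cc_growpart to succeed.
    for tool in ("growpart", "sfdisk"):
        print(tool, shutil.which(tool) or "NOT FOUND")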

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-images/+bug/1876139/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1876139] Re: Groovy cloud-images failing during growpart

2020-05-11 Thread Pat Viafore
This is now fixed, thank you

** Changed in: cloud-images
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1876139

Title:
  Groovy cloud-images failing during growpart

Status in cloud-images:
  Invalid
Status in cloud-init:
  Incomplete
Status in cloud-utils:
  Fix Committed
Status in cloud-utils package in Ubuntu:
  Fix Released

Bug description:
  Was running on Azure, but I expect this happens on all cloud images.
  We did not see our disk grow as expected on first boot.

  Took a look at /var/log/cloud-init and saw the following:

  2020-04-30 16:04:46,837 - util.py[WARNING]: Failed growpart --dry-run for (/dev/sda, 1)
  2020-04-30 16:04:46,837 - util.py[DEBUG]: Failed growpart --dry-run for (/dev/sda, 1)
  Traceback (most recent call last):
    File "/usr/lib/python3/dist-packages/cloudinit/config/cc_growpart.py", line 145, in resize
      util.subp(["growpart", '--dry-run', diskdev, partnum])
    File "/usr/lib/python3/dist-packages/cloudinit/util.py", line 2084, in subp
      raise ProcessExecutionError(stdout=out, stderr=err,
  cloudinit.util.ProcessExecutionError: Unexpected error while running command.
  Command: ['growpart', '--dry-run', '/dev/sda', '1']
  Exit code: 2
  Reason: -
  Stdout: FAILED: sfdisk not found

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-images/+bug/1876139/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1877978] [NEW] SNAT Problem Floating IP

2020-05-11 Thread Christopher Wellie
Public bug reported:

Hello,

we have a problem.

When I run two MTRs from one source IP to one floating IP, I see packet loss
on the second MTR.

I have the same problem from a VM to 8.8.8.8: with two MTRs running, my local
neutron gateway then shows packet loss.


http://paste.openstack.org/raw/793371/

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1877978

Title:
  SNAT Problem Floating IP

Status in neutron:
  New

Bug description:
  Hello,

  we have a problem.

  When I run two MTRs from one source IP to one floating IP, I see packet
  loss on the second MTR.

  I have the same problem from a VM to 8.8.8.8: with two MTRs running, my
  local neutron gateway then shows packet loss.

  
  http://paste.openstack.org/raw/793371/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1877978/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1877977] [NEW] [DVR] Recovery from openvswitch restart fails when veth are used for bridges interconnection

2020-05-11 Thread Slawek Kaplonski
Public bug reported:

In the case of DVR routers, when use_veth_interconnection is set to True and
the openvswitch service is restarted, recovery from the restart is not done
correctly and FIPs aren't reachable until neutron-ovs-agent is restarted.

All works fine when patch ports are used for interconnection.
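
For reference, the setup that triggers this uses the non-default veth
interconnection mode of the OVS agent. A sketch of the relevant setting (the
exact section placement in openvswitch_agent.ini is an assumption here):

  # openvswitch_agent.ini (illustrative)
  [ovs]
  # Use veth pairs instead of OVS patch ports to interconnect bridges.
  # The recovery problem described above only appears in this mode.
  use_veth_interconnection = true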

** Affects: neutron
 Importance: Medium
 Status: Confirmed


** Tags: l3-dvr-backlog ovs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1877977

Title:
  [DVR] Recovery from openvswitch restart fails when veth are used for
  bridges interconnection

Status in neutron:
  Confirmed

Bug description:
  In the case of DVR routers, when use_veth_interconnection is set to True and
  the openvswitch service is restarted, recovery from the restart is not done
  correctly and FIPs aren't reachable until neutron-ovs-agent is restarted.

  All works fine when patch ports are used for interconnection.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1877977/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp