Script 'mail_helper' called by obssrc
Hello community,

here is the log from the commit of package kubevirt for openSUSE:Factory 
checked in at 2023-08-30 10:20:50
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/kubevirt (Old)
 and      /work/SRC/openSUSE:Factory/.kubevirt.new.1766 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "kubevirt"

Wed Aug 30 10:20:50 2023 rev:65 rq:1107892 version:1.0.0

Changes:
--------
--- /work/SRC/openSUSE:Factory/kubevirt/kubevirt.changes        2023-08-17 19:44:07.274853986 +0200
+++ /work/SRC/openSUSE:Factory/.kubevirt.new.1766/kubevirt.changes      2023-08-30 10:23:37.438874119 +0200
@@ -1,0 +2,6 @@
+Mon Aug 28 11:23:30 UTC 2023 - Vasily Ulyanov <vasily.ulya...@suse.com>
+
+- Delete VMI prior to NFS server pod in tests
+  0015-tests-Delete-VMI-prior-to-NFS-server-pod.patch
+
+-------------------------------------------------------------------

New:
----
  0015-tests-Delete-VMI-prior-to-NFS-server-pod.patch

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ kubevirt.spec ++++++
--- /var/tmp/diff_new_pack.GApTxj/_old  2023-08-30 10:23:39.726955790 +0200
+++ /var/tmp/diff_new_pack.GApTxj/_new  2023-08-30 10:23:39.730955933 +0200
@@ -42,6 +42,7 @@
 Patch12:        0012-Wait-for-new-hotplug-attachment-pod-to-be-ready.patch
 Patch13:        0013-Adapt-e2e-tests-to-CDI-1.57.0.patch
 Patch14:        0014-Export-create-populator-compatible-datavolumes-from-.patch
+Patch15:        0015-tests-Delete-VMI-prior-to-NFS-server-pod.patch
 BuildRequires:  glibc-devel-static
 BuildRequires:  golang-packaging
 BuildRequires:  pkgconfig

++++++ 0015-tests-Delete-VMI-prior-to-NFS-server-pod.patch ++++++
From 1d2feb4ac5ac5f26ccd4abf2270caf0599ae893c Mon Sep 17 00:00:00 2001
From: Vasiliy Ulyanov <vulya...@suse.de>
Date: Mon, 21 Aug 2023 11:32:56 +0200
Subject: [PATCH 1/3] tests: Delete VMI prior to NFS server pod

Kubelet fails to umount the NFS volume and gets stuck when performing
pod cleanup if the server is already gone. Ensure that the VMI is
deleted first so that the corresponding volume is released (the current
global test cleanup hook does not guarantee deletion order).

Signed-off-by: Vasiliy Ulyanov <vulya...@suse.de>
---
 tests/storage/storage.go | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/tests/storage/storage.go b/tests/storage/storage.go
index 672ba2355..a4dfc3dd5 100644
--- a/tests/storage/storage.go
+++ b/tests/storage/storage.go
@@ -216,6 +216,11 @@ var _ = SIGDescribe("Storage", func() {
                                var pvName string
                                var nfsPod *k8sv1.Pod
                                AfterEach(func() {
+                                       // Ensure VMI is deleted before bringing down the NFS server
+                                       err = virtClient.VirtualMachineInstance(vmi.Namespace).Delete(context.Background(), vmi.Name, &metav1.DeleteOptions{})
+                                       Expect(err).ToNot(HaveOccurred(), failedDeleteVMI)
+                                       libwait.WaitForVirtualMachineToDisappearWithTimeout(vmi, 120)
+
                                        if targetImagePath != testsuite.HostPathAlpine {
                                                tests.DeleteAlpineWithNonQEMUPermissions()
                                        }
-- 
2.41.0


From 029cb1fd6fee273f5b43615674d8c54143a3ef47 Mon Sep 17 00:00:00 2001
From: Vasiliy Ulyanov <vulya...@suse.de>
Date: Wed, 23 Aug 2023 14:41:57 +0200
Subject: [PATCH 2/3] tests: Delete AlpineWithNonQEMUPermissions image

Previously the disk image was not deleted because targetImagePath was
not set properly, so the condition in AfterEach was never true:

    if targetImagePath != testsuite.HostPathAlpine {
        tests.DeleteAlpineWithNonQEMUPermissions()
    }

Signed-off-by: Vasiliy Ulyanov <vulya...@suse.de>
---
 tests/storage/storage.go | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/tests/storage/storage.go b/tests/storage/storage.go
index a4dfc3dd5..90bb6e871 100644
--- a/tests/storage/storage.go
+++ b/tests/storage/storage.go
@@ -231,11 +231,10 @@ var _ = SIGDescribe("Storage", func() {
                                        var nodeName string
                                        // Start the VirtualMachineInstance with the PVC attached
                                        if storageEngine == "nfs" {
-                                               targetImage := targetImagePath
                                                if !imageOwnedByQEMU {
-                                                       targetImage, nodeName = tests.CopyAlpineWithNonQEMUPermissions()
+                                                       targetImagePath, nodeName = tests.CopyAlpineWithNonQEMUPermissions()
                                                }
-                                               nfsPod = storageframework.InitNFS(targetImage, nodeName)
+                                               nfsPod = storageframework.InitNFS(targetImagePath, nodeName)
                                                pvName = createNFSPvAndPvc(family, nfsPod)
                                        } else {
                                                pvName = tests.DiskAlpineHostPath
-- 
2.41.0


From ca0be7fc564ce755d3e88c6f0ea6afcfc32b60b7 Mon Sep 17 00:00:00 2001
From: Vasiliy Ulyanov <vulya...@suse.de>
Date: Thu, 24 Aug 2023 11:34:47 +0200
Subject: [PATCH 3/3] Hack nfs-server image to run it 'graceless'

The NFS grace period is set to 90 seconds, which stalls clients trying
to access the share right after the server starts. This may affect the
tests and lead to timeouts, so disable the setting.

Signed-off-by: Vasiliy Ulyanov <vulya...@suse.de>
---
 images/nfs-server/BUILD.bazel   | 12 ++++++++++++
 images/nfs-server/entrypoint.sh | 12 ++++++++++++
 2 files changed, 24 insertions(+)
 create mode 100644 images/nfs-server/entrypoint.sh

diff --git a/images/nfs-server/BUILD.bazel b/images/nfs-server/BUILD.bazel
index 343d72cf1..8494fcac5 100644
--- a/images/nfs-server/BUILD.bazel
+++ b/images/nfs-server/BUILD.bazel
@@ -2,6 +2,14 @@ load(
     "@io_bazel_rules_docker//container:container.bzl",
     "container_image",
 )
+load("@rules_pkg//:pkg.bzl", "pkg_tar")
+
+pkg_tar(
+    name = "entrypoint",
+    srcs = [":entrypoint.sh"],
+    mode = "0775",
+    package_dir = "/",
+)
 
 container_image(
     name = "nfs-server-image",
@@ -13,6 +21,7 @@ container_image(
        "@io_bazel_rules_go//go/platform:linux_arm64": "@nfs-server_aarch64//image",
        "//conditions:default": "@nfs-server//image",
    }),
+    cmd = ["/entrypoint.sh"],
     ports = [
         "111/udp",
         "2049/udp",
@@ -25,5 +34,8 @@ container_image(
         "32766/tcp",
         "32767/tcp",
     ],
+    tars = [
+        ":entrypoint",
+    ],
     visibility = ["//visibility:public"],
 )
diff --git a/images/nfs-server/entrypoint.sh b/images/nfs-server/entrypoint.sh
new file mode 100644
index 000000000..aa40154cd
--- /dev/null
+++ b/images/nfs-server/entrypoint.sh
@@ -0,0 +1,12 @@
+#!/bin/bash
+
+set -euxo pipefail
+
+# The NFS grace period is set to 90 seconds, which stalls clients trying
+# to access the share right after the server starts. This may affect the
+# tests and lead to timeouts, so disable the setting.
+sed -i"" \
+    -e "s#Grace_Period = 90#Graceless = true#g" \
+    /opt/start_nfs.sh
+
+exec /opt/start_nfs.sh
-- 
2.41.0
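The pkg_tar target added to BUILD.bazel packages entrypoint.sh into a tar layer with mode 0775 rooted at /, which container_image then unpacks. The effect of that layer can be sketched outside Bazel with plain tar (the temp paths and the stand-in script contents below are placeholders, not the real image build):

```shell
#!/bin/bash
set -euo pipefail

# Placeholder stand-in for images/nfs-server/entrypoint.sh.
workdir=$(mktemp -d)
printf '#!/bin/bash\nexec /opt/start_nfs.sh\n' > "$workdir/entrypoint.sh"
chmod 0775 "$workdir/entrypoint.sh"

# pkg_tar(name = "entrypoint", mode = "0775", package_dir = "/") roughly
# corresponds to an archive holding the script at the archive root.
tar -C "$workdir" -cf "$workdir/entrypoint.tar" entrypoint.sh

# The listing shows the rwxrwxr-x (0775) mode that container_image unpacks at /.
tar -tvf "$workdir/entrypoint.tar"
```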

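The sed rewrite in entrypoint.sh can be exercised in isolation. A minimal sketch, assuming a stand-in file that contains the same `Grace_Period = 90` setting the real /opt/start_nfs.sh carries inside the image:

```shell
#!/bin/bash
set -euo pipefail

# Stand-in for /opt/start_nfs.sh inside the nfs-server image.
cfg=$(mktemp)
printf 'Grace_Period = 90\n' > "$cfg"

# Same substitution the entrypoint performs: switch the NFS server from a
# 90-second grace period to graceless startup.
sed -i"" -e "s#Grace_Period = 90#Graceless = true#g" "$cfg"

cat "$cfg"   # prints: Graceless = true
```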