Script 'mail_helper' called by obssrc
Hello community,

here is the log from the commit of package helm for openSUSE:Factory checked in 
at 2026-01-17 14:52:51
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/helm (Old)
 and      /work/SRC/openSUSE:Factory/.helm.new.1928 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "helm"

Sat Jan 17 14:52:51 2026 rev:94 rq:1327467 version:4.0.5

Changes:
--------
--- /work/SRC/openSUSE:Factory/helm/helm.changes        2025-12-27 
12:48:36.712412838 +0100
+++ /work/SRC/openSUSE:Factory/.helm.new.1928/helm.changes      2026-01-17 
14:53:35.680284010 +0100
@@ -1,0 +2,64 @@
+Thu Jan 15 05:58:15 UTC 2026 - Johannes Kastl <[email protected]>
+
+- Update to version 4.0.5:
+  * Notable Changes
+    - Fixed bug where helm uninstall with --keep-history did not
+      supersede previously deployed releases #12556
+    - Fixed rollback error when a manifest is removed in a failed
+      upgrade #13437
+    - Added a check to ensure a CLI plugin does not load with the
+      same name as an existing Helm command
+    - Fixed helm test --logs failure with hook-delete-policy
+      "hook-failed" or "hook-succeeded" #9098
+    - Fixed a bug where empty dependency lists were incorrectly
+      treated as present
+    - Fixed a bug where the watch library watched cluster-wide
+      instead of only the namespaces associated with the objects
+    - Fixed a regression in downloader plugins' environment
+      variables #31612
+    - Fixed bug where the --server-side flag was not respected
+      with helm upgrade --install #31627
+    - For SDK users: exposed the KUBECONFIG environment variable
+      in EnvSettings
+  * Changelog
+    - fix(upgrade): pass --server-side flag to install when using
+      upgrade --install 1b6053d (Evans Mungai)
+    - fix(cli): handle nil config in EnvSettings.Namespace()
+      1e3ee1d (Zadkiel AHARONIAN)
+    - fix(getter): pass settings environment variables 31bd995
+      (Zadkiel AHARONIAN)
+    - test(statuswait): fix Copilot code review suggestion for
+      goroutine in tests 41a6b36 (Mohsen Mottaghi)
+    - test(statuswait): add more tests suggested by Copilot code
+      review 2a2e6f7 (Mohsen Mottaghi)
+    - test(statuswait): add some tests for statuswait 3818c02
+      (Mohsen Mottaghi)
+    - fix: use namespace-scoped watching to avoid cluster-wide LIST
+      permissions 66cab24 (Mohsen Mottaghi)
+    - Use length check for MetaDependencies instead of nil
+      comparison abf2007 (Calvin Bui)
+    - Deal with golint warning with private executeShutdownFunc
+      4b3de18 (Benoit Tigeot)
+    - Code review 3212770 (Benoit Tigeot)
+    - Fix linting issue 417aae9 (Benoit Tigeot)
+    - Update pkg/action/hooks.go 6c838b4 (Michelle Fernandez
+      Bieber)
+    - added check for nil shutdown c5d87f2 (Michelle Fernandez
+      Bieber)
+    - cleaned up empty line 53175b7 (Michelle Fernandez Bieber)
+    - updated comment and made defer of shutdown function return
+      errors as before and not the possible shutdown error d2df1ab
+      (Michelle Fernandez Bieber)
+    - added shutdown hook that is executed after the logs have been
+      retrieved 5b223de (Michelle Fernandez Bieber)
+    - Fix TestCliPluginExitCode e845b68 (tison)
+    - Check plugin name is not used 30bfd57 (tison)
+    - Fix rollback for missing resources 0fd2c41 (Feruzjon
+      Muyassarov)
+    - fix: assign KUBECONFIG environment variable value to
+      env.Kubeconfig b456e27 (LinPr)
+    - fix(rollback): errors.Is instead of string comp e2021f8
+      (Hidde Beydals)
+    - fix(uninstall): supersede deployed releases af7c153 (Hidde
+      Beydals)
+
+-------------------------------------------------------------------

Old:
----
  helm-4.0.4.obscpio

New:
----
  helm-4.0.5.obscpio

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ helm.spec ++++++
--- /var/tmp/diff_new_pack.IFE6fG/_old  2026-01-17 14:53:36.568321031 +0100
+++ /var/tmp/diff_new_pack.IFE6fG/_new  2026-01-17 14:53:36.572321197 +0100
@@ -1,7 +1,7 @@
 #
 # spec file for package helm
 #
-# Copyright (c) 2025 SUSE LLC and contributors
+# Copyright (c) 2026 SUSE LLC and contributors
 #
 # All modifications and additions to the file contributed by third parties
 # remain the property of their copyright owners, unless otherwise agreed
@@ -17,7 +17,7 @@
 
 
 Name:           helm
-Version:        4.0.4
+Version:        4.0.5
 Release:        0
 Summary:        The Kubernetes Package Manager
 License:        Apache-2.0

++++++ _service ++++++
--- /var/tmp/diff_new_pack.IFE6fG/_old  2026-01-17 14:53:36.616323032 +0100
+++ /var/tmp/diff_new_pack.IFE6fG/_new  2026-01-17 14:53:36.628323532 +0100
@@ -5,7 +5,7 @@
     <param name="exclude">.git</param>
     <param name="versionformat">@PARENT_TAG@</param>
     <param name="versionrewrite-pattern">v(.*)</param>
-    <param name="revision">v4.0.4</param>
+    <param name="revision">v4.0.5</param>
     <param name="changesgenerate">enable</param>
   </service>
   <service name="set_version" mode="manual">

++++++ _servicedata ++++++
--- /var/tmp/diff_new_pack.IFE6fG/_old  2026-01-17 14:53:36.664325033 +0100
+++ /var/tmp/diff_new_pack.IFE6fG/_new  2026-01-17 14:53:36.668325200 +0100
@@ -1,6 +1,6 @@
 <servicedata>
 <service name="tar_scm">
                 <param name="url">https://github.com/helm/helm.git</param>
-              <param name="changesrevision">8650e1dad9e6ae38b41f60b712af9218a0d8cc11</param></service></servicedata>
+              <param name="changesrevision">1b6053d48b51673c5581973f5ae7e104f627fcf5</param></service></servicedata>
 (No newline at EOF)
 

++++++ helm-4.0.4.obscpio -> helm-4.0.5.obscpio ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/helm-4.0.4/internal/plugin/runtime.go 
new/helm-4.0.5/internal/plugin/runtime.go
--- old/helm-4.0.4/internal/plugin/runtime.go   2025-12-13 01:39:50.000000000 
+0100
+++ new/helm-4.0.5/internal/plugin/runtime.go   2026-01-14 23:43:46.000000000 
+0100
@@ -53,13 +53,13 @@
        return config, nil
 }
 
-// parseEnv takes a list of "KEY=value" environment variable strings
+// ParseEnv takes a list of "KEY=value" environment variable strings
 // and transforms the result into a map[KEY]=value
 //
 // - empty input strings are ignored
 // - input strings with no value are stored as empty strings
 // - duplicate keys overwrite earlier values
-func parseEnv(env []string) map[string]string {
+func ParseEnv(env []string) map[string]string {
        result := make(map[string]string, len(env))
        for _, envVar := range env {
                parts := strings.SplitN(envVar, "=", 2)
@@ -75,7 +75,9 @@
        return result
 }
 
-func formatEnv(env map[string]string) []string {
+// FormatEnv takes a map[KEY]=value and transforms it into
+// a list of "KEY=value" environment variable strings
+func FormatEnv(env map[string]string) []string {
        result := make([]string, 0, len(env))
        for key, value := range env {
                result = append(result, fmt.Sprintf("%s=%s", key, value))
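The hunk above promotes parseEnv/formatEnv to the exported ParseEnv/FormatEnv
so that other packages in the module (pkg/getter further below) can reuse
them; being under internal/, they still cannot be imported from outside the
helm module. A minimal sketch of the documented semantics, written as if it
lived inside that package:

  // Sketch only: exercises the behaviour documented on ParseEnv above
  // (empty entries dropped, value-less entries stored as "", later
  // duplicates win) and the FormatEnv inverse.
  package plugin

  import "fmt"

  func demoEnvRoundTrip() {
          env := ParseEnv([]string{"", "FOO", "A=1", "A=2"})
          fmt.Println(env["FOO"] == "") // true: "FOO" had no value
          fmt.Println(env["A"])         // "2": the later duplicate wins
          fmt.Println(len(env))         // 2: the empty entry was ignored
          fmt.Println(FormatEnv(env))   // e.g. [FOO= A=2], order not guaranteed
  }
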
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/helm-4.0.4/internal/plugin/runtime_extismv1.go 
new/helm-4.0.5/internal/plugin/runtime_extismv1.go
--- old/helm-4.0.4/internal/plugin/runtime_extismv1.go  2025-12-13 
01:39:50.000000000 +0100
+++ new/helm-4.0.5/internal/plugin/runtime_extismv1.go  2026-01-14 
23:43:46.000000000 +0100
@@ -259,7 +259,7 @@
                mc = mc.WithStderr(input.Stderr)
        }
        if len(input.Env) > 0 {
-               env := parseEnv(input.Env)
+               env := ParseEnv(input.Env)
                for k, v := range env {
                        mc = mc.WithEnv(k, v)
                }
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/helm-4.0.4/internal/plugin/runtime_subprocess.go 
new/helm-4.0.5/internal/plugin/runtime_subprocess.go
--- old/helm-4.0.4/internal/plugin/runtime_subprocess.go        2025-12-13 
01:39:50.000000000 +0100
+++ new/helm-4.0.5/internal/plugin/runtime_subprocess.go        2026-01-14 
23:43:46.000000000 +0100
@@ -139,7 +139,7 @@
                return nil
        }
 
-       env := parseEnv(os.Environ())
+       env := ParseEnv(os.Environ())
        maps.Insert(env, maps.All(r.EnvVars))
        env["HELM_PLUGIN_NAME"] = r.metadata.Name
        env["HELM_PLUGIN_DIR"] = r.pluginDir
@@ -150,7 +150,7 @@
        }
 
        cmd := exec.Command(main, argv...)
-       cmd.Env = formatEnv(env)
+       cmd.Env = FormatEnv(env)
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
 
@@ -198,9 +198,9 @@
 
        cmds := r.RuntimeConfig.PlatformCommand
 
-       env := parseEnv(os.Environ())
+       env := ParseEnv(os.Environ())
        maps.Insert(env, maps.All(r.EnvVars))
-       maps.Insert(env, maps.All(parseEnv(input.Env)))
+       maps.Insert(env, maps.All(ParseEnv(input.Env)))
        env["HELM_PLUGIN_NAME"] = r.metadata.Name
        env["HELM_PLUGIN_DIR"] = r.pluginDir
 
@@ -210,7 +210,7 @@
        }
 
        cmd := exec.Command(command, args...)
-       cmd.Env = formatEnv(env)
+       cmd.Env = FormatEnv(env)
 
        cmd.Stdin = input.Stdin
        cmd.Stdout = input.Stdout
@@ -231,9 +231,9 @@
                return nil, fmt.Errorf("plugin %q input message does not 
implement InputMessagePostRendererV1", r.metadata.Name)
        }
 
-       env := parseEnv(os.Environ())
+       env := ParseEnv(os.Environ())
        maps.Insert(env, maps.All(r.EnvVars))
-       maps.Insert(env, maps.All(parseEnv(input.Env)))
+       maps.Insert(env, maps.All(ParseEnv(input.Env)))
        env["HELM_PLUGIN_NAME"] = r.metadata.Name
        env["HELM_PLUGIN_DIR"] = r.pluginDir
 
@@ -261,7 +261,7 @@
        postRendered := &bytes.Buffer{}
        stderr := &bytes.Buffer{}
 
-       cmd.Env = formatEnv(env)
+       cmd.Env = FormatEnv(env)
        cmd.Stdout = postRendered
        cmd.Stderr = stderr
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/helm-4.0.4/internal/plugin/runtime_subprocess_getter.go 
new/helm-4.0.5/internal/plugin/runtime_subprocess_getter.go
--- old/helm-4.0.4/internal/plugin/runtime_subprocess_getter.go 2025-12-13 
01:39:50.000000000 +0100
+++ new/helm-4.0.5/internal/plugin/runtime_subprocess_getter.go 2026-01-14 
23:43:46.000000000 +0100
@@ -56,9 +56,9 @@
                return nil, fmt.Errorf("no downloader found for protocol %q", 
msg.Protocol)
        }
 
-       env := parseEnv(os.Environ())
+       env := ParseEnv(os.Environ())
        maps.Insert(env, maps.All(r.EnvVars))
-       maps.Insert(env, maps.All(parseEnv(input.Env)))
+       maps.Insert(env, maps.All(ParseEnv(input.Env)))
        env["HELM_PLUGIN_NAME"] = r.metadata.Name
        env["HELM_PLUGIN_DIR"] = r.pluginDir
        env["HELM_PLUGIN_USERNAME"] = msg.Options.Username
@@ -83,7 +83,7 @@
        cmd := exec.Command(
                pluginCommand,
                args...)
-       cmd.Env = formatEnv(env)
+       cmd.Env = FormatEnv(env)
        cmd.Stdout = &buf
        cmd.Stderr = os.Stderr
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/helm-4.0.4/internal/plugin/runtime_test.go 
new/helm-4.0.5/internal/plugin/runtime_test.go
--- old/helm-4.0.4/internal/plugin/runtime_test.go      2025-12-13 
01:39:50.000000000 +0100
+++ new/helm-4.0.5/internal/plugin/runtime_test.go      2026-01-14 
23:43:46.000000000 +0100
@@ -56,7 +56,7 @@
 
        for name, tc := range testCases {
                t.Run(name, func(t *testing.T) {
-                       result := parseEnv(tc.env)
+                       result := ParseEnv(tc.env)
                        assert.Equal(t, tc.expected, result)
                })
        }
@@ -93,7 +93,7 @@
 
        for name, tc := range testCases {
                t.Run(name, func(t *testing.T) {
-                       result := formatEnv(tc.env)
+                       result := FormatEnv(tc.env)
                        assert.ElementsMatch(t, tc.expected, result)
                })
        }
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/helm-4.0.4/pkg/action/hooks.go 
new/helm-4.0.5/pkg/action/hooks.go
--- old/helm-4.0.4/pkg/action/hooks.go  2025-12-13 01:39:50.000000000 +0100
+++ new/helm-4.0.5/pkg/action/hooks.go  2026-01-14 23:43:46.000000000 +0100
@@ -33,6 +33,27 @@
 
 // execHook executes all of the hooks for the given hook event.
 func (cfg *Configuration) execHook(rl *release.Release, hook 
release.HookEvent, waitStrategy kube.WaitStrategy, timeout time.Duration, 
serverSideApply bool) error {
+       shutdown, err := cfg.execHookWithDelayedShutdown(rl, hook, 
waitStrategy, timeout, serverSideApply)
+       if shutdown == nil {
+               return err
+       }
+       if err != nil {
+               if err := shutdown(); err != nil {
+                       return err
+               }
+               return err
+       }
+       return shutdown()
+}
+
+type ExecuteShutdownFunc = func() error
+
+func shutdownNoOp() error {
+       return nil
+}
+
+// execHookWithDelayedShutdown executes all of the hooks for the given hook 
event and returns a shutdownHook function to trigger deletions after doing 
other things like e.g. retrieving logs.
+func (cfg *Configuration) execHookWithDelayedShutdown(rl *release.Release, 
hook release.HookEvent, waitStrategy kube.WaitStrategy, timeout time.Duration, 
serverSideApply bool) (ExecuteShutdownFunc, error) {
        executingHooks := []*release.Hook{}
 
        for _, h := range rl.Hooks {
@@ -51,12 +72,12 @@
                cfg.hookSetDeletePolicy(h)
 
                if err := cfg.deleteHookByPolicy(h, 
release.HookBeforeHookCreation, waitStrategy, timeout); err != nil {
-                       return err
+                       return shutdownNoOp, err
                }
 
                resources, err := 
cfg.KubeClient.Build(bytes.NewBufferString(h.Manifest), true)
                if err != nil {
-                       return fmt.Errorf("unable to build kubernetes object 
for %s hook %s: %w", hook, h.Path, err)
+                       return shutdownNoOp, fmt.Errorf("unable to build 
kubernetes object for %s hook %s: %w", hook, h.Path, err)
                }
 
                // Record the time at which the hook was applied to the cluster
@@ -77,12 +98,12 @@
                        kube.ClientCreateOptionServerSideApply(serverSideApply, 
false)); err != nil {
                        h.LastRun.CompletedAt = time.Now()
                        h.LastRun.Phase = release.HookPhaseFailed
-                       return fmt.Errorf("warning: Hook %s %s failed: %w", 
hook, h.Path, err)
+                       return shutdownNoOp, fmt.Errorf("warning: Hook %s %s 
failed: %w", hook, h.Path, err)
                }
 
                waiter, err := cfg.KubeClient.GetWaiter(waitStrategy)
                if err != nil {
-                       return fmt.Errorf("unable to get waiter: %w", err)
+                       return shutdownNoOp, fmt.Errorf("unable to get waiter: 
%w", err)
                }
                // Watch hook resources until they have completed
                err = waiter.WatchUntilReady(resources, timeout)
@@ -98,36 +119,38 @@
                        }
                        // If a hook is failed, check the annotation of the 
hook to determine whether the hook should be deleted
                        // under failed condition. If so, then clear the 
corresponding resource object in the hook
-                       if errDeleting := cfg.deleteHookByPolicy(h, 
release.HookFailed, waitStrategy, timeout); errDeleting != nil {
-                               // We log the error here as we want to 
propagate the hook failure upwards to the release object.
-                               log.Printf("error deleting the hook resource on 
hook failure: %v", errDeleting)
-                       }
-
-                       // If a hook is failed, check the annotation of the 
previous successful hooks to determine whether the hooks
-                       // should be deleted under succeeded condition.
-                       if err := cfg.deleteHooksByPolicy(executingHooks[0:i], 
release.HookSucceeded, waitStrategy, timeout); err != nil {
+                       return func() error {
+                               if errDeleting := cfg.deleteHookByPolicy(h, 
release.HookFailed, waitStrategy, timeout); errDeleting != nil {
+                                       // We log the error here as we want to 
propagate the hook failure upwards to the release object.
+                                       log.Printf("error deleting the hook 
resource on hook failure: %v", errDeleting)
+                               }
+
+                               // If a hook is failed, check the annotation of 
the previous successful hooks to determine whether the hooks
+                               // should be deleted under succeeded condition.
+                               if err := 
cfg.deleteHooksByPolicy(executingHooks[0:i], release.HookSucceeded, 
waitStrategy, timeout); err != nil {
+                                       return err
+                               }
                                return err
-                       }
-
-                       return err
+                       }, err
                }
                h.LastRun.Phase = release.HookPhaseSucceeded
        }
 
-       // If all hooks are successful, check the annotation of each hook to 
determine whether the hook should be deleted
-       // or output should be logged under succeeded condition. If so, then 
clear the corresponding resource object in each hook
-       for i := len(executingHooks) - 1; i >= 0; i-- {
-               h := executingHooks[i]
-               if err := cfg.outputLogsByPolicy(h, rl.Namespace, 
release.HookOutputOnSucceeded); err != nil {
-                       // We log here as we still want to attempt hook 
resource deletion even if output logging fails.
-                       log.Printf("error outputting logs for hook failure: 
%v", err)
-               }
-               if err := cfg.deleteHookByPolicy(h, release.HookSucceeded, 
waitStrategy, timeout); err != nil {
-                       return err
+       return func() error {
+               // If all hooks are successful, check the annotation of each 
hook to determine whether the hook should be deleted
+               // or output should be logged under succeeded condition. If so, 
then clear the corresponding resource object in each hook
+               for i := len(executingHooks) - 1; i >= 0; i-- {
+                       h := executingHooks[i]
+                       if err := cfg.outputLogsByPolicy(h, rl.Namespace, 
release.HookOutputOnSucceeded); err != nil {
+                               // We log here as we still want to attempt hook 
resource deletion even if output logging fails.
+                               log.Printf("error outputting logs for hook 
failure: %v", err)
+                       }
+                       if err := cfg.deleteHookByPolicy(h, 
release.HookSucceeded, waitStrategy, timeout); err != nil {
+                               return err
+                       }
                }
-       }
-
-       return nil
+               return nil
+       }, nil
 }
 
 // hookByWeight is a sorter for hooks
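The refactor above separates running hooks from deleting their resources:
execHookWithDelayedShutdown returns an ExecuteShutdownFunc closure so that a
caller such as helm test --logs can fetch pod logs before the delete policies
fire, while execHook keeps the old eager behaviour. A sketch of consuming
that contract (the surrounding names are taken from the hunks above;
runTestHooksAndLogs itself is hypothetical):

  // Sketch only. The named return lets the deferred shutdown surface its
  // error without clobbering an earlier hook failure.
  func runTestHooksAndLogs(cfg *Configuration, rel *release.Release) (err error) {
          shutdown, err := cfg.execHookWithDelayedShutdown(rel, release.HookTest,
                  kube.StatusWatcherStrategy, 5*time.Minute, false)
          defer func() {
                  if shutdown == nil {
                          return
                  }
                  if shutdownErr := shutdown(); shutdownErr != nil && err == nil {
                          err = shutdownErr // deletion failed and nothing else did
                  }
          }()
          if err != nil {
                  return err // hook failure; the defer still runs shutdown
          }
          // ... retrieve test pod logs here, before hook resources are deleted ...
          return nil
  }
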
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/helm-4.0.4/pkg/action/package.go 
new/helm-4.0.5/pkg/action/package.go
--- old/helm-4.0.4/pkg/action/package.go        2025-12-13 01:39:50.000000000 
+0100
+++ new/helm-4.0.5/pkg/action/package.go        2026-01-14 23:43:46.000000000 
+0100
@@ -103,7 +103,7 @@
                ch.Metadata.AppVersion = p.AppVersion
        }
 
-       if reqs := ac.MetaDependencies(); reqs != nil {
+       if reqs := ac.MetaDependencies(); len(reqs) > 0 {
                if err := CheckDependencies(ch, reqs); err != nil {
                        return "", err
                }
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/helm-4.0.4/pkg/action/release_testing.go 
new/helm-4.0.5/pkg/action/release_testing.go
--- old/helm-4.0.4/pkg/action/release_testing.go        2025-12-13 
01:39:50.000000000 +0100
+++ new/helm-4.0.5/pkg/action/release_testing.go        2026-01-14 
23:43:46.000000000 +0100
@@ -57,24 +57,24 @@
 }
 
 // Run executes 'helm test' against the given release.
-func (r *ReleaseTesting) Run(name string) (ri.Releaser, error) {
+func (r *ReleaseTesting) Run(name string) (ri.Releaser, ExecuteShutdownFunc, 
error) {
        if err := r.cfg.KubeClient.IsReachable(); err != nil {
-               return nil, err
+               return nil, shutdownNoOp, err
        }
 
        if err := chartutil.ValidateReleaseName(name); err != nil {
-               return nil, fmt.Errorf("releaseTest: Release name is invalid: 
%s", name)
+               return nil, shutdownNoOp, fmt.Errorf("releaseTest: Release name 
is invalid: %s", name)
        }
 
        // finds the non-deleted release with the given name
        reli, err := r.cfg.Releases.Last(name)
        if err != nil {
-               return reli, err
+               return reli, shutdownNoOp, err
        }
 
        rel, err := releaserToV1Release(reli)
        if err != nil {
-               return rel, err
+               return reli, shutdownNoOp, err
        }
 
        skippedHooks := []*release.Hook{}
@@ -102,14 +102,16 @@
        }
 
        serverSideApply := rel.ApplyMethod == 
string(release.ApplyMethodServerSideApply)
-       if err := r.cfg.execHook(rel, release.HookTest, 
kube.StatusWatcherStrategy, r.Timeout, serverSideApply); err != nil {
+       shutdown, err := r.cfg.execHookWithDelayedShutdown(rel, 
release.HookTest, kube.StatusWatcherStrategy, r.Timeout, serverSideApply)
+
+       if err != nil {
                rel.Hooks = append(skippedHooks, rel.Hooks...)
-               r.cfg.Releases.Update(rel)
-               return rel, err
+               r.cfg.Releases.Update(reli)
+               return reli, shutdown, err
        }
 
        rel.Hooks = append(skippedHooks, rel.Hooks...)
-       return rel, r.cfg.Releases.Update(rel)
+       return reli, shutdown, r.cfg.Releases.Update(reli)
 }
 
 // GetPodLogs will write the logs for all test pods in the given release into
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/helm-4.0.4/pkg/action/rollback.go 
new/helm-4.0.5/pkg/action/rollback.go
--- old/helm-4.0.4/pkg/action/rollback.go       2025-12-13 01:39:50.000000000 
+0100
+++ new/helm-4.0.5/pkg/action/rollback.go       2026-01-14 23:43:46.000000000 
+0100
@@ -18,8 +18,8 @@
 
 import (
        "bytes"
+       "errors"
        "fmt"
-       "strings"
        "time"
 
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -28,6 +28,7 @@
        "helm.sh/helm/v4/pkg/kube"
        "helm.sh/helm/v4/pkg/release/common"
        release "helm.sh/helm/v4/pkg/release/v1"
+       "helm.sh/helm/v4/pkg/storage/driver"
 )
 
 // Rollback is the action for rolling back to a given release.
@@ -278,7 +279,7 @@
        }
 
        deployed, err := r.cfg.Releases.DeployedAll(currentRelease.Name)
-       if err != nil && !strings.Contains(err.Error(), "has no deployed 
releases") {
+       if err != nil && !errors.Is(err, driver.ErrNoDeployedReleases) {
                return nil, err
        }
        // Supersede all previous deployments, see issue #2941.
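The rollback fix above replaces a match on the rendered error message with
errors.Is against the driver.ErrNoDeployedReleases sentinel, which also
survives wrapping with %w. A self-contained illustration using only the
standard library (the sentinel here is a stand-in, not helm's own):

  package main

  import (
          "errors"
          "fmt"
          "strings"
  )

  // Stand-in for a sentinel like driver.ErrNoDeployedReleases.
  var errNoDeployed = errors.New("has no deployed releases")

  func main() {
          // Wrapping with %w keeps the sentinel detectable via errors.Is.
          err := fmt.Errorf("rollback of %q: %w", "myrelease", errNoDeployed)
          fmt.Println(errors.Is(err, errNoDeployed)) // true

          // Matching on the message is fragile: any rewording breaks it.
          reworded := errors.New("no deployed releases found")
          fmt.Println(strings.Contains(reworded.Error(), "has no deployed releases")) // false
  }
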
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/helm-4.0.4/pkg/action/uninstall.go 
new/helm-4.0.5/pkg/action/uninstall.go
--- old/helm-4.0.4/pkg/action/uninstall.go      2025-12-13 01:39:50.000000000 
+0100
+++ new/helm-4.0.5/pkg/action/uninstall.go      2026-01-14 23:43:46.000000000 
+0100
@@ -188,6 +188,25 @@
                u.cfg.Logger().Debug("uninstall: Failed to store updated 
release", slog.Any("error", err))
        }
 
+       // Supersede all previous deployments, see issue #12556 (which is a
+       // variation on #2941).
+       deployed, err := u.cfg.Releases.DeployedAll(name)
+       if err != nil && !errors.Is(err, driver.ErrNoDeployedReleases) {
+               return nil, err
+       }
+       for _, reli := range deployed {
+               rel, err := releaserToV1Release(reli)
+               if err != nil {
+                       return nil, err
+               }
+
+               u.cfg.Logger().Debug("superseding previous deployment", 
"version", rel.Version)
+               rel.Info.Status = common.StatusSuperseded
+               if err := u.cfg.Releases.Update(rel); err != nil {
+                       u.cfg.Logger().Debug("uninstall: Failed to store 
updated release", slog.Any("error", err))
+               }
+       }
+
        if len(errs) > 0 {
                return res, fmt.Errorf("uninstallation completed with %d 
error(s): %w", len(errs), joinErrors(errs, "; "))
        }
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/helm-4.0.4/pkg/cli/environment.go 
new/helm-4.0.5/pkg/cli/environment.go
--- old/helm-4.0.4/pkg/cli/environment.go       2025-12-13 01:39:50.000000000 
+0100
+++ new/helm-4.0.5/pkg/cli/environment.go       2026-01-14 23:43:46.000000000 
+0100
@@ -99,6 +99,7 @@
        env := &EnvSettings{
                namespace:                 os.Getenv("HELM_NAMESPACE"),
                MaxHistory:                envIntOr("HELM_MAX_HISTORY", 
defaultMaxHistory),
+               KubeConfig:                os.Getenv("KUBECONFIG"),
                KubeContext:               os.Getenv("HELM_KUBECONTEXT"),
                KubeToken:                 os.Getenv("HELM_KUBETOKEN"),
                KubeAsUser:                os.Getenv("HELM_KUBEASUSER"),
@@ -274,8 +275,10 @@
 
 // Namespace gets the namespace from the configuration
 func (s *EnvSettings) Namespace() string {
-       if ns, _, err := s.config.ToRawKubeConfigLoader().Namespace(); err == 
nil {
-               return ns
+       if s.config != nil {
+               if ns, _, err := s.config.ToRawKubeConfigLoader().Namespace(); 
err == nil {
+                       return ns
+               }
        }
        if s.namespace != "" {
                return s.namespace
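The guard above fixes a nil-pointer panic for SDK users who construct
EnvSettings without wiring up the kubeconfig flags: calling a method through
a nil interface panics, so the kubeconfig lookup now only runs when s.config
is set. A simplified standalone sketch of the pattern (the types and the
"default" fallback are illustrative, not helm's exact code):

  package main

  import "fmt"

  type configLoader interface{ CurrentNamespace() string }

  type settings struct {
          config    configLoader // may be nil when built outside the CLI
          namespace string
  }

  func (s *settings) Namespace() string {
          if s.config != nil { // without this guard, a nil interface panics
                  return s.config.CurrentNamespace()
          }
          if s.namespace != "" {
                  return s.namespace
          }
          return "default"
  }

  func main() {
          s := &settings{namespace: "team-a"}
          fmt.Println(s.Namespace()) // "team-a", no panic despite nil config
  }
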
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/helm-4.0.4/pkg/cmd/install.go 
new/helm-4.0.5/pkg/cmd/install.go
--- old/helm-4.0.4/pkg/cmd/install.go   2025-12-13 01:39:50.000000000 +0100
+++ new/helm-4.0.5/pkg/cmd/install.go   2026-01-14 23:43:46.000000000 +0100
@@ -275,7 +275,7 @@
                slog.Warn("this chart is deprecated")
        }
 
-       if req := ac.MetaDependencies(); req != nil {
+       if req := ac.MetaDependencies(); len(req) > 0 {
                // If CheckDependencies returns an error, we have unfulfilled 
dependencies.
                // As of Helm 2.4.0, this is treated as a stopping condition:
                // https://github.com/helm/helm/issues/2209
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/helm-4.0.4/pkg/cmd/load_plugins.go 
new/helm-4.0.5/pkg/cmd/load_plugins.go
--- old/helm-4.0.4/pkg/cmd/load_plugins.go      2025-12-13 01:39:50.000000000 
+0100
+++ new/helm-4.0.5/pkg/cmd/load_plugins.go      2026-01-14 23:43:46.000000000 
+0100
@@ -132,7 +132,13 @@
                        DisableFlagParsing: true,
                }
 
-               // TODO: Make sure a command with this name does not already 
exist.
+               for _, cmd := range baseCmd.Commands() {
+                       if cmd.Name() == c.Name() {
+                               fmt.Fprintf(os.Stderr, "failed to load plugins: 
name conflicts %s\n", c.Name())
+                               return
+                       }
+               }
+
                baseCmd.AddCommand(c)
 
                // For completion, we try to load more details about the 
plugins so as to allow for command and
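The loop above implements the old TODO: before registering a plugin's
command, it scans the commands already attached to the root and refuses to
load a plugin whose name collides with a built-in. A minimal standalone
sketch of the same guard with cobra (the command names are hypothetical):

  package main

  import (
          "fmt"
          "os"

          "github.com/spf13/cobra"
  )

  func main() {
          root := &cobra.Command{Use: "helm"}
          root.AddCommand(&cobra.Command{Use: "install", Run: func(*cobra.Command, []string) {}})

          plugin := &cobra.Command{Use: "install"} // plugin shadowing a built-in

          // Same check as the hunk: reject the plugin rather than letting
          // it shadow or duplicate an existing command.
          for _, cmd := range root.Commands() {
                  if cmd.Name() == plugin.Name() {
                          fmt.Fprintf(os.Stderr, "failed to load plugins: name conflicts %s\n", plugin.Name())
                          return
                  }
          }
          root.AddCommand(plugin)
  }
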
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/helm-4.0.4/pkg/cmd/plugin_test.go 
new/helm-4.0.5/pkg/cmd/plugin_test.go
--- old/helm-4.0.4/pkg/cmd/plugin_test.go       2025-12-13 01:39:50.000000000 
+0100
+++ new/helm-4.0.5/pkg/cmd/plugin_test.go       2026-01-14 23:43:46.000000000 
+0100
@@ -114,9 +114,9 @@
        }{
                {"args", "echo args", "This echos args", "-a -b -c\n", 
[]string{"-a", "-b", "-c"}, 0},
                {"echo", "echo stuff", "This echos stuff", "hello\n", 
[]string{}, 0},
-               {"env", "env stuff", "show the env", "HELM_PLUGIN_NAME=env\n", 
[]string{}, 0},
                {"exitwith", "exitwith code", "This exits with the specified 
exit code", "", []string{"2"}, 2},
                {"fullenv", "show env vars", "show all env vars", 
fullEnvOutput, []string{}, 0},
+               {"shortenv", "env stuff", "show the env", 
"HELM_PLUGIN_NAME=shortenv\n", []string{}, 0},
        }
 
        pluginCmds := cmd.Commands()
@@ -254,10 +254,6 @@
        tests := []staticCompletionDetails{
                {"args", []string{}, []string{}, []staticCompletionDetails{}},
                {"echo", []string{}, []string{}, []staticCompletionDetails{}},
-               {"env", []string{}, []string{"global"}, 
[]staticCompletionDetails{
-                       {"list", []string{}, []string{"a", "all", "log"}, 
[]staticCompletionDetails{}},
-                       {"remove", []string{"all", "one"}, []string{}, 
[]staticCompletionDetails{}},
-               }},
                {"exitwith", []string{}, []string{}, []staticCompletionDetails{
                        {"code", []string{}, []string{"a", "b"}, 
[]staticCompletionDetails{}},
                }},
@@ -268,6 +264,10 @@
                                {"more", []string{"one", "two"}, []string{"b", 
"ball"}, []staticCompletionDetails{}},
                        }},
                }},
+               {"shortenv", []string{}, []string{"global"}, 
[]staticCompletionDetails{
+                       {"list", []string{}, []string{"a", "all", "log"}, 
[]staticCompletionDetails{}},
+                       {"remove", []string{"all", "one"}, []string{}, 
[]staticCompletionDetails{}},
+               }},
        }
        checkCommand(t, cmd.Commands(), tests)
 }
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/helm-4.0.4/pkg/cmd/release_testing.go 
new/helm-4.0.5/pkg/cmd/release_testing.go
--- old/helm-4.0.4/pkg/cmd/release_testing.go   2025-12-13 01:39:50.000000000 
+0100
+++ new/helm-4.0.5/pkg/cmd/release_testing.go   2026-01-14 23:43:46.000000000 
+0100
@@ -55,7 +55,7 @@
                        }
                        return compListReleases(toComplete, args, cfg)
                },
-               RunE: func(_ *cobra.Command, args []string) error {
+               RunE: func(_ *cobra.Command, args []string) (returnError error) 
{
                        client.Namespace = settings.Namespace()
                        notName := regexp.MustCompile(`^!\s?name=`)
                        for _, f := range filter {
@@ -65,7 +65,16 @@
                                        
client.Filters[action.ExcludeNameFilter] = 
append(client.Filters[action.ExcludeNameFilter], 
notName.ReplaceAllLiteralString(f, ""))
                                }
                        }
-                       reli, runErr := client.Run(args[0])
+
+                       reli, shutdown, runErr := client.Run(args[0])
+                       defer func() {
+                               if shutdownErr := shutdown(); shutdownErr != 
nil {
+                                       if returnError == nil {
+                                               returnError = shutdownErr
+                                       }
+                               }
+                       }()
+
                        // We only return an error if we weren't even able to 
get the
                        // release, otherwise we keep going so we can print 
status and logs
                        // if requested
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/helm-4.0.4/pkg/cmd/testdata/helmhome/helm/plugins/env/completion.yaml 
new/helm-4.0.5/pkg/cmd/testdata/helmhome/helm/plugins/env/completion.yaml
--- old/helm-4.0.4/pkg/cmd/testdata/helmhome/helm/plugins/env/completion.yaml   
2025-12-13 01:39:50.000000000 +0100
+++ new/helm-4.0.5/pkg/cmd/testdata/helmhome/helm/plugins/env/completion.yaml   
1970-01-01 01:00:00.000000000 +0100
@@ -1,13 +0,0 @@
-name: env
-commands:
-  - name: list
-    flags:
-    - a
-    - all
-    - log
-  - name: remove
-    validArgs:
-    - all
-    - one
-flags:
-- global
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/helm-4.0.4/pkg/cmd/testdata/helmhome/helm/plugins/env/plugin-name.sh 
new/helm-4.0.5/pkg/cmd/testdata/helmhome/helm/plugins/env/plugin-name.sh
--- old/helm-4.0.4/pkg/cmd/testdata/helmhome/helm/plugins/env/plugin-name.sh    
2025-12-13 01:39:50.000000000 +0100
+++ new/helm-4.0.5/pkg/cmd/testdata/helmhome/helm/plugins/env/plugin-name.sh    
1970-01-01 01:00:00.000000000 +0100
@@ -1,3 +0,0 @@
-#!/usr/bin/env sh
-
-echo HELM_PLUGIN_NAME=${HELM_PLUGIN_NAME}
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/helm-4.0.4/pkg/cmd/testdata/helmhome/helm/plugins/env/plugin.yaml 
new/helm-4.0.5/pkg/cmd/testdata/helmhome/helm/plugins/env/plugin.yaml
--- old/helm-4.0.4/pkg/cmd/testdata/helmhome/helm/plugins/env/plugin.yaml       
2025-12-13 01:39:50.000000000 +0100
+++ new/helm-4.0.5/pkg/cmd/testdata/helmhome/helm/plugins/env/plugin.yaml       
1970-01-01 01:00:00.000000000 +0100
@@ -1,12 +0,0 @@
----
-apiVersion: v1
-name: env
-type: cli/v1
-runtime: subprocess
-config:
-  shortHelp: "env stuff"
-  longHelp: "show the env"
-  ignoreFlags: false
-runtimeConfig:
-  platformCommand:
-  - command: ${HELM_PLUGIN_DIR}/plugin-name.sh
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/helm-4.0.4/pkg/cmd/testdata/helmhome/helm/plugins/shortenv/completion.yaml 
new/helm-4.0.5/pkg/cmd/testdata/helmhome/helm/plugins/shortenv/completion.yaml
--- 
old/helm-4.0.4/pkg/cmd/testdata/helmhome/helm/plugins/shortenv/completion.yaml  
    1970-01-01 01:00:00.000000000 +0100
+++ 
new/helm-4.0.5/pkg/cmd/testdata/helmhome/helm/plugins/shortenv/completion.yaml  
    2026-01-14 23:43:46.000000000 +0100
@@ -0,0 +1,13 @@
+name: shortenv
+commands:
+  - name: list
+    flags:
+    - a
+    - all
+    - log
+  - name: remove
+    validArgs:
+    - all
+    - one
+flags:
+- global
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/helm-4.0.4/pkg/cmd/testdata/helmhome/helm/plugins/shortenv/plugin-name.sh 
new/helm-4.0.5/pkg/cmd/testdata/helmhome/helm/plugins/shortenv/plugin-name.sh
--- 
old/helm-4.0.4/pkg/cmd/testdata/helmhome/helm/plugins/shortenv/plugin-name.sh   
    1970-01-01 01:00:00.000000000 +0100
+++ 
new/helm-4.0.5/pkg/cmd/testdata/helmhome/helm/plugins/shortenv/plugin-name.sh   
    2026-01-14 23:43:46.000000000 +0100
@@ -0,0 +1,3 @@
+#!/usr/bin/env sh
+
+echo HELM_PLUGIN_NAME=${HELM_PLUGIN_NAME}
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/helm-4.0.4/pkg/cmd/testdata/helmhome/helm/plugins/shortenv/plugin.yaml 
new/helm-4.0.5/pkg/cmd/testdata/helmhome/helm/plugins/shortenv/plugin.yaml
--- old/helm-4.0.4/pkg/cmd/testdata/helmhome/helm/plugins/shortenv/plugin.yaml  
1970-01-01 01:00:00.000000000 +0100
+++ new/helm-4.0.5/pkg/cmd/testdata/helmhome/helm/plugins/shortenv/plugin.yaml  
2026-01-14 23:43:46.000000000 +0100
@@ -0,0 +1,12 @@
+---
+apiVersion: v1
+name: shortenv
+type: cli/v1
+runtime: subprocess
+config:
+  shortHelp: "env stuff"
+  longHelp: "show the env"
+  ignoreFlags: false
+runtimeConfig:
+  platformCommand:
+  - command: ${HELM_PLUGIN_DIR}/plugin-name.sh
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/helm-4.0.4/pkg/cmd/testdata/output/plugin_list_comp.txt 
new/helm-4.0.5/pkg/cmd/testdata/output/plugin_list_comp.txt
--- old/helm-4.0.4/pkg/cmd/testdata/output/plugin_list_comp.txt 2025-12-13 
01:39:50.000000000 +0100
+++ new/helm-4.0.5/pkg/cmd/testdata/output/plugin_list_comp.txt 2026-01-14 
23:43:46.000000000 +0100
@@ -1,7 +1,7 @@
 args   echo args
 echo   echo stuff
-env    env stuff
 exitwith       exitwith code
 fullenv        show env vars
+shortenv       env stuff
 :4
 Completion ended with directive: ShellCompDirectiveNoFileComp
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/helm-4.0.4/pkg/cmd/testdata/output/plugin_repeat_comp.txt 
new/helm-4.0.5/pkg/cmd/testdata/output/plugin_repeat_comp.txt
--- old/helm-4.0.4/pkg/cmd/testdata/output/plugin_repeat_comp.txt       
2025-12-13 01:39:50.000000000 +0100
+++ new/helm-4.0.5/pkg/cmd/testdata/output/plugin_repeat_comp.txt       
2026-01-14 23:43:46.000000000 +0100
@@ -1,6 +1,6 @@
 echo   echo stuff
-env    env stuff
 exitwith       exitwith code
 fullenv        show env vars
+shortenv       env stuff
 :4
 Completion ended with directive: ShellCompDirectiveNoFileComp
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/helm-4.0.4/pkg/cmd/testdata/output/uninstall-keep-history-earlier-deployed.txt
 
new/helm-4.0.5/pkg/cmd/testdata/output/uninstall-keep-history-earlier-deployed.txt
--- 
old/helm-4.0.4/pkg/cmd/testdata/output/uninstall-keep-history-earlier-deployed.txt
  1970-01-01 01:00:00.000000000 +0100
+++ 
new/helm-4.0.5/pkg/cmd/testdata/output/uninstall-keep-history-earlier-deployed.txt
  2026-01-14 23:43:46.000000000 +0100
@@ -0,0 +1 @@
+release "aeneas" uninstalled
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/helm-4.0.4/pkg/cmd/uninstall_test.go 
new/helm-4.0.5/pkg/cmd/uninstall_test.go
--- old/helm-4.0.4/pkg/cmd/uninstall_test.go    2025-12-13 01:39:50.000000000 
+0100
+++ new/helm-4.0.5/pkg/cmd/uninstall_test.go    2026-01-14 23:43:46.000000000 
+0100
@@ -19,6 +19,7 @@
 import (
        "testing"
 
+       "helm.sh/helm/v4/pkg/release/common"
        release "helm.sh/helm/v4/pkg/release/v1"
 )
 
@@ -58,6 +59,15 @@
                        rels:   
[]*release.Release{release.Mock(&release.MockReleaseOptions{Name: "aeneas"})},
                },
                {
+                       name:   "keep history with earlier deployed release",
+                       cmd:    "uninstall aeneas --keep-history",
+                       golden: 
"output/uninstall-keep-history-earlier-deployed.txt",
+                       rels: []*release.Release{
+                               release.Mock(&release.MockReleaseOptions{Name: 
"aeneas", Version: 1, Status: common.StatusDeployed}),
+                               release.Mock(&release.MockReleaseOptions{Name: 
"aeneas", Version: 2, Status: common.StatusFailed}),
+                       },
+               },
+               {
                        name:   "wait",
                        cmd:    "uninstall aeneas --wait",
                        golden: "output/uninstall-wait.txt",
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/helm-4.0.4/pkg/cmd/upgrade.go 
new/helm-4.0.5/pkg/cmd/upgrade.go
--- old/helm-4.0.4/pkg/cmd/upgrade.go   2025-12-13 01:39:50.000000000 +0100
+++ new/helm-4.0.5/pkg/cmd/upgrade.go   2026-01-14 23:43:46.000000000 +0100
@@ -153,6 +153,8 @@
                                        instClient.EnableDNS = client.EnableDNS
                                        instClient.HideSecret = 
client.HideSecret
                                        instClient.TakeOwnership = 
client.TakeOwnership
+                                       instClient.ForceConflicts = 
client.ForceConflicts
+                                       instClient.ServerSideApply = 
client.ServerSideApply != "false"
 
                                        if isReleaseUninstalled(versions) {
                                                instClient.Replace = true
@@ -200,7 +202,7 @@
                        if err != nil {
                                return err
                        }
-                       if req := ac.MetaDependencies(); req != nil {
+                       if req := ac.MetaDependencies(); len(req) > 0 {
                                if err := action.CheckDependencies(ch, req); 
err != nil {
                                        err = fmt.Errorf("an error occurred 
while checking for chart dependencies. You may need to run `helm dependency 
build` to fetch missing dependencies: %w", err)
                                        if client.DependencyUpdate {
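The first hunk above is the #31627 fix itself: when upgrade --install falls
back to a fresh install, ForceConflicts and ServerSideApply are now copied
onto the install client. --server-side is a tri-state string flag, and the
copy collapses it to a boolean: only an explicit "false" opts out, so both
"true" and "auto" enable server-side apply for the new release, matching the
expectations in the new test below. A tiny sketch of that mapping (the
function name is illustrative):

  // For a fresh install there is no previous release to inherit an apply
  // method from, so "auto" behaves like "true".
  func installUsesServerSideApply(serverSide string) bool {
          return serverSide != "false"
  }

  // installUsesServerSideApply("auto")  == true  -> ApplyMethod "ssa"
  // installUsesServerSideApply("true")  == true  -> ApplyMethod "ssa"
  // installUsesServerSideApply("false") == false -> ApplyMethod "csa"
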
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/helm-4.0.4/pkg/cmd/upgrade_test.go 
new/helm-4.0.5/pkg/cmd/upgrade_test.go
--- old/helm-4.0.4/pkg/cmd/upgrade_test.go      2025-12-13 01:39:50.000000000 
+0100
+++ new/helm-4.0.5/pkg/cmd/upgrade_test.go      2026-01-14 23:43:46.000000000 
+0100
@@ -605,3 +605,58 @@
                t.Error("expected error when --hide-secret used without 
--dry-run")
        }
 }
+
+func TestUpgradeInstallServerSideApply(t *testing.T) {
+       _, _, chartPath := prepareMockRelease(t, "ssa-test")
+
+       defer resetEnv()()
+
+       tests := []struct {
+               name                string
+               serverSideFlag      string
+               expectedApplyMethod string
+       }{
+               {
+                       name:                "upgrade --install with 
--server-side=false uses client-side apply",
+                       serverSideFlag:      "--server-side=false",
+                       expectedApplyMethod: "csa",
+               },
+               {
+                       name:                "upgrade --install with 
--server-side=true uses server-side apply",
+                       serverSideFlag:      "--server-side=true",
+                       expectedApplyMethod: "ssa",
+               },
+               {
+                       name:                "upgrade --install with 
--server-side=auto uses server-side apply (default for new install)",
+                       serverSideFlag:      "--server-side=auto",
+                       expectedApplyMethod: "ssa",
+               },
+       }
+
+       for _, tt := range tests {
+               t.Run(tt.name, func(t *testing.T) {
+                       store := storageFixture()
+                       releaseName := fmt.Sprintf("ssa-test-%s", 
tt.expectedApplyMethod)
+
+                       cmd := fmt.Sprintf("upgrade %s --install %s '%s'", 
releaseName, tt.serverSideFlag, chartPath)
+                       _, _, err := executeActionCommandC(store, cmd)
+                       if err != nil {
+                               t.Fatalf("unexpected error: %v", err)
+                       }
+
+                       rel, err := store.Get(releaseName, 1)
+                       if err != nil {
+                               t.Fatalf("unexpected error getting release: 
%v", err)
+                       }
+
+                       relV1, err := releaserToV1Release(rel)
+                       if err != nil {
+                               t.Fatalf("unexpected error converting release: 
%v", err)
+                       }
+
+                       if relV1.ApplyMethod != tt.expectedApplyMethod {
+                               t.Errorf("expected ApplyMethod %q, got %q", 
tt.expectedApplyMethod, relV1.ApplyMethod)
+                       }
+               })
+       }
+}
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/helm-4.0.4/pkg/getter/plugingetter.go 
new/helm-4.0.5/pkg/getter/plugingetter.go
--- old/helm-4.0.4/pkg/getter/plugingetter.go   2025-12-13 01:39:50.000000000 
+0100
+++ new/helm-4.0.5/pkg/getter/plugingetter.go   2026-01-14 23:43:46.000000000 
+0100
@@ -38,12 +38,14 @@
        if err != nil {
                return nil, err
        }
+       env := plugin.FormatEnv(settings.EnvVars())
        pluginConstructorBuilder := func(plg plugin.Plugin) Constructor {
                return func(option ...Option) (Getter, error) {
 
                        return &getterPlugin{
                                options: append([]Option{}, option...),
                                plg:     plg,
+                               env:     env,
                        }, nil
                }
        }
@@ -91,6 +93,7 @@
 type getterPlugin struct {
        options []Option
        plg     plugin.Plugin
+       env     []string
 }
 
 func (g *getterPlugin) Get(href string, options ...Option) (*bytes.Buffer, 
error) {
@@ -108,6 +111,7 @@
                        Options:  opts,
                        Protocol: u.Scheme,
                },
+               Env: g.env,
                // TODO should we pass Stdin, Stdout, and Stderr through Input 
here to getter plugins?
                // Stdout: os.Stdout,
        }
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/helm-4.0.4/pkg/getter/plugingetter_test.go 
new/helm-4.0.5/pkg/getter/plugingetter_test.go
--- old/helm-4.0.4/pkg/getter/plugingetter_test.go      2025-12-13 
01:39:50.000000000 +0100
+++ new/helm-4.0.5/pkg/getter/plugingetter_test.go      2026-01-14 
23:43:46.000000000 +0100
@@ -144,3 +144,27 @@
 
        assert.Equal(t, "fake-plugin output", buf.String())
 }
+
+func TestCollectGetterPluginsPassesEnv(t *testing.T) {
+       env := cli.New()
+       env.PluginsDirectory = pluginDir
+       env.Debug = true
+
+       providers, err := collectGetterPlugins(env)
+       require.NoError(t, err)
+       require.NotEmpty(t, providers, "expected at least one plugin provider")
+
+       getter, err := providers.ByScheme("test")
+       require.NoError(t, err)
+
+       gp, ok := getter.(*getterPlugin)
+       require.True(t, ok, "expected getter to be a *getterPlugin")
+
+       require.NotEmpty(t, gp.env, "expected env to be set on getterPlugin")
+       envMap := plugin.ParseEnv(gp.env)
+
+       assert.Contains(t, envMap, "HELM_DEBUG", "expected HELM_DEBUG in env")
+       assert.Equal(t, "true", envMap["HELM_DEBUG"], "expected HELM_DEBUG to 
be true")
+       assert.Contains(t, envMap, "HELM_PLUGINS", "expected HELM_PLUGINS in 
env")
+       assert.Equal(t, pluginDir, envMap["HELM_PLUGINS"], "expected 
HELM_PLUGINS to match pluginsDirectory")
+}
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/helm-4.0.4/pkg/kube/client.go 
new/helm-4.0.5/pkg/kube/client.go
--- old/helm-4.0.4/pkg/kube/client.go   2025-12-13 01:39:50.000000000 +0100
+++ new/helm-4.0.5/pkg/kube/client.go   2026-01-14 23:43:46.000000000 +0100
@@ -557,7 +557,32 @@
                original := originals.Get(target)
                if original == nil {
                        kind := target.Mapping.GroupVersionKind.Kind
-                       return fmt.Errorf("original object %s with the name %q 
not found", kind, target.Name)
+
+                       slog.Warn("resource exists on cluster but not in 
original release, using cluster state as baseline",
+                               "namespace", target.Namespace, "name", 
target.Name, "kind", kind)
+
+                       currentObj, err := helper.Get(target.Namespace, 
target.Name)
+                       if err != nil {
+                               return fmt.Errorf("original object %s with the 
name %q not found", kind, target.Name)
+                       }
+
+                       // Create a temporary Info with the current cluster 
state to use as "original"
+                       currentInfo := &resource.Info{
+                               Client:    target.Client,
+                               Mapping:   target.Mapping,
+                               Namespace: target.Namespace,
+                               Name:      target.Name,
+                               Object:    currentObj,
+                       }
+
+                       if err := updateApplyFunc(currentInfo, target); err != 
nil {
+                               updateErrors = append(updateErrors, err)
+                       }
+
+                       // Because we check for errors later, append the info 
regardless
+                       res.Updated = append(res.Updated, target)
+
+                       return nil
                }
 
                if err := updateApplyFunc(original, target); err != nil {
@@ -595,7 +620,9 @@
                if err := deleteResource(info, 
metav1.DeletePropagationBackground); err != nil {
                        c.Logger().Debug("failed to delete resource", 
"namespace", info.Namespace, "name", info.Name, "kind", 
info.Mapping.GroupVersionKind.Kind, slog.Any("error", err))
                        if !apierrors.IsNotFound(err) {
-                               updateErrors = append(updateErrors, 
fmt.Errorf("failed to delete resource %s: %w", info.Name, err))
+                               updateErrors = append(updateErrors, fmt.Errorf(
+                                       "failed to delete resource 
namespace=%s, name=%s, kind=%s: %w",
+                                       info.Namespace, info.Name, 
info.Mapping.GroupVersionKind.Kind, err))
                        }
                        continue
                }
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/helm-4.0.4/pkg/kube/client_test.go 
new/helm-4.0.5/pkg/kube/client_test.go
--- old/helm-4.0.4/pkg/kube/client_test.go      2025-12-13 01:39:50.000000000 
+0100
+++ new/helm-4.0.5/pkg/kube/client_test.go      2026-01-14 23:43:46.000000000 
+0100
@@ -411,7 +411,25 @@
                                "/namespaces/default/pods/forbidden:GET",
                                "/namespaces/default/pods/forbidden:DELETE",
                        ),
-                       ExpectedError: "failed to delete resource forbidden:",
+                       ExpectedError: "failed to delete resource 
namespace=default, name=forbidden, kind=Pod:",
+               },
+               "rollback after failed upgrade with removed resource": {
+                       // Simulates rollback scenario:
+                       // - Revision 1 had "newpod"
+                       // - Revision 2 removed "newpod" but upgrade failed 
(OriginalPods is empty)
+                       // - Cluster still has "newpod" from Revision 1
+                       // - Rolling back to Revision 1 (TargetPods with 
"newpod") should succeed
+                       OriginalPods:                 v1.PodList{},         // 
Revision 2 (failed) - resource was removed
+                       TargetPods:                   newPodList("newpod"), // 
Revision 1 - rolling back to this
+                       ThreeWayMergeForUnstructured: false,
+                       ServerSideApply:              true,
+                       ExpectedActions: []string{
+                               "/namespaces/default/pods/newpod:GET",   // 
Check if resource exists
+                               "/namespaces/default/pods/newpod:GET",   // Get 
current state (first call in update path)
+                               "/namespaces/default/pods/newpod:GET",   // Get 
current cluster state to use as baseline
+                               "/namespaces/default/pods/newpod:PATCH", // 
Update using cluster state as baseline
+                       },
+                       ExpectedError: "",
                },
        }
 
@@ -428,6 +446,10 @@
                                p, m := req.URL.Path, req.Method
 
                                switch {
+                               case p == "/namespaces/default/pods/newpod" && 
m == http.MethodGet:
+                                       return newResponse(http.StatusOK, 
&listTarget.Items[0])
+                               case p == "/namespaces/default/pods/newpod" && 
m == http.MethodPatch:
+                                       return newResponse(http.StatusOK, 
&listTarget.Items[0])
                                case p == "/namespaces/default/pods/starfish" 
&& m == http.MethodGet:
                                        return newResponse(http.StatusOK, 
&listOriginal.Items[0])
                                case p == "/namespaces/default/pods/otter" && m 
== http.MethodGet:
@@ -519,9 +541,23 @@
                                require.NoError(t, err)
                        }
 
-                       assert.Len(t, result.Created, 1, "expected 1 resource 
created, got %d", len(result.Created))
-                       assert.Len(t, result.Updated, 2, "expected 2 resource 
updated, got %d", len(result.Updated))
-                       assert.Len(t, result.Deleted, 1, "expected 1 resource 
deleted, got %d", len(result.Deleted))
+                       // Special handling for the rollback test case
+                       if name == "rollback after failed upgrade with removed 
resource" {
+                               assert.Len(t, result.Created, 0, "expected 0 
resource created, got %d", len(result.Created))
+                               assert.Len(t, result.Updated, 1, "expected 1 
resource updated, got %d", len(result.Updated))
+                               assert.Len(t, result.Deleted, 0, "expected 0 
resource deleted, got %d", len(result.Deleted))
+                       } else {
+                               assert.Len(t, result.Created, 1, "expected 1 
resource created, got %d", len(result.Created))
+                               assert.Len(t, result.Updated, 2, "expected 2 
resource updated, got %d", len(result.Updated))
+                               assert.Len(t, result.Deleted, 1, "expected 1 
resource deleted, got %d", len(result.Deleted))
+                       }
+
+                       if tc.ExpectedError != "" {
+                               require.Error(t, err)
+                               require.Contains(t, err.Error(), 
tc.ExpectedError)
+                       } else {
+                               require.NoError(t, err)
+                       }
 
                        actions := []string{}
                        for _, action := range client.Actions {
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/helm-4.0.4/pkg/kube/statuswait.go 
new/helm-4.0.5/pkg/kube/statuswait.go
--- old/helm-4.0.4/pkg/kube/statuswait.go       2025-12-13 01:39:50.000000000 
+0100
+++ new/helm-4.0.5/pkg/kube/statuswait.go       2026-01-14 23:43:46.000000000 
+0100
@@ -113,7 +113,9 @@
                }
                resources = append(resources, obj)
        }
-       eventCh := sw.Watch(cancelCtx, resources, watcher.Options{})
+       eventCh := sw.Watch(cancelCtx, resources, watcher.Options{
+               RESTScopeStrategy: watcher.RESTScopeNamespace,
+       })
        statusCollector := collector.NewResourceStatusCollector(resources)
        done := statusCollector.ListenWithObserver(eventCh, 
statusObserver(cancel, status.NotFoundStatus))
        <-done
@@ -156,7 +158,9 @@
                resources = append(resources, obj)
        }
 
-       eventCh := sw.Watch(cancelCtx, resources, watcher.Options{})
+       eventCh := sw.Watch(cancelCtx, resources, watcher.Options{
+               RESTScopeStrategy: watcher.RESTScopeNamespace,
+       })
        statusCollector := collector.NewResourceStatusCollector(resources)
        done := statusCollector.ListenWithObserver(eventCh, 
statusObserver(cancel, status.CurrentStatus))
        <-done
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/helm-4.0.4/pkg/kube/statuswait_test.go 
new/helm-4.0.5/pkg/kube/statuswait_test.go
--- old/helm-4.0.4/pkg/kube/statuswait_test.go  2025-12-13 01:39:50.000000000 
+0100
+++ new/helm-4.0.5/pkg/kube/statuswait_test.go  2026-01-14 23:43:46.000000000 
+0100
@@ -17,7 +17,10 @@
 package kube // import "helm.sh/helm/v3/pkg/kube"
 
 import (
+       "context"
        "errors"
+       "fmt"
+       "strings"
        "testing"
        "time"
 
@@ -27,11 +30,14 @@
        appsv1 "k8s.io/api/apps/v1"
        batchv1 "k8s.io/api/batch/v1"
        v1 "k8s.io/api/core/v1"
+       apierrors "k8s.io/apimachinery/pkg/api/errors"
        "k8s.io/apimachinery/pkg/api/meta"
+       metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
        "k8s.io/apimachinery/pkg/runtime"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/apimachinery/pkg/util/yaml"
+       "k8s.io/client-go/dynamic"
        dynamicfake "k8s.io/client-go/dynamic/fake"
        "k8s.io/kubectl/pkg/scheme"
 )
@@ -153,6 +159,83 @@
         - containerPort: 80
 `
 
+var podNamespace1Manifest = `
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pod-ns1
+  namespace: namespace-1
+status:
+  conditions:
+  - type: Ready
+    status: "True"
+  phase: Running
+`
+
+var podNamespace2Manifest = `
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pod-ns2
+  namespace: namespace-2
+status:
+  conditions:
+  - type: Ready
+    status: "True"
+  phase: Running
+`
+
+var podNamespace1NoStatusManifest = `
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pod-ns1
+  namespace: namespace-1
+`
+
+var jobNamespace1CompleteManifest = `
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: job-ns1
+  namespace: namespace-1
+  generation: 1
+status:
+  succeeded: 1
+  active: 0
+  conditions:
+  - type: Complete
+    status: "True"
+`
+
+var podNamespace2SucceededManifest = `
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pod-ns2
+  namespace: namespace-2
+status:
+  phase: Succeeded
+`
+
+var clusterRoleManifest = `
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: test-cluster-role
+rules:
+- apiGroups: [""]
+  resources: ["pods"]
+  verbs: ["get", "list"]
+`
+
+var namespaceManifest = `
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: test-namespace
+`
+
func getGVR(t *testing.T, mapper meta.RESTMapper, obj *unstructured.Unstructured) schema.GroupVersionResource {
        t.Helper()
        gvk := obj.GroupVersionKind()
@@ -232,11 +315,11 @@
                        for _, objToDelete := range objsToDelete {
                                u := objToDelete.(*unstructured.Unstructured)
                                gvr := getGVR(t, fakeMapper, u)
-                               go func() {
+                               go func(gvr schema.GroupVersionResource, u *unstructured.Unstructured) {
                                        time.Sleep(timeUntilPodDelete)
                                        err := fakeClient.Tracker().Delete(gvr, u.GetNamespace(), u.GetName())
                                        assert.NoError(t, err)
-                               }()
+                               }(gvr, u)
                        }
                        resourceList := getResourceListFromRuntimeObjs(t, c, objsToCreate)
                        err := statusWaiter.WaitForDelete(resourceList, timeout)
@@ -448,3 +531,413 @@
                })
        }
 }
+
+func TestStatusWaitMultipleNamespaces(t *testing.T) {
+       t.Parallel()
+       tests := []struct {
+               name         string
+               objManifests []string
+               expectErrs   []error
+               testFunc     func(statusWaiter, ResourceList, time.Duration) error
+       }{
+               {
+                       name:         "pods in multiple namespaces",
+                       objManifests: []string{podNamespace1Manifest, podNamespace2Manifest},
+                       testFunc: func(sw statusWaiter, rl ResourceList, timeout time.Duration) error {
+                               return sw.Wait(rl, timeout)
+                       },
+               },
+               {
+                       name:         "hooks in multiple namespaces",
+                       objManifests: []string{jobNamespace1CompleteManifest, podNamespace2SucceededManifest},
+                       testFunc: func(sw statusWaiter, rl ResourceList, timeout time.Duration) error {
+                               return sw.WatchUntilReady(rl, timeout)
+                       },
+               },
+               {
+                       name:         "error when resource not ready in one namespace",
+                       objManifests: []string{podNamespace1NoStatusManifest, podNamespace2Manifest},
+                       expectErrs:   []error{errors.New("resource not ready, name: pod-ns1, kind: Pod, status: InProgress"), errors.New("context deadline exceeded")},
+                       testFunc: func(sw statusWaiter, rl ResourceList, timeout time.Duration) error {
+                               return sw.Wait(rl, timeout)
+                       },
+               },
+               {
+                       name:         "delete resources in multiple namespaces",
+                       objManifests: []string{podNamespace1Manifest, podNamespace2Manifest},
+                       testFunc: func(sw statusWaiter, rl ResourceList, timeout time.Duration) error {
+                               return sw.WaitForDelete(rl, timeout)
+                       },
+               },
+               {
+                       name:         "cluster-scoped resources work correctly with unrestricted permissions",
+                       objManifests: []string{podNamespace1Manifest, clusterRoleManifest},
+                       testFunc: func(sw statusWaiter, rl ResourceList, timeout time.Duration) error {
+                               return sw.Wait(rl, timeout)
+                       },
+               },
+               {
+                       name:         "namespace-scoped and cluster-scoped resources work together",
+                       objManifests: []string{podNamespace1Manifest, podNamespace2Manifest, clusterRoleManifest},
+                       testFunc: func(sw statusWaiter, rl ResourceList, timeout time.Duration) error {
+                               return sw.Wait(rl, timeout)
+                       },
+               },
+               {
+                       name:         "delete cluster-scoped resources works correctly",
+                       objManifests: []string{podNamespace1Manifest, namespaceManifest},
+                       testFunc: func(sw statusWaiter, rl ResourceList, timeout time.Duration) error {
+                               return sw.WaitForDelete(rl, timeout)
+                       },
+               },
+               {
+                       name:         "watch cluster-scoped resources works correctly",
+                       objManifests: []string{clusterRoleManifest},
+                       testFunc: func(sw statusWaiter, rl ResourceList, timeout time.Duration) error {
+                               return sw.WatchUntilReady(rl, timeout)
+                       },
+               },
+       }
+
+       for _, tt := range tests {
+               t.Run(tt.name, func(t *testing.T) {
+                       t.Parallel()
+                       c := newTestClient(t)
+                       fakeClient := dynamicfake.NewSimpleDynamicClient(scheme.Scheme)
+                       fakeMapper := testutil.NewFakeRESTMapper(
+                               v1.SchemeGroupVersion.WithKind("Pod"),
+                               batchv1.SchemeGroupVersion.WithKind("Job"),
+                               schema.GroupVersion{Group: "rbac.authorization.k8s.io", Version: "v1"}.WithKind("ClusterRole"),
+                               v1.SchemeGroupVersion.WithKind("Namespace"),
+                       )
+                       sw := statusWaiter{
+                               client:     fakeClient,
+                               restMapper: fakeMapper,
+                       }
+                       objs := getRuntimeObjFromManifests(t, tt.objManifests)
+                       for _, obj := range objs {
+                               u := obj.(*unstructured.Unstructured)
+                               gvr := getGVR(t, fakeMapper, u)
+                               err := fakeClient.Tracker().Create(gvr, u, u.GetNamespace())
+                               assert.NoError(t, err)
+                       }
+
+                       if strings.Contains(tt.name, "delete") {
+                               timeUntilDelete := time.Millisecond * 500
+                               for _, obj := range objs {
+                                       u := obj.(*unstructured.Unstructured)
+                                       gvr := getGVR(t, fakeMapper, u)
+                                       go func(gvr schema.GroupVersionResource, u *unstructured.Unstructured) {
+                                               time.Sleep(timeUntilDelete)
+                                               err := fakeClient.Tracker().Delete(gvr, u.GetNamespace(), u.GetName())
+                                               assert.NoError(t, err)
+                                       }(gvr, u)
+                               }
+                       }
+
+                       resourceList := getResourceListFromRuntimeObjs(t, c, objs)
+                       err := tt.testFunc(sw, resourceList, time.Second*3)
+                       if tt.expectErrs != nil {
+                               assert.EqualError(t, err, errors.Join(tt.expectErrs...).Error())
+                               return
+                       }
+                       assert.NoError(t, err)
+               })
+       }
+}
+
+type restrictedDynamicClient struct {
+       dynamic.Interface
+       allowedNamespaces          map[string]bool
+       clusterScopedListAttempted bool
+}
+
+func newRestrictedDynamicClient(baseClient dynamic.Interface, allowedNamespaces []string) *restrictedDynamicClient {
+       allowed := make(map[string]bool)
+       for _, ns := range allowedNamespaces {
+               allowed[ns] = true
+       }
+       return &restrictedDynamicClient{
+               Interface:         baseClient,
+               allowedNamespaces: allowed,
+       }
+}
+
+func (r *restrictedDynamicClient) Resource(resource schema.GroupVersionResource) dynamic.NamespaceableResourceInterface {
+       return &restrictedNamespaceableResource{
+               NamespaceableResourceInterface: r.Interface.Resource(resource),
+               allowedNamespaces:              r.allowedNamespaces,
+               clusterScopedListAttempted:     &r.clusterScopedListAttempted,
+       }
+}
+
+type restrictedNamespaceableResource struct {
+       dynamic.NamespaceableResourceInterface
+       allowedNamespaces          map[string]bool
+       clusterScopedListAttempted *bool
+}
+
+func (r *restrictedNamespaceableResource) Namespace(ns string) dynamic.ResourceInterface {
+       return &restrictedResource{
+               ResourceInterface:          r.NamespaceableResourceInterface.Namespace(ns),
+               namespace:                  ns,
+               allowedNamespaces:          r.allowedNamespaces,
+               clusterScopedListAttempted: r.clusterScopedListAttempted,
+       }
+}
+
+func (r *restrictedNamespaceableResource) List(_ context.Context, _ metav1.ListOptions) (*unstructured.UnstructuredList, error) {
+       *r.clusterScopedListAttempted = true
+       return nil, apierrors.NewForbidden(
+               schema.GroupResource{Resource: "pods"},
+               "",
+               fmt.Errorf("user does not have cluster-wide LIST permissions for cluster-scoped resources"),
+       )
+}
+
+type restrictedResource struct {
+       dynamic.ResourceInterface
+       namespace                  string
+       allowedNamespaces          map[string]bool
+       clusterScopedListAttempted *bool
+}
+
+func (r *restrictedResource) List(ctx context.Context, opts metav1.ListOptions) (*unstructured.UnstructuredList, error) {
+       if r.namespace == "" {
+               *r.clusterScopedListAttempted = true
+               return nil, apierrors.NewForbidden(
+                       schema.GroupResource{Resource: "pods"},
+                       "",
+                       fmt.Errorf("user does not have cluster-wide LIST permissions for cluster-scoped resources"),
+               )
+       }
+       if !r.allowedNamespaces[r.namespace] {
+               return nil, apierrors.NewForbidden(
+                       schema.GroupResource{Resource: "pods"},
+                       "",
+                       fmt.Errorf("user does not have LIST permissions in namespace %q", r.namespace),
+               )
+       }
+       return r.ResourceInterface.List(ctx, opts)
+}
+
+func TestStatusWaitRestrictedRBAC(t *testing.T) {
+       t.Parallel()
+       tests := []struct {
+               name              string
+               objManifests      []string
+               allowedNamespaces []string
+               expectErrs        []error
+               testFunc          func(statusWaiter, ResourceList, time.Duration) error
+       }{
+               {
+                       name:              "pods in multiple namespaces with namespace permissions",
+                       objManifests:      []string{podNamespace1Manifest, podNamespace2Manifest},
+                       allowedNamespaces: []string{"namespace-1", "namespace-2"},
+                       testFunc: func(sw statusWaiter, rl ResourceList, timeout time.Duration) error {
+                               return sw.Wait(rl, timeout)
+                       },
+               },
+               {
+                       name:              "delete pods in multiple namespaces with namespace permissions",
+                       objManifests:      []string{podNamespace1Manifest, podNamespace2Manifest},
+                       allowedNamespaces: []string{"namespace-1", "namespace-2"},
+                       testFunc: func(sw statusWaiter, rl ResourceList, timeout time.Duration) error {
+                               return sw.WaitForDelete(rl, timeout)
+                       },
+               },
+               {
+                       name:              "hooks in multiple namespaces with namespace permissions",
+                       objManifests:      []string{jobNamespace1CompleteManifest, podNamespace2SucceededManifest},
+                       allowedNamespaces: []string{"namespace-1", "namespace-2"},
+                       testFunc: func(sw statusWaiter, rl ResourceList, timeout time.Duration) error {
+                               return sw.WatchUntilReady(rl, timeout)
+                       },
+               },
+               {
+                       name:              "error when cluster-scoped resource included",
+                       objManifests:      []string{podNamespace1Manifest, clusterRoleManifest},
+                       allowedNamespaces: []string{"namespace-1"},
+                       expectErrs:        []error{fmt.Errorf("user does not have cluster-wide LIST permissions for cluster-scoped resources")},
+                       testFunc: func(sw statusWaiter, rl ResourceList, timeout time.Duration) error {
+                               return sw.Wait(rl, timeout)
+                       },
+               },
+               {
+                       name:              "error when deleting cluster-scoped resource",
+                       objManifests:      []string{podNamespace1Manifest, namespaceManifest},
+                       allowedNamespaces: []string{"namespace-1"},
+                       expectErrs:        []error{fmt.Errorf("user does not have cluster-wide LIST permissions for cluster-scoped resources")},
+                       testFunc: func(sw statusWaiter, rl ResourceList, timeout time.Duration) error {
+                               return sw.WaitForDelete(rl, timeout)
+                       },
+               },
+               {
+                       name:              "error when accessing disallowed namespace",
+                       objManifests:      []string{podNamespace1Manifest, podNamespace2Manifest},
+                       allowedNamespaces: []string{"namespace-1"},
+                       expectErrs:        []error{fmt.Errorf("user does not have LIST permissions in namespace %q", "namespace-2")},
+                       testFunc: func(sw statusWaiter, rl ResourceList, timeout time.Duration) error {
+                               return sw.Wait(rl, timeout)
+                       },
+               },
+       }
+
+       for _, tt := range tests {
+               t.Run(tt.name, func(t *testing.T) {
+                       t.Parallel()
+                       c := newTestClient(t)
+                       baseFakeClient := dynamicfake.NewSimpleDynamicClient(scheme.Scheme)
+                       fakeMapper := testutil.NewFakeRESTMapper(
+                               v1.SchemeGroupVersion.WithKind("Pod"),
+                               batchv1.SchemeGroupVersion.WithKind("Job"),
+                               schema.GroupVersion{Group: "rbac.authorization.k8s.io", Version: "v1"}.WithKind("ClusterRole"),
+                               v1.SchemeGroupVersion.WithKind("Namespace"),
+                       )
+                       restrictedClient := newRestrictedDynamicClient(baseFakeClient, tt.allowedNamespaces)
+                       sw := statusWaiter{
+                               client:     restrictedClient,
+                               restMapper: fakeMapper,
+                       }
+                       objs := getRuntimeObjFromManifests(t, tt.objManifests)
+                       for _, obj := range objs {
+                               u := obj.(*unstructured.Unstructured)
+                               gvr := getGVR(t, fakeMapper, u)
+                               err := baseFakeClient.Tracker().Create(gvr, u, u.GetNamespace())
+                               assert.NoError(t, err)
+                       }
+
+                       if strings.Contains(tt.name, "delet") {
+                               timeUntilDelete := time.Millisecond * 500
+                               for _, obj := range objs {
+                                       u := obj.(*unstructured.Unstructured)
+                                       gvr := getGVR(t, fakeMapper, u)
+                                       go func(gvr schema.GroupVersionResource, u *unstructured.Unstructured) {
+                                               time.Sleep(timeUntilDelete)
+                                               err := baseFakeClient.Tracker().Delete(gvr, u.GetNamespace(), u.GetName())
+                                               assert.NoError(t, err)
+                                       }(gvr, u)
+                               }
+                       }
+
+                       resourceList := getResourceListFromRuntimeObjs(t, c, objs)
+                       err := tt.testFunc(sw, resourceList, time.Second*3)
+                       if tt.expectErrs != nil {
+                               require.Error(t, err)
+                               for _, expectedErr := range tt.expectErrs {
+                                       assert.Contains(t, err.Error(), expectedErr.Error())
+                               }
+                               return
+                       }
+                       assert.NoError(t, err)
+                       assert.False(t, restrictedClient.clusterScopedListAttempted)
+               })
+       }
+}
+
+func TestStatusWaitMixedResources(t *testing.T) {
+       t.Parallel()
+       tests := []struct {
+               name              string
+               objManifests      []string
+               allowedNamespaces []string
+               expectErrs        []error
+               testFunc          func(statusWaiter, ResourceList, time.Duration) error
+       }{
+               {
+                       name:              "wait succeeds with namespace-scoped resources only",
+                       objManifests:      []string{podNamespace1Manifest, podNamespace2Manifest},
+                       allowedNamespaces: []string{"namespace-1", "namespace-2"},
+                       testFunc: func(sw statusWaiter, rl ResourceList, timeout time.Duration) error {
+                               return sw.Wait(rl, timeout)
+                       },
+               },
+               {
+                       name:              "wait fails when cluster-scoped resource included",
+                       objManifests:      []string{podNamespace1Manifest, clusterRoleManifest},
+                       allowedNamespaces: []string{"namespace-1"},
+                       expectErrs:        []error{fmt.Errorf("user does not have cluster-wide LIST permissions for cluster-scoped resources")},
+                       testFunc: func(sw statusWaiter, rl ResourceList, timeout time.Duration) error {
+                               return sw.Wait(rl, timeout)
+                       },
+               },
+               {
+                       name:              "waitForDelete fails when cluster-scoped resource included",
+                       objManifests:      []string{podNamespace1Manifest, clusterRoleManifest},
+                       allowedNamespaces: []string{"namespace-1"},
+                       expectErrs:        []error{fmt.Errorf("user does not have cluster-wide LIST permissions for cluster-scoped resources")},
+                       testFunc: func(sw statusWaiter, rl ResourceList, timeout time.Duration) error {
+                               return sw.WaitForDelete(rl, timeout)
+                       },
+               },
+               {
+                       name:              "wait fails when namespace resource included",
+                       objManifests:      []string{podNamespace1Manifest, namespaceManifest},
+                       allowedNamespaces: []string{"namespace-1"},
+                       expectErrs:        []error{fmt.Errorf("user does not have cluster-wide LIST permissions for cluster-scoped resources")},
+                       testFunc: func(sw statusWaiter, rl ResourceList, timeout time.Duration) error {
+                               return sw.Wait(rl, timeout)
+                       },
+               },
+               {
+                       name:              "error when accessing disallowed namespace",
+                       objManifests:      []string{podNamespace1Manifest, podNamespace2Manifest},
+                       allowedNamespaces: []string{"namespace-1"},
+                       expectErrs:        []error{fmt.Errorf("user does not have LIST permissions in namespace %q", "namespace-2")},
+                       testFunc: func(sw statusWaiter, rl ResourceList, timeout time.Duration) error {
+                               return sw.Wait(rl, timeout)
+                       },
+               },
+       }
+
+       for _, tt := range tests {
+               t.Run(tt.name, func(t *testing.T) {
+                       t.Parallel()
+                       c := newTestClient(t)
+                       baseFakeClient := dynamicfake.NewSimpleDynamicClient(scheme.Scheme)
+                       fakeMapper := testutil.NewFakeRESTMapper(
+                               v1.SchemeGroupVersion.WithKind("Pod"),
+                               batchv1.SchemeGroupVersion.WithKind("Job"),
+                               schema.GroupVersion{Group: "rbac.authorization.k8s.io", Version: "v1"}.WithKind("ClusterRole"),
+                               v1.SchemeGroupVersion.WithKind("Namespace"),
+                       )
+                       restrictedClient := newRestrictedDynamicClient(baseFakeClient, tt.allowedNamespaces)
+                       sw := statusWaiter{
+                               client:     restrictedClient,
+                               restMapper: fakeMapper,
+                       }
+                       objs := getRuntimeObjFromManifests(t, tt.objManifests)
+                       for _, obj := range objs {
+                               u := obj.(*unstructured.Unstructured)
+                               gvr := getGVR(t, fakeMapper, u)
+                               err := baseFakeClient.Tracker().Create(gvr, u, u.GetNamespace())
+                               assert.NoError(t, err)
+                       }
+
+                       if strings.Contains(tt.name, "delet") {
+                               timeUntilDelete := time.Millisecond * 500
+                               for _, obj := range objs {
+                                       u := obj.(*unstructured.Unstructured)
+                                       gvr := getGVR(t, fakeMapper, u)
+                                       go func(gvr schema.GroupVersionResource, u *unstructured.Unstructured) {
+                                               time.Sleep(timeUntilDelete)
+                                               err := baseFakeClient.Tracker().Delete(gvr, u.GetNamespace(), u.GetName())
+                                               assert.NoError(t, err)
+                                       }(gvr, u)
+                               }
+                       }
+
+                       resourceList := getResourceListFromRuntimeObjs(t, c, objs)
+                       err := tt.testFunc(sw, resourceList, time.Second*3)
+                       if tt.expectErrs != nil {
+                               require.Error(t, err)
+                               for _, expectedErr := range tt.expectErrs {
+                                       assert.Contains(t, err.Error(), expectedErr.Error())
+                               }
+                               return
+                       }
+                       assert.NoError(t, err)
+                       assert.False(t, restrictedClient.clusterScopedListAttempted)
+               })
+       }
+}
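
One Go detail worth noting in the test diff above: the anonymous
goroutines now receive gvr and u as parameters instead of closing over
the loop variables. Before Go 1.22, all iterations of a loop shared
one variable, so goroutines started inside the loop could all observe
its final value. A self-contained illustration of the pattern
(independent of the Helm code):

package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		// Passing i as an argument pins a per-goroutine copy; with
		// pre-Go-1.22 semantics, closing over i directly could print
		// the same final value three times.
		go func(i int) {
			defer wg.Done()
			fmt.Println(i) // prints 0, 1, 2 in some order
		}(i)
	}
	wg.Wait()
}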

++++++ helm.obsinfo ++++++
--- /var/tmp/diff_new_pack.IFE6fG/_old  2026-01-17 14:53:38.488401075 +0100
+++ /var/tmp/diff_new_pack.IFE6fG/_new  2026-01-17 14:53:38.496401408 +0100
@@ -1,5 +1,5 @@
 name: helm
-version: 4.0.4
-mtime: 1765586390
-commit: 8650e1dad9e6ae38b41f60b712af9218a0d8cc11
+version: 4.0.5
+mtime: 1768430626
+commit: 1b6053d48b51673c5581973f5ae7e104f627fcf5
 

++++++ vendor.tar.gz ++++++
/work/SRC/openSUSE:Factory/helm/vendor.tar.gz /work/SRC/openSUSE:Factory/.helm.new.1928/vendor.tar.gz differ: char 13, line 1
