You received this bug notification because you are a member of Desktop
Packages, which is subscribed to gvfs in Ubuntu.
https://bugs.launchpad.net/bugs/2047356

Title:
  gvfs-udisks2-volume-monitor and gsd-housekeeping processes can eat a
  lot of CPU with a k3s workload

Status in gvfs package in Ubuntu:
  New

Bug description:
  On Ubuntu 22.04.3, when running a k3s workload that uses volumes (with
  the default local-path storageClass), the gvfs-udisks2-volume-monitor
  process can take around 100% of one CPU core, and the gsd-housekeeping
  process around 25% of one CPU core.
  This happens even when the actual k3s workload is idle.

  Steps To Reproduce:

  - Use or install a desktop Ubuntu 22.04.3 system (with default settings)
  - Install K3s on it (the current version is "v1.28.4+k3s2"), with default
  settings: "curl -sfL https://get.k3s.io | sh -"
  - Deploy k8s manifests with many volumes, like
  https://gitlab.com/-/snippets/3634487 (a minimal sketch of such a
  manifest is shown after this list): "wget
  https://gitlab.com/-/snippets/3634487/raw/main/deployment-wit-many-volumes.yaml
  && sudo k3s kubectl apply -f deployment-wit-many-volumes.yaml"
  - Check CPU consumption on the host with top, gnome-system-monitor, or
  any similar tool
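
  For illustration, a minimal sketch of such a manifest follows. It is a
  hypothetical reproducer, not the exact content of the linked snippet:
  the names, image and volume count are illustrative; the point is PVC
  volumes backed by the default local-path storageClass.

    # Hypothetical minimal reproducer; repeat the PVC and the matching
    # volume/volumeMount entries to get "many volumes".
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: demo-pvc-1
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: local-path
      resources:
        requests:
          storage: 10Mi
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: many-volumes
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: many-volumes
      template:
        metadata:
          labels:
            app: many-volumes
        spec:
          containers:
          - name: idle
            # The container itself stays idle: the CPU usage under
            # investigation comes from GNOME processes on the host.
            image: busybox
            command: ["sleep", "3600"]
            volumeMounts:
            - name: vol-1
              mountPath: /data/vol-1
          volumes:
          - name: vol-1
            persistentVolumeClaim:
              claimName: demo-pvc-1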

  Expected behavior:
  GNOME desktop tools should not interfere with k3s.

  Actual behavior:
  The gvfs-udisks2-volume-monitor and gsd-housekeeping processes consume
  a lot of CPU, at least at provisioning time.
  The same CPU consumption occurs if you then remove the workload ("sudo
  k3s kubectl delete -f deployment-wit-many-volumes.yaml"), until the PVs
  are deleted by k3s.
  I have other workloads (with data in PVs) where this CPU consumption is
  always present while the workload is running.
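
  A quick way to watch those two processes with plain procps tools
  (nothing k3s-specific):

    # top truncates long command names, so match on short prefixes:
    top -b -n 1 | grep -E 'gvfs|gsd-housekeep'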

  Additional context:
  The symptoms are very similar to https://github.com/k3s-io/k3s/issues/522,
  but the workaround from the comment
  https://github.com/k3s-io/k3s/issues/522#issuecomment-811737023 (adding
  a udev rule to ignore some loopback devices) does not help.

  Executing "systemctl stop --user gvfs-udisks2-volume-monitor" can be a
  temporary workaround (see the sketch below).
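
  A sketch of applying and reverting that workaround, assuming the
  standard gvfs user unit name (the monitor is D-Bus-activatable, so it
  may come back later in the session or at the next login):

    # Stop the volume monitor for the current desktop session only:
    systemctl --user stop gvfs-udisks2-volume-monitor.service

    # Restore it afterwards:
    systemctl --user start gvfs-udisks2-volume-monitor.service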

  Technical details: k3s uses containerd to run containers. The local-path
  storageClass mounts local volumes (physically stored in subfolders of
  /var/lib/rancher/k3s/storage) into these containers.
  I suppose GNOME applications try to scan these mount points. If so, the
  solution might be to make them ignore these mounts, much like
  https://github.com/moby/moby/blob/b96a0909f0ebc683de817665ff090d57ced6f981/contrib/udev/80-docker.rules
  does for Docker.
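
  To illustrate the idea, a hypothetical adaptation of that Docker rule
  for k3s might look like the sketch below. It is untested and may match
  nothing in practice: local-path volumes are bind-mounted directories
  rather than loop devices, so they may never show up as block devices to
  udev (which could also explain why the loopback workaround above does
  not help).

    # Hypothetical /etc/udev/rules.d/80-k3s.rules, modeled on moby's
    # 80-docker.rules: hide loop devices backed by k3s storage from
    # udisks (untested; k3s local-path volumes may not use loop devices).
    SUBSYSTEM=="block", DEVPATH=="/devices/virtual/block/loop*", ATTR{loop/backing_file}=="/var/lib/rancher/k3s/storage/*", ENV{UDISKS_PRESENTATION_HIDE}="1", ENV{UDISKS_IGNORE}="1"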

  NB: This was initially reported at
  https://github.com/k3s-io/k3s/issues/9093

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/gvfs/+bug/2047356/+subscriptions

