Hi Marco,

You can check "zfs list -o space", which will give you a more detailed view of what is using the space:

root@xxx:~# zfs list -o space
NAME                          AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
rpool                          507G   354G        0B    104K             0B       354G
rpool/ROOT                     507G  4.40G        0B     96K             0B      4.40G
rpool/ROOT/pve-1               507G  4.40G     1.05G   3.35G             0B         0B
rpool/data                     507G   312G        0B    112K             0B       312G
rpool/data/subvol-105-disk-0  8.62G  11.4G     49.2M   11.4G             0B         0B

Used = overall used
Usedsnap = used by snapshots
Usedds = used by the dataset itself (not counting snapshots, only live data)
Usedrefreserv = used by a refreservation set on the dataset
Usedchild = used by datasets/zvols further down in the same path (in my example, rpool has the same amount of Used and Usedchild space, since there is nothing directly inside rpool itself)
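
If you want the same breakdown for a single dataset, you can also query the underlying properties directly; a minimal sketch using the standard property names, with the dataset taken from the example above:

root@xxx:~# zfs get used,usedbysnapshots,usedbydataset,usedbychildren,usedbyrefreservation rpool/data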

Cheers,
Matthieu

On 24.09.2025 at 18:29, Marco Gaiarin wrote:
Hello! Marco Gaiarin
   On that day, it was said...

Uh, wait... we did indeed forget to enable 'discard' on the volumes; we have
since enabled it (and rebooted the VM).
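
Note that setting discard=on on the VM disk only lets TRIM requests pass through to the zvol; the guest still has to issue them. A minimal sketch, assuming a Linux guest with util-linux:

  # run inside the VM: trim all mounted filesystems that support it, verbosely
  fstrim -av

Alternatively, the guest filesystems can be mounted with the 'discard' option for continuous trimming.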
I'll check the refreservation property and report back.
No, the volumes all seem to have refreservation set to 'none', as expected;
the current situation is:

  root@lamprologus:~# zfs list | grep ^rpool-data
  rpool-data                  54.2T  3.84T   171K  /rpool-data
  rpool-data/vm-100-disk-0    1.11T  3.84T  1.11T  -
  rpool-data/vm-100-disk-1    2.32T  3.84T  2.32T  -
  rpool-data/vm-100-disk-10   1.82T  3.84T  1.82T  -
  rpool-data/vm-100-disk-11   2.03T  3.84T  2.03T  -
  rpool-data/vm-100-disk-12   1.96T  3.84T  1.96T  -
  rpool-data/vm-100-disk-13   2.48T  3.84T  2.48T  -
  rpool-data/vm-100-disk-14   2.21T  3.84T  2.21T  -
  rpool-data/vm-100-disk-15   2.42T  3.84T  2.42T  -
  rpool-data/vm-100-disk-16   2.15T  3.84T  2.15T  -
  rpool-data/vm-100-disk-17   2.14T  3.84T  2.14T  -
  rpool-data/vm-100-disk-18   3.39T  3.84T  3.39T  -
  rpool-data/vm-100-disk-19   3.40T  3.84T  3.40T  -
  rpool-data/vm-100-disk-2    1.32T  3.84T  1.32T  -
  rpool-data/vm-100-disk-20   3.36T  3.84T  3.36T  -
  rpool-data/vm-100-disk-21   2.50T  3.84T  2.50T  -
  rpool-data/vm-100-disk-22   3.22T  3.84T  3.22T  -
  rpool-data/vm-100-disk-23   2.73T  3.84T  2.73T  -
  rpool-data/vm-100-disk-24   2.53T  3.84T  2.53T  -
  rpool-data/vm-100-disk-3     213K  3.84T   213K  -
  rpool-data/vm-100-disk-4     213K  3.84T   213K  -
  rpool-data/vm-100-disk-5    2.33T  3.84T  2.33T  -
  rpool-data/vm-100-disk-6    2.28T  3.84T  2.28T  -
  rpool-data/vm-100-disk-7    2.13T  3.84T  2.13T  -
  rpool-data/vm-100-disk-8    2.29T  3.84T  2.29T  -
  rpool-data/vm-100-disk-9    2.11T  3.84T  2.11T  -
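
As a quick cross-check, the USED column can be summed over all the volumes; a sketch with zfs list -Hp (no header, exact byte counts) piped through awk:

  root@lamprologus:~# zfs list -Hp -o used -t volume -r rpool-data | awk '{s+=$1} END {printf "%.2f TiB\n", s/2^40}'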

and a random volume (all of them look similar):

  root@lamprologus:~# zfs get all rpool-data/vm-100-disk-18 | grep refreservation
  rpool-data/vm-100-disk-18  refreservation        none  default
  rpool-data/vm-100-disk-18  usedbyrefreservation  0B    -

Another strange thing is that all of these are 2 TB volumes:

  root@lamprologus:~# cat /etc/pve/qemu-server/100.conf | grep vm-100-disk-19
  scsi20: rpool-data:vm-100-disk-19,backup=0,discard=on,replicate=0,size=2000G

but:

  root@lamprologus:~# zfs list rpool-data/vm-100-disk-19
  NAME                        USED  AVAIL  REFER  MOUNTPOINT
  rpool-data/vm-100-disk-19  3.40T  3.84T  3.40T  -

Why is 'USED' 3.40T?
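
The properties that usually explain such a gap (volblocksize interacting with raidz parity/padding, compression, refreservation) can be pulled in one go; a sketch using standard zfs properties:

  root@lamprologus:~# zfs get volsize,volblocksize,used,referenced,compressratio rpool-data/vm-100-disk-19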


Thanks.