On April 20, 2022 4:54 am, Lindsay Mathieson wrote:
> This is really odd - was downloading a large amount of data in a Debian
> VM last night, something went wrong (my problem), it didn't stop and
> filled up the volume.
>
> Shouldn't be a problem as the virtual disk only exists to store
> temporary data:
>
> * vm-100-disk-1
> * 256GB
> * 1 partition, formatted and mounted as EXT4
> * Located under rpool/data
>
> Trouble is, it kept expanding past 256GB, using up all the free space on
> the host boot drive. This morning everything was down and I had to
> delete the volume to get a functioning system.
>
> zfs list of volumes and snapshots:
>
> NAME                            USED  AVAIL  REFER  MOUNTPOINT
> rpool                           450G     0B   104K  /rpool
> rpool/ROOT                     17.3G     0B    96K  /rpool/ROOT
> rpool/ROOT/pve-1               17.3G     0B  17.3G  /
> rpool/data                      432G     0B   128K  /rpool/data
> rpool/data/basevol-101-disk-0   563M     0B   563M  /rpool/data/basevol-101-disk-0
> rpool/data/basevol-102-disk-0   562M     0B   562M  /rpool/data/basevol-102-disk-0
> rpool/data/subvol-151-disk-0    911M     0B   911M  /rpool/data/subvol-151-disk-0
> rpool/data/subvol-152-disk-0    712M     0B   712M  /rpool/data/subvol-152-disk-0
> rpool/data/subvol-153-disk-0    712M     0B   712M  /rpool/data/subvol-153-disk-0
> rpool/data/subvol-154-disk-0    710M     0B   710M  /rpool/data/subvol-154-disk-0
> rpool/data/subvol-155-disk-0    838M     0B   838M  /rpool/data/subvol-155-disk-0
> rpool/data/vm-100-disk-0       47.3G     0B  45.0G  -
> _*rpool/data/vm-100-disk-1       338G     0B   235G  -*_
used 338G, referred 235G - so you either have snapshots, or raidz overhead
taking up the extra space.

> rpool/data/vm-100-state-fsck   2.05G     0B  2.05G  -
> rpool/data/vm-201-disk-0       40.1G     0B  38.0G  -
> rpool/data/vm-201-disk-1        176K     0B   104K  -
> root@px-server:~#
>
> NAME                                     USED  AVAIL  REFER  MOUNTPOINT
> rpool/data/basevol-101-disk-0@__base__     8K      -   563M  -
> rpool/data/basevol-102-disk-0@__base__     8K      -   562M  -
> rpool/data/vm-100-disk-0@fsck           2.32G      -  42.7G  -
> rpool/data/vm-100-disk-1@fsck            103G      -   164G  -

snapshots are taking up 105G at least across the pool, and the 103G on
vm-100-disk-1@fsck alone lines up nicely with 338G - 235G = 103G (it
doesn't have to - snapshot space accounting is a bit complicated).

> rpool/data/vm-201-disk-0@BIOSChange     2.12G      -  37.7G  -
> rpool/data/vm-201-disk-1@BIOSChange       72K      -    96K  -
>
> How was this even possible?

see above. is the zvol thin-provisioned? if yes, then the snapshots are
likely at fault. for regular zvols, creating a snapshot already reserves
enough space at snapshot creation time, so such a situation cannot arise.
with thin-provisioned storage it is always possible to overcommit and run
out of space.
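if you want to confirm where the space went before deleting anything, ZFS
can break the usage down per dataset. a quick sketch (dataset name taken
from the listing above):

  # split 'used' into snapshot, live-data and reservation shares
  zfs list -o space rpool/data/vm-100-disk-1

  # or query the relevant properties directly
  zfs get usedbysnapshots,usedbydataset rpool/data/vm-100-disk-1

and since the 'fsck' snapshot belongs to a PVE VM snapshot (note the
vm-100-state-fsck volume), removing it with 'qm delsnapshot 100 fsck'
rather than a plain 'zfs destroy' keeps the VM config consistent and
should give back most of those 103G.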

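checking for thin provisioning, and guarding against a repeat, could look
like this (a sketch - dataset names from the listing above, the quota
value is just an example):

  # a sparse/thin zvol has refreservation=none; a regular zvol
  # reserves its full volsize up front
  zfs get volsize,refreservation rpool/data/vm-100-disk-1

  # optionally cap the guest volumes so a runaway disk cannot
  # starve the root dataset (400G is an arbitrary example)
  zfs set quota=400G rpool/data

alternatively, disabling "Thin provision" on the ZFS storage in PVE (the
'sparse' flag in /etc/pve/storage.cfg) makes newly created zvols carry a
full refreservation, at the cost of allocating the space up front.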