Re: [qubes-users] Thin Pool metadata full

2019-05-13 Thread mnv
May 13, 2019 2:48 AM, "Chris Laprise"  wrote:

> This is thin LVM's Achilles heel. In Qubes' case there is no unallocated
> space in the volume group (the installer created the pool to occupy all
> the remaining space), so you can't simply enlarge the metadata size.
> 
> What you can do (something I've done myself) is take advantage of the
> unnecessarily huge swap volume created by the installer. You shouldn't
> need more than about 1-1.5GB for dom0, so if swap is around 8GB you can
> easily reduce it to make room for more tmeta:
> 
> # swapoff -a
> # lvresize -L -200M qubes_dom0/swap
> # mkswap /dev/qubes_dom0/swap
> # swapon -a
> # lvextend --poolmetadatasize +200M qubes_dom0/pool00
> 
> -- 
> Chris Laprise, tas...@posteo.net
> https://github.com/tasket
> https://twitter.com/ttaskett
> PGP: BEE2 20C5 356E 764A 73EB 4AB3 1DC4 D106 F07F 1886


Thank you!!



Re: [qubes-users] Thin Pool metadata full

2019-05-12 Thread Chris Laprise
This is thin LVM's Achilles heel. In Qubes' case there is no
unallocated space in the volume group (the installer created the pool
to occupy all the remaining space), so you can't simply enlarge the
metadata size.


What you can do (something I've done myself) is take advantage of the
unnecessarily huge swap volume created by the installer. You shouldn't
need more than about 1-1.5GB for dom0, so if swap is around 8GB you can
easily reduce it to make room for more tmeta:


# swapoff -a
# lvresize -L -200M qubes_dom0/swap
# mkswap /dev/qubes_dom0/swap
# swapon -a
# lvextend --poolmetadatasize +200M qubes_dom0/pool00
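
A quick sanity check afterwards (just a sketch; `qubes_dom0`/`pool00` are
the default names from the Qubes installer, adjust if yours differ):

# lvs -a -o lv_name,lv_size,data_percent,metadata_percent qubes_dom0
# swapon --show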

--

Chris Laprise, tas...@posteo.net
https://github.com/tasket
https://twitter.com/ttaskett
PGP: BEE2 20C5 356E 764A 73EB  4AB3 1DC4 D106 F07F 1886



Re: [qubes-users] Thin Pool metadata full

2019-05-12 Thread mnv
--- Logical volume ---
  Internal LV Name       lvol0_pmspare
  VG Name                qub
  LV UUID                J8VXWl-UoWw-C6bU-mcPC-C6Fa-7SK1-s3aybZ
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2018-04-26 14:35:24 +0200
  LV Status              NOT available
  LV Size                72.00 MiB
  Current LE             18
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

  --- Segments ---
  Logical extents 0 to 17:
    Type                linear
    Physical volume     /dev/sdc3
    Physical extents    0 to 17


  --- Logical volume ---
  Internal LV Name       pool00_tmeta
  VG Name                qub
  LV UUID                usxHzG-Tsgj-dAEl-Wr8L-jdK4-hLSR-udf7Wg
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2018-04-26 14:35:24 +0200
  LV Status              available
  # open                 1
  LV Size                72.00 MiB
  Current LE             18
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  --- Segments ---
  Logical extents 0 to 17:
    Type                linear
    Physical volume     /dev/sdc3
    Physical extents    35858 to 35875


  --- Logical volume ---
  Internal LV Name       pool00_tdata
  VG Name                qub
  LV UUID                aegSMJ-maMx-te7y-7xYf-QUoM-piP1-sCXTi5
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2018-04-26 14:35:24 +0200
  LV Status              available
  # open                 1
  LV Size                218.42 GiB
  Current LE             55916
  Segments               3
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

  --- Segments ---
  Logical extents 0 to 35839:
    Type                linear
    Physical volume     /dev/sdc3
    Physical extents    18 to 35857

  Logical extents 35840 to 51852:
    Type                linear
    Physical volume     /dev/sdc4
    Physical extents    0 to 16012

  Logical extents 51853 to 55915:
    Type                linear
    Physical volume     /dev/sdc3
    Physical extents    36900 to 40962



Re: [qubes-users] Thin Pool metadata full

2019-05-12 Thread mnv
May 12, 2019 10:05 AM, m...@disroot.org wrote:

>> Back up your full system first. You can then try running `sudo fstrim -av`
>> in dom0 and your VMs, and deleting unused ones.
 
No matter how much space I free up by deleting VMs, I can't extend the
metadata: the volumes are completely full. I have to shrink something
first, maybe the PV or the pool itself. Shrinking `rt` (aka the dom0
root) doesn't work, and 72MB of metadata isn't nearly enough for my
usage here.
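
One quick way to confirm the VG itself has nothing left to give (a sketch;
`qub` is the VG name from the listings below):

`sudo vgs -o vg_name,vg_size,vg_free qub`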
 
 Pasting the output of `lvs` again, hopefully with better formatting:
 
 LV              VG  Attr       LSize   Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert
 00              qub -wi-ao       4.00g
 [lvol0_pmspare] qub ewi---      72.00m
 pool00          qub twi-aotz-- 218.42g               87.55  94.46
 [pool00_tdata]  qub Twi-ao     218.42g
 [pool00_tmeta]  qub ewi-ao      72.00m
 rt              qub Vwi-aotz-- 139.00g pool00        15.82

`sudo lvdisplay --maps`

--- Logical volume ---
  LV Name                pool00
  VG Name                qub
  LV UUID                0HBdwg-MMUw-DOTi-rBqc-aZft-6oeX-XzY8ic
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2018-04-26 14:35:24 +0200
  LV Pool metadata       pool00_tmeta
  LV Pool data           pool00_tdata
  LV Status              available
  # open                 222
  LV Size                218.42 GiB
  Allocated pool data    87.50%
  Allocated metadata     94.84%
  Current LE             55916
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2

  --- Segments ---
  Logical extents 0 to 55915:
    Type                thin-pool
    Monitoring          monitored
    Chunk size          128.00 KiB
    Discards            passdown
    Thin count          221
    Transaction ID      31775
    Zero new blocks     yes


  --- Logical volume ---
  LV Path                /dev/qub/rt
  LV Name                rt
  VG Name                qub
  LV UUID                zxQIVC-sc5O-0r6I-iysK-KA3u-SxFf-rkHlYS
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2018-04-26 14:35:25 +0200
  LV Pool name           pool00
  LV Status              available
  # open                 1
  LV Size                139.00 GiB
  Mapped size            15.77%
  Current LE             35584
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:3

  --- Segments ---
  Virtual extents 0 to 35583:
    Type                thin
    Device ID           1


  --- Logical volume ---
  LV Path                /dev/qub/00
  LV Name                00
  VG Name                qub
  LV UUID                0dZzlp-a0K9-T7fD-A6UJ-aATj-dlgw-nyBGYg
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2018-04-26 14:35:51 +0200
  LV Status              available
  # open                 1
  LV Size                4.00 GiB
  Current LE             1024
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4

  --- Segments ---
  Logical extents 0 to 1023:
    Type                linear
    Physical volume     /dev/sdc3
    Physical extents    35876 to 36899




`sudo pvdisplay --maps` 

--- Physical volume ---
  PV Name               /dev/sdc3
  VG Name               qub
  PV Size               160.01 GiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              40963
  Free PE               0
  Allocated PE          40963
  PV UUID               awkn4E-cdKu-X4tk-4OWl-rIZn-usxG-KbEwAv

  --- Physical Segments ---
  Physical extent 0 to 17:
    Logical volume      /dev/qub/lvol0_pmspare
    Logical extents     0 to 17
  Physical extent 18 to 35857:
    Logical volume      /dev/qub/pool00_tdata
    Logical extents     0 to 35839
  Physical extent 35858 to 35875:
    Logical volume      /dev/qub/pool00_tmeta
    Logical extents     0 to 17
  Physical extent 35876 to 36899:
    Logical volume      /dev/qub/00
    Logical extents     0 to 1023
  Physical extent 36900 to 40962:
    Logical volume      /dev/qub/pool00_tdata
    Logical extents     51853 to 55915


  --- Physical volume ---
  PV Name               /dev/sdc4
  VG Name               qub
  PV Size               62.55 GiB / not usable 2.59 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              16013
  Free PE               0
  Allocated PE          16013
  PV UUID               qghdZz-0yuW-8RgI-bc9f-HdAY-3LHV-qtunpq

  --- Physical Segments ---
  Physical extent 0 to 16012:
    Logical volume      /dev/qub/pool00_tdata
    Logical extents     35840 to 51852


Re: [qubes-users] Thin Pool metadata full

2019-05-12 Thread 'awokd' via qubes-users

m...@disroot.org:

`Domain disp6345 has failed to start: WARNING: Remaining free space in metadata of 
thin pool qub/pool00 is too low (94.73%% >= 94.44%%). Resize is recommended. 
Cannot create new thin volume, free space in thin pool qub/pool00 reached 
threshold.`

If I try to extend the metadata it says:
`sudo lvextend --poolmetadatasize +10M qub/pool00
  Rounding size to boundary between physical extents: 12.00 MiB.
  Insufficient free space: 3 extents needed, but only 0 available`
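
To watch how close the pool is to the threshold while cleaning up, this
sketch uses the stock `lvs` report fields `data_percent`/`metadata_percent`:

`sudo lvs -o lv_name,data_percent,metadata_percent qub/pool00`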

VM volumes are not included here:

`sudo lvs -a`
` LV              VG  Attr       LSize   Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert
  00              qub -wi-ao       4.00g
  [lvol0_pmspare] qub ewi---      72.00m
  pool00          qub twi-aotz-- 218.42g               87.42  94.30
  [pool00_tdata]  qub Twi-ao     218.42g
  [pool00_tmeta]  qub ewi-ao      72.00m
  rt              qub Vwi-aotz-- 139.00g pool00        15.81 `
VM devices are not included here either:

`sudo pvs -a`
` /dev/sdc3   qub lvm2 a-- 160.01g 0
  /dev/sdc4   qub lvm2 a--  62.55g 0
  /dev/qub/00     ---        0     0
  /dev/qub/rt     ---        0     0`

Any help is appreciated.

Back up your full system first. You can then try running `sudo fstrim -av`
in dom0 and your VMs, and deleting unused ones.
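
For example (a sketch; standard Qubes 4.x tools, VM names are placeholders):

sudo fstrim -av                        # trim dom0 filesystems
qvm-run -u root <vmname> 'fstrim -av'  # trim inside each running VM
qvm-remove <unused-vmname>             # deleting a VM frees its thin volumes

Freeing mapped blocks also drops their entries from the pool metadata,
which is the part that is full here.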

