On Wednesday, March 11, 2020 at 1:34:17 AM UTC, maiski wrote:
>
> Quoting brend...@gmail.com:
> >
> > Qubes 4.1 (in development) has added a warning (in addition to the
> > current LVM space-usage warning) for LVM metadata usage above a
> > threshold. 4.0 doesn't have the metadata-nearing-full warning, and
> > that's what tends to cause these types of thinpool issues.
> >
> > In addition to the warning, Qubes 4.1 is also doubling (vs. the LVM
> > default value) the amount of space set aside for LVM thinpool
> > metadata, which will substantially reduce the chances of ever
> > hitting this issue under 4.1.
> >
> > Brendan
> >
> > PS - the above is not helpful for recovering this machine, of
> > course. However, recovery from this can be very difficult, and even
> > after recovery there is no guarantee of getting all the data back.
> > The Qubes devs are aware of this and very much want to avoid these
> > issues in the next release.
>
> Hm, yes, this does not help :/
> What about running fstrim on the SSD and trying to boot again?
> @brendan: I've seen that you had some thoughts about LVM in some
> postings, so would you care to elaborate/brainstorm on the situation I
> described? You know, every input is valuable right now :)
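
Re: fstrim - if you can get the pool activated and the filesystems
mounted, it shouldn't hurt to try. From dom0 that would be something
like:

    sudo fstrim -av

(trims every mounted filesystem that supports discard). I'm not sure it
helps once the pool's metadata is already full, though, since the pool
tends to go read-only at that point.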

Beyond that, TBH, I wouldn't know what to do. I ran into a similar
problem with 4.0 a long while back and just reinstalled, because it
seemed insurmountable at the time.

On my current install I've been reducing my main pool usage and manually
monitoring the metadata to avoid the situation, while waiting for 4.1 to
become stable before moving to it.

Chris Laprise (tasket) would be a better resource, if he's willing to jump 
in.

Brendan
