Re: [qubes-users] fedora-40-minimal install - message about fstrim

2024-10-17 Thread 'Rusty Bird' via qubes-users

Ulrich Windl:
> Of course if fstrim fails, it has the same amount of blocks to trim
> on the next run.

But if 'fstrim --verbose' prints a number of trimmed bytes at all and
not an error, then apparently the trimming didn't fail (this time).

To test the different behavior of filesystems like ext4 that keep
track of already discarded blocks, and filesystems like XFS and Btrfs
that don't (or not fully), here's a little script:

https://gist.github.com/rustybird/750a5b28e7b285669fe90851e6f48b32

It creates a filesystem on a 5 GiB loop device, writes three 1 GiB
files inside the mountpoint, deletes two of them, and runs fstrim
three times, looking at the disk usage of the backing file after
each fstrim run. Results:

# ./fstrimtest ext4
3139M   img
mnt: 3.8 GiB (4122611712 bytes) trimmed
1091M   img
mnt: 0 B (0 bytes) trimmed
1091M   img
mnt: 0 B (0 bytes) trimmed
1091M   img

# ./fstrimtest xfs
3137M   img
mnt: 3.9 GiB (4227661824 bytes) trimmed
1089M   img
mnt: 3.9 GiB (4227661824 bytes) trimmed
1089M   img
mnt: 3.9 GiB (4227661824 bytes) trimmed
1089M   img

# ./fstrimtest btrfs
3084M   img
mnt: 3.5 GiB (3766091776 bytes) trimmed
1028M   img
mnt: 3 GiB (3255435264 bytes) trimmed
1028M   img
mnt: 3 GiB (3255435264 bytes) trimmed
1028M   img
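For reference, here is a rough sketch of what such a test script could
look like; this is a reconstruction from the description above, not
the actual gist, and the `img`/`mnt` names simply match the output
shown:

```shell
#!/bin/bash
# Sketch of the fstrim behavior test described above (NOT the actual
# gist): create a filesystem on a 5 GiB loop device backed by a sparse
# file, write three 1 GiB files, delete two, then run fstrim three
# times while watching the disk usage of the backing file.
set -eu

fstrimtest() {
    local fstype=$1 dev f i
    truncate -s 5G img                  # sparse backing file
    dev=$(losetup --find --show img)    # attach a free loop device
    mkfs."$fstype" "$dev" >/dev/null 2>&1
    mkdir -p mnt
    mount "$dev" mnt

    for f in a b c; do
        dd if=/dev/urandom of=mnt/"$f" bs=1M count=1024 status=none
    done
    rm mnt/a mnt/b
    sync

    du -BM img                          # usage before any trim
    for i in 1 2 3; do
        fstrim --verbose mnt
        du -BM img                      # usage after each fstrim run
    done

    umount mnt
    losetup --detach "$dev"
    rm img
}

# Only run when invoked as root with a filesystem type argument,
# e.g.: sudo ./fstrimtest ext4
if [ "$(id -u)" -eq 0 ] && [ "$#" -ge 1 ]; then
    fstrimtest "$1"
fi
```

Running it requires root (for losetup/mkfs/mount); the guard at the
bottom keeps it inert otherwise.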

Rusty


-- 
You received this message because you are subscribed to the Google Groups 
"qubes-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to qubes-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/ZxFAso9yGp_cKc_v%40mutt.


Re: [qubes-users] fedora-40-minimal install - message about fstrim

2024-10-16 Thread 'Rusty Bird' via qubes-users

'Rusty Bird' via qubes-users:
> Marek Marczykowski-Górecki:
> > On Wed, Oct 16, 2024 at 04:28:14PM +, Rusty Bird wrote:
> > > Also, on file-reflink
> > > systems, where the dom0 root filesystem is storing (possibly many
> > > terabytes worth of) VM volumes, fstrim can take really long. E.g.
> > > here on my main Btrfs system, which is otherwise quite fast:
> > > 
> > > # time fstrim /var/tmp/
> > > real  4m29.240s
> > 
> > But that takes long only if there is really a lot of data to discard,
> > no?
> 
> # for i in 1 2 3; do time fstrim /var/tmp/; done 2>&1 | grep real
> real  4m24.308s
> real  4m34.060s
> real  4m29.806s
> 
> I don't see anything in Btrfs tracking which unused blocks it has
> already issued discards for. Or in ext4, but it doesn't matter with
> the small ext4 dom0 root fs in an LVM Thin installation.

Actually ext4 does keep track:

https://serverfault.com/questions/1113127/fstrim-is-very-slow-on-xfs-and-always-return-same-value-unlike-ext4

> So a large fs
> that's neither almost empty nor almost full has to at least generate a
> gigantic list of (due to fragmentation) probably rather small ranges
> of blocks to be discarded in response to every fstrim and forward it
> through the block subsystem (which I don't think is keeping track
> either?) to the drive. Only after all of that overhead, I guess the
> drive might respond faster if it had already done most of the work
> last time.

Rusty




Re: [qubes-users] fedora-40-minimal install - message about fstrim

2024-10-16 Thread 'Rusty Bird' via qubes-users

Marek Marczykowski-Górecki:
> On Wed, Oct 16, 2024 at 04:28:14PM +, Rusty Bird wrote:
> > Also, on file-reflink
> > systems, where the dom0 root filesystem is storing (possibly many
> > terabytes worth of) VM volumes, fstrim can take really long. E.g.
> > here on my main Btrfs system, which is otherwise quite fast:
> > 
> > # time fstrim /var/tmp/
> > real  4m29.240s
> 
> But that takes long only if there is really a lot of data to discard,
> no?

# for i in 1 2 3; do time fstrim /var/tmp/; done 2>&1 | grep real
real  4m24.308s
real  4m34.060s
real  4m29.806s

I don't see anything in Btrfs tracking which unused blocks it has
already issued discards for. Or in ext4, but it doesn't matter with
the small ext4 dom0 root fs in an LVM Thin installation. So a large fs
that's neither almost empty nor almost full has to at least generate a
gigantic list of (due to fragmentation) probably rather small ranges
of blocks to be discarded in response to every fstrim and forward it
through the block subsystem (which I don't think is keeping track
either?) to the drive. Only after all of that overhead, I guess the
drive might respond faster if it had already done most of the work
last time.

Rusty




Re: [qubes-users] fedora-40-minimal install - message about fstrim

2024-10-16 Thread 'Rusty Bird' via qubes-users

Marek Marczykowski-Górecki:
> Maybe? You do need root for calling fstrim. And not calling it isn't
> really huge deal, as you explain below. And it failing shouldn't
> interrupt install anyway (subprocess.call, not subprocess.check_call).
> But the error message indeed may be confusing.
> Theoretically, sudo could be used for this call and that would be fine
> in dom0, but possibly less so in a qube (yes, you can install templates
> via Admin API from a qube), especially is passwordless-root package is
> not installed...

Ah okay, that answers why not 'sudo fstrim'. Also, on file-reflink
systems, where the dom0 root filesystem is storing (possibly many
terabytes worth of) VM volumes, fstrim can take really long. E.g.
here on my main Btrfs system, which is otherwise quite fast:

# time fstrim /var/tmp/
real  4m29.240s

So now I'm thinking fstrim is overkill just to install a template.
Instead, maybe Salt or something could ensure that everyone (including
people who installed via qubes-dist-upgrade) has the 'discard' mount
option (or 'discard=async' for Btrfs, where that would be the default
on modern kernels if not overridden by 'discard[=sync]') unless a user
has explicitly added 'nodiscard'.
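For illustration, such mount options would end up in /etc/fstab
roughly like this (the device path and UUID below are placeholders;
the actual entries depend on the installation):

```
# ext4 dom0 root on LVM Thin, with continuous discard
/dev/mapper/qubes_dom0-root  /  ext4   defaults,discard        1  1

# Btrfs root with asynchronous discard (the modern-kernel default)
UUID=...                     /  btrfs  defaults,discard=async  0  0
```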

> > Then qvm-template was created (which like other qvm- tools usually
> > runs as a regular user) and now fstrim is skipped unless someone
> > happens to invoke qvm-template as root. Skipping seems like a bug,
> > but on R4.2 systems it's mitigated by the installer adding the
> > 'discard' mount option for the dom0 root filesystem, making fstrim
> > redundant.  Except for people who installed via qubes-dist-upgrade
> > or removed the mount option. For those, there's still the systemd
> > fstrim.timer that should release the space to LVM, hopefully soon
> > enough (weekly).

Rusty



Re: [qubes-users] fedora-40-minimal install - message about fstrim

2024-10-16 Thread 'Rusty Bird' via qubes-users

Boryeu Mao:
> On Tue, Oct 15, 2024 at 3:59 AM Rusty Bird  wrote:
> > Boryeu Mao:
> > > For the template install command on Qubes release 4.2.3
> > >
> > >sudo qubes-dom0-update qubes-template-fedora-40-minimal
> > >
> > > I received a message that
> > >
> > >fstrim: /var/tmp/tmpsd1ns61v/var/lib/qubes/vm-template: the discard
> > > operation is not supported
> >
> > Did you maybe mount a tmpfs at /var/tmp?

> [...] no manual tmpfs mount.

I assume you're seeing the same "not supported" message if you run:

$ sudo fstrim /var/tmp/

The only thing I can think of is that you have custom partitioning,
and the storage layer immediately underneath the filesystem hosting
/var/tmp/ is dm-crypt (unusual for an LVM Thin installation), and
dm-crypt has been mapped with discard disabled.

Your storage tree (showing discard support) can be printed with:

$ lsblk --output +DISC-MAX
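As a complement, discard support can also be read straight from sysfs;
here's a small sketch (the discard_max_bytes attribute is standard,
and a value of 0 means the device, or a mapping such as dm-crypt above
it, does not support discard):

```shell
#!/bin/bash
# Report whether each block device advertises discard (TRIM) support,
# by reading /sys/block/*/queue/discard_max_bytes.
discard_report() {
    local q dev max
    shopt -s nullglob                   # empty loop if no devices
    for q in /sys/block/*/queue; do
        dev=${q%/queue}; dev=${dev##*/}
        max=$(cat "$q/discard_max_bytes" 2>/dev/null || echo 0)
        if [ "$max" -gt 0 ]; then
            echo "$dev: discard supported (up to $max bytes per request)"
        else
            echo "$dev: discard not supported"
        fi
    done
}

discard_report
```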

> > https://github.com/QubesOS/qubes-core-admin-client/commit/4a9b57f91fdf3a2b35a5cf707970d05bf9cadba7

> In qvm_template_postprocess.py (which the above link points to), fstrim
> is called only if the root user does the template install.

To me this looks like something that was missed in the move to
qvm-template:

Previously, qubes-dom0-update (which had to be run as root) would
install templates as normal RPM packages. I guess the logic to skip
fstrim for non-root users might have been put there to ease testing
the qvm-template-postprocess tool? CCing Marek.

Then qvm-template was created (which like other qvm- tools usually
runs as a regular user) and now fstrim is skipped unless someone
happens to invoke qvm-template as root. Skipping seems like a bug, but
on R4.2 systems it's mitigated by the installer adding the 'discard'
mount option for the dom0 root filesystem, making fstrim redundant.
Except for people who installed via qubes-dist-upgrade or removed the
mount option. For those, there's still the systemd fstrim.timer that
should release the space to LVM, hopefully soon enough (weekly).

Finally, you've used qubes-dom0-update, which nowadays calls
qvm-template for template-related stuff. For this, qubes-dom0-update
can actually be run as non-root, but you ran it with sudo, so fstrim
was *not* skipped. (Which then failed on your system.)

> Thank you very much for helping.

Happy to. It's interesting :)

Rusty




Re: [qubes-users] fedora-40-minimal install - message about fstrim

2024-10-15 Thread 'Rusty Bird' via qubes-users

Boryeu Mao:
> For the template install command on Qubes release 4.2.3
> 
>sudo qubes-dom0-update qubes-template-fedora-40-minimal
> 
> I received a message that
> 
>fstrim: /var/tmp/tmpsd1ns61v/var/lib/qubes/vm-template: the discard 
> operation is not supported

Did you maybe mount a tmpfs at /var/tmp? That would explain fstrim not
working. It also wouldn't matter then.

> The template appears to be running normally, so perhaps this is a warning 
> message.

Pretty much. The fstrim invocation was added to inform the storage
underlying the filesystem that hosts /var/tmp (LVM Thin by default)
that the space previously used for temporary image files extracted
during the installation process can be freed:

https://github.com/QubesOS/qubes-core-admin-client/commit/4a9b57f91fdf3a2b35a5cf707970d05bf9cadba7

But it doesn't affect the installed template.

Rusty




Re: [qubes-users] 'locking' a vm possible? (to prevent accidental shutdown)

2024-04-15 Thread 'Rusty Bird' via qubes-users

Rusty Bird:
> Boryeu Mao:
> > An attempt to shut down `sys-firewall` in `Qube Manager` receives a warning 
> > about running processes in the qube; similarly, on the command line 
> > `qvm-shutdown sys-firewall` fails with an error.  Is it possible to 
> > designate an appVM to behave similarly so it won't get shut down 
> > accidentally?
> 
> Not as a user-facing feature AFAIK. But you could use the qubes.ext
> Python entry point
> 
> https://github.com/QubesOS/qubes-core-admin/blob/v4.2.21/qubes/ext/__init__.py#L57-L59
> 
> to add another "domain-pre-shutdown" event handler like this one
> (yours could e.g. check if the VM has a certain tag):
> 
> https://github.com/QubesOS/qubes-core-admin/blob/v4.2.21/qubes/ext/audio.py#L65-L75

Sorry, that second link should have been:

https://github.com/QubesOS/qubes-core-admin/blob/v4.2.21/qubes/ext/audio.py#L31-L38

Rusty



Re: [qubes-users] 'locking' a vm possible? (to prevent accidental shutdown)

2024-04-15 Thread 'Rusty Bird' via qubes-users

Boryeu Mao:
> An attempt to shut down `sys-firewall` in `Qube Manager` receives a warning 
> about running processes in the qube; similarly, on the command line 
> `qvm-shutdown sys-firewall` fails with an error.  Is it possible to 
> designate an appVM to behave similarly so it won't get shut down 
> accidentally?

Not as a user-facing feature AFAIK. But you could use the qubes.ext
Python entry point

https://github.com/QubesOS/qubes-core-admin/blob/v4.2.21/qubes/ext/__init__.py#L57-L59

to add another "domain-pre-shutdown" event handler like this one
(yours could e.g. check if the VM has a certain tag):

https://github.com/QubesOS/qubes-core-admin/blob/v4.2.21/qubes/ext/audio.py#L65-L75

Rusty



Re: [qubes-users] Re: question on 'service-name' for the new (R4.2) qrexec policy

2024-02-13 Thread 'Rusty Bird' via qubes-users

Boryeu Mao:
> > For R4.1.2 I had some RPC calls with + and - characters in the file 
> > name.  These are considered invalid characters in service names in 
> > the new qrexec policy format (e.g. in 
> > /etc/qubes/policy.d/30-user.policy).  Using the wildcard * works, but I 
> > wonder if there is any way to keep these characters when explicitly 
> > specifying the calls.

> Correction - only '+' is considered an invalid character.

Already in the old format, a file /etc/qubes-rpc/policy/foo+bar+baz
actually specified the policy for a qrexec service named 'foo' called
with one argument 'bar+baz'. 

(Invoking qrexec-client-vm for 'foo+bar+baz' will attempt to execute a
specialized implementation at /etc/qubes-rpc/foo+bar+baz first, or if
that doesn't exist /etc/qubes-rpc/foo for a general implementation.
That is still the same in R4.2.)

In the new policy format this would be written as a line starting with

foo +bar+baz

Note the whitespace before the first '+' character, which makes it a
little bit clearer what's going on.
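To illustrate with a made-up policy snippet (the VM names are
placeholders), allowing 'foo' only for the argument 'bar+baz' could
look like:

```
# /etc/qubes/policy.d/30-user.policy
foo  +bar+baz  personal  vault   allow
foo  *         @anyvm    @anyvm  deny
```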

Rusty
