Re: [systemd-devel] Dropping split-usr/unmerged-usr support

2022-04-07 Thread Wol

On 07/04/2022 23:48, Jason A. Donenfeld wrote:

A few conversations over the course of the day in an IRC channel isn't
necessarily representative of the whole project, but the impression I
got was way less so about hostility and more so just that nobody has
gotten around to doing the work and tracking whatever bugs come out of
it that need to be fixed. It's been started, but seems to have
fizzled. Maybe the recent discussion here and funny happenings over in
Debian will inject some life into it. So maybe we'll wind up with
merged usr after all. No promises, but I think it's much more a matter
of "when" than "if".

(My personal 2¢ is that I'd be happy to see systemd help corral us
stragglers into merged usr, and in the process, drop some complexity
of its own for supporting unmerged usr.)


I don't really have a horse in that race; I'm just left with the strong 
feeling that there are people who are strongly anti-systemd, and there 
are people who are pro-systemd, but what's important is that THEY RESPECT 
EACH OTHER. It's just that the anti-systemd guys are in the majority, 
and it shows. Funtoo left me with a very nasty taste, best described as 
"you always see in others your own worst faults" :-(


Cheers,
Wol


Re: [systemd-devel] Dropping split-usr/unmerged-usr support

2022-04-07 Thread Wol

On 07/04/2022 17:47, Mike Gilbert wrote:

So, my guess would be that the people who dislike merged-/usr are also
the ones who dislike systemd, no? i.e. do they really matter if we are
talking about what to support in systemd? They'd not use our stuff
anyway, so why bother?


There's probably also a big minority of users (like me) who may be 
pro-systemd, but run a systemd-hostile distro for reasons that have 
nothing to do with systemd ...



There's probably a large overlap between users who don't like systemd
and users who don't like merged-/usr. I would guess we don't have a
critical mass of users/developers running systemd.

I could probably force the users who do run systemd to migrate to
merged-/usr, but I don't really see much benefit from that if all
other packages in Gentoo still need to support both configurations.


And I'm sorry if I upset Mike, but I class Gentoo as systemd-hostile. 
It's MUCH easier to install/run Gentoo with OpenRC; systemd isn't that 
well documented there (it's better than it was). There are people who support 
systemd, but I get the impression it's seen as an unwanted rival to OpenRC.


But there's not much choice out there for systemd-friendly source-based 
distros. Funtoo is openly anti-systemd. Sourceror (which I plan to play 
with) seems not to be that successful - looks like there are few users 
beyond the core developers ... and that feels like it's one of the 
better ones ...


Cheers,
Wol


Re: [systemd-devel] [EXT] Proposal to extend os-release/machine-info with field PREFER_HARDENED_CONFIG

2022-02-16 Thread Wol

On 16/02/2022 12:13, Stefan Schröder wrote:

Wouldn't /etc/default/* be the place to look such things up?
  
I am not sure. Is /etc/default standard across distributions? AFAIK it's Debian specific.

We should be looking to address this issue in a distribution independent way, 
shouldn't we?
  
I've got /etc/default (gentoo) but it explicitly says "copied from 
Debian". And on a cursory glance it's not designed to work with 
systemd... (quelle surprise :-)


Cheers,
Wol


Re: [systemd-devel] Proposal to extend os-release/machine-info with field PREFER_HARDENED_CONFIG

2022-02-16 Thread Wol

On 16/02/2022 17:11, Stefan Schröder wrote:

I must say, I am very sure that the primary focus should always be on
locking things down as well as we can for *everyone* and as
*default*.



Yes, that'd be nice, but I don't think it's realistic. With an opt-in via the 
proposed mechanism, it would be much easier to suggest alternative 'hardened' 
configurations upstream, since they wouldn't mess up the old defaults.

I'm having loads of trouble at work at present - everything is locked 
down tight because of GDPR and £Millions in fines if things go wrong.


There's no way I'm going to lock my home system down like that. What's 
the saying - the securest system is locked in a safe with no 
connectivity (and totally unusable :-). There is a very strong trade-off 
between "secure" and "usable", and different people have different 
tolerances for friction.


For me, passwd/shadow is more than secure enough - learning pam is too 
much effort/hassle for too little gain. For work, it's LDAP/2FA - 
mistakes and breaches are costly.


All that's being asked for here is some way of telling the system where 
on the usable/secure spectrum the computer should be configured. As I'm 
fond of saying, one size does NOT fit all ...
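
For what it's worth, as I understand the proposal it boils down to one extra 
key in os-release (or machine-info). The field name comes from the thread 
subject; the surrounding fields and the value shown here are only assumptions 
to illustrate the idea:

    # /etc/os-release (sketch - exact name and accepted values still under discussion)
    NAME="SomeDistro"
    ID=somedistro
    PREFER_HARDENED_CONFIG=yes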


Cheers,
Wol


Re: [systemd-devel] Service activation

2022-02-13 Thread Wol

On 13/02/2022 16:42, Michael Biebl wrote:

So the answer to that is nice and simple,
"systemctl enable/start scarletdme.socket"

no, you start a socket by "systemctl start". You enable a socket,
service, unit,... via "systemctl enable"

enable and start are different concepts.


Yes. I know. But.

Bearing in mind my knowledge of systemd was pretty much NIL a week or 
two ago, apart from following recipes for gentoo ... (which is not 
particularly systemd-friendly ...)


I've learnt a heck of a lot very quickly :-) but as I see it "start" 
starts the service *now*, "enable" starts the service *at boot*.
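
To spell that out with the actual commands (the unit name is from my setup):

    systemctl start scarletdme.socket         # start listening right now
    systemctl enable scarletdme.socket        # start listening at every boot
    systemctl enable --now scarletdme.socket  # both at once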



Now what I don't want is for scarletdme.socket to invoke
scarletdme.service. How do I tell it that it is supposed to invoke
scarletdme@.service? Or have I messed up naming conventions? Or what the
hell is the proper way to do it?



Please read again what Mantas wrote. He explained all that rather nicely.


Just like a manual ... now that I've gone back to it (having found a 
*longer* explanation elsewhere), it makes sense. "nowait", "template", 
all that stuff - I didn't understand any of it before ...
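
For anyone else who trips over this later, here's a minimal sketch of the 
pattern as I now understand it: with Accept=yes the socket spawns one instance 
of the template unit (scarletdme@.service) per connection, inetd "nowait" 
style, and the plain scarletdme.service is left alone. The port and the 
ExecStart line are placeholders from my setup:

    # scarletdme.socket
    [Socket]
    # placeholder port
    ListenStream=4243
    Accept=yes

    [Install]
    WantedBy=sockets.target

    # scarletdme@.service (template - one instance per connection)
    [Service]
    # placeholder command/flag
    ExecStart=/usr/sbin/scarletdme --socket
    StandardInput=socket
    StandardOutput=socket

Then "systemctl enable --now scarletdme.socket" and you're in business.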


Sorry, but when you're explaining things, you need to go into much more 
detail than you may be comfortable with. Otherwise you're just 
explaining terms the questioner doesn't understand, using other terms 
they don't understand ... :-) Them as can't do, teach ... because them 
as can do often can't teach :-)


Anyway, thanks a lot. It has helped, it made it much easier for me to 
"know what I didn't know" and find what I needed.


Cheers,
Wol


[systemd-devel] Service activation

2022-02-12 Thread Wol

More fun getting things to work ... :-)

So I've got a service, scarletdme.service, which fires up my db backend 
for running interactively. However, I also need a socket service for 
remote connections.


I've got the xinetd files, but if I'm running systemd, I want to use 
systemd :-)


So I've written scarletdme.socket, and scarletdme@.service, but the more 
I read, the more I don't understand ...


Do I enable scarletdme.socket the same as anything else eg "systemctl 
enable scarletdme.socket"? How does it know the difference between 
scarletdme.service and scarletdme@.service? I get the impression I need 
to put something in the .socket file to make it use scarletdme@ rather 
than scarletdme?



And once I've got all that sorted, I'm betting I'm going to have grief 
getting it to work properly, so while it's not much to do with systemd, 
is there any way I can get systemd to log all traffic back and forth so 
I can debug it?


Cheers,
Wol


Re: [systemd-devel] Converting xinetd files

2022-02-10 Thread Wol

On 11/02/2022 01:08, Stephen Hemminger wrote:

On Fri, 11 Feb 2022 00:57:11, Wol wrote:


I've found the pid0 blog, and had no real trouble (I think, I haven't
tested it yet :-) converting an xinetd setup.

But the documentation (man systemd.service) didn't tell me how to
convert a couple of settings, namely xinetd had "user=" and "group=".
Okay, user= was root, so group= probably doesn't matter either, but how
do you get a service to change user and drop privileges? It would be
nice to know for the future - even the near future - so I can try to modify
qm/scarletdme so it doesn't need root, reducing any possible attack surface.

Cheers,
Wol


You probably want DynamicUser=


Thanks. Just looked in the man page and it doesn't appear to be there... 
How many other undocumented options are there? :-)
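
For the record, it turns out these options live in systemd.exec(5) rather than 
systemd.service(5), which is why I couldn't find them. A rough equivalent of 
xinetd's user=/group= would be something like this (the account name and the 
ExecStart line are placeholders):

    [Service]
    ExecStart=/usr/sbin/scarletdme --socket
    # fixed account, like xinetd's user=/group=
    User=scarletdme
    Group=scarletdme
    # ... or instead let systemd allocate a throwaway account at runtime:
    #DynamicUser=yes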


Cheers,
Wol


[systemd-devel] Converting xinetd files

2022-02-10 Thread Wol
I've found the pid0 blog, and had no real trouble (I think, I haven't 
tested it yet :-) converting an xinetd setup.


But the documentation (man systemd.service) didn't tell me how to 
convert a couple of settings, namely xinetd had "user=" and "group=". 
Okay, user= was root, so group= probably doesn't matter either, but how 
do you get a service to change user and drop privileges? It would be 
nice to know for the future - even the near future - so I can try to modify 
qm/scarletdme so it doesn't need root, reducing any possible attack surface.


Cheers,
Wol


Re: [systemd-devel] Authenticated Boot: dm-integrity modes

2021-12-02 Thread Wol

On 02/12/2021 21:24, Adrian Vovk wrote:

Hello Wol,

Please, read the blog post I'm responding to for context to what I'm
saying: 
https://0pointer.net/blog/authenticated-boot-and-disk-encryption-on-linux.html


dm-integrity is NOT ABOUT authentication

dm-integrity provides authentication when configured to use
sha256-hmac. I am not confusing dm-verity with dm-integrity.


Yup. Reading the blog makes it clearer ...

So HMAC provides a guarantee that the data was written by someone who 
held the key, so I guess that is authentication.
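
As I understand it, the keyed mode is set up with something along these lines - 
the device, key path and key size here are placeholders, and the exact option 
spelling should be checked against integritysetup(8):

    # generate a 32-byte key, then format/open the device in keyed (HMAC) mode
    dd if=/dev/urandom of=/etc/integrity.key bs=32 count=1
    integritysetup format /dev/sdb1 --integrity hmac-sha256 \
        --integrity-key-file /etc/integrity.key --integrity-key-size 32
    integritysetup open /dev/sdb1 hmac-sdb1 --integrity hmac-sha256 \
        --integrity-key-file /etc/integrity.key --integrity-key-size 32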



What if they're WRITTEN by things outside of the kernel? At which point, when 
the kernel tries to read it, things will go well pear-shaped for the system.

Well that's my point. A clever attacker can modify the filesystem
outside of the kernel and exploit a kernel vulnerability. The point of
putting dm-integrity on the rootfs (in hmac mode) is to prevent the
rootfs from being modified offline. My point is that it's entirely
possible to maliciously modify other filesystems that *will* be
mounted and *cannot* use dm-integrity.


Understood ...



You should always run dm-integrity on bare metal.

Lennart was proposing to use dm-integrity (in HMAC mode) inside of the
loopback image to verify that the filesystem inside of the image was
not maliciously modified to hijack the kernel. My argument was that,
given that the filesystem the image is stored on is authenticated, why
does the content of the image have to be authenticated? As I've
pointed out in a previous email, layering instances of dm-integrity on
top of each other is catastrophic for write performance

Seeing as it's a /home/user image I'd agree with you. If it's a rootfs, 
then that image needs to guarantee that the file-system hasn't been 
altered outside its own purview.


The only thing I don't understand is why layering dm-integrity in a loop 
device on top of dm-integrity on a real disk should necessarily hammer 
write performance. I can understand it chewing up ram cache and cpu, but 
it shouldn't magnify real writes that much.


Cheers,
Wol


Re: [systemd-devel] Authenticated Boot: dm-integrity modes

2021-12-02 Thread Wol

On 03/12/2021 00:05, Adrian Vovk wrote:

The only thing I don't understand is why layering dm-integrity in a loop

device on top of dm-integrity on a real disk should necessarily hammer
write performance. I can understand it chewing up ram cache and cpu, but
it shouldn't magnify real writes that much.

Well a write in the mounted home dir = 2 writes to the loopback file,
and a write to the loopback file is 2 writes to the block device. Thus
a write to the home dir is 4 writes to the block device. Am I
mistaken?


No, I'd say you're right. But if it's a personal system, like my home 
server, I'd be more worried about read speed. Certainly on my system 
(did I say I had raid-5? :-) it's boot times (configuring lvm) and 
reading from disk that I notice.


Given that I have 16GB of ram (with 32GB waiting to be installed :-) and 
I'm not hammering the system, four (or more) writes of write 
amplification for every write my app sends go almost unnoticed 
(apart from xosview telling me my cpu cores are working hard). And if I 
have a short burst of writes, there's plenty of cache, so as pressure on 
the i/o path goes up the elevator has more to optimise with and disk 
efficiency improves.


And if I was using a VM on a big server with lots of VMs, then again, 
making plenty of ram available for caching smooths out the writes and 
reduces the actual pressure on the real physical disks.


Cheers,
Wol


Re: [systemd-devel] Authenticated Boot: dm-integrity modes

2021-12-02 Thread Wol

On 02/12/2021 06:11, Adrian Vovk wrote:

Some more thoughts about the usefulness of dm-integrity:

1. There's some past work[1] on authenticated Btrfs, where the whole 
filesystem is authenticated w/ a keyed hash algorithm. It's basically 
dm-integrity built directly into the filesystem, with none of the 
performance and complexity penalty. I think it makes a lot more sense to 
reuse Btrfs's already-existing hashing infrastructure with HMAC than to 
put a second layer of integrity checking under it. Perhaps pushing for 
that work to land in the kernel is a better use of time than working 
around dm-integrity's limitations? I'd like to hear your thoughts on 
this most.


Hmmm. I use dm-integrity. I use ext4 not btrfs. And anyways, 
dm-integrity is NOT ABOUT authentication, so using btrfs's 
authentication capabilities is in addition to, not instead of, dm-integrity.


Are you confusing (like I did) dm-verity and dm-integrity?


2. Integrity-checking all the filesystems that will be mounted is 
infeasible. Here's two cases I can think of right off the bat:
- At some point, to do a system update, the ESP or the XBOOTLDR 
partition will be mounted by userspace. What if these filesystems are 
maliciously constructed to exploit the kernel? It's not possible to use 
any kind of integrity checking on these filesystems because they're read 
by things outside of the kernel (firmware, bootloader).


What if they're WRITTEN by things outside of the kernel? At which point, 
when the kernel tries to read it, things will go well pear-shaped for 
the system. So if an attacker gains access to the hard disk, modifies 
it, and waits for linux to read it, he's going to have a loonnggg wait :-)


Maybe the 
filesystem can be constructed to exploit the (notoriously poorly 
implemented) UEFI firmware itself!
- Any USB stick the user will plug into the computer *might* have a 
malicious filesystem on it. The only way to protect against this is to 
never mount USB sticks plugged into the device. Asking for an admin/root 
password will just be an annoyance and users will type it in. There's 
really no way around it; users will have to occasionally mount untrusted 
filesystems on their machine


3. What is dm-integrity protecting in the homed image? Assuming /home is 
protected from offline writes, there's no way an attacker can modify the 
contents of the loopback file anyways. Why layer dm-integrity on top of 
that?


What dm-integrity is doing is protecting whatever has been written to 
disk. If it's been written outside of dm-integrity, it won't read inside 
of dm-integrity. You should always run dm-integrity on bare metal.


My setup is ext4 over lvm over raid-5 over dm-integrity over spinning 
rust. So provided it's only the one hard drive (and I spot it in time), 
I don't care what happens to that drive. Something could scribble all 
over my filesystem, corrupting the hell out of it, and all I have to do is 
run a scrub and it's fixed. That would destroy a standard raid-5 - it 
has only one extra piece of recovery information (the parity), so if it's 
missing two pieces of information (a block is scrambled, and it doesn't 
know which block) then you're stuffed. What dm-integrity does is convert 
your scrambled block into a lost block, so raid now has just one piece of 
missing information - the lost block.
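
For anyone curious, that stack goes together roughly like this - the device 
names and sizes are placeholders, and the flags should be checked against the 
integritysetup(8)/mdadm(8) man pages:

    # standalone dm-integrity on each raw partition (default crc32c checksums)
    integritysetup format /dev/sda1 && integritysetup open /dev/sda1 int-sda1
    integritysetup format /dev/sdb1 && integritysetup open /dev/sdb1 int-sdb1
    integritysetup format /dev/sdc1 && integritysetup open /dev/sdc1 int-sdc1

    # raid-5 over the integrity devices, then lvm, then ext4
    mdadm --create /dev/md0 --level=5 --raid-devices=3 \
        /dev/mapper/int-sda1 /dev/mapper/int-sdb1 /dev/mapper/int-sdc1
    pvcreate /dev/md0
    vgcreate vg0 /dev/md0
    lvcreate -L 100G -n home vg0
    mkfs.ext4 /dev/vg0/home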


Cheers,
Wol


Re: [systemd-devel] Authenticated Boot: dm-integrity modes

2021-11-28 Thread Wol

On 28/11/2021 19:56, Adrian Vovk wrote:
- Journal mode: is slow. It atomically writes data+hash, so the 
situation I describe above can never happen. However, to pull this off 
it writes the data twice. Effectively every layer of journaled 
dm-integrity will cut write speeds in half. This isn't too bad to 
protect the rootfs since writes there will be rare, but it is terrible 
for /home. Layering systemd-homed's LUKS+dm-integrity image on top of 
that will cut performance in half again. So with the whole setup 
proposed by the blog post (even with dm-verity) writes to home will be 
limited to 1/4 of the drive's performance and the data will be written 
four times over. On top of performance issues, won't writing the data 4x 
wear out SSDs faster? Am I missing something?


Why can't you just enable journalling in systemd-homed, so we have 
LUKS+dm-integrity-journalling?


If the user needs to separate / and /home, isn't that just sensible design?

As for SSDs, the latest ones, as far as I can tell, have a lifespan 
measured in years even if they're being absolutely hammered by a stress 
test. If you're really worried about wearing out an SSD, put the journal 
on rotating rust, but I think those in the know are likely to tell you 
that the rust will die before the SSD.


Cheers,
Wol


[systemd-devel] systemd boot timer problem

2021-11-26 Thread Wol
The problem is easy to explain, but I can't solve it. Basically, I want 
to run a .service every nth boot as part of the boot process.


They always say "state what you're trying to achieve, not how you're 
trying to achieve it", so ...


I have a desktop/server which is rebooted pretty much every day. I want 
to take a snapshot of / every Saturday, which I need to do before it is 
mounted rw so I know it's clean (I'll then do a weekly system update, an 
emerge).


So basically what I'm trying to do is make my .timer ENABLE my service 
every Friday, but as far as I can tell it RUNS it instead. I don't think 
the timer service runs early enough in the boot process to enable it so 
it runs before root is mounted and can enable it in time.


And the RunOnce doesn't seem to be any help because it looks to me like 
that runs once per boot ...


So how do I configure a boot-time service to only run "occasionally" on 
my schedule?
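
To make it concrete, the shape I'm imagining is something like the sketch 
below: a Friday timer that only drops a flag file, and an early-boot service 
gated on that flag. All the names are mine, and the snapshot script itself is 
a placeholder (the flag can only be removed later, once / is read-write again, 
e.g. from the weekly update script):

    # mark-snapshot.timer
    [Timer]
    OnCalendar=Fri
    Persistent=true

    [Install]
    WantedBy=timers.target

    # mark-snapshot.service (started by the timer, just drops the flag)
    [Service]
    Type=oneshot
    ExecStart=/usr/bin/touch /etc/snapshot-root.flag

    # snapshot-root.service (runs early, before / is remounted read-write)
    [Unit]
    DefaultDependencies=no
    ConditionPathExists=/etc/snapshot-root.flag
    Before=systemd-remount-fs.service

    [Service]
    Type=oneshot
    ExecStart=/usr/local/sbin/snapshot-root.sh

    [Install]
    WantedBy=sysinit.target

Would something along those lines be the right approach, or is there a 
cleaner mechanism?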


Cheers,
Wol

