Rich Freeman posted on Sat, 14 Jul 2012 19:57:41 -0400 as excerpted:

> On Sat, Jul 14, 2012 at 7:38 PM, Duncan <1i5t5.dun...@cox.net> wrote:
>> BTW, any "gentooish" documentation out there on rootfs as tmpfs, with
>> /etc and the like mounted on top of it, operationally ro, rw remounted
>> for updates?
>>
>> That's obviously going to take an initr*, which I've never really
>> understood to the point I'm comfortable with my ability to recover from
>> problems, so I've not run one since my Mandrake era, but that's a status
>> that can change, and what with the /usr move and some computer problems
>> I just finished dealing with, I've been thinking about the possibility
>> lately.  So if there's some good docs on the topic someone can point me
>> at, I'd be grateful. =:^)
> 
> I doubt anybody has tried it, so you'll have to experiment.

"Anybody" /anybody/, or "anybody" on gentoo?  FWIW, there are people 
running it in general (IIRC much of the discussion was on Debian, some on 
Fedora/RH), but I didn't see anything out there written from a gentoo 
perspective.  Gentoo-based docs and perspective do help, as one isn't 
constantly having to translate binary-distro assumptions into "gentooese", 
but there's enough out there in general that a suitably determined/
motivated person at the usual experienced gentoo user level should be 
able to do it without having to be an /extreme/ wizard.  But so far I've 
not been /that/ motivated, and if there were gentoo docs available, it 
would bring the barriers down far enough that I likely /would/ then have 
the (now lower) required motivation/determination.

Just looking for that shortcut, is all. =:^)

> I imagine you could do it with a dracut module.  There is already a
> module that will parse a pre-boot fstab (/etc/fstab.sys).  The trick is
> that you need to create the root filesystem and the mountpoints within
> it first.  The open question is how dracut handles not specifying a
> root filesystem.

While I do know dracut is an initr* helper, you just made me quite aware 
of how much research I'd still have to do on the topic. =:^\   I wasn't 
aware dracut even /had/ modules, while you're referring to them with the 
ease of familiarity...
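
Still, going by your description alone (I've not tried this, and the 
labels/filesystems below are invented for illustration), I'd /guess/ the 
pre-boot fstab ends up looking much like a normal fstab, just consumed 
from the initr* before the real root exists:

  # hypothetical /etc/fstab.sys sketch -- devices/labels made up here;
  # assumes the initr* has already created the tmpfs root and these
  # mountpoints, as you describe
  LABEL=sys-etc   /etc   ext4   ro,noatime   0 0
  LABEL=sys-usr   /usr   ext4   ro,noatime   0 0
  LABEL=sys-var   /var   ext4   ro,noatime   0 0
  # whether the tmpfs root itself can be declared here, or has to come
  # from the kernel command line, is exactly the open question you raise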

> However, if anything I think the future trend will be towards having
> everything back on the root filesystem, since with btrfs you can set
> quotas on subvolumes and have a lot more flexibility in general, which
> you start to lose if you chop up your disks.  That said, I guess you could
> still have one big btrfs filesystem and mount individual subvolumes out
> of it onto your root.  I'm not really sure what that gets you.  Having
> the root itself be a subvolume does have benefits, since you can then
> snapshot it and easily boot back off a snapshot if something goes wrong.
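
Translating that last bit for my own notes -- untested here, and the 
subvolume/device names below are invented -- I /think/ it amounts to 
roughly this:

  # take a snapshot of the root subvolume from the top-level btrfs mount
  btrfs subvolume snapshot /mnt/btrfs-top/rootvol \
      /mnt/btrfs-top/rootvol-20120714
  # if the working root then goes bad, boot the snapshot instead by
  # handing the kernel something like:
  #   root=/dev/sda2 rootflags=subvol=rootvol-20120714

...with grub just carrying a second menu entry for that.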

The big problem with btrfs subvolumes from my perspective is that they're 
still all on a single primary filesystem, and if that filesystem develops 
problems, all your eggs/data are in one big basket -- good luck if the 
bottom drops out of it!

One lesson I've had drilled into my head repeatedly over what's now two 
decades of computer experience: don't put all your data in one basket!  It's a 
personal policy that's saved my @$$ more than a few times over the years.

Even with raid, when I first set up md/raid, I set it up as one nice big 
(partitioned) raid, with a second (similarly partitioned) raid as a 
backup.  With triple-digit gigs of data (this was the pre-terabyte-drive 
era), a crash-related re-add and resync would take /hours/.  

So when I rebuilt the setup, I created over a dozen individual raids 
(including working and backup copies of many of them), each in its own 
set of partitions on the physical devices, some of them further 
partitioned, some not.  Only the media raid (and its backup) was anything 
like 100 gigs, many of even the working raids (plus all the backups) 
weren't activated at all in normal operation unless I was actually 
working on whatever data lived on them, and most of the assembled, 
rw-mounted raids weren't actively being written at the time of a crash.  
The result: re-add and resync tended to take seconds or minutes, not 
hours.
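
For anyone curious, the post-crash routine itself was nothing exotic; 
the array and member names below are made up:

  # see which arrays came up degraded and what is resyncing
  cat /proc/mdstat
  # re-add the kicked member to a degraded array
  mdadm /dev/md5 --re-add /dev/sdb7
  # then watch the resync run -- seconds for a small, mostly idle raid
  watch cat /proc/mdstat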

So I'm about as strong a partitioning-policy advocate as you'll get, tho 
I do keep everything the package manager installs, along with its 
installation database (so /etc, /usr and /var, but not, for instance, 
/var/log or /usr/src, which are mountpoints), on the same rootfs of 
(currently) 8-ish gigs, with a backup root partition (actually two of 
them now) that I can point the kernel at from grub if the working rootfs 
breaks for some reason.  So the separate-/usr thing hasn't affected me at 
all, because /usr is on rootfs.
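
The grub side of the backup-root trick is nothing more than extra menu 
entries pointing root= at the backups, something like this (grub-legacy 
style for the example, devices made up):

  # menu.lst stanza booting the backup rootfs instead of the working one
  title Gentoo (backup rootfs)
  root (hd0,0)
  kernel /bzImage root=/dev/sda7 ro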

But as I said, I had some computer hardware issues recently, and they made 
me aware of just how nice it'd be to have that rootfs mounted read-only 
for normal operation -- no fsck/log-replay needed on read-only-at-time-of-
crash mounts! =:^)
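
The day-to-day side of a read-only root seems simple enough -- fstab 
says ro, and updates become a remount dance, something like:

  # open rootfs up for writing just for the update...
  mount -o remount,rw /
  emerge -uDN @world        # ...or whatever needs to write to /
  # ...then drop back to read-only when done
  mount -o remount,ro /

(The remount,ro step refuses if something still has files open for 
writing on /, but that's arguably a feature.)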

So I'm pondering just how hard it would be...

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman

