o1bigtenor via Dng <dng@lists.dyne.org> wrote:

> I have not ever installed like this so first the configuration.
> 
> Ryzen 7 3800X
> Asus TUF Gaming X570-Pro   mobo
> 64 GB ram
> 2 - 1 TB M2 drives
> 2 - 1 TB SSDs
> 
> I want to set the system up so that the drives are 2 sets of Raid-1 with
> (proposed)
> set 1
> /efi, /boot, /, /usr, /usr/local, /var, swap
> set 2
> /home
> 
> How do I set up the raid arrays?
> Are they set up first and then the system is installed?
> Or do I set up what I want on one of each of the sets and the copy
> that setup to the second (of the set) and make it raid after system
> install?
> 
> I can't seem to find anything done within the last 2 years talking about this.
> Don't see where it should be difficult but then - - - well I've
> thought that before(!!!!) and had a boat load of male bovine excrement
> to wade through!
> (So I'm asking before doing to forestall issues - - - I hope!)

Others have given good information. Unless things have changed since I last did 
an install (a couple of years ago, I think), you can just go into manual disk 
partitioning and do it from there. Unfortunately, doing an optimal install 
means getting the calculator out, as the defaults are sub-optimal …

AFAIK, all disks these days use 4k sectors, or for SSDs, probably bigger. 
Ideally you want your partitions aligned to these boundaries. So for example, 
leave sectors (Unix 512-byte sectors) 0-63 unused and start your first 
partition at sector 64. If you know that your SSD uses (say) 64k blocks 
internally, then leave sectors 0-127 unused and start the first partition at 
sector 128. From memory, the partitioning tool in the installer doesn’t do this 
alignment unless you manually calculate all your partition start & end blocks.
Everything will work fine if things are not aligned, but performance will be 
sub-optimal in some situations.
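
If you want to sanity-check the alignment afterwards, something like this should 
do it (assuming /dev/sda and its first partition - adjust to suit):

  # ask parted whether partition 1 on /dev/sda is optimally aligned
  parted /dev/sda align-check optimal 1
  # or look at the raw start sector - it should be a multiple of your alignment
  cat /sys/block/sda/sda1/start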


My personal “recipe”, which contains a number of hangovers from “before it all 
‘just worked’”, is:
* I partition each disk with a small /boot partition, which I then put into a 
RAID-1 using the old metadata scheme (superblock at the end of each member). 
These days it’s not required as GRUB understands LVM and mdadm RAID - but going 
back quite a while now, that wasn’t the case, so using the old format for RAID 
made each member appear the same as a plain filesystem (as long as you only use 
it read-only, as GRUB does during boot). The separate /boot was another 
hangover from even longer ago, when there was a restriction on the BIOS reading 
past a certain size of disk - so you had to have a /boot to ensure that all the 
files needed by LILO were readable by the BIOS.
Old habits die hard!
But a /boot plus initrd means that you have some basic tools available should 
your root filesystem, or the RAID or LVM it’s on, get into trouble.
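
If you end up doing this part by hand rather than letting the installer do it, 
the /boot array with the old-style metadata looks roughly like this (device and 
partition names are just examples):

  # RAID-1 for /boot; metadata 1.0 puts the superblock at the end of each member,
  # so each member still looks like a plain filesystem to an old boot loader
  mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 /dev/sda2 /dev/sdb2
  mkfs.ext4 /dev/md0   # then mount it as /boot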

After finishing the install (or transfer if I’m migrating/duplicating a 
system), I then do a “grub-install /dev/sdX” for each disk in the RAID set (I 
once had 5!) which means the system can boot from any disk.
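
i.e. something along these lines, assuming the two disks are sda and sdb:

  # put the first-stage boot loader on every member of the /boot array
  for d in /dev/sda /dev/sdb; do grub-install "$d"; done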


Here’s where I do different things depending on setup. I don’t have many 
bare-metal installs - mostly VMs. And it’s one of those areas where you trade 
off different pros and cons.


* Swap
You can create this on its own partition(s), or on a RAID array on its own 
partitions, or dish it out with LVM.
With 2 disks, you can create a partition on each and use both of them as native 
swap. This is optimal in terms of disk space and performance - but if a disk 
falls over then there’s a risk that your system will too, if it has put swap on 
that disk AND it needs to swap it back in. Or you can RAID-1 the partitions, 
which makes you safe against disk failure, but creates overhead (you need to 
write to both disks when swapping) and uses more physical disk space. In both 
cases, you can’t easily change the swap size - you can add an LVM volume to 
increase it, but you can’t shrink it to reclaim any meaningful disk space.
Or just create an LV in LVM and use that - it’s the most flexible but adds the 
most software layers between system and disks.
Ideally your systems will rarely swap, and if they do they will just swap out 
very rarely used memory - such as a daemon that’s running but not getting called 
on to do anything very often.
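
For illustration only (partition and VG/LV names are invented), the two 
approaches look something like:

  # option 1: native swap on a partition on each disk
  mkswap /dev/sda3 && mkswap /dev/sdb3
  swapon /dev/sda3 /dev/sdb3
  # option 2: a swap LV in LVM - easy to grow later
  lvcreate -L 8G -n swap vg0
  mkswap /dev/vg0/swap && swapon /dev/vg0/swap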


* / (root)
On bare-metal machines I have an array (partition on each disk & RAID) just for 
/. Most of my bare-metal machines are hosts for VMs (Xen), so the size needed 
is fairly predictable - I see on one of them it’s only 2G, and less than 50% 
used.
Because / isn’t written to a lot, this makes it quite robust against various 
issues that can arise, and makes troubleshooting easier. If your root filesystem 
is on LVM, and your LVM breaks, then you’re in a world of pain to fix it - or 
boot from some sort of recovery disk.

On VMs I generally just use LVM LVs for everything since it’s easy to mount 
filesystems on the host for maintenance.
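
For example, if a VM’s root filesystem lives directly on an LV on the host 
(names invented, and assuming the LV holds a filesystem rather than its own 
partition table), fixing it from the host is just:

  # VM must be shut down first
  mount /dev/vg0/vm1-root /mnt
  # ... poke around, edit configs, fsck, whatever ...
  umount /mnt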


* Then I typically create a partition for the rest of the disk - less a bit - 
RAID it and use it as an LVM PV. Note the “less a bit”.
If you have a raid array and a disk fails, you cannot replace the failed disk 
with one that’s even a single block smaller. I’ve been bitten by this in the 
past - with the support company sending me a replacement 9G drive that wouldn’t 
work, and spending ages talking people through why one 9G disk is not the same 
as another 9G disk (they had to hunt around for the same model disk in the 
end). So I always leave a bit of space unused at the end of the disk to allow 
for these differences - and these days it’s unusual to be clamouring to use 
every last block.
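
As a rough sketch (sizes, names and partition numbers invented), leaving ~100MB 
spare at the end of each disk and building the big array on top:

  # partition from 1GiB up to 100MiB short of the end of the disk
  parted -s -- /dev/sda mkpart primary 1GiB -100MiB
  parted -s -- /dev/sdb mkpart primary 1GiB -100MiB
  # mirror the pair and use the result as an LVM physical volume
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
  pvcreate /dev/md2
  vgcreate vg0 /dev/md2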
As an aside, back in the 90s I used to deal in Apple Macs. The disks all had 
unused space - so at the factory they could just mass duplicate a master copy 
onto the disks without having to worry about different sized disks, the image 
just had to be small enough to fit on the smallest disk they used for each 
nominal size (back then, typically a choice of 20meg, 40meg, or 80meg if you 
had loads of brass).


Note: If you have 3 or more disks then you can pick and choose the RAID level 
you use. /boot is always RAID-1 so each disk holds a full copy of it. The rest 
you can pick and choose depending on your requirements. Generally RAID-5 gives 
you the most space; with more disks, RAID-6 is an option and gives you two-disk 
redundancy.
But if your priority is performance, then striped & mirrored or mirrored & 
striped gives you the best performance with single-disk redundancy. At one 
time, you had to set up mirrored pairs and then stripe the resulting volumes 
together, or stripe the partitions and then mirror the two stripe sets. Yes, a 
bit of a PITA, and it only works with an even number of members! These days, 
Linux RAID supports RAID-10 where it’s done automatically and (IIRC) supports 
an odd number of members. Not to mention, these days you can add disks to 
arrays dynamically - it used to be “fun” finding the disk space to copy all 
your data to while you rebuilt RAID arrays from scratch. Not to mention the 
out-of-hours tedium of waiting for it to copy, and the feeling of trepidation 
(I hope the disk I’ve copied it all to is OK ...) when you go and nuke your 
existing array in order to build a new larger one.
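
e.g. with four disks, something like this does the whole striped-and-mirrored 
job in one step (device names are placeholders):

  # RAID-10 across four partitions - no manual mirror-then-stripe dance needed
  mdadm --create /dev/md3 --level=10 --raid-devices=4 /dev/sda4 /dev/sdb4 /dev/sdc4 /dev/sdd4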


* I **ALWAYS** have a separate /var. Trust me, if you have (e.g.) a runaway log 
and it fills the filesystem, then you will thank yourself for restricting it to 
/var.


After that, it’s all down to what the system is for. E.g. for a mail server 
I’ll have a separate /var/mail; for a web server, a separate filesystem for 
that (wherever it gets put); and so on; perhaps a filesystem for your 
database(s).
If it’s a system you “work on”, then you might want a separate /home for users’ 
home directories. Again, this protects the system to a certain extent against 
users going mad creating big files.
You can do a lot of this with disk quotas these days, but separating 
filesystems is a powerful tool. And with LVM it’s generally fairly easy to 
resize the filesystems if you don’t get it right first time.
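
e.g. to grow a /home LV by 20G later (VG/LV names assumed):

  # -r resizes the filesystem along with the LV
  lvextend -r -L +20G /dev/vg0/home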


Now, back to how to install it!
You’ll need to go into the custom partitioner, and from there, you can 
partition the disks manually - don’t forget to set the partition types.
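
If you end up doing any of the partitioning from a shell instead, the type/flag 
can be set with something like (partition numbers are just examples):

  # mark partition 3 on each disk as a Linux RAID member
  parted -s /dev/sda set 3 raid on
  parted -s /dev/sdb set 3 raid on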

When you’ve partitioned the disks (and written the partitions out to disk), you 
need to go into the raid configurator and create your RAID array(s). When you 
come out of the RAID config, you should then see the array(s) listed along with 
the various partitions.

You can now go into the LVM manager and configure LVM volume group(s) (VGs), 
and then your logical volumes (LVs).
Again, when you exit the LVM config, you should see the LVs listed.

Make sure each partition/array/LV is set up appropriately - whether to format 
it, where to mount it, and so on. This is the key step for getting the 
different parts of the system where you want them.

From memory, I’ve found that it will then format the filesystems, mount them in 
the right places, and install the system onto them. Your mdadm and LVM configs 
should be correctly configured in your installed system. I think by default it 
only does a grub-install on one disk, so when you’ve booted your new system, do 
this for each disk that’s part of your /boot array - it’s “annoying” to find 
(when a disk fails) that the other disks don’t have the first stage boot 
loader installed :(
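
A quick sanity check after the first boot might look like this (device names 
assumed):

  # confirm the arrays are assembled and clean/syncing
  cat /proc/mdstat
  # confirm everything is mounted where you expected it
  lsblk -o NAME,TYPE,SIZE,MOUNTPOINT
  # and add the boot loader to the other member(s) of the /boot array
  grub-install /dev/sdb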



Hope this helps, Simon
