On Sat, 2006-09-09 at 10:30 -0400, Gregory Seidman wrote:
> On Fri, Sep 08, 2006 at 10:43:08PM -0500, Owen Heisler wrote:
> } On Sat, 2006-09-09 at 13:35 +1000, Paul Dwerryhouse wrote:
> [...]
> } Exactly what I was wondering.  Hopefully debian-installer will allow
> } creation of multiple partitions in a single RAID array when Etch is
> } released.  (Until then, I have some learning about mdadm to do)
> } 
> } > Might not necessarily work from the Debian installer though. Perhaps
> } > you'll have to go into a shell window and do the above by hand...
> } 
> } Right.  And (from what little I've seen) mdadm is rather user-friendly,
> } so that should help.
> 
> I missed the beginning of this discussion, but it sounds like you want the
> security of encryption on top of the flexibility of a partitionable device
> with the redundancy of RAID. So do I. In fact, I have it. It is worth
> noting that I do *not* bother with root on RAID, though I do keep
> /usr/local on it and I take backups of /etc with some regularity. (Yes, I
> should back up everything regularly; one of these days I will set that up.)

Exactly what I want to do.  All I back up, though, is /etc, /home, and
a list of the manually installed packages from aptitude.

> I have a pair of 250GB Firewire drives. I am using the entire drives as
> RAID devices, though I should probably have set up partitions slightly
> smaller than the entire drive. They are joined in a RAID1, and I use
> scsidev to give them specific device names to refer to in my mdadm.conf. I
> use /etc/init.d/cryptdisks to create the encryption loop on top of the
> assembled RAID device. The encryption loop device is formatted as an LVM
> physical volume (PV), which belongs to a volume group (VG) which has some
> eight logical volumes (LV) including /home, /usr/local, and
> /var/lib/postgresql.
> 
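If I follow, building that stack from scratch would be something
roughly like this (an untested sketch on my part; /dev/sda, /dev/sdb,
"cryptvol", and the "vg0" volume group are placeholder names, and I am
assuming LUKS where you may be using something else):

  # mirror the two drives
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

  # put dm-crypt (LUKS) on the assembled array
  cryptsetup luksFormat /dev/md0
  cryptsetup luksOpen /dev/md0 cryptvol

  # the decrypted device becomes the LVM physical volume
  pvcreate /dev/mapper/cryptvol
  vgcreate vg0 /dev/mapper/cryptvol
  lvcreate -L 50G -n home vg0
  mkfs.ext3 /dev/vg0/home
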
> I have a script to manage assembling the RAID, starting the encryption,
> activating LVM, and mounting the partitions. I can share this script if you
> would like it. The process is manual and must be performed after booting,
> not during. This has the advantage that I don't store the encryption
> password on disk anywhere, and I can reboot the machine remotely and ssh
> in to mount the disks instead of having to be at the console to type the
> password at boot. Many of the typical services (e.g. exim4) do not run at
> init level 2, but the script changes to init level 3 after successfully
> mounting everything. 

Because I plan on having / on RAID5, everything must be done at boot.
And I'll just type in a passphrase on startup for the encryption.

I would like to see the script, but if I can get the installer to do
everything for me (as it has done well so far), I won't write any
scripts.
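
If I do end up writing one, I imagine it would be roughly this
(untested sketch, same placeholder names as above):

  #!/bin/sh
  set -e
  # assemble the array, unlock it, bring up LVM, mount, raise runlevel
  mdadm --assemble /dev/md0 /dev/sda /dev/sdb
  cryptsetup luksOpen /dev/md0 cryptvol   # prompts for the passphrase
  vgchange -ay vg0                        # activate the volume group
  mount /dev/vg0/home /home
  mount /dev/vg0/usrlocal /usr/local
  telinit 3                               # start the remaining services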

There is one thing the installer goofs on: if a swap partition is on a
logical volume that is on dmcrypt, it (cryptsetup, actually, I think)
decides the swap space is unsafe and fails.  So I have to set up the
swap partition later.
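
Doing it by hand afterwards should just be (sketch; "vg0" is again a
placeholder for the real volume group):

  lvcreate -L 1G -n swap vg0
  mkswap /dev/vg0/swap
  swapon /dev/vg0/swap
  # plus a line in /etc/fstab:
  #   /dev/vg0/swap  none  swap  sw  0  0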

> This has been working for me for several years (though I had been using a
> RAID5 on SCSI and losetup instead of dm_crypt previously). I'm quite happy
> with it.

I will probably just use LVM on dmcrypt on RAID5 like you and others
have suggested.  I am just concerned about getting at the data later
if something goes wrong, e.g. from a LiveCD; I have little experience
with mdadm, LVM, or encryption, so I could have trouble getting
everything to come back together correctly.
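
As far as I can tell, the recovery steps from a LiveCD would be
roughly this (untested sketch; the device names are guesses, and I am
assuming LUKS):

  # see what arrays the disks claim to belong to
  mdadm --examine --scan

  # assemble explicitly, since the CD has no mdadm.conf for the array
  mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1

  # unlock, activate LVM, and mount read-only to be safe
  cryptsetup luksOpen /dev/md0 cryptvol
  vgscan
  vgchange -ay
  mount -o ro /dev/vg0/home /mnt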

Something else: what are your opinions on performance?
Does this kind of setup require a lot more CPU?  I suppose it requires
_some_ more, but hopefully not enough to make a big difference.

What is the performance difference between hardware and software RAID?
From what I see via Google, it depends on the scenario: filesystem, fs
options, hardware, etc.  Since LVM is on top of dmcrypt, which is on
the RAID, I'm guessing I need to optimize either the dmcrypt partition
or the LVM on top of that.  cryptsetup doesn't really seem to give any
options for performance, so LVM?  All I see there is
"physicalextentsize".  Maybe I should even optimize the fs on top,
ext3.  What about stride=stripe-size?  The man page says "Configure
the filesystem for a RAID array with stripe-size filesystem blocks per
stripe."  Would that apply when the ext3 fs is on top of LVM?

Thanks.

