Hi,

> The budget is miniscule - and the performance demands
> (bandwidth and latency) are completely non-challenging.

This IMHO pretty much rules out any kind of server-class hardware, which
tends to be both costly and power-hungry. If you're thinking about
buying used stuff, be sure to factor in the cost and difficulty of
finding spares a few years down the line.

Given the point above, I would also stick with software RAID. True HW
RAID controllers are quite expensive and generally come with an x8 PCIe
interface, which will require a server motherboard -- the x16 PCIe video
card slots on commodity boards are usually only certified for x16 and x1
operation, so don't expect them to work reliably with other bus widths.

Linux software RAID also has the advantage that the kernel is not tied
to any specific piece of hardware. In case of a failure, your volumes
will be readable on any other Linux system -- provided the disks
themselves are not toast.
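
For instance, reassembling the arrays on a rescue system or a
replacement box usually boils down to something like this (the member
device name is just an example):

  # scan for md superblocks and reassemble whatever is found
  mdadm --assemble --scan

  # or inspect a single member disk explicitly
  mdadm --examine /dev/sdb1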

If reliability is your primary concern, I would go for a simple RAID1
setup; if your volumes need to be bigger than a physical disk you can
build a spanned volume over multiple mirrored pairs. Network throughput
will most likely be your primary bottleneck, so I'd avoid striping: it
would offer little in the way of performance while making data recovery
extremely difficult should the worst happen.
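
Roughly -- and with purely made-up device and volume names -- two
mirrored pairs concatenated (not striped) via LVM could look like:

  # create two RAID1 pairs
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1

  # concatenate them into one linear (non-striped) logical volume
  pvcreate /dev/md0 /dev/md1
  vgcreate vg_storage /dev/md0 /dev/md1
  lvcreate -l 100%FREE -n vol0 vg_storage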

As for availability, I think the best strategy with a limited budget is
to focus on reducing downtime: make sure your data can survive the
failure of any single component, and choose hardware that you can get
easily and for a reasonable price. Sh*t happens, so make it painless to
clean up.

Network protocol:

If you do not need data sharing (i.e. if your volumes are only mounted
by one client at a time), the simplest solution is to completely avoid
having a FS on the storage server side -- just export the raw block
device via iSCSI, and do everything on the client. In my experience this
also works very well with Windows clients using the free MS iSCSI initiator.
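
As a rough sketch -- assuming the Linux tgt target (tgtd) and made-up
names -- exporting a block device could look like:

  # define a target and attach the raw device as a LUN
  tgtadm --lld iscsi --op new --mode target --tid 1 \
      -T iqn.2010-01.local.storage:vol0
  tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
      -b /dev/vg_storage/vol0
  # allow initiators to connect (restrict by address/IQN in real life)
  tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL

The point is that the server never has to know anything about the
filesystem sitting on the volume.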

Alternatively, you can consider good old NFS, which performs decently
and tends to behave a bit better -- especially when used over UDP -- in
the face of network glitches like a switch being accidentally powered
off, cables being yanked, or wireless connectivity dropping out...
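
A minimal example, again with made-up paths and addresses -- an export
on the server plus an NFSv3-over-UDP mount on the client:

  # on the server, after adding this line to /etc/exports:
  #   /srv/vol0  192.168.1.0/24(rw,sync,no_subtree_check)
  exportfs -ra

  # on the client
  mount -t nfs -o vers=3,proto=udp,rsize=32768,wsize=32768 \
      server:/srv/vol0 /mnt/vol0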

CIFS should be avoided at all costs if your clients are not Windows
machines.

File systems: avoid complexity. As technically superior as it might be,
in this kind of setup ZFS is only going to be a resource hog and a
maintenance headache; your priority should be having a rock-solid
implementation and a reliable set of diagnostic/repair tools in case
disaster strikes. Tried-and-true ext3 fits the bill nicely if you ask
me; just remember to tune it properly according to your planned use --
e.g. if a volume is going to be used to host huge VM disk images, be sure
to create its filesystem with -T largefile4.
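
E.g. something along these lines (the device name is again just an
example):

  # fewer inodes, tuned for very large files such as VM disk images
  mkfs.ext3 -T largefile4 -L vmstore /dev/vg_storage/vol0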

Just my 2 cents,

Andrea
