I've been happily running a home NAS since 2001 or so, on Solaris,
OpenSolaris, and now Linux.

The best thing I did was switch from mdadm/LVM to ZFS so I could
resize "partitions" (datasets) on the fly.  Automatic snapshots every
hour/day/week/month were a nice addition I'd missed from NetApp.  The
checksumming and self-healing of ZFS are very important to me.  ZFS
has survived power hits, losing a core on a dual-core CPU (!), and
bad ZFS-on-Linux upgrades (early CentOS versions).  I used to put the
OS on its own RAID1 (not ZFS), but the uptime isn't worth the power
draw to me; I can easily reinstall the OS.
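
If you want the same hour/day/week/month rotation, zfs-auto-snapshot
or sanoid will do it for you, but the logic is simple enough to
sketch.  Here's a minimal cron-driven version in Python; the dataset,
label, and retention count are made-up examples, not my exact setup:

    #!/usr/bin/env python3
    """Take a timestamped ZFS snapshot and prune the oldest ones.
    Cron it, e.g. hourly:
        0 * * * *  /usr/local/bin/zsnap.py tank/home hourly 24
    """
    import subprocess
    import sys
    from datetime import datetime

    def rotate(dataset, label, keep):
        # Create e.g. tank/home@hourly-2022-02-23-1100
        stamp = datetime.now().strftime("%Y-%m-%d-%H%M")
        subprocess.run(["zfs", "snapshot", f"{dataset}@{label}-{stamp}"],
                       check=True)
        # List this dataset's snapshots oldest-first (-d 1 keeps it to
        # this dataset only) and destroy everything past the newest few.
        names = subprocess.run(
            ["zfs", "list", "-H", "-t", "snapshot", "-o", "name",
             "-s", "creation", "-d", "1", dataset],
            capture_output=True, text=True, check=True).stdout.split()
        ours = [n for n in names if f"@{label}-" in n]
        for old in ours[:-keep]:   # keep must be >= 1
            subprocess.run(["zfs", "destroy", old], check=True)

    if __name__ == "__main__":
        rotate(sys.argv[1], sys.argv[2], int(sys.argv[3]))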

I started with RAIDZ and upgraded my disks from gigabytes up to
terabytes a few times.  I now build the pool from two-disk mirror
(RAID1) vdevs of 4TB or 6TB drives, so when I want to upgrade I only
need to buy two drives at a time.  4TB has been the sweet spot for
*me*: good $/GB, good availability of non-SMR drives, and only one
drive of redundancy needed to keep 4TB of data safe.
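
Growing a pool of mirror pairs is just adding another two-disk mirror
vdev; nothing already in the pool gets reshaped.  A sketch (pool name
and device paths are placeholders), using zpool's dry-run flag first:

    #!/usr/bin/env python3
    """Add a two-disk mirror vdev to an existing pool.
    Pool name and device paths are placeholders."""
    import subprocess

    POOL = "tank"
    NEW = ["/dev/disk/by-id/ata-disk5", "/dev/disk/by-id/ata-disk6"]

    # -n is a dry run: zpool prints the layout it *would* create.
    subprocess.run(["zpool", "add", "-n", POOL, "mirror", *NEW],
                   check=True)

    # When the dry-run output looks right, repeat without -n:
    # subprocess.run(["zpool", "add", POOL, "mirror", *NEW], check=True)

The in-place capacity upgrades work similarly: with autoexpand=on set
on the pool, zpool replace each disk of a pair with a bigger one, and
the pair grows once both members have resilvered.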

Initially I just put the drives inside the case.  I've since found
that SATA cages which fit four drives into the front bays (where the
floppy drives would have gone) work better.

When I ran out of space, it turned out you can run regular SATA
cables out of the box to external drives; no special eSATA hardware
needed.  I used an old PC chassis with its own power supply to power
the drives.  I've since found drive cages with built-in fans and use
those with an external power supply.  There are also SATA cards with
a single connector that breaks out to four SATA ports, which cuts
down on cable clutter.

I share filesystems over NFS, SMB, and a web server; my Chromebook or
Android devices can use those, or mount things over SFTP (the ZFS
side of the sharing is sketched below).  I use KVM to run a music
server, Jellyfin, a search engine, and an SSH gateway, and I can move
those VMs to a different system and NFS/SMB-mount the NAS from them.
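
On ZFS-on-Linux the NFS/SMB exports can ride along with each dataset
as properties, assuming an NFS server and Samba usershares are
already set up.  Dataset name and subnet here are just examples; a
plain /etc/exports and smb.conf work fine too:

    #!/usr/bin/env python3
    """Export a dataset over NFS and SMB via ZFS share properties.
    Dataset name and subnet are examples."""
    import subprocess

    DATASET = "tank/media"
    for prop in ("sharenfs=rw=@192.168.1.0/24",  # NFS, LAN read-write
                 "sharesmb=on"):                 # SMB via usershares
        subprocess.run(["zfs", "set", prop, DATASET], check=True)

    # Confirm what is exported:
    subprocess.run(["zfs", "get", "sharenfs,sharesmb", DATASET],
                   check=True)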

Jellyfin replaces Plex and does DLNA/UPnP.  I'm planning on
paperless-ng and a photo organizer in VMs on another system.

I have a 3 GHz Sandy Bridge quad-core with 24 GB RAM, an upgrade from
an Athlon dual-core with 8 GB and a bad core :-) which nevertheless
worked well for years running only the SSH gateway VM.  I have a UPS
that will automatically shut the system down after a 5-minute power
loss, because that's what a UPS is for: ensuring a clean shutdown
when neither a generator nor the grid is supplying power.
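
The five-minutes-then-shutdown behavior is the UPS daemon's job
(apcupsd and NUT both do it natively).  For the curious, the logic
amounts to this, sketched against NUT's upsc; the UPS name is a
placeholder:

    #!/usr/bin/env python3
    """Shut down cleanly after 5 minutes on battery.  NUT's upsmon
    does this for real; 'myups' is a placeholder UPS name."""
    import subprocess
    import time

    LIMIT = 300  # seconds on battery before shutting down

    def on_battery():
        out = subprocess.run(["upsc", "myups@localhost", "ups.status"],
                             capture_output=True, text=True,
                             check=True).stdout
        return "OB" in out  # OB = on battery, OL = on line power

    since = None
    while True:
        if on_battery():
            since = since or time.time()
            if time.time() - since >= LIMIT:
                # Needs root; upsmon would run its SHUTDOWNCMD here.
                subprocess.run(["shutdown", "-h", "now"], check=True)
        else:
            since = None
        time.sleep(10)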

Btrfs looks good (only with RAID1, IMO) but I'll stick with ZFS.
When I installed Fedora on my 10-year-old i5 with 8 GB RAM, the
installer defaulted to btrfs.  It was much slower than the previous
Fedora install on ext4, so I reinstalled with ext4.  I don't see much
point in using btrfs/ZFS on a single drive anyway.

On Wed, Feb 23, 2022 at 11:26 AM Ben Scott <dragonh...@gmail.com> wrote:

> Hi all,
>
> We haven't had a really good flamewar ^W discussion on here in far too
> long...
>
> SUMMARY
>
> Btrfs vs ZFS. I was wondering if others would like to share their
> opinions on either or both?  Or something else entirely?  (Maybe you
> just don't feel alive if you're not compiling your kernel from
> patches?)  Especially cool would be recent comparisons of two or more.
>
> I'll provide an info dump of my plans below, but I do so mainly as
> discussion-fodder.  Don't feel obligated to address my scenario in
> particular.  Of course, commentary on anything in particular that
> seems like a good/bad/cool idea is still welcome.
>
> RECEIVED WISDOM
>
> This is the stuff every article says.  I rarely find anything that goes
> deeper.
>
> - ZFS has been around/stable/whatever longer
> - btrfs has been on Linux longer
> - btrfs is GPL, ZFS is CDDL or whatever
> - Licensing kept ZFS off Linux for a while
> - ZFS is available on major Linux distros now
> - People say one is faster, but disagree on which one
> - Oracle is a bag of dicks
> - ZFS is easier to pronounce
>
> For both, by coupling the filesystem layer and the block layer, we get
> a lot of advantages, especially for things like snapshots and
> deduplication.  The newcomers also get you things like checksums for
> every block, fault-tolerance over heterogeneous physical devices, more
> encryption and compression options.  Faster, bigger, longer, lower,
> wider, etc., etc.  More superlatives than any other filesystem.
>
> MY SCENARIO
>
> I'm going to be building a new home server soon.  Historically I've
> used Linux RAID and LVM and EXT2/3/4/5/102, but all the cool kids are
> using smarter filesystems these days.  I should really get with the
> times.  They do seem to confer a lot of advantages, at least on paper.
>
> USE CASES
>
> User community is me and my girlfriend and a motley collection of
> computing devices from multiple millennia.  Administrator community is
> me.
>
> Mostly plain old network file storage.  Mixed use within that.  I'm a
> data hoarder.
>
> All sorts of stuff I've downloaded over the years, some not even from
> the Internet (ZMODEM baby!).  So large numbers of large write-once
> files.  "Large" has changed over the years, from something that fills
> a floppy diskette to something that fills a DVD, but they don't change
> once written.  ISO images, tarballs, music and photo collections
> (FLAC, MP3, JPEG).
>
> Also large numbers of small write-once files.  I've got 20 GB of mail
> archives in maildir format, one file per message, less than 4K per
> file for the old stuff (modern HTML mail is rather bloated).  These
> generally don't change once written either, but there are lots of
> them.  Some single directories have over 200K files.
>
> Backups of my user systems.  Currently accomplished via rsnapshot and
> rsync (or ROBOCOPY for 'doze).  So small to medium-small files, but
> changing and updating and hardlinking and moving a lot.  With a
> smarter filesystem I can likely dispense with rsnapshot, but I doubt
> I'm going to move away from plain-old-files-as-backup-storage any time
> soon.  (rsync might conceivably be replaced with a smarter network
> filesystem someday, but likely not soon.)
>
> ANTI USE CASES
>
> Not a lot of mass-market videos -- the boob tube is one area where I
> let others do it for me.  (Roku, Netflix, Blu-ray, etc.)
>
> No plans to network mount home directories for my daily-driver PCs.
> For laptops especially that's problematic (and sorting out apps
> (particularly browsers) that can cope with a distributed filesystem
> seems unlikely to pay off).
>
> Not planning on any serious hosting of VMs or containers or complex
> application software on this box.  I can't rule it out entirely
> (especially as an experiment), but this is mainly intended to be a
> NAS-type server.  It will run NFS, Samba, SSH, rsync.  It might run
> some mail daemons (SMTP, IMAP) just to make accessing archives easier,
> but it won't be the public-facing MX for anything.
>
> It's unlikely to run any point-and-drool administration (web) GUIs.  I
> have a set of config files I've been carrying around with me since I
> kept them on floppy diskette, and they've served me well.  Those that
> like them, more power to you, but they're not for me.  I inevitably
> bump into their limitations and have to go outside them anyway.
>
> I've tried a few consumer NAS appliances and have generally been
> disappointed.  It's the same as the GUI thing above, except I hit the
> limits sooner and in more ways.  Some of them have really disgusting
> software internals.  (A shame, because some of the hardware is
> appealing, especially in terms of watts and price.)
>
> I don't want to put this on somebody else's computer.
>
> HARDWARE
>
> I'm shooting for a super compact PC chassis, mini-ITX mainboard, 4 x
> 3.5-inch hot swap bays, SATA interfaces, x86-64 processor.  Initially
> it will be two spinning disks.  Somewhere in the neighborhood of 3 to
> 6 TB effective.  The disks will be relatively slow, favoring lower
> price-per-GB and less heat over performance.  This is bulk data
> storage.  The user PCs have SSDs.  If fancy filesystems weren't a
> thing, it would start with two mirrored drives, with plans to expand
> to RAID 10 (stripes across mirrors), and multiple LVM logical volumes.
>
> Off-site off-line backup will be accomplished with one or more
> physical disks attached to the system, sync'ed at some level (be it
> rsync or filesystem or whatever).  Initially it will be a bare disk
> and a hot swap bay, with options for eSATA or USB in the future.
>
> Specific processor and RAM are undecided.  I'm not looking to run 40
> VMs, and lower watts would be nice.  At the same time, I want it to be
> able to handle what I throw at it, and I know the fancy filesystems
> can be more demanding, plus I keep meaning to set up plain text
> indexing/search.
>
> -- Ben
> _______________________________________________
> gnhlug-discuss mailing list
> gnhlug-discuss@mail.gnhlug.org
> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
>