On Sat, Jul 31, 2021 at 11:05 PM William Kenworthy <bi...@iinet.net.au> wrote:
>
> On 31/7/21 9:30 pm, Rich Freeman wrote:
> >
> > I'd love server-grade ARM hardware but it is just so expensive unless
> > there is some source out there I'm not aware of.  It is crazy that you
> > can't get more than 4-8GiB of RAM on an affordable arm system.
> Checkout the odroid range.  Same or only slightly $$$ more for a much
> better unit than a pi (except for the availability of 8G ram on the pi4)

Oh, they have been on my short list.

I was opining about the lack of cheap hardware with >8GB of RAM, and I
don't believe ODROID offers anything like that.  I'd be happy if they
just took DDR4 DIMMs on top of whatever onboard RAM they had.

My SBCs for the lizardfs cluster are either Pi4s or RockPro64s.  The
Pi4 addresses basically all the issues in the original Pis as far as
I'm aware, and is comparable to most of the ODROID stuff I believe (at
least for the stuff I need), and they're still really cheap.  The
RockPro64 was a bit more expensive but also performs nicely - I bought
that to try playing around with LSI HBAs to get many SATA drives on
one SBC.

I'm mainly storing media so capacity matters more than speed.  At the
time most existing SBCs either didn't have SATA or had something like
1-2 ports, and that means you end up with a lot of hosts.  Sure,
spreading drives across more hosts would perform better, but it costs
more.  Granted, at the start I didn't want more than 1-2 drives per
host anyway until I got up to maybe 5 or so hosts, just because that
is where the cluster performs well and has decent safety margins, but
at this point if I add capacity it will be to existing hosts.

> Tried ceph - run away fast :)

Yeah, it is complex, and most of the tools for managing it created
concerns that if something went wrong they could really mess the whole
thing up fast.  The thing that pushed me away from it was reports that
it doesn't perform well with only a few OSDs, and I wanted something I could
pilot without buying a lot of hardware.  Another issue is that at
least at the time I was looking into it they wanted OSDs to have 1GB
of RAM per 1TB of storage.  That is a LOT of RAM.  Aside from the fact
that RAM is expensive, it basically eliminates the ability to use
low-power cheap SBCs for all the OSDs, which is what I'm doing with
LizardFS.  I don't care about the SBCs being on 24x7 when they pull a
few watts each peak, and almost nothing when idle.  If I want to
attach even 4x14TB hard drives to an SBC though it would need 64GB of
RAM per the standards of Ceph at the time.  Good luck finding a cheap
low-power ARM board that has 64GB of RAM - anything that even had DIMM
slots was something crazy like $1k at the time and at that point I
might as well build full PCs.
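
Just to show the back-of-the-envelope math I was doing, here's a
quick Python sketch - the 1GB-per-TB figure is that old rule of
thumb, and rounding up to the next power-of-two module size is just
my own assumption about what you'd realistically have to buy:

# Rough OSD host RAM sizing under the old ~1GB RAM per 1TB rule of thumb.
# Drive sizes and the module rounding are illustrative assumptions, not Ceph docs.

def osd_ram_needed_gb(drive_tb, drives_per_host, gb_per_tb=1.0):
    """RAM the rule of thumb implies for one host full of OSD drives."""
    return drive_tb * drives_per_host * gb_per_tb

def next_module_size_gb(needed_gb):
    """Round up to the next power-of-two size you could realistically buy."""
    size = 4
    while size < needed_gb:
        size *= 2
    return size

needed = osd_ram_needed_gb(drive_tb=14, drives_per_host=4)  # 56.0
print(needed, next_module_size_gb(needed))                  # 56.0 64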

It seems like they've backed off on the memory requirements, maybe,
but I'd want to check on that.  I've seen stories of bad things
happening when the OSDs don't have much RAM and you run into a
scenario like:
1. Lose disk, cluster starts to rebuild.
2. Lose another disk, cluster queues another rebuild.
3. Oh, first disk comes back, cluster queues another rebuild to
restore the first disk.
4. Replace the second failed disk, cluster queues another rebuild.

Apparently at least in the old days all the OSDs had to keep track of
all of that and they'd run out of RAM and basically melt down, unless
you went around adding more RAM to every OSD.
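
Purely as a toy illustration of why that hurts (this is not Ceph's
actual data structures, just a sketch of the bookkeeping pattern):
if every OSD carries state for every queued rebuild, a cascade of
failures multiplies the per-OSD memory instead of replacing it.

# Toy model only - not Ceph internals.  Each failure/return event queues
# another rebuild, and every OSD carries state for all of them at once.

from dataclasses import dataclass, field

@dataclass
class ToyOSD:
    pending: list = field(default_factory=list)

    def queue_rebuild(self, reason):
        self.pending.append(reason)  # state accumulates instead of being replaced

cluster = [ToyOSD() for _ in range(6)]

events = ["disk 1 lost", "disk 2 lost", "disk 1 returned", "disk 2 replaced"]
for event in events:
    for osd in cluster:
        osd.queue_rebuild(event)

print(len(cluster[0].pending))  # 4 rebuilds' worth of state on every OSD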

With LizardFS the OSDs basically do nothing at all but pipe stuff to
disk.  If you want to use full-disk encryption then there is a CPU hit
for that, but that is all outside of LizardFS, and dm-crypt at least
is reasonable.  (ZFS, on the other hand, does not hardware-accelerate
it on SBCs as far as I can tell, and that hurts.)
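
If you want to check whether a board's CPU even advertises the ARMv8
crypto extensions that dm-crypt's AES path can use, a quick look at
/proc/cpuinfo is enough.  Here's a small Python sketch - the "aes"
flag under Features/flags is what I'd expect to see on arm64 and x86,
but treat that as an assumption and verify against your own board:

# Check /proc/cpuinfo for an advertised AES instruction flag.
# On arm64 the crypto extensions normally show up as "aes" under "Features";
# on x86 it's "aes" under "flags".  Flag names are an assumption - check your board.

def cpu_has_aes(path="/proc/cpuinfo"):
    with open(path) as f:
        for line in f:
            key, _, value = line.partition(":")
            if key.strip().lower() in ("features", "flags"):
                if "aes" in value.split():
                    return True
    return False

print("AES instructions advertised:", cpu_has_aes())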

> I improved performance and master memory
> requirements considerably by pushing the larger data sets (e.g., Gib of
> mail files) into a container file stored on MFS and loop mounted onto
> the mailserver lxc instance.  Convoluted but very happy with the
> improvement its made.

Yeah, as you described in the other email, I've noticed that memory
use depends on the number of files, and the master needs to hold it
all in RAM at once.
I'm using it for media storage mostly so the file count is modest.  I
do use snapshots but only a few at a time so it can handle that.
While the master is running on amd64 with plenty of RAM I do have
shadow masters set up on SBCs and I do want to be able to switch over
to one if something goes wrong, so I want RAM use to be acceptable.
It really doesn't matter how much space the files take up - just how
many inodes you have.
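
As a rough illustration of why file count is what matters for the
master - the bytes-per-inode figure below is just a placeholder
assumption (something on the order of a few hundred bytes per object
is the ballpark I've seen quoted), so measure your own master to be
sure:

# Rough master RAM estimate driven by inode count, not data size.
# BYTES_PER_INODE is an assumed placeholder - measure your own master.

BYTES_PER_INODE = 300

def master_ram_gib(inode_count, bytes_per_inode=BYTES_PER_INODE):
    return inode_count * bytes_per_inode / 2**30

# Media collection: relatively few, large files.
print(round(master_ram_gib(500_000), 2))     # ~0.14 GiB - fine on an SBC shadow master
# Mail spool: millions of tiny files.
print(round(master_ram_gib(20_000_000), 2))  # ~5.59 GiB - this is where it hurts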

-- 
Rich
