2GB of RAM is going to be really tight, probably. However, I do something
similar at home with a bunch of rock64 4GB boards, and it works well. There
are sometimes issues with the released ARM packages (frequently crc32
doesn't work, which isn't great), so you may have to build your own packages
on the board you're targeting or on something like Scaleway. YMMV.
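
If you do end up building your own packages, a quick sanity check is to push
an object through librados and read it back; that at least exercises the
wire and checksum paths. A minimal sketch, assuming the standard
python3-rados bindings, a reachable cluster, and a pool named "testpool"
(the pool and object names are just placeholders):

    #!/usr/bin/env python3
    # Smoke test for a self-built Ceph package: write one object,
    # read it back, and compare the payloads.
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('testpool')   # placeholder pool name
        try:
            payload = b'arm-build-smoke-test'
            ioctx.write_full('smoke-test-obj', payload)
            assert ioctx.read('smoke-test-obj') == payload
            print('write/read OK')
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()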

On Fri, Sep 6, 2019 at 6:16 PM Cranage, Steve <scran...@deepspacestorage.com>
wrote:

> I use those HC2 nodes for my home Ceph cluster, but my setup only has to
> support the librados API; my software does HSM between regular XFS file
> systems and the RADOS API, so I don't need the MDS and the rest. So I
> can't tell you whether you'll be happy with your configuration.
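>
> For a sense of what the librados side looks like from Python, here is a
> toy sketch of the "archive a file into RADOS" direction (not our actual
> HSM code, just the stock python3-rados bindings against a hypothetical
> "archive" pool):
>
>     import rados
>
>     def archive_file(path, pool='archive', conf='/etc/ceph/ceph.conf'):
>         # Read a file from the local (XFS) file system and store it as
>         # one RADOS object keyed by its path. Toy example: a real HSM
>         # chunks large files and tracks migration state elsewhere.
>         with open(path, 'rb') as f:
>             data = f.read()
>         cluster = rados.Rados(conffile=conf)
>         cluster.connect()
>         try:
>             ioctx = cluster.open_ioctx(pool)
>             try:
>                 ioctx.write_full(path, data)
>             finally:
>                 ioctx.close()
>         finally:
>             cluster.shutdown()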
>
>
>
> Steve Cranage
>
> Principal Architect, Co-Founder
>
> DeepSpace Storage
>
> 719-930-6960
>
>
> ------------------------------
> *From:* ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of
> William Ferrell <wil...@gmail.com>
> *Sent:* Friday, September 6, 2019 3:16:30 PM
> *To:* ceph-users@lists.ceph.com <ceph-users@lists.ceph.com>
> *Subject:* [ceph-users] Ceph for "home lab" / hobbyist use?
>
> Hello everyone!
>
> After years of running several ZFS pools on a home server and several
> disk failures along the way, I've decided that my current home storage
> setup stinks. So far there hasn't been any data loss, but
> recovering/"resilvering" a ZFS pool after a disk failure is a
> nail-biting experience. I also think the way things are set up now
> isn't making the best use of all the disks attached to the server;
> they were acquired over time instead of all at once, so I've got four
> 4-disk raidz1 pools, each in its own enclosure. If any enclosure dies,
> all of that pool's data is lost. Despite having a total of 16 disks
> in use for storage, the entire system can only "safely" lose one disk
> before there's a risk of a second failure taking a bunch of data with
> it.
>
> I'd like to ask the list's opinions on running a Ceph cluster in a
> home environment as a filer using cheap, low-power systems. I don't
> have any expectations for high performance (this will be built on a
> gigabit network, and just used for backups and streaming videos,
> music, etc. for two people); the main concern is resiliency if one or
> two disks fail, and the secondary concern is having a decent usable
> storage capacity. Being able to slowly add capacity to the cluster one
> disk at a time is a very appealing bonus.
>
> I'm interested in using these things as OSDs (and hopefully monitors
> and metadata servers):
> https://www.hardkernel.com/shop/odroid-hc2-home-cloud-two/
>
> They're about $50 each, can boot from MicroSD or eMMC flash (basically
> an SSD with a custom connector), and have one SATA port. They have
> 8-core 32-bit CPUs, 2GB of RAM and a gigabit ethernet port. Four of
> them (including disks) can run off a single 12V/8A power adapter
> (basically 100 watts per set of 4). The obvious appeal is price, plus
> they're stackable so they'd be easy to hide away in a closet.
>
> Is it feasible for these to work as OSDs at all? The Ceph hardware
> recommendations page suggests OSDs need 1GB per TB of space, so does
> this mean these wouldn't be suitable with, say, a 4TB or 8TB disk? Or
> would they work, but just more slowly?
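>
> To make that arithmetic concrete (taking the 1GB-per-TB guideline at face
> value; the numbers are just the example disk sizes above):
>
>     disk_size_tb = 8                      # e.g. one 8TB drive per OSD
>     suggested_ram_gb = 1 * disk_size_tb   # hardware-recommendations rule of thumb
>     hc2_ram_gb = 2                        # what the HC2 ships with
>     print(suggested_ram_gb, hc2_ram_gb)   # 8 vs. 2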
>
> Pushing my luck further (assuming the HC2 can handle OSD duties at
> all), is that enough muscle to run the monitor and/or metadata
> servers? Should monitors and MDSes be run separately, or can/should
> they piggyback on hosts running OSDs?
>
> I'd be perfectly happy with a setup like this even if it could only
> achieve speeds in the 20-30MB/sec range.
>
> Is this a dumb idea, or could it actually work? Are there any other
> recommendations among Ceph users for low-end hardware to cobble
> together a working cluster?
>
> Any feedback is sincerely appreciated.
>
> Thanks!
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
