Re: Built for Failure

2020-12-30 Thread Todd Cole via PLUG-discuss
I was incorrect; it is a Thecus N5200.

On Wed, Dec 30, 2020 at 9:04 PM Stephen Partington via PLUG-discuss <plug-discuss@lists.phxlinux.org> wrote:
> What model?
>
> On Wed, Dec 30, 2020, 6:30 PM Todd Cole via PLUG-discuss <plug-discuss@lists.phxlinux.org> wrote:
>> I have been using ZFS almo…

Re: Built for Failure

2020-12-30 Thread Stephen Partington via PLUG-discuss
What model?

On Wed, Dec 30, 2020, 6:30 PM Todd Cole via PLUG-discuss <plug-discuss@lists.phxlinux.org> wrote:
> I have been using ZFS almost two years on servers and it is easy to deal
> with drive replacements. I have even switched my desktops and laptops to
> Ubuntu 20.04 ZFS for the snapshots…

Re: Built for Failure

2020-12-30 Thread Seabass via PLUG-discuss
At least 3 drives in a machine. I have two machines, but I don't know how to spread my storage across them.

Original Message
On Dec 30, 2020, 5:59 PM, Stephen Partington wrote:
> How many drives are you looking to spin up at one time? Across how many
> machines?
>
> On Wed, Dec…

Re: Built for Failure

2020-12-30 Thread Todd Cole via PLUG-discuss
I have been using ZFS for almost two years on servers, and it is easy to deal with drive replacements. I have even switched my desktops and laptops to Ubuntu 20.04 ZFS for the snapshots. The issues are learning ZFS and RAID; they are not real hard to learn, but then replacing drives as they fail = time vs. money…
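For reference, the drive-replacement workflow Todd describes usually comes down to a couple of zpool commands. A minimal sketch, assuming a pool named "tank" and hypothetical device paths:

    zpool status tank                       # identify the FAULTED/UNAVAIL disk
    zpool offline tank /dev/sdc             # take the failing drive out of service
    zpool replace tank /dev/sdc /dev/sdf    # resilver onto the replacement drive
    zpool status tank                       # watch resilver progress

Once the resilver finishes, the pool is back to full redundancy with no downtime.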

Re: Built for Failure

2020-12-30 Thread Stephen Partington via PLUG-discuss
How many drives are you looking to spin up at one time? Across how many machines?

On Wed, Dec 30, 2020, 5:39 PM Seabass via PLUG-discuss <plug-discuss@lists.phxlinux.org> wrote:
> That is a good question.
> Probably not, though.
>
> Have a software raid version? I need to check what these have, but…

Re: Built for Failure

2020-12-30 Thread Seabass via PLUG-discuss
That is a good question. Probably not, though.

Have a software RAID version? I need to check what these have, but I don't think there is much beyond RAID 1 and RAID 0.

Original Message
On Dec 30, 2020, 4:02 PM, Rusty Ramser wrote:
> Hi, Seabass.
>
> RAID-6 comes to mind, since i…

Re: Built for Failure

2020-12-30 Thread Ed via PLUG-discuss
Sounds like you are looking for a RAID case, and time to keep it up and running. I would see it as an opportunity to try out ZFS: https://openzfs.org/wiki/Main_Page

ZFS is ideally suited to working with a bunch of "about to fail" drives, and it might be the easiest at replacing a drive while not risking…
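A minimal sketch of trying this out, with hypothetical device paths; note that a raidz vdev limits every member to the size of its smallest disk, which matters with mismatched drives:

    zpool create scratch raidz /dev/sda /dev/sdb /dev/sdc /dev/sdd   # single-parity pool
    zfs set copies=2 scratch    # store two copies of each block, useful on flaky media
    zpool scrub scratch         # verify checksums and repair silent corruption

Scrubbing regularly is what surfaces failing drives early enough to replace them.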

Re: Built for Failure

2020-12-30 Thread Brian Cluff via PLUG-discuss
How many drives are you talking about using? If you have a bunch of them, like 6 to 9 drives, you could combine them into 2 or 3 groups of roughly equal size, make each group a RAID 0, and then RAID those chunks together with either RAID 1 or RAID 5/6, depending on how much redundancy you want.
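A minimal mdadm sketch of that layering, with hypothetical device names: two RAID 0 chunks built from drives whose sizes add up to roughly the same total, then mirrored against each other (effectively RAID 0+1):

    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb
    mdadm --create /dev/md1 --level=0 --raid-devices=3 /dev/sdc /dev/sdd /dev/sde
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/md0 /dev/md1
    mkfs.ext4 /dev/md2          # the filesystem goes on the top-level mirror

The trade-off is that one dead drive takes out its whole RAID 0 chunk, so the top-level mirror carries all the redundancy.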

Built for Failure

2020-12-30 Thread Seabass via PLUG-discuss
Weird question: I can get a bunch of ancient (~2013) HDDs. Each has a varying amount of space, and few (if any) are ever the same size. These were marked to be disposed of, though that is just because of age, or because there are plenty that are better. Thus I can take them. However, them being this old, and…