On Mon, Jun 24, 2019 at 04:58:58PM +0100, U'll Be King Of The Stars wrote:
> On 24 June 2019 09:38:17 BST, tlaro...@polynum.com wrote:
> >RAID1 puts an equal burden on both disks
>
> Is it really equal? It doesn't depend on implementation?
>
> >you will spend your time replacing disks praying that they
> >not both die at the same moment.
>
> This is one argument for not mirroring SSD's. I'm not sure how useful
> it is or if this problem really happens. I know one person who says it
> does and has experienced it.
>
> Maybe with two SSD's RAID0 is better? Or four SSD's with RAID10?
>
> I am pretty sure schemes like the ones you have decided have been
> investigated. Maybe they already exist in some volume management
> systems?
>
> I would suggest asking around in some technical, storage-related
> mailing lists. Linux has a lot of these.
>
> This is a very complicated problem. Not only that, but I'm not sure
> NetBSD is a great OS for this. What are your options for filesystems,
> disregarding the RAID part?
>
> Illumos or FreeBSD might be good.
>
> I still don't understand the problem you are trying to solve. Is it to
> improve longevity of RAIDed disks? Or do you need a fast local tier
> that is a subset of your full data set?
I'm discarding RAID altogether. In my view it is the wrong solution to
the problem---the fact that the 'I' has rapidly lost its "Inexpensive"
meaning is a sure sign that something has gone wrong.

I have tried a "cheap" RAID1 appliance, and it was a disaster precisely
because it used "cheap" disks that could not handle the workload: the
mirroring stressed both disks for nothing, since it did not deliver the
data any faster, and you could lose the backup at the same time as the
original. On one appliance, even in the very first weeks, one disk of
the array would become unavailable and a reboot was needed (and since
you had no hand on what was going on inside, you had no choice). After
the reboot, the other disk was found and the previous one was not, and
the "magical" RAID1 then copied the data from the disk that had been
disconnected, thus losing the data that existed only on the second one.
Fantastic!

There are things I look for in distributed filesystems (see my answer
about Coda), but I do not need, for now, a distributed filesystem. I
need only a robust fileserver---this can be achieved---plus the ability
to back up a subset of files onto "cheap" disks that are minimally
used: written only when a backup is launched, and read only when they
serve as a fallback.

I have given a sketchy description of the solution I have in mind when
answering about Coda (see this sub-thread), and I think it would
unfortunately take less time for me to prototype something (the
majority of the time going into reading the kernel sources to see where
and how to put what I need) than to search for something like it among
existing distributed filesystem solutions.

To be clear, again: I am speaking about serving files; the way they are
stored (the filesystem) is another topic. A Plan 9-like deduplicating,
block-oriented WORM is my preferred solution, but nothing like that is
available (at least open source) on Unix systems for now. So I put this
aside for the moment.
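For what it's worth, the "minimally used backup disk" part can already
be approximated with stock tools; here is a minimal sketch using
portable tar (all paths and the subtree name in the example invocation
are illustrative, not a real layout):

```shell
#!/bin/sh
# Copy one subtree from the live fileserver tree to the "cheap" backup
# disk. The backup disk is written only while this runs, and is read
# only if it ever has to serve as a fallback.
backup_subset() {
    # $1 = live source root, $2 = backup mount point, $3 = subtree
    mkdir -p "$2"
    (cd "$1" && tar -cf - "$3") | (cd "$2" && tar -xpf -)
}

# Example invocation (illustrative paths):
# backup_subset /srv/files /mnt/cheap projects
```

Between runs, the backup disk could additionally be kept mounted
read-only (on NetBSD, `mount -u -r` / `mount -u -w` around the backup
window), so that nothing touches it outside backups.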
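As for the Plan 9-like WORM, its core idea is content addressing: a
block is named by the hash of its contents, so identical blocks are
stored once and, once written, are never overwritten. A toy sketch of
that idea in shell (the store layout and the `put` helper are invented
for illustration; it tries `sha256sum` first, then NetBSD's `sha256`):

```shell
#!/bin/sh
# Toy content-addressed write-once store: a block's name is the SHA-256
# of its contents, so duplicate blocks collapse onto one file.
STORE=${STORE:-$(mktemp -d)}    # block store directory (illustrative)

put() {
    # Hash the block; handle both "hash  file" and bare-hash outputs.
    h=$({ sha256sum "$1" 2>/dev/null || sha256 -q "$1"; } | awk '{print $1}')
    # Write once: if the block already exists, do nothing (dedup).
    [ -e "$STORE/$h" ] || cp "$1" "$STORE/$h"
    echo "$h"                   # the "score" that names the block
}
```

Two files with the same contents get the same score and occupy one
entry in the store; nothing in the store is ever rewritten.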
-- Thierry Laronde <tlaronde +AT+ polynum +dot+ com> http://www.kergis.com/ http://www.sbfa.fr/ Key fingerprint = 0FF7 E906 FBAF FE95 FD89 250D 52B1 AE95 6006 F40C