Re: [gentoo-user] NAS and replacing with larger drives

2022-12-21 Thread Wol

On 21/12/2022 20:40, Frank Steinmetzger wrote:

Yes? In a mirror setup, all member drives of a mirror have the same content
(at least in ZFS).

Raid 10 distributes its content across several mirrors. This is the source
of its increased performance. So when one of the mirrors (not a single drive,
but a whole set of mirrored drives) fails, the pool is gone.


Linux will happily give you a 2-copy mirror across 3 drives - 3x6TB drives
will give you 9TB useful storage ...



I admit, I’ve never heard of that. (Though it sounds like raid-5 to me.)


Raid 5 has a parity drive (or rather, raid 4 has a parity drive. Raid 5 
smears parity across all disks). It does not store duplicate copies. 
Raid 10 has duplicate data and no parity.



Read up on linux raid-10. It is NOT raid-1+0.

Drive    sda   sdb   sdc

Blocks     1     1     2
           2     3     3
           4     4     5
           5     6     6

etc ...
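As a sketch (this is not the md driver's actual code, just the arithmetic of its default "near" layout), the table above can be reproduced: copy c of block b lands on drive (b * copies + c) mod num_drives, because copies of each chunk are written consecutively, round-robin across the drives.

```python
def near_layout(num_drives, copies, num_blocks):
    """Sketch of the Linux md raid10 'near' layout: the copies of each
    chunk are written consecutively, wrapping round-robin over the drives."""
    placement = {}
    for b in range(num_blocks):
        # copy c of block b goes to drive (b * copies + c) % num_drives
        placement[b + 1] = [(b * copies + c) % num_drives for c in range(copies)]
    return placement

# 2 copies over 3 drives (0=sda, 1=sdb, 2=sdc) matches the table above,
# and usable capacity is num_drives * size / copies = 3 * 6TB / 2 = 9TB.
layout = near_layout(num_drives=3, copies=2, num_blocks=6)
print(layout[1], layout[2], layout[3])  # [0, 1] [2, 0] [1, 2]
```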

https://raid.wiki.kernel.org/index.php/What_is_RAID_and_why_should_you_want_it%3F

(Disclaimer - I either wrote or heavily edited it.)

Cheers,
Wol



Re: [gentoo-user] NAS and replacing with larger drives

2022-12-21 Thread Frank Steinmetzger
On Wed, Dec 21, 2022 at 08:03:36PM +, Wol wrote:
> On 21/12/2022 06:19, Frank Steinmetzger wrote:
> > On Wed, Dec 21, 2022 at 05:53:03AM +, Wols Lists wrote:
> > 
> > > On 21/12/2022 02:47, Dale wrote:
> > > > I think if I can hold out a little while, something really nice is going
> > > > to come along.  It seems there is a good bit of interest in having a
> > > > Raspberry Pi NAS that gives really good performance.  I'm talking a NAS
> > > > that is about the same speed as an internal drive.  Plus the ability to
> > > > use RAID and such.  I'd like to have a 6 bay with 6 drives setup in
> > > > pairs for redundancy.  I can't recall what number RAID that is.
> > > > Basically, if one drive fails, another copy still exists.  Of course,
> > > > two independent NASs would be better in my opinion.  Still, any of this
> > > > is progress.
> > > 
> > > That's called either Raid-10 (linux), or Raid-1+0 (elsewhere). Note that 
> > > 1+0
> > > is often called 10, but linux-10 is slightly different.
> > 

> > In layman’s terms, a stripe of mirrors. Raid-1 is the mirror, Raid-0 a (JBOD)
> > pool. So mirror + pool = mirrorpool, hence the 1+0 → 10.
> 
> Except raid-10 is not a stripe of mirrors.
> It's that each block is saved to two different drives. (Or 3 or more, so long
> as you have more drives than copies.)

Yes? In a mirror setup, all member drives of a mirror have the same content
(at least in ZFS).

Raid 10 distributes its content across several mirrors. This is the source
of its increased performance. So when one of the mirrors (not a single drive,
but a whole set of mirrored drives) fails, the pool is gone.

> Linux will happily give you a 2-copy mirror across 3 drives - 3x6TB drives
> will give you 9TB useful storage ...

I admit, I’ve never heard of that. (Though it sounds like raid-5 to me.)

> > If I wanted to increase my capacity, I’d have to replace *all* drives with
> > bigger ones. With a mirror, only the drives in one of the mirrors need
> > replacing. And the rebuild process would be quicker and less painful, as
> > each drive will only be read once to rebuild its partner, and there is no
> > parity calculation involved. In a parity RAID, each drive is replaced one by
> > one, and each replacement requires a full read of all drives’ payload.
>
> If you've got a spare SATA connection or whatever, each replacement does not
> need a full read of all drives. "mdadm --add /dev/sdx --replace /dev/sdy".
> That'll stream sdy on to sdx, and only hammer the other drives if sdy
> complains ...

Strange that I didn’t think of that, even though it’s a perfectly clear
concept. In ZFS there is also a replace function which would do just that.
Currently I plan on keeping my old drives (who would want to buy them off of
me anyway?) and just reorganising them, choosing Z1 over Z2. I’ll just have
to move all data off to temporary external drives first.

> > With older
> > drives, this is cause for some concern whether the disks may survive that.
> > That’s why, with increasing disk capacities, raid-5 is said to be obsolete.
> > Because if another drive fails during rebuild, you are officially screwed.
> > 
> > Fun, innit?
> > 
> They've always said that. Just make sure you don't have multiple drives from
> the same batch; then they're statistically less likely to fail at the same
> time. I'm running raid-5 over 3TB partitions ...

Yeah, I bought my drives from different shops back then for that reason.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

When the going gets tough, the tough get going.
... and so do I. – Alf




Re: [gentoo-user] NAS and replacing with larger drives

2022-12-21 Thread Wol

On 21/12/2022 06:19, Frank Steinmetzger wrote:

On Wed, Dec 21, 2022 at 05:53:03AM +, Wols Lists wrote:


On 21/12/2022 02:47, Dale wrote:

I think if I can hold out a little while, something really nice is going
to come along.  It seems there is a good bit of interest in having a
Raspberry Pi NAS that gives really good performance.  I'm talking a NAS
that is about the same speed as an internal drive.  Plus the ability to
use RAID and such.  I'd like to have a 6 bay with 6 drives setup in
pairs for redundancy.  I can't recall what number RAID that is.
Basically, if one drive fails, another copy still exists.  Of course,
two independent NASs would be better in my opinion.  Still, any of this
is progress.


That's called either Raid-10 (linux), or Raid-1+0 (elsewhere). Note that 1+0
is often called 10, but linux-10 is slightly different.


In layman’s terms, a stripe of mirrors. Raid-1 is the mirror, Raid-0 a (JBOD)
pool. So mirror + pool = mirrorpool, hence the 1+0 → 10.


Except raid-10 is not a stripe of mirrors. It's that each block is saved to
two different drives. (Or 3 or more, so long as you have more drives than
copies.)


Linux will happily give you a 2-copy mirror across 3 drives - 3x6TB 
drives will give you 9TB useful storage ...



I'd personally be inclined to go for raid-6. That's 4 data drives, 2 parity
(so you could have an "any two" drive failure and still recover).
A two-copy 10 or 1+0 is vulnerable to a two-drive failure. A three-copy is
vulnerable to a three-drive failure.


At first, I had only two drives in my 4-bay NAS, which were of course set up
as a mirror. After a year, when it became full, I bought the second pair of
drives and deliberated for a long time over what to choose. I went for raid-6
(or RaidZ2 in ZFS parlance). With only four disks, it has the same net
capacity as a pair of mirrors, but with the advantage that *any* two drives
may fail, not just two particular ones. A raid of mirrors has performance
benefits over a parity raid, but who cares on a simple Gbit storage device.

With an increasing number of disks, a mirror setup is at a disadvantage in
storage efficiency – it’s 50 % at best, and even less if you mirror over more
than two disks. But with only four disks, that was irrelevant in my case. On
the plus side, each mirror can have a different physical disk size, so you
can more easily mix’n’match what you’ve got lying around, or do upgrades in
smaller increments.
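The four-disk capacity tie mentioned above is easy to check with a quick sketch (Python here purely for the arithmetic; the 4 TB disk size is a made-up example):

```python
def usable_tb(disk_tb, n_disks, scheme):
    """Net capacity for n equal-sized disks under two redundancy schemes."""
    if scheme == "raid6":          # two disks' worth of parity, any layout
        return (n_disks - 2) * disk_tb
    if scheme == "mirror_pairs":   # a stripe of two-way mirrors
        return (n_disks // 2) * disk_tb
    raise ValueError(scheme)

# With four disks the two schemes tie; with more, raid-6 pulls ahead.
print(usable_tb(4, 4, "raid6"), usable_tb(4, 4, "mirror_pairs"))  # 8 8
print(usable_tb(4, 6, "raid6"), usable_tb(4, 6, "mirror_pairs"))  # 16 12
```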

If I wanted to increase my capacity, I’d have to replace *all* drives with
bigger ones. With a mirror, only the drives in one of the mirrors need
replacing. And the rebuild process would be quicker and less painful, as
each drive will only be read once to rebuild its partner, and there is no
parity calculation involved. In a parity RAID, each drive is replaced one by
one, and each replacement requires a full read of all drives’ payload.


If you've got a spare SATA connection or whatever, each replacement does 
not need a full read of all drives. "mdadm --add /dev/sdx --replace 
/dev/sdy". That'll stream sdy on to sdx, and only hammer the other 
drives if sdy complains ...
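For reference, the replacement Wol describes maps to roughly these mdadm invocations (a sketch, not a tested recipe; md0, sdx and sdy are placeholders, and --with explicitly names the spare to copy onto):

```shell
# Add the new disk to the array as a spare.
mdadm /dev/md0 --add /dev/sdx

# Ask md to copy sdy's data onto the spare. The array stays fully
# redundant throughout; other members are only read if sdy has errors.
mdadm /dev/md0 --replace /dev/sdy --with /dev/sdx

# Once the copy finishes, sdy is marked faulty and can be removed.
mdadm /dev/md0 --remove /dev/sdy
```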



With older
drives, this is cause for some concern whether the disks may survive that.
That’s why, with increasing disk capacities, raid-5 is said to be obsolete.
Because if another drive fails during rebuild, you are officially screwed.

Fun, innit?

They've always said that. Just make sure you don't have multiple drives
from the same batch; then they're statistically less likely to fail at
the same time. I'm running raid-5 over 3TB partitions ...


Cheers,
Wol



Re: [gentoo-user] NAS and replacing with larger drives

2022-12-21 Thread Mark Knecht
On Tue, Dec 20, 2022 at 11:52 PM Dale  wrote:

> This is why at some point, I'd like to have two sets of backups.  RAID
> or not.

Amazon Snowball? :-) ;-)

Mark


[gentoo-user] Preparing for video card upgrade.

2022-12-21 Thread Alan Grimes
Hey, I am getting ready to retire the GTX 980 Ti in this computer
(replacing it with a Titan RTX, which is itself due to be replaced with a 4090).


The thing is my /X11 directory looks like:


atg@tortoise /etc/X11 $ ls -l
total 52
-rwxr-xr-x 1 root root 1192 Nov 16 19:03 chooser.sh
drwxr-xr-x 2 root root 4096 Sep 13 18:25 mwm
drwxr-xr-x 2 root root 4096 Nov 16 19:17 Sessions
lrwxrwxrwx 1 root root   16 Nov 16 18:49 startDM.sh -> /usr/bin/startDM
drwxr-xr-x 4 root root 4096 Nov 16 19:03 xinit
-rw-r--r-- 1 root root 2862 Apr 24  2018 xorg.conf
-rw-r--r-- 1 root root 2805 Oct  3  2017 xorg.conf.backup
drwxr-xr-x 2 root root 4096 Mar  7  2020 xorg.conf.d
-rw-r--r-- 1 root root 2490 Oct 21  2010 xorg.conf.good
-rw-r--r-- 1 root root 2446 May 31  2010 xorg.conf.new
-rw-r--r-- 1 root root 2815 Feb  3  2015 xorg.conf.nvidia_last
-rw-r--r-- 1 root root 1853 Feb 11  2015 xorg.conf.nvidia-xconfig-original
-rw-r--r-- 1 root root 2815 Jan 21  2015 xorg.conf.original-0
-rw-r--r-- 1 root root   13 Dec  4  2016 XvMCConfig
atg@tortoise /etc/X11 $

So I literally have not touched this crap since 2015... Ideally the new card
will just boot up, but I am using DVI for one of my monitors here and will
need to find an adaptor in my junk pile before doing this... (currently only
using one such adaptor...)


--
Beware of Zombies. =O
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.