On Fri, 20 Aug 2021 at 06:34 +0200, Pouya Tafti wrote:
> After a recent drive failure in my primary zfs pool, I set
> up a secondary pool on a cgd(4) device on a single new sata
> hdd (zfs on gpt on cgd on gpt on a 4TB Seagate Ironwolf
> hdd) to back up the primary.
On Fri, 20 Aug 2021 at 06:13 -, Michael van Elst wrote:
[snip]
> Yes. It could be the drive itself, but I'd suspect the
> backplane or cables. The PSU is also a possible candidate.
Thanks. Retrying the replication in another bay now before
opening up the box.
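The replication itself is just the usual snapshot-and-send cycle into
the backup pool; roughly the following, with the pool and snapshot
names here being placeholders rather than the real ones:

  # recursive snapshot of the primary, full replication stream into
  # the single-disk backup pool
  zfs snapshot -r tank@backup-20210820
  zfs send -R tank@backup-20210820 | zfs receive -duF backup

  # sanity-check the copy afterwards
  zpool scrub backup
  zpool status -v backup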
After a recent drive failure in my primary zfs pool, I set
up a secondary pool on a cgd(4) device on a single new sata
hdd (zfs on gpt on cgd on gpt on a 4TB Seagate Ironwolf
hdd) to back up the primary.
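The layering was set up roughly along these lines (device names,
wedge numbers, labels, and the cipher below are illustrative, and the
gpt(8) -t aliases may differ between NetBSD versions):

  # outer GPT on the raw disk, one partition for cgd
  gpt create sd1
  gpt add -t cgd -l backup-cgd sd1      # shows up as a wedge, say dk4

  # cgd parameter file and device on that wedge
  cgdconfig -g -o /etc/cgd/dk4 aes-cbc 256
  cgdconfig cgd0 /dev/dk4 /etc/cgd/dk4

  # inner GPT on the cgd device, one partition for zfs
  gpt create cgd0
  gpt add -t zfs -l backup-zfs cgd0     # say dk5

  # single-disk pool on the inner wedge
  zpool create backup /dev/dk5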
I initially scrubbed the entire disk without apparent
incident using a temporary cryptographic key.
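That pass was essentially the throwaway-key trick: configure cgd over
the whole disk with a one-off random key, overwrite it through the
cgd device, then discard the key. Something along these lines, with
the device names again being illustrative:

  # one-off cgd keyed straight from /dev/urandom; no parameter file
  # is kept, so the key cannot be reproduced later
  cgdconfig -s cgd0 /dev/sd1d aes-cbc 128 < /dev/urandom

  # writing zeros through cgd fills the underlying disk with
  # random-looking data and touches every sector once
  dd if=/dev/zero of=/dev/rcgd0d bs=1m progress=1000

  cgdconfig -u cgd0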
On Fri, 13 Aug 2021 at 11:48 +0100, David Brownlee wrote:
> How does the rate of change in data compare to upload bandwidth? In my
> case I bootstrapped the remote backup boxes by having them connected
> to the same network for a few days until everything was up to date,
> then transported them to
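In zfs terms that bootstrap-then-ship pattern is a full send while
the boxes still share a LAN, followed by incremental sends once only
the slow uplink is left; sketched with hypothetical host and pool
names:

  # initial full replication while both machines are local
  zfs snapshot -r tank@seed
  zfs send -R tank@seed | ssh backuphost zfs receive -duF backup

  # later, remotely, send only the changes since the seed snapshot
  zfs snapshot -r tank@20210820
  zfs send -R -i @seed tank@20210820 | ssh backuphost zfs receive -duF backup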
pouya+lists.net...@nohup.io (Pouya Tafti) writes:
Your disk controller gives the error reason:
>[ 57131.573806] mpii0: physical device removed from slot 7
>Apart from the drive, I also have little faith in the
>backplane, cables, SAS controller (which I reflashed), RAM,
>etc., although here it