Re: [PLUG] External drive issue

2023-03-24 Thread Ted Mittelstaedt
Air-gapped backups are super important. One of my clients was once hit by an 
attacker who erased the backups off their NAS. I had pushed for them to have 
air-gapped backups, and they did, but they didn't really believe in them. 
That made them believers!

Ted

-Original Message-
From: PLUG  On Behalf Of Michael Ewan
Sent: Friday, March 24, 2023 3:50 PM
To: Portland Linux/Unix Group 
Subject: Re: [PLUG] External drive issue

I generally keep at least three copies of important stuff: the original, the 
rsync backup, and another rsync on a different machine (that one also has cloud 
backup). For really important stuff I have an air-gapped off-site disk with a copy 
of everything (the problem there is remembering to refresh it).

On Fri, Mar 24, 2023 at 11:44 AM Rich Shepard 
wrote:

> On Fri, 24 Mar 2023, Michael Ewan wrote:
>
> > Note, mkfs.xfs is done on the raid1 disk set, not on the individual drives.
>
> Michael,
>
> Got it. I'm seriously considering formatting both with xfs, installing 
> dirvish on /dev/sde1, then using rsync to keep /dev/sdf1 as a mirror. 
> Since they won't be combined into a logical volume, I shouldn't again lose 
> all backups the way I did when the LV failed and wiped both disks.
>
> Thanks,
>
> Rich
>


Re: [PLUG] External drive issue

2023-03-24 Thread Michael Ewan
I generally keep at least three copies of important stuff: the original,
the rsync backup, and another rsync on a different machine (that one also
has cloud backup). For really important stuff I have an air-gapped off-site
disk with a copy of everything (the problem there is remembering to refresh
it).
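
(In case it helps, the second and third copies are nothing fancy, just rsync
runs roughly like the following; the paths and the hostname are made-up
examples.)

  # first copy: rsync to the local backup disk
  rsync -aH --delete /home/ /backup/home/

  # second copy: rsync to another machine over ssh (that box also has cloud backup)
  rsync -aH --delete /home/ otherbox:/backup/home/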

On Fri, Mar 24, 2023 at 11:44 AM Rich Shepard 
wrote:

> On Fri, 24 Mar 2023, Michael Ewan wrote:
>
> > Note, mkfs.xfs is done on the raid1 disk set, not on the individual drives.
>
> Michael,
>
> Got it. I'm seriously considering formatting both with xfs, installing
> dirvish on /dev/sde1, then using rsync to keep /dev/sdf1 as a mirror. Since
> they won't be combined into a logical volume, I shouldn't again lose all
> backups the way I did when the LV failed and wiped both disks.
>
> Thanks,
>
> Rich
>


Re: [PLUG] External drive issue

2023-03-24 Thread Rich Shepard

On Fri, 24 Mar 2023, Ted Mittelstaedt wrote:


I try to run my disks less than 70% full, otherwise you get too much
fragmentation, so if you are rsyncing you are essentially backing up empty
disk, unless of course your disks are very full. I use external USB docks
and bare drives I can plug in and do filesystem backups to those. With some of
these setups I can fit 2 or 3 backups on the external disks, depending on
how modern the dock is and whether it is compatible with the USB chip in the
server it's plugged into.


Ted,

Dirvish does incremental backups after the initial one. Until last year when
I set up the LV I had always backed up the directories I wanted onto a
single external drive. Never ran out of room.

I don't recall how full the LV disks were, but there's plenty of room on a
2T drive for my backups.

Thanks,

Rich


Re: [PLUG] External drive issue

2023-03-24 Thread Ted Mittelstaedt
I try to run my disks less than 70% full, otherwise you get too much 
fragmentation, so if you are rsyncing you are essentially backing up empty 
disk, unless of course your disks are very full.  I use external USB docks and 
bare drives I can plug in and do filesystem backups to those.  With some of these 
setups I can fit 2 or 3 backups on the external disks, depending on how modern 
the dock is and whether it is compatible with the USB chip in the server it's 
plugged into.

Ted

-Original Message-
From: PLUG  On Behalf Of Rich Shepard
Sent: Friday, March 24, 2023 11:41 AM
To: Portland Linux/Unix Group 
Subject: Re: [PLUG] External drive issue

On Fri, 24 Mar 2023, Ted Mittelstaedt wrote:

> I never depend on RAID either RAID5 or mirroring for backup purposes.

Ted,

Since I had two drives in a RAID1 array mirrored in a logical volume for 
backup, and both drives were somehow wiped, I'm thinking of using only one disk 
(with xfs on it) for /media/backup and using a cron job (running after the 
daily backup at 00:30) to rsync to the other disk. That way I'd have two mirrored 
backups independent of each other.

I'll ponder this overnight. Two separate hard drives containing the same 
backups (one written by dirvish, the other rsync'd) seem to me much less 
likely to both fail at the same time.

Regards,

Rich


Re: [PLUG] External drive issue

2023-03-24 Thread Rich Shepard

On Fri, 24 Mar 2023, Michael Ewan wrote:


Note, mkfs.xfs is done on the raid1 disk set, not on the individual drives.


Michael,

Got it. I'm seriously considering formatting both with xfs, installing
dirvish on /dev/sde1, then using rsync to keep /dev/sdf1 as a mirror. Since
they won't be combined into a logical volume, I shouldn't again lose all
backups the way I did when the LV failed and wiped both disks.
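
Roughly what I have in mind, assuming the partitions come up as /dev/sde1 and
/dev/sdf1 (/media/backup2 is just a placeholder mount point for the mirror):

  # format each partition on its own -- no md, no LVM
  mkfs.xfs /dev/sde1
  mkfs.xfs /dev/sdf1

  # mount the dirvish target and the mirror disk
  mount /dev/sde1 /media/backup
  mount /dev/sdf1 /media/backup2

  # after each dirvish run, mirror the vaults onto the second disk
  rsync -aH --delete /media/backup/ /media/backup2/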

Thanks,

Rich


Re: [PLUG] External drive issue

2023-03-24 Thread Rich Shepard

On Fri, 24 Mar 2023, Ted Mittelstaedt wrote:


I never depend on RAID either RAID5 or mirroring for backup purposes.


Ted,

Since I had two drives in a RAID1 array mirrored in a logical volume for
backup, and both drives were somehow wiped, I'm thinking of using only one
disk (with xfs on it) for /media/backup and using a cron job (running
after the daily backup at 00:30) to rsync to the other disk. That way I'd have
two mirrored backups independent of each other.

I'll ponder this overnight. Two separate hard drives containing the same
backups (one written by dirvish, the other rsync'd) seem to me much less
likely to both fail at the same time.
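
For the record, the cron entry I'm picturing is something like this (the 01:30
start time and the /media/backup2 mount point are only placeholders):

  # /etc/crontab: mirror the dirvish disk to the second disk after the 00:30 backup
  30 1 * * * root rsync -aH --delete /media/backup/ /media/backup2/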

Regards,

Rich


Re: [PLUG] External drive issue

2023-03-24 Thread Michael Ewan
Note, mkfs.xfs is done on the raid1 disk set, not on the individual drives.
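
That is, assuming the array comes up as /dev/md0, just:

  # make the filesystem on the assembled array, not on the member disks
  mkfs.xfs /dev/md0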

On Fri, Mar 24, 2023 at 11:10 AM Rich Shepard 
wrote:

> On Fri, 24 Mar 2023, Michael Ewan wrote:
>
> > Do not use EXT4, it will cause you problems down the road.  Use xfs
> > instead, it has higher reliability and better performance.
>
> Michael,
>
> Okay. I can install xfs on those two drives.
>
> Thanks,
>
> Rich
>


Re: [PLUG] External drive issue

2023-03-24 Thread Rich Shepard

On Fri, 24 Mar 2023, Randy Bush wrote:


sorry. was not meant as a suggestion for you. meant as a recco of lvm/md0
in general.

i also use it as raid 1. on new kit, when i can get it, i have boot/root
on m2/nvme pcie using raid 1, and then an ssd array with another md raid
10.


randy,

I'll take Michael's recommendation to use xfs on those two drives, rebuild
the LV from scratch, then re-install dirvish for all 8 vaults.

Thanks,

Rich


Re: [PLUG] External drive issue

2023-03-24 Thread Ted Mittelstaedt
I never depend on RAID, either RAID5 or mirroring, for backup purposes.

RAID's usefulness is that if a hard drive fails in the middle of the day (or in 
the middle of a backup), the server does not unceremoniously shut down.

Instead I can take a final backup of the server, then do whatever is needed to 
replace the disk.

With hardware RAID cards that may simply mean swapping the failed disk; the 
card takes care of rebuilding the array by itself.

With software RAID that may mean the array is scotched.  I have attempted in the 
past to replace disks in software arrays.  Sometimes it works, sometimes it does 
not.  Sometimes, in the process of rebuilding, the primary disk craps out.
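
For reference, the usual replacement sequence on an md array goes roughly like
this (device names are only examples); whether the rebuild survives is another
question:

  # mark the dying member as failed and pull it from the array
  mdadm /dev/md0 --fail /dev/sdb1
  mdadm /dev/md0 --remove /dev/sdb1

  # swap the physical disk, partition it to match, then re-add it
  mdadm /dev/md0 --add /dev/sdb1

  # watch the resync progress
  cat /proc/mdstat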

With small "desktop" servers I use mirroring simply because disks nowadays are 
cheap, and I'll use whatever RAID is available.   Often it's so-called 
"fakeraid" because that way the server will boot off the "raid" array.

Since the disks in these are purchased at the same time, when one goes I usually 
just replace both if they are out of warranty (and they usually are), recreate 
the array, and restore from backup.  Often I'll regen the entire server.

I have several servers with "fakeraid" chips in them and they are 1U with only 
2 slots for disks, so they have mirrored disks in them, and the process to get 
the OS installed and working so that the system will boot off the "raid" array 
is so cumbersome that it isn't even possible to upgrade the OS.  Ubuntu's 
developers in particular hate fakeraid with a passion and in every new version 
are constantly screwing with the drivers, so you have to find new ways to set 
them up.  Some fakeraid chips write metadata to the end of the disk and GPT 
tables will overwrite it, so I have to set up the disks as MBR (and no larger 
than 2TB, of course).

It's usually a lot of fun to update to a new version on these.  But the 1U 
server form factor is pretty restrictive in terms of what disks you can use.  
For cost, most of the time I use 3.5" SATA disks.  I have not found SAS drives 
to be worth the money; I'll spec 'em for customers since they will lay out the 
cash for them, but I use disks I can buy over the counter for my personal stuff.

It is a constant battle with disk drive makers, who seem to have forgotten that 
the I in RAID means INEXPENSIVE drives, not "independent" drives.  They have 
prices jacked up sky high for anything that they think isn't going to retail.

Ted

-Original Message-
From: PLUG  On Behalf Of Rich Shepard
Sent: Friday, March 24, 2023 9:37 AM
To: Portland Linux/Unix Group 
Subject: Re: [PLUG] External drive issue

On Fri, 24 Mar 2023, Rich Shepard wrote:

> Checking cfdisk for both /dev/sde and /dev/sdf shows both having free 
> space for the entire disk.

A question for the professional sysadmins: having had both disks in a mirrored
RAID1 array (set up as a logical volume) fail, does it make sense to rebuild the 
RAID, VGs, and LV?

Since a mirrored copy didn't save my backup history, perhaps I should use only 
one disk for backup and save the other as a spare.

Your professional opinion?

TIA,

Rich


Re: [PLUG] External drive issue

2023-03-24 Thread Randy Bush
>> fwiw, i use lvm/md0 raid10 under debian on nodes on our ganeti clusters.
>> quite happy with it. i hate hardware raid.
> 
> Since RAID 10 requires four disks to mirror two pairs of striped disks
> and two of my external drives are otherwise occupied, I don't have the
> capability of using it.
> 
> Thanks for the suggestion,

sorry.  was not meant as a suggestion for you.  meant as a recco of
lvm/md0 in general.

i also use it as raid 1.  on new kit, when i can get it, i have
boot/root on m2/nvme pcie using raid 1, and then an ssd array with
another md raid 10.

randy


Re: [PLUG] External drive issue

2023-03-24 Thread Rich Shepard

On Fri, 24 Mar 2023, Michael Ewan wrote:


Do not use EXT4, it will cause you problems down the road.  Use xfs
instead, it has higher reliability and better performance.


Michael,

Okay. I can install xfs on those two drives.

Thanks,

Rich


Re: [PLUG] External drive issue

2023-03-24 Thread Rich Shepard

On Fri, 24 Mar 2023, Randy Bush wrote:


fwiw, i use lvm/md0 raid10 under debian on nodes on our ganeti clusters.
quite happy with it. i hate hardware raid.


randy,

Since RAID 10 requires four disks to mirror two pairs of striped disks and
two of my external drives are otherwise occupied, I don't have the
capability of using it.

Thanks for the suggestion,

Rich


Re: [PLUG] External drive issue

2023-03-24 Thread Rich Shepard

On Fri, 24 Mar 2023, Michael Ewan wrote:


In my experience, using lvm2 to mirror your disks should work for you and
be the easiest to recover since each disk has a copy of the vg
description, then vgscan will find your configuration for you. Skip using
md unless you need multipath access to a device.


Michael,

Theory says that the disks mirror each other, yet somehow I lost the
contents of both disks. There are no virtual disks since /dev/sde and
/dev/sdf both have no filesystem installed; cfdisk shows each having free
space across the entire disk.

Thanks,

Rich


Re: [PLUG] External drive issue

2023-03-24 Thread Randy Bush
fwiw, i use lvm/md0 raid10 under debian on nodes on our ganeti clusters.
quite happy with it.  i hate hardware raid.

randy


Re: [PLUG] External drive issue

2023-03-24 Thread Michael Ewan
In my experience, using lvm2 to mirror your disks should work for you and
be the easiest to recover since each disk has a copy of the vg
description, then vgscan will find your configuration for you.  Skip using
md unless you need multipath access to a device.
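
A minimal sketch of that setup, assuming the disks are /dev/sde and /dev/sdf
and using made-up VG/LV names:

  # put both disks under LVM and mirror the LV across them
  pvcreate /dev/sde /dev/sdf
  vgcreate backupvg /dev/sde /dev/sdf
  lvcreate --type raid1 -m 1 -l 100%FREE -n backuplv backupvg
  mkfs.xfs /dev/backupvg/backuplv

  # recovery: each disk carries the VG metadata, so a rescan finds it again
  vgscan
  vgchange -ay backupvg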

On Fri, Mar 24, 2023 at 9:36 AM Rich Shepard 
wrote:

> On Fri, 24 Mar 2023, Rich Shepard wrote:
>
> > Checking cfdisk for both /dev/sde and /dev/sdf shows both having free space
> > for the entire disk.
>
> A question for the professional sysadmins: having had both disks in a mirrored
> RAID1 array (set up as a logical volume) fail, does it make sense to rebuild the
> RAID, VGs, and LV?
>
> Since a mirrored copy didn't save my backup history, perhaps I should use
> only one disk for backup and save the other as a spare.
>
> Your professional opinion?
>
> TIA,
>
> Rich
>


Re: [PLUG] External drive issue

2023-03-24 Thread Michael Ewan
Do not use EXT4, it will cause you problems down the road.  Use xfs
instead, it has higher reliability and better performance.

On Fri, Mar 24, 2023 at 9:24 AM Rich Shepard 
wrote:

> On Thu, 23 Mar 2023, Rich Shepard wrote:
>
> > I turned off the desktop and removed the power cord to replace the video
> > card. The MediaSonic Probox also powered down. When I plugged in the desktop
> > and turned it on, along with the Probox, the two single drives in the Probox
> > (/data2 and /data3) automatically mounted, but the 2-drive logical volume,
> > /dev/md0 did not mount. When I try to mount it manually I find that it
> > doesn't exist.
>
> Checking cfdisk for both /dev/sde and /dev/sdf shows both having free space
> for the entire disk. I interpret this to mean that I need to re-format each
> with ext4, re-build the raid1 array, then recreate the volume groups and the
> logical partition (as /dev/md0) and mount it as /media/backup. Sigh. I've no
> idea how it became FUBAR'd.
>
> Rich
>


Re: [PLUG] External drive issue

2023-03-24 Thread Rich Shepard

On Fri, 24 Mar 2023, Rich Shepard wrote:


Checking cfdisk for both /dev/sde and /dev/sdf shows both having free space
for the entire disk.


A question for the professional sysadmins: having had both disks in a mirrored
RAID1 array (set up as a logical volume) fail, does it make sense to rebuild the
RAID, VGs, and LV?

Since a mirrored copy didn't save my backup history, perhaps I should use
only one disk for backup and save the other as a spare.

Your professional opinion?

TIA,

Rich


Re: [PLUG] External drive issue

2023-03-24 Thread Rich Shepard

On Thu, 23 Mar 2023, Rich Shepard wrote:


I turned off the desktop and removed the power cord to replace the video
card. The MediaSonic Probox also powered down. When I plugged in the desktop
and turned it on, along with the Probox, the two single drives in the Probox
(/data2 and /data3) automatically mounted, but the 2-drive logical volume,
/dev/md0 did not mount. When I try to mount it manually I find that it
doesn't exist.


Checking cfdisk for both /dev/sde and /dev/sdf shows both having free space
for the entire disk. I interpret this to mean that I need to re-format each
with ext4, re-build the raid1 array, then recreate the volume groups and the
logical partition (as /dev/md0) and mount it as /media/backup. Sigh. I've no
idea how it became FUBAR'd.
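
If I do rebuild it, the sequence as I understand it would be roughly the
following (the VG/LV names are placeholders):

  # recreate the RAID1 array from the two disks
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sde /dev/sdf

  # put LVM on top of the array
  pvcreate /dev/md0
  vgcreate backupvg /dev/md0
  lvcreate -l 100%FREE -n backuplv backupvg

  # filesystem and mount point
  mkfs.ext4 /dev/backupvg/backuplv
  mount /dev/backupvg/backuplv /media/backup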

Rich


Re: [PLUG] External drive issue

2023-03-24 Thread Rich Shepard

On Fri, 24 Mar 2023, Ted Mittelstaedt wrote:


Does
mdadm -Q /dev/data2   or   mdadm -Q /dev/sdX   (whatever the actual disk is) show 
the disk is part of an array?


Ted,

No. There are four WD RED 2T drives in there: /dev/sdc, /dev/sdd, /dev/sde,
and /dev/sdf. The LV is on the latter two.


cat /proc/mdstat   does that show the array is reassembling?


# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] 
unused devices: <none>



Take a look at the commands here:
https://www.digitalocean.com/community/tutorials/how-to-manage-raid-arrays-with-mdadm-on-ubuntu-22-04


mdadm does nothing because there is no /dev/md0
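
For what it's worth, I can also ask the member disks directly whether any md
superblocks survived (whole-disk names as above; the members may have been
partitions instead):

  # prints superblock details if any md metadata is left on the disks
  mdadm --examine /dev/sde /dev/sdf

  # shows whatever filesystem or raid signatures remain
  blkid /dev/sde /dev/sdf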


I've never used a probox but it appears to have no intelligence and merely
acts as a USB drive enclosure for multiple disks, so I'm assuming your
disks show up as individual USB disks


That's true. It's not a NAS, but a four-bay external USB enclosure.

Thanks,

Rich


Re: [PLUG] External drive issue

2023-03-24 Thread Ted Mittelstaedt
Does

mdadm -Q /dev/data2   or   mdadm -Q /dev/sdX   (whatever the actual disk is) show 
the disk is part of an array?

cat /proc/mdstat   does that show the array is reassembling?

Take a look at the commands here:

https://www.digitalocean.com/community/tutorials/how-to-manage-raid-arrays-with-mdadm-on-ubuntu-22-04

I've never used a Probox but it appears to have no intelligence and merely acts 
as a USB drive enclosure for multiple disks, so I'm assuming your disks show up 
as individual USB disks.
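
If any md superblocks are still on the disks, a scan-and-assemble attempt is
cheap to try (generic commands, nothing Probox-specific):

  # look for md superblocks on all block devices and try to assemble the array
  mdadm --assemble --scan --verbose

  # see whether an array appeared
  cat /proc/mdstat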

Ted

-Original Message-
From: PLUG  On Behalf Of Rich Shepard
Sent: Thursday, March 23, 2023 2:12 PM
To: plug@pdxlinux.org
Subject: [PLUG] External drive issue

I turned off the desktop and removed the power cord to replace the video card. 
The MediaSonic Probox also powered down. When I plugged in the desktop and 
turned it on, along with the Probox, the two single drives in the Probox
(/data2 and /data3) automatically mounted, but the 2-drive logical volume,
/dev/md0 did not mount. When I try to mount it manually I find that it doesn't 
exist.

I've gone through this video card swapping several times this week with the LV 
mounting, either automatically or when I do so manually.

Where do I start looking for the reason it's not now seen?

Rich



[PLUG] External drive issue

2023-03-23 Thread Rich Shepard

I turned off the desktop and removed the power cord to replace the video
card. The MediaSonic Probox also powered down. When I plugged in the desktop
and turned it on, along with the Probox, the two single drives in the Probox
(/data2 and /data3) automatically mounted, but the 2-drive logical volume,
/dev/md0 did not mount. When I try to mount it manually I find that it
doesn't exist.

I've gone through this video card swapping several times this week with the
LV mounting, either automatically or when I do so manually.

Where do I start looking for the reason it's not now seen?

Rich