Re: Getting "Boot error" after replacing a disk in softraid
a bit late, but fwiw - I have 7.5 on an OptiPlex 990 on SSD, no issues. Not quite a 980, but I honestly doubt that this matters given my experience on these boxes. (I also have 7.5 on a 960, and have had it on a 955 and 945. There's nothing special about the Dells that I have, and they boot MBR from SSD fine.) AHCI vs IDE shouldn't matter for booting (though IDE is slower), but if you had the ports in RAID mode it might.

# dmesg | egrep 'OpenBSD|Opti|INTEL'
OpenBSD 7.5 (GENERIC.MP) #82: Wed Mar 20 15:48:40 MDT 2024
bios0: Dell Inc. OptiPlex 990
sd0 at scsibus1 targ 0 lun 0: naa.55cd2e404b451004

> On Apr 23, 2024, at 2:08 PM, Martin wrote:
>
> Turns out this machine, for some reason, simply cannot boot off SSDs with
> either OpenBSD or FreeBSD on the box. Only spinning drives work.
>
> It's an old Dell Inc. OptiPlex 980.
>
> I suspect there is some issue with the BIOS of the machine and the BSD
> bootloaders, as Linux with GRUB works on SSDs.
Re: Getting "Boot error" after replacing a disk in softraid [SOLVED]
On 2024-04-25, Chris Petrik wrote:
> Remember softraid isn't the same as hw raid and I will always chose hw over
> soft this includes zfs.

There are advantages and disadvantages to both. E.g. a software setup with multiple disk controllers can, if there's support for data error detection[1], protect against (or at least detect) errors on the hardware->CPU path which might not be noticeable with a single hw raid controller. And in the event of controller/motherboard failure it might not be possible to attach the drives to another machine (unless a suitable spare is at hand) with hw raid, whereas with sw they could often just be moved.

[1] yes, I know OpenBSD softraid(4) doesn't do it, but some others do
Re: Getting "Boot error" after replacing a disk in softraid [SOLVED]
> Hello,
>
> Remember softraid isn't the same as hw raid and I will always chose hw over
> soft this includes zfs.
>
> Chris

I am sorry, but what relevance do your personal preferences have to this issue? FWIW, I have seen more than one example of some really crappy hardware raid controllers that I wouldn't hesitate a split second to replace with ZFS.
Re: Getting "Boot error" after replacing a disk in softraid [SOLVED]
Hello,

Remember softraid isn't the same as hw raid, and I will always choose hw over soft; this includes zfs.

Chris

Sent from Proton Mail Android

 Original Message 
On 4/25/24 3:14 PM, Martin wrote:
> > On Thu, Apr 25, 2024 at 09:12:47AM +0200, Stefan Sperling wrote:
> > > I checked, the softraid manual page already has an example installboot
> > > invocation in EXAMPLES, which should be clear enough.
> > >
> > > Regardless, I've tweaked the wording a bit. Hopefully more clear now.
> >
> > Indeed :) Thank you very much!
Re: Getting "Boot error" after replacing a disk in softraid [SOLVED]
> On Thu, Apr 25, 2024 at 09:12:47AM +0200, Stefan Sperling wrote: > > > I checked, the softraid manual page already has an example installboot > > invocation in EXAMPLES, which should be clear enough. > > > Regardless, I've tweaked the wording a bit. Hopefully more clear now. Indeed :) Thank you very much!
Re: Getting "Boot error" after replacing a disk in softraid [SOLVED]
On Thu, Apr 25, 2024 at 09:12:47AM +0200, Stefan Sperling wrote: > I checked, the softraid manual page already has an example installboot > invocation in EXAMPLES, which should be clear enough. Regardless, I've tweaked the wording a bit. Hopefully more clear now.
Re: Getting "Boot error" after replacing a disk in softraid [SOLVED]
On Thu, Apr 25, 2024 at 03:27:29AM +, Martin wrote: > I eventually found out what was going on. > > The FreeBSD boot problem was not related at all. > > Long story short and for future reference, installboot needs > to be run on the softraid volume, NOT on the physical disk. And this > has to be repeated after a softraid volume rebuild in order for the new > disk to be bootable too. > > This cannot be done from the boot media, but one can boot from media > and then mount the softraid with the working disk and then chroot into > that and run 'installboot sd2' (or whatever device name the softraid > volume has). > > This was not obvious to me. Perhaps because with GRUB one has to install > the bootloader and boot code on each single disk in a mdadm volume and > not on the volume itself. Ah, indeed. Thanks for reporting back with the solution. This should be obvious but it's understandable that new users might miss it. I checked, the softraid manual page already has an example installboot invocation in EXAMPLES, which should be clear enough.
Re: Getting "Boot error" after replacing a disk in softraid [SOLVED]
I eventually found out what was going on. The FreeBSD boot problem was not related at all.

Long story short, and for future reference: installboot needs to be run on the softraid volume, NOT on the physical disk. And this has to be repeated after a softraid volume rebuild in order for the new disk to be bootable too.

This cannot be done directly from the boot media, but one can boot from media, mount the softraid volume via the working disk, chroot into it, and run 'installboot sd2' (or whatever device name the softraid volume has).

This was not obvious to me. Perhaps because with GRUB one has to install the bootloader and boot code on each single disk in a mdadm volume, not on the volume itself.
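The recovery steps described above can be sketched as a small shell function. This is a hedged sketch, not a tested procedure: the volume name sd2, the partition sd2a, and a root-only filesystem layout are assumptions to be adjusted against your own bioctl/dmesg output.

```shell
# Sketch of the fix described above: after booting the install media and
# choosing (S)hell, re-install the boot loader on the softraid VOLUME.
# sd2/sd2a and a single-partition root are assumptions; adjust as needed.
reinstall_softraid_boot() {
  vol=${1:-sd2}                      # the softraid volume, NOT a member disk
  mount "/dev/${vol}a" /mnt          # mount the volume's root filesystem
  # if /usr lives on its own partition, mount it under /mnt too, since the
  # chroot needs /mnt/usr/sbin/installboot and the boot blocks in /usr/mdec
  chroot /mnt installboot -v "$vol"  # write boot blocks via the volume
  umount /mnt
}
# usage, from the install media's shell:
# reinstall_softraid_boot sd2
```

Running installboot against the volume (rather than a member disk) lets softraid place the boot blocks on every chunk, which is what makes the rebuilt disk bootable.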
Re: Getting "Boot error" after replacing a disk in softraid
> That seems... unusual. Do you have an (old) IDE compatibility option turned > on in > the BIOS? I would have expected it to attach via AHCI: > sd0 at scsibus1 targ 2 lun 0:
Re: Getting "Boot error" after replacing a disk in softraid
On Tue, Apr 23, 2024 at 08:26:40PM -0500, Brian Conway wrote: > > wd0 at pciide0 channel 0 drive 0: > > That seems... unusual. Do you have an (old) IDE compatibility option turned > on in the BIOS? I would have expected it to attach via AHCI: > > sd0 at scsibus1 targ 2 lun 0: > naa.5002538e304456ac > > Brian Conway I don't see any special settings in the BIOS. Two of the pertinent screenshots from the BIOS settings are at: https://www.eskimo.com/~joji/optiplex745/ Not sure why the wd(4) driver was chosen over the sd(4) driver.
Re: Getting "Boot error" after replacing a disk in softraid
On 2024-04-24, Brian Conway wrote: >> wd0 at pciide0 channel 0 drive 0: > > That seems... unusual. Do you have an (old) IDE compatibility option turned > on in the BIOS? I would have expected it to attach via AHCI: Optiplex 980 is from ~2010, similar age to the HP N54L microserver etc. Disks connecting by wd(4) and BIOS bugs relating to certain types of storage device were pretty common back then.
Re: Getting "Boot error" after replacing a disk in softraid
> RAID replicates the data in the RAIDed area, yes? > > Do you have some reason to believe that the boot information (MBR, etc) is > _inside_ the RAID area, because I do not believe that. Really feels like > installboot needs to be run on this drive to, uh, install the proper boot > info. > > Philip Guenther installboot has been run.
Re: Getting "Boot error" after replacing a disk in softraid
> wd0 at pciide0 channel 0 drive 0: That seems... unusual. Do you have an (old) IDE compatibility option turned on in the BIOS? I would have expected it to attach via AHCI: sd0 at scsibus1 targ 2 lun 0: naa.5002538e304456ac Brian Conway
Re: Getting "Boot error" after replacing a disk in softraid
RAID replicates the data in the RAIDed area, yes? Do you have some reason to believe that the boot information (MBR, etc) is _inside_ the RAID area, because I do not believe that. Really feels like installboot needs to be run on this drive to, uh, install the proper boot info. Philip Guenther On Tue, Apr 23, 2024 at 8:19 AM wrote: > Also, if I boot from a USB stick, with only the new SSD attached, the > softraid is registered as degraded (as the other old disk is missing), so > it has been populated, and the partition is also marked with an asterisk > for boot, but I still cannot boot from that drive. > >
Re: Getting "Boot error" after replacing a disk in softraid
> FWIW, my current desktop which is a Dell OptiPlex 745 is booting off an SSD.
>
> joji@surya$ dmesg | grep -iE "optiplex|Samsung"
> bios0: Dell Inc. OptiPlex 745
> wd0 at pciide0 channel 0 drive 0:
>
> joji@surya$ uname -a
> OpenBSD surya 7.5 GENERIC.MP#82 amd64
>
> Don't know if your OptiPlex 980 is newer than mine.

I find it strange that this isn't working, but I have tried just going for a standard OpenBSD install and also tested a standard FreeBSD install on two different SSDs - ignoring my original softraid setup - and no matter what I do, it just will not boot with either BSD on either of those disks on this machine. I have tried placing each disk in each different SATA plug, just to eliminate an issue with a specific SATA port, but it's the same result no matter which port is set to boot from. I wiped the disks and installed Devuan Linux with GRUB on the same pair of disks, just to test, and it boots fine from either disk. So, for the moment I have given up running OpenBSD on SSDs on this box.
Re: Getting "Boot error" after replacing a disk in softraid
On Tue, Apr 23, 2024 at 09:08:26PM +, Martin wrote: > Turns out this machine, for some reason, simply cannot boot of SSDs with > neither OpenBSD or FreeBSD on the box. Only spinning drives work. > > It's an old Dell Inc. OptiPlex 980. > > I suspect there is some issue with the BIOS of the machine and the BSD > bootloaders as Linux with GRUB works on SSDs. FWIW, my current desktop which is a Dell OptiPlex 745 is booting off an SSD. joji@surya$ dmesg | grep -iE "optiplex|Samsung" bios0: Dell Inc. OptiPlex 745 wd0 at pciide0 channel 0 drive 0: joji@surya$ uname -a OpenBSD surya 7.5 GENERIC.MP#82 amd64 Don't know if your OptiPlex 980 is newer than mine.
Re: Getting "Boot error" after replacing a disk in softraid
Turns out this machine, for some reason, simply cannot boot off SSDs with either OpenBSD or FreeBSD on the box. Only spinning drives work.

It's an old Dell Inc. OptiPlex 980.

I suspect there is some issue with the BIOS of the machine and the BSD bootloaders, as Linux with GRUB works on SSDs.
Re: Getting "Boot error" after replacing a disk in softraid
Also, if I boot from a USB stick, with only the new SSD attached, the softraid is registered as degraded (as the other old disk is missing), so it has been populated, and the partition is also marked with an asterisk for boot, but I still cannot boot from that drive.
Re: Getting "Boot error" after replacing a disk in softraid
> I suspect this error comes from your BIOS/UEFI rather than the OpenBSD
> boot loader. Did you check how boot drives are configured in firmware?

I already tested that by moving the new disk to another box and booting it from there; unfortunately I get the same error.
Re: Getting "Boot error" after replacing a disk in softraid
On Tue, Apr 23, 2024 at 12:51:41AM +, i...@protonmail.com wrote: > I have a softraid mirror setup with two old spinning disks. I have detached > one of the disks from the mirror and attached a new SSD. I then wanted to > rebuild the mirror, using one old spinning drive and the new SSD, and then > afterwards, remove the old spinning drive and replace with yet another SSD, > ending up with a mirror of two new SSDs. > > After I attached the new SSD to the box, I did: > > fdisk -iy sd1 (the new disk) > > Then I cloned the layout of the old drive onto the new: > > disklabel sd0 > layout > disklabel -R sd1 layout > > Then I used installboot: > > installboot sd1 > > And started rebuilding the mirror: > > bioctl -R /dev/sda1 sd2 (sd2 being the RAID device) > > This worked fine and the mirror is up. > > However, when I now dettach the old drive and boot from only the new SSD, I > get "Boot error". > > What am I missing? I suspect this error comes from your BIOS/UEFI rather than the OpenBSD boot loader. Did you check how boot drives are configured in firmware?
Getting "Boot error" after replacing a disk in softraid
I have a softraid mirror setup with two old spinning disks. I have detached one of the disks from the mirror and attached a new SSD. I then wanted to rebuild the mirror using one old spinning drive and the new SSD, and then afterwards remove the old spinning drive and replace it with yet another SSD, ending up with a mirror of two new SSDs.

After I attached the new SSD to the box, I did:

fdisk -iy sd1 (the new disk)

Then I cloned the layout of the old drive onto the new:

disklabel sd0 > layout
disklabel -R sd1 layout

Then I used installboot:

installboot sd1

And started rebuilding the mirror:

bioctl -R /dev/sd1a sd2 (sd2 being the RAID device)

This worked fine and the mirror is up.

However, when I now detach the old drive and boot from only the new SSD, I get "Boot error".

What am I missing?
RAID5 softraid inside VMM unable to read disklabel
I am practicing setting up RAID5 inside a virtual machine running OpenBSD 7.5 in VMM on OpenBSD 7.4.

I created 3 disks (sd0, sd1, sd2) and 4 disk device nodes, the fourth (sd3) to represent the RAID array itself:

Welcome to the OpenBSD/amd64 7.5 installation program.
(I)nstall, (U)pgrade, (A)utoinstall or (S)hell? s
# cd /dev/
# sh MAKEDEV sd0 sd1 sd2 sd3
# fdisk -iy sd0
Writing MBR at offset 0.
# fdisk -iy sd1
Writing MBR at offset 0.
# fdisk -iy sd2
Writing MBR at offset 0.
# disklabel -E sd0
Label editor (enter '?' for help at any prompt)
sd0> a a
offset: [64]
size: [41942976] *
FS type: [4.2BSD] RAID
sd0*> w
sd0> q
No label changes.
# disklabel sd0 > layout
# disklabel -R sd1 layout
# disklabel -R sd2 layout
# rm layout
# bioctl -c 5 -l sd0a,sd1a,sd2a softraid0
sd3 at scsibus4 targ 1 lun 0:
sd3: 40959MB, 512 bytes/sector, 83884800 sectors
softraid0: RAID 5 volume attached as sd3
# dd if=/dev/zero of=/dev/rsd3c bs=1m count=1
1+0 records in
1+0 records out
1048576 bytes transferred in 0.028 secs (37044791 bytes/sec)

And I verified the RAID5 array is online:

# bioctl sd3
Volume      Status               Size Device
softraid0 0 Online        42949017600 sd3     RAID5
          0 Online        21474533376 0:0.0   noencl
          1 Online        21474533376 0:1.0   noencl
          2 Online        21474533376 0:2.0   noencl

The rest of the OpenBSD installation proceeds as usual using sd3 as the installation disk, but upon reboot, I run into this error:

>> OpenBSD/amd64 BOOT 3.65
open(sr0a:/etc/boot.conf): can't read disk label
boot>
cannot open sr0a:/etc/random.seed: can't read disk label
booting sr0a:/bsd: open sr0a:/bsd: can't read disk label
failed(100). will try /bsd

RAID1 worked fine, it's just RAID5 throwing this error at me.

--
jrmu
IRCNow (https://ircnow.org)
Re: RAID5 softraid inside VMM unable to read disklabel
Please ignore, sibiria on IRC clarified to me that boot support is limited to only RAID1, crypto, and RAID1c disciplines. -- jrmu IRCNow (https://ircnow.org) On Tue, Apr 09, 2024 at 03:50:19PM -0700, jrmu wrote: > I am practicing setting up RAID5 inside a virtual machine running > OpenBSD 7.5 in VMM on OpenBSD 7.4. > > I created 3 disks sd0, sd1, sd2, and sd3, and 4 disk devices (the fourth to > represent the RAID array itself): > > Welcome to the OpenBSD/amd64 7.5 installation program. > (I)nstall, (U)pgrade, (A)utoinstall or (S)hell? s > # cd /dev/ > # sh MAKEDEV sd0 sd1 sd2 sd3 > # fdisk -iy sd0 > Writing MBR at offset 0. > # fdisk -iy sd1 > Writing MBR at offset 0. > # fdisk -iy sd2 > Writing MBR at offset 0. > # disklabel -E sd0 > Label editor (enter '?' for help at any prompt) > sd0> a a > offset: [64] > size: [41942976] * > FS type: [4.2BSD] RAID > sd0*> w > sd0> q > No label changes. > # disklabel sd0 > layout > # disklabel -R sd1 layout > # disklabel -R sd2 layout > # rm layout > # bioctl -c 5 -l sd0a,sd1a,sd2a softraid0 > sd3 at scsibus4 targ 1 lun 0: > sd3: 40959MB, 512 bytes/sector, 83884800 sectors > softraid0: RAID 5 volume attached as sd3 > # dd if=/dev/zero of=/dev/rsd3c bs=1m count=1 > 1+0 records in > 1+0 records out > 1048576 bytes transferred in 0.028 secs (37044791 bytes/sec) > > And I verified the RAID5 array is online: > > # bioctl sd3 > Volume Status Size Device > softraid0 0 Online42949017600 sd3 RAID5 > 0 Online21474533376 0:0.0 noencl > 1 Online21474533376 0:1.0 noencl > 2 Online21474533376 0:2.0 noencl > > The rest of the OpenBSD installation proceeds as usual using sd3 as the > installation disk, but upon reboot, I run into this error: > > >> OpenBSD/amd64 BOOT 3.65 > open(sr0a:/etc/boot.conf): can't read disk label > boot> > cannot open sr0a:/etc/random.seed: can't read disk label > booting sr0a:/bsd: open sr0a:/bsd: can't read disk label > failed(100). will try /bsd > > RAID1 worked fine, it's just RAID5 throwing this error at me. 
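Since the boot loader only supports the RAID1, crypto, and RAID1c disciplines, a bootable variant of the transcript quoted above would use -c 1 instead of -c 5. A minimal sketch under the same device-name assumptions (sd0/sd1 as chunks; the volume attaches as a new sd device):

```shell
# Sketch: same setup as in the transcript above, but with the bootable
# RAID 1 discipline. Device names sd0/sd1 are assumptions.
make_bootable_mirror() {
  fdisk -iy sd0
  fdisk -iy sd1
  # create a single 'a' partition of FS type RAID on sd0 (disklabel -E,
  # 'a a', FS type RAID), then clone the layout to the other chunk:
  disklabel sd0 > /tmp/layout
  disklabel -R sd1 /tmp/layout
  # -c 1 (RAID 1) instead of -c 5; the volume attaches as a new sd device
  bioctl -c 1 -l sd0a,sd1a softraid0
  # continue the install onto the new volume; the installer runs
  # installboot on it for you
}
```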
> > -- > jrmu > IRCNow (https://ircnow.org)
issues booting with single softraid raid 1 drive
Hi misc,

I'm practicing data recovery scenarios with a RAID 1 array on softraid0.

optiplex# bioctl -i softraid0
Volume      Status               Size Device
softraid0 0 Online        64022953984 sd0     RAID1
          0 Online        64022953984 0:0.0   noencl
          1 Online        64022953984 0:1.0   noencl

When I remove one or the other drive, only the M4 (wd0) will boot on its own. I've run installboot on both drives separately. However my machine says "No OS" when I attempt to boot with only the CT250MX (wd1) in the system, no matter which SATA port.

I did have this array on two identical M4 drives, which I think might be how wd1a came to be first in that list, but I'm working on migrating the array to a pair of CT250MX drives. I had the same issue with the other M4 drive. Only the one still in the system boots. I still have that other drive with the data intact if it's somehow useful.

How can I get the CT250MX to boot so I can remove the one remaining M4 and replace it with the other CT250MX, as well as ensure a bootable system if the wrong drive fails?

Thank you very much for your time.
disklabel, fdisk, and dmesg to follow

optiplex# disklabel wd0
# /dev/rwd0c:
type: ESDI
disk: SCSI disk
label: M4-CT064M4SSD2
duid: ca557cab281e33f2
flags:
bytes/sector: 512
sectors/track: 63
tracks/cylinder: 255
sectors/cylinder: 16065
cylinders: 7783
total sectors: 125045424
boundstart: 64
boundend: 125045424

16 partitions:
#                size           offset  fstype [fsize bsize   cpg]
  a:        125045360               64    RAID
  c:        125045424                0  unused

optiplex# disklabel wd1
# /dev/rwd1c:
type: ESDI
disk: ESDI/IDE disk
label: CT250MX500SSD1
duid: 56e90ff073a823cb
flags:
bytes/sector: 512
sectors/track: 63
tracks/cylinder: 255
sectors/cylinder: 16065
cylinders: 30401
total sectors: 488397168
boundstart: 64
boundend: 488397168

16 partitions:
#                size           offset  fstype [fsize bsize   cpg]
  a:        125045360               64    RAID
  c:        488397168                0  unused

optiplex# fdisk wd0
Disk: wd0       geometry: 7783/255/63 [125045424 Sectors]
Offset: 0       Signature: 0xAA55
            Starting         Ending         LBA Info:
 #: id      C   H   S -      C   H   S [       start:        size ]
---
 0: 00      0   0   0 -      0   0   0 [           0:           0 ] Unused
 1: 00      0   0   0 -      0   0   0 [           0:           0 ] Unused
 2: 00      0   0   0 -      0   0   0 [           0:           0 ] Unused
*3: A6      0   1   2 -   7783 182  63 [          64:   125045360 ] OpenBSD

optiplex# fdisk wd1
Disk: wd1       geometry: 30401/255/63 [488397168 Sectors]
Offset: 0       Signature: 0xAA55
            Starting         Ending         LBA Info:
 #: id      C   H   S -      C   H   S [       start:        size ]
---
 0: 00      0   0   0 -      0   0   0 [           0:           0 ] Unused
 1: 00      0   0   0 -      0   0   0 [           0:           0 ] Unused
 2: 00      0   0   0 -      0   0   0 [           0:           0 ] Unused
*3: A6      0   1   2 -  30401  80  63 [          64:   488397104 ] OpenBSD

OpenBSD 7.4 (GENERIC.MP) #2: Fri Dec 8 15:39:04 MST 2023
    r...@syspatch-74-amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP
real mem = 4173328384 (3979MB)
avail mem = 4027121664 (3840MB)
random: good seed from bootblocks
mpath0 at root
scsibus0 at mpath0: 256 targets
mainbus0 at root
bios0 at mainbus0: SMBIOS rev. 2.5 @ 0xf0450 (82 entries)
bios0: vendor Dell Inc. version "A03" date 02/13/2010
bios0: Dell Inc. OptiPlex 780
acpi0 at bios0
acpi0: TCPA checksum error: ACPI 3.0
acpi0: sleep states S0 S3 S4 S5
acpi0: tables DSDT FACP SSDT APIC BOOT ASF!
MCFG HPET TCPA SLIC
acpi0: wakeup devices VBTN(S4) PCI0(S5) PCI4(S5) PCI3(S5) PCI1(S5) PCI5(S5) PCI6(S5) USB0(S3) USB1(S3) USB2(S3) USB3(S3) USB4(S3) USB5(S3)
acpitimer0 at acpi0: 3579545 Hz, 24 bits
acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
cpu0 at mainbus0: apid 0 (boot processor)
cpu0: Intel(R) Core(TM)2 Quad CPU Q9400 @ 2.66GHz, 2660.05 MHz, 06-17-0a, patch 0a0b
cpu0: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,SSE4.1,XSAVE,NXE,LONG,LAHF,PERF,SENSOR,MELTDOWN
cpu0: 32KB 64b/line 8-way D-cache, 32KB 64b/line 8-way I-cache, 3MB 64b/line 12-way L2 cache
cpu0: smt 0, core 0, package 0
mtrr: Pentium Pro MTRR support, 7 var ranges, 88 fixed ranges
cpu0: apic clock running at 332MHz
cpu0: mwait min=64, max=64, C-substates=0.2.2.2.2, IBE
cpu1 at mainbus0: apid 1 (application processor)
cpu1: Intel(R) Core(TM)2 Quad CPU Q9400 @ 2.66GHz, 2660.05 MHz, 06-17-0a, patch 0a0b
cpu1: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,
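As in the "Boot error" softraid thread earlier in this archive, the usual way to make both halves of a softraid RAID 1 mirror bootable is to run installboot against the softraid volume, which writes boot blocks to every chunk, rather than against wd0/wd1 individually. A hedged sketch, assuming the volume attaches as sd0 as in the bioctl output above:

```shell
# Sketch: install boot blocks via the softraid volume so each chunk of
# the RAID 1 mirror carries them. sd0 as the volume name is an assumption
# taken from the bioctl output above.
make_mirror_bootable() {
  vol=${1:-sd0}
  installboot -v "$vol"   # softraid propagates the boot blocks to all chunks
}
# usage:
# make_mirror_bootable sd0
```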
Re: softraid encrypting discipline
Hi,

On Friday, November 17th, 2023 at 08:26, D.A. wrote:
> What encryption algorithm does softraid?

According to the presentation titled "softraid(4) boot" by Stefan Sperling at EuroBSDCon 2015, AES with 256-bit key(s) in XTS mode. (You can check sys/dev/softraid_crypto.c in sys.tar.gz if desired.)

Best,
Masanori
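For context, the discipline in question (softraid crypto) is created with bioctl's -c C. A minimal sketch; the device names sd1/sd1a are placeholders:

```shell
# Sketch: creating a softraid crypto volume, the discipline that uses
# AES in XTS mode with 256-bit keys. sd1/sd1a are placeholder names.
make_crypto_volume() {
  disk=${1:-sd1}
  fdisk -iy "$disk"
  # give the disk a single 'a' partition of FS type RAID (via disklabel -E),
  # then attach the crypto discipline; bioctl prompts for a passphrase:
  bioctl -c C -l "${disk}a" softraid0
  # the decrypted volume attaches as a new sd device; newfs and mount that
}
```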
softraid encrypting discipline
What encryption algorithm does softraid use?

--
Re: Creating a softraid mirror from a regular OpenBSD disk
> >> If you are on
> >> sticks copy machine by three slots are also a solution.
> >
> > Running an OpenBSD system entirely from USB sticks, and using a copy machine
> > to make backups is not a good suggestion for general usage.

Indeed, and it also depends on their size.

> P.S. Daniele, please fix your mailer's reply-to: header.

Joking aside, it is a genuine email forward, set up deliberately. Give me time to check with my provider that everything is fine. It does immediately appear to work well as a deterrent ;-). However, sorry for any inconvenience.

--
Daniele Bonini
Re: Creating a softraid mirror from a regular OpenBSD disk
On Mon, Nov 13, 2023 at 10:15:30AM +0100, Daniele B. wrote: > In few words, when the matter is saving the data of one 1 disk the best > solution is adopt a backup strategy for that purpose. Yes, this is true. I answered the OPs direct question about how to create the raid mirror, but that was _not_ intended as an endorsement of using raid to solve any particular problem. In very simple terms, a raid-1 mirror is useful for protecting data which is often changing, (E.G. busy mail spools), where backups would immediately become out-dated, (although you should make regular backups of that data anyway, in addition to having the mirror). Raid-1 is also useful in some situations where you need high availability of data even though that data is unchanging and easily restored from a backup. An example of high availability of a large and mostly unchanging database might be a music repository for a radio station that is using digital playout, where they cannot risk going silent just because one hard disk broke, but the actual data would be easily restored from backups if a complete system failure did occur. Also note that using multiple disks instead of a single disk can and often does _increase_ the risk of any particular failure. Any one of the disks could develop a fault which takes the host controller out or even makes the PSU explode. Any one of the disks could have firmware issues that cause silent data corruption, and the more disks you have the more likely it is that one of them will suffer such a problem. > If you are on > sticks copy machine by three slots are also a solution. Running an OpenBSD system entirely from USB sticks, and using a copy machine to make backups is not a good suggestion for general usage. Just do a normal installation on a normal hard disk and make normal backups. And don't just use $HOME as an endless rubbish dump for old data. $HOME is for work in progress. Finished stuff should be archived elsewhere and taken out of the regular backup loop. 
P.S. Daniele, please fix your mailer's reply-to: header.
Re: Creating a softraid mirror from a regular OpenBSD disk
The argument has already been touched on recently in other threads.

In few words: when the matter is saving the data of a single disk, the best solution is to adopt a backup strategy for that purpose. You can have a backup strategy that involves one or more spare disks. If you run from USB sticks, a copy machine with three slots is also a solution.

Involving one more disk in RAID 1 is never a good solution, for different reasons, the most important one being that with a disk failure you put the full RAID set at risk; softraid is also constantly running, which is not good for your disks' lifetime, besides slowing down your system.

The advice is a good backup strategy, also as against adopting other kinds of RAID involving more disks, increasing your expense at the important cost of losing a direct touch on your data.
Re: Creating a softraid mirror from a regular OpenBSD disk
On Mon, Nov 13, 2023 at 02:05:34AM +0100, i...@tutanota.com wrote: > Is that possible or does the first disk needs to be reformattet and > repartitioned before adding a second disk? You will need to copy your data elsewhere, then re-partition and create the raid mirror, then copy your data back. Currently there is no tool that will convert a plain volume into one part of a raid mirror set with the data 'in place'.
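The copy-out / repartition / copy-back sequence described above might look roughly like this. This is a compressed sketch, not a tested migration: the backup path, the member disks sd0/sd1, and sd2 as the resulting mirror are all placeholder assumptions.

```shell
# Sketch of the migration described above: no in-place conversion exists,
# so the data takes a round trip. /altroot/stash, sd0/sd1, and sd2 (the
# new mirror) are placeholder names.
migrate_to_mirror() {
  backup=/altroot/stash
  # 1. copy the data somewhere safe
  cd /home && pax -rw . "$backup"
  # 2. repartition both disks with RAID 'a' partitions, then create the mirror
  bioctl -c 1 -l sd0a,sd1a softraid0   # attaches as e.g. sd2
  # 3. make a filesystem on the new volume and copy the data back
  newfs sd2a
  mount /dev/sd2a /home
  cd "$backup" && pax -rw . /home
}
```

In practice this would be a full reinstall onto the new mirror followed by a restore from backup, but the round trip is the essential part.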
Creating a softraid mirror from a regular OpenBSD disk
I have an OpenBSD box running with a single drive. I wanted to add a second drive and then run the two in a softraid mirror, in order for the first disk to not be a single point of failure in the box.

Is that possible, or does the first disk need to be reformatted and repartitioned before adding a second disk?

Thanks.
Re: OT: Running SOFTRAID on PCEngine APU2 via mPCIe to M.2 convertor board for NVME 2230 or 2242
Just a follow-up on this for general interest. I got boards made in Hong Kong from the design done by Tobias Schramm, generously made available on GitHub. I received the boards a few days ago, then ordered the NVMe 2230 to test, received it today, and here we are.

The following tests are done on an APU1, as the others are in use now and I had this one available. I put the mPCIe board in the mPCIe1 slot, put the NVMe on the board, and it worked right away. I will do the tests on the APU2 soon as well, when I get the additional NVMe boards I ordered.

Just FYI, the tests below are done on NixOS, as that's what I had running on the APU1 for testing, so I used that. If there is a need for the tests on OpenBSD I can do that later if anyone is interested.

The ONLY thing I am not sure about is that on the APU2, the lines on the mPCIe schematics for J14 pins 23 and 25 are reversed compared to mPCIe J13. There is a note on the schematics for that. Why that is I can't say, but the APU1 doesn't have it. Based on Tobias's notes, he never said that the NVMe didn't work in both slots, so I will find out somehow.

In any case, the minimum order was 5 boards, and the difference in price was small enough that I ordered 25 instead, so if anyone might be interested, I would be happy to ship some if needed. I had the boards made as I couldn't find any for sale; only 3 companies made them, two of them were out of stock or had none available, and the third was in China, where I didn't order. This is NVMe M-Key for either 2230 or 2242. It doesn't support bigger ones at all; no space in the APU for them.

Just also remember that the mPCIe connectors in the APU use only one lane, not 4. So 1x if you want. But still the results are pretty good: 10x the speed compared to mSATA in there, both Dogfish ones, so a fair comparison I guess. The third drive is a SanDisk SSD.

So if you want to make a little NAS out of an APU, I guess you can, and it would be decent I suppose.
If you want to know more, feel free to contact me off-list, unless more people here want to know more.

I used fio, a standard benchmark tool. I only did the write test on the NVMe, as my other two drives have data and I didn't want to lose it! (; I did the test on the raw device to eliminate anything else that could affect it and hopefully give more realistic results. Same tests on all 3 different drives in the same box. The numbers speak for themselves. And that's MBytes, not Mbits. I can only imagine if I had 4 PCIe lanes... Really not bad for the small APUs.

= NVME (READ) 401MB/sec =

[nix-shell:~]# fio --filename=/dev/nvme0n1 --rw=read --direct=1 --bs=1M --ioengine=libaio --runtime=60 --numjobs=1 --time_based --group_reporting --name=seq_read --iodepth=16
seq_read: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=16
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [R(1)][100.0%][r=383MiB/s][r=383 IOPS][eta 00m:00s]
seq_read: (groupid=0, jobs=1): err= 0: pid=1383: Fri Jun 9 17:54:30 2023
  read: IOPS=382, BW=382MiB/s (401MB/s)(22.4GiB/60042msec)
    slat (usec): min=110, max=4089, avg=156.26, stdev=68.23
    clat (usec): min=13238, max=78390, avg=41671.32, stdev=4830.72
     lat (usec): min=13494, max=80091, avg=41827.59, stdev=4827.39
    clat percentiles (usec):
     |  1.00th=[21103],  5.00th=[40109], 10.00th=[41157], 20.00th=[41681],
     | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681],
     | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206],
     | 99.00th=[64226], 99.50th=[68682], 99.90th=[71828], 99.95th=[71828],
     | 99.99th=[77071]
   bw ( KiB/s): min=339968, max=394475, per=100.00%, avg=391725.72, stdev=4887.40, samples=119
   iops        : min= 332, max= 385, avg=382.43, stdev= 4.77, samples=119
  lat (msec)   : 20=0.93%, 50=96.11%, 100=2.96%
  cpu          : usr=1.25%, sys=7.95%, ctx=22974, majf=0, minf=4108
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=99.9%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%,
>=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=22954,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
   READ: bw=382MiB/s (401MB/s), 382MiB/s-382MiB/s (401MB/s-401MB/s), io=22.4GiB (24.1GB), run=60042-60042msec

Disk stats (read/write):
  nvme0n1: ios=91589/0, merge=0/0, ticks=3717080/0, in_queue=3717080, util=100.00%

== NVME (WRITE) 363MB/sec ==

[nix-shell:~]# fio --filename=/dev/nvme0n1 --rw=write --direct=1 --bs=1M --ioengine=libaio --runtime=60 --numjobs=1 --time_based --group_reporting --name=seq_read --iodepth=16
seq_read: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=16
fio-3.33
Starting 1 pr
Re: softdep / softraid RAID1 issue?
On Mon, Jun 5, 2023 at 3:07 AM Nick Holland wrote: > > =- > PREVIOUS=(find previous backup) > TODAY=(today's date) > OLDEST=(find oldest backup in the set) > REMOTE=(machine we are backing up) > > # remove oldest backup > rm -r $OLDEST & > > mkdir $TODAY > > # make new backup > rsync --link-dest $PREVIOUS $REMOTE $TODAY > =- > > [REDACTED] > > Here's where it gets weird -- removing the '&' after the rm -r $OLDEST > line seems to have FIXED THE PROBLEM. No problems in 18 days, which is > a pretty good record. > Just spitballing here... you were running the removal of the oldest in the background while bringing in new data for the backup. Maybe it was hitting an I/O ceiling of some kind under those conditions? May still warrant investigation since it could still hit this "ceiling" under a big enough I/O load. -- Aaron Mason - Programmer, open source addict I've taken my software vows - for beta or for worse
softdep / softraid RAID1 issue?
Hiya.

tl;dr version: multiple machines with softraid RAID1 & softdep have file systems freeze when doing lots of I/O, possibly involving adding and removing links from the same files at the same time. Workaround found. Need help finding better diagnostic information.

long version:
=
I have a couple systems which have had an issue for a long time where suddenly, disk activity would just ... stop. No message on console, no panic. Usually, I can still log in, but if I touch the affected file systems, my SSH session (or console login) freezes. Always happens during some intense disk activity (more on that in a moment).

When it happens, I can not reboot the system without a hard reset or power cycle (and these systems have multi-TB file systems on them, so doing this is painful). umount on the impacted file systems hangs. I'm not really sure if I lose all file systems or just some. Most of the file systems are very "static". Some things seem to work for a while, but it could well be just cached data.

I tried swapping computers, replacing disks, and even doing weekly reboots, all to seemingly no impact. The problem has occurred for well over a year, maybe longer. Upgraded frequently to the most recent snapshot, no change seen (I often use hangs as an opportunity to upgrade).

Recently, however, I think I caught it in mid-failure. Disk activity was still going on, but it was very slow. 'top' showed "WAIT" on softdep (or something similar). The jobs that should have been long done were still running according to the logs (new files being added to the list of files backed up), but very very slowly. And then it came to a hard stop, as before. This may have been an unusual event, or I might just have happened to look in the right place to see something was still happening.

These machines are used for rsync --link-dest backup.
The short version of the algorithm is something like this:

=-
PREVIOUS=(find previous backup)
TODAY=(today's date)
OLDEST=(find oldest backup in the set)
REMOTE=(machine we are backing up)

# remove oldest backup
rm -r $OLDEST &

mkdir $TODAY

# make new backup
rsync --link-dest $PREVIOUS $REMOTE $TODAY
=-

This backup process basically makes a hard link for files that haven't changed, and copies over files that did change. After the first backup, all future backups are incrementals, both in time and additional disk usage. A bunch of these will typically be running at the same time, maybe five to ten of them (adjusting this number didn't seem to have any effect). When it fails, usually a few succeed, and the rest just never complete.

Here's where it gets weird -- removing the '&' after the rm -r $OLDEST line seems to have FIXED THE PROBLEM. No problems in 18 days, which is a pretty good record.

SPECULATION: the rm and rsync processes running at the same time can potentially be putting both new links and removing old links from the same file at the same time (well... multi-tasking definition of "same time"). Maybe something is having a problem with this.

I have another machine running the same backup process, which has not had a problem. It has been running happily for 123 days now (yeah, I kinda forgot about it). It is a little laptop, so only one hard disk, but I am using a softraid encrypted disk. So yes, also using softraid, but only one medium to read/write to. So maybe associated with either RAID1 or multiple disk I/Os.

So, my problem is worked around, but I suspect there's still a bug there. I am happy to put the '&' back and gather more information next time it happens... if someone tells me what info to gather.

Nick.
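The hard-link behaviour described above (what rsync --link-dest effectively does for an unchanged file) can be sketched with a plain ln. This is a minimal illustration against temp files, not the actual backup script; the paths and file names are made up, and the stat flag shown is the GNU coreutils one:

```shell
set -eu
tmp=$(mktemp -d)
mkdir "$tmp/prev" "$tmp/today"
echo "unchanged" > "$tmp/prev/a.txt"

# For an unchanged file, --link-dest hard-links it into the new backup
# directory instead of copying the data again:
ln "$tmp/prev/a.txt" "$tmp/today/a.txt"
links=$(stat -c %h "$tmp/today/a.txt")   # GNU stat; on BSD use: stat -f %l

# Removing the oldest backup only drops one link; the data survives
# as long as any newer backup still references the inode:
rm -rf "$tmp/prev"
content=$(cat "$tmp/today/a.txt")
rm -rf "$tmp"
```

The link count is 2 while both backup trees exist, and the file still reads back after the oldest tree is removed. That is why the concurrent `rm -r $OLDEST` is safe for correctness; the suspected problem is the heavy simultaneous link-add/link-remove traffic it generates, not data loss.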
Machine that has had problems, but fixed by no longer backgrounding the rm -r $OLDEST backup: OpenBSD 7.3-current (GENERIC.MP) #1175: Wed May 3 08:19:33 MDT 2023 dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP real mem = 5872685056 (5600MB) avail mem = 5675061248 (5412MB) random: good seed from bootblocks mpath0 at root scsibus0 at mpath0: 256 targets mainbus0 at root bios0 at mainbus0: SMBIOS rev. 2.7 @ 0xeb240 (42 entries) bios0: vendor AMI version "7.16" date 01/18/2012 bios0: Hewlett-Packard p6-2108p acpi0 at bios0: ACPI 4.0 acpi0: sleep states S0 S3 S4 S5 acpi0: tables DSDT FACP APIC MCFG SLIC HPET SSDT SSDT DBGP acpi0: wakeup devices SBAZ(S4) P0PC(S4) UHC1(S3) UHC2(S3) USB3(S3) UHC4(S3) USB5(S3) UHC6(S3) UHC7(S3) XHC0(S3) XHC1(S3) PE20(S4) PE21(S4) PE22(S4) PE23(S4) BR15(S4) [...] acpitimer0 at acpi0: 3579545 Hz, 32 bits acpimadt0 at acpi0 addr 0xfee0: PC-AT compat cpu0 at mainbus0: apid 0 (boot processor) cpu0: AMD E2-3200 APU with Radeon(tm) HD Graphics, 2395.89 MHz, 12-01-00 cpu0: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,HTT,SSE3,MWAIT,CX16,POPCNT,NXE,MMXX,FFXSR,PAGE1GB,RDTSCP,LONG,3DNO
Re: Overwriting softraid keys
On Thu, May 25, 2023 at 09:35AM Stefan Sperling wrote:
> On Wed, May 24, 2023 at 04:37:00PM +, Francesco Toscan wrote:
>> Hi misc@,
>>
>> I'm going to migrate a FreeBSD ZFS-based fileserver to a OpenBSD 7.3
>> UFS-based one.
>> In order to comply with regulations, part of data must be encrypted;
>> regulations also dictate that I have to be able to destroy the encryption
>> keys.
[...]
>> To "destroy" the keys I think it could be sufficient to use dd and overwrite
>> the first megabyte of the softraid chunk with random data.
>
> Yes, indeed. There is only one section of meta-data at the beginning of the
> chunk and if this meta-data is lost then the decryption key is gone as well.
[...]

Thank you for the detailed explanation, much appreciated. For the record, bioctl and the stack do comply.

> It is not yet possible to encrypt a key disk with a passphrase, which would
> provide two-factor authentication. There is no technical reason which would
> prevent this from being implemented, it just hasn't been done.

From a user perspective, a user who is not able to help coding, I can just say that encrypting a key disk with a passphrase would be great.

Thanks for your time,
f
Re: Overwriting softraid keys
On Wed, May 24, 2023 at 04:37:00PM +, Francesco Toscan wrote: > Hi misc@, > > I'm going to migrate a FreeBSD ZFS-based fileserver to a OpenBSD 7.3 > UFS-based one. > In order to comply with regulations, part of data must be encrypted; > regulations also dictate that I have to be able to destroy the encryption > keys. > > So, I want to split data into multiple partitions, mounted read-only (it's > "cold" data, there's no point in mounting rw); one of them, of about 50GB, > will be a chunk dedicated to softraid. The volume will be assembled by hand > and the on-disk encryption key will be encrypted with a user supplied > password (right, regulations). > If I understand correctly the 2010 paper by Marco Peereboom, he designed the > crypto softraid discipline so the encrypted keys would be saved in a variable > part of softraid medatata, stored at the beginning of the chosen chunk, after > an offset of 512 bytes. > To "destroy" the keys I think it could be sufficient to use dd and overwrite > the first megabyte of the softraid chunk with random data. > Am I missing something? > > Thanks, > f > > Yes, indeed. There is only one section of meta-data at the beginning of the chunk and if this meta-data is lost then the decryption key is gone as well. The password ("mask key") read from user input by 'bioctl -cC' will not be written to disk. It will be used to encrypt the volume key which is stored on disk. As such, bioctl should comply with your regulations out of the box. There is also an option to use a key disk which stores a randomly generated mask key in unencrypted form on an arbitrary RAID disklabel partition, which would usually sit on an external USB drive or SD card. This replaces the passphrase with a physical object but otherwise behaves the same way. Destroying the first 1MB of the softraid chunk would likewise render the key disk useless. It is not yet possible to encrypt a key disk with a passphrase, which would provide two-factor authentication. 
There is no technical reason which would prevent this from being implemented, it just hasn't been done.
Overwriting softraid keys
Hi misc@,

I'm going to migrate a FreeBSD ZFS-based fileserver to an OpenBSD 7.3 UFS-based one. In order to comply with regulations, part of the data must be encrypted; regulations also dictate that I have to be able to destroy the encryption keys.

So, I want to split the data into multiple partitions, mounted read-only (it's "cold" data, there's no point in mounting rw); one of them, of about 50GB, will be a chunk dedicated to softraid. The volume will be assembled by hand and the on-disk encryption key will be encrypted with a user-supplied password (right, regulations).

If I understand correctly the 2010 paper by Marco Peereboom, he designed the crypto softraid discipline so the encrypted keys would be saved in a variable part of the softraid metadata, stored at the beginning of the chosen chunk, after an offset of 512 bytes. To "destroy" the keys I think it could be sufficient to use dd and overwrite the first megabyte of the softraid chunk with random data. Am I missing something?

Thanks,
f
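The dd overwrite proposed in this thread can be sketched safely against a scratch file standing in for the raw softraid chunk. In real use the target would be the raw partition device (e.g. something like /dev/rsd2a), and the FAKEMETA marker below is purely hypothetical; real softraid metadata looks nothing like it:

```shell
set -eu
chunk=$(mktemp)   # stand-in for the raw chunk; real use targets the RAID partition device
dd if=/dev/zero of="$chunk" bs=1048576 count=4 2>/dev/null

# Plant a hypothetical marker where softraid keeps its metadata
# (8192 bytes, i.e. 16 512-byte blocks, into the partition):
printf 'FAKEMETA' > "$chunk.marker"
dd if="$chunk.marker" of="$chunk" bs=512 seek=16 conv=notrunc 2>/dev/null

probe() { dd if="$chunk" bs=512 skip=16 count=1 2>/dev/null | head -c 8; }
if probe | cmp -s - "$chunk.marker"; then before=present; else before=absent; fi

# "Destroy the keys": overwrite the first megabyte with random data.
dd if=/dev/urandom of="$chunk" bs=1048576 count=1 conv=notrunc 2>/dev/null

if probe | cmp -s - "$chunk.marker"; then after=present; else after=absent; fi
rm -f "$chunk" "$chunk.marker"
```

Since the metadata sits 8k into the chunk, a 1MB overwrite from offset 0 covers it with a wide margin, which matches Stefan's confirmation that losing that region takes the decryption key with it.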
Re: OT: Running SOFTRAID on PCEngine APU2 via mPCIe to M.2 convertor board for NVME 2230 or 2242
On Sun, 21 May 2023 07:28:25 -0400 Daniel Ouellet wrote:
> Hi,
>
> Anyone ever was able to find a mPCIe to M.2 convertor board on Amazon
> that works for using M.2 NVME 2230 or 2242 drives or even M.2 SATA
> (NGFF) in the APU2 like this:
>
> https://github.com/TobleMiner/M.2-NVMe-SSD-to-miniPCIe-adapter
>
> Scroll to the end and see the picture of the drives inside the APU2.
>
> The mSATA goes in the J12 slot as explained below (URL), but the J13
> and J14 are mPCIe slot, so it should be possible with the proper
> adapter to also have an M.2 drives in this small box.
>
> https://github.com/pcengines/apu2-documentation/blob/master/docs/APU_mPCIe_capabilities.md
>
> Then may be I can run softraid on my OpenBSD APU2.
>
> I would very much appreciated if anyone happen to know the model that
> they use or know that is working.
>
> Amazon have a very long list, but the description isn't to useful and
> describe for use with USB, or wireless card and there is so many
> different keys type, etc.
>
> Many thanks for your time.
>
> Daniel

It is not Amazon, but here you can find one on AliExpress:

https://www.aliexpress.com/item/1005004641053693.html
OT: Running SOFTRAID on PCEngine APU2 via mPCIe to M.2 convertor board for NVME 2230 or 2242
Hi,

Was anyone ever able to find an mPCIe to M.2 converter board on Amazon that works for M.2 NVMe 2230 or 2242 drives, or even M.2 SATA (NGFF), in the APU2, like this:

https://github.com/TobleMiner/M.2-NVMe-SSD-to-miniPCIe-adapter

Scroll to the end and see the picture of the drives inside the APU2.

The mSATA drive goes in the J12 slot as explained below (URL), but J13 and J14 are mPCIe slots, so it should be possible with the proper adapter to also have M.2 drives in this small box.

https://github.com/pcengines/apu2-documentation/blob/master/docs/APU_mPCIe_capabilities.md

Then maybe I can run softraid on my OpenBSD APU2.

I would very much appreciate it if anyone happens to know the model they use or knows one that works.

Amazon has a very long list, but the descriptions aren't too useful, mostly describing use with USB or wireless cards, and there are so many different key types, etc.

Many thanks for your time.

Daniel
Re: Encrypted softraid - Operational question
Thanks man. Will use it.

On 2023-05-01 11:39, Thomas Bohl wrote:
> Hi
>
>> In a server with an encrypted root - server boots with key in USB stick,
>> not passphrase. Can I remove the USB stick with the key, after the server
>> is up and running?
>
> Yes
>
>> Will I have any problems doing that?
>
> No. Though not at the moment, I used such a setup for years. Only
> inserting the stick for reboots.
Re: Encrypted softraid - Operational question
Hi

> In a server with an encrypted root - server boots with key in USB stick,
> not passphrase. Can I remove the USB stick with the key, after the server
> is up and running?

Yes

> Will I have any problems doing that?

No. Though not at the moment, I used such a setup for years. Only inserting the stick for reboots.
Encrypted softraid - Operational question
Hi misc, In a server with an encrypted root - server boots with key in USB stick, not passphrase. Can I remove the USB stick with the key, after the server is up and running? Will I have any problems doing that? I know that in the case of a reboot, it will be necessary to go and re-insert the USB stick holding the encryption key. I plan to use a good UPS/batteries to avoid that. Thanks in advance.
Re: Softraid crypto metadata backup
>On Sat, Jan 07, 2023 at 02:33:31PM +, Nathan Carruth wrote: >>The way I see it, this depends on one's use case. >>There certainly are cases where it is important to be able >>to irrevocably destroy all data in an instant. But there are >>also use cases where one is only interested in making sure >>that the average person couldn’t access one’s data if one lost >>one’s laptop/external drive. >> >>I still think that anyone with the second use case could benefit >>from more documentation as I suggested, but I get the feeling >>this opinion is in the distinct minority here. > >If you're part of that supposed minority: count me in. If it's true >that the headers of encrypted disks on Openbsd are set up in a similar >way as on e.g. Linux, then it's actually a good idea to be able to >have precise knowledge about how to backup that header on OBSD. > From what I learned the preparation for that failure includes first a >copy of the data on that encrypted disk to a second or - even better - >a third encrypted one. > >But copying back a whole disk because on the original broken one it's >just the header that went south might be an effort that is a least >avoidable. And this when part two of disaster preparation might be >helpful: the so-called header backup. > >I don't know how often, and if, such header corruptions on encrypted >disks happen on OBSD, but on LUKS/cryptsetup encrypted disks on >Linux this does not seem to be that unusual - from the LUKS FAQ: > >"By far the most questions on the cryptsetup mailing list are from > people that managed to damage the start of their LUKS partitions, > i.e. the LUKS header. In most cases, there is nothing that can be done > to help these poor souls recover their data. Make sure you understand > the problem and limitations imposed by the LUKS security model BEFORE > you face such a disaster! In particular, make sure you have a current > header backup before doing any potentially dangerous operations." 
>
>https://gitlab.com/cryptsetup/cryptsetup/-/blob/main/FAQ.md
>
>And so, that no one is getting me wrong: I don't expect anyone to
>create the documentation for me. Definitively not. But if disk header
>corruption can be a problem on Openbsd, too, then this very
>possibility helps at least to understand the point the OP was trying
>to make when starting this thread.
>
>Regards,
>Wolfgang

I wasn't going to say anything more, but after reading this I figured I ought to at least suggest an update to the documentation. Since I'm definitely not qualified to document the details, all I can really do is something like what is posted below (these should be diffs to the current versions of src/share/man/man4/softraid.4 and www/faq/faq14.html on CVS).

Thanks,
Nathan

For softraid.4:

@@ -270,0 +271,4 @@
+.Pp
+The CRYPTO discipline emphasizes confidentiality over integrity.
+In particular, corruption of the on-disk metadata will render all encrypted
+data permanently inaccessible.

For faq14.html:

@@ -835,0 +836,9 @@
+Note
+
+Decryption requires the use of on-disk metadata documented in the
+<a href="https://cvsweb.openbsd.org/src/sys/dev/softraidvar.h">source</a>.
+Corruption of this metadata can easily render all encrypted data
+permanently inaccessible.
+There is at present no automated way to backup this metadata.
+
Re: Softraid crypto metadata backup
On Sat, Jan 07, 2023 at 02:33:31PM +, Nathan Carruth wrote:
> The way I see it, this depends on one's use case. There certainly are
> cases where it is important to be able to irrevocably destroy all data
> in an instant. But there are also use cases where one is only interested
> in making sure that the average person couldn’t access one’s data if one
> lost one’s laptop/external drive.
>
> I still think that anyone with the second use case could benefit from
> more documentation as I suggested, but I get the feeling this opinion is
> in the distinct minority here.

If you're part of that supposed minority: count me in. If it's true that the headers of encrypted disks on Openbsd are set up in a similar way as on e.g. Linux, then it's actually a good idea to be able to have precise knowledge about how to back up that header on OBSD.

I followed this thread, but kept silent so far - simply because I basically don't know much about how disk headers are organized on Openbsd.

> So — thanks to everyone for the answers, I’m signing off this question
> now. Take care and stay secure, Nathan

> Nathan Carruth writes:
>> permanently and irrevocably destroy all data on your entire disk”.
>
> This is a feature.

Can be a feature in some situations, yes. But that's just one part of the story, if I understand correctly.

> More so, it's the very point in an encrypted filesystem. If you haven't
> planned for this failure scenario

The OP was trying to prepare exactly for this very scenario. That was obviously the whole purpose of starting the thread.

From what I learned, the preparation for that failure includes first a copy of the data on that encrypted disk to a second or - even better - a third encrypted one. But copying back a whole disk because on the original broken one it's just the header that went south might be an effort that is at least avoidable. And this is when part two of disaster preparation might be helpful: the so-called header backup.
I don't know how often, and if, such header corruptions on encrypted disks happen on OBSD, but on LUKS/cryptsetup encrypted disks on Linux this does not seem to be that unusual - from the LUKS FAQ:

"By far the most questions on the cryptsetup mailing list are from people that managed to damage the start of their LUKS partitions, i.e. the LUKS header. In most cases, there is nothing that can be done to help these poor souls recover their data. Make sure you understand the problem and limitations imposed by the LUKS security model BEFORE you face such a disaster! In particular, make sure you have a current header backup before doing any potentially dangerous operations."

https://gitlab.com/cryptsetup/cryptsetup/-/blob/main/FAQ.md

And so, that no one is getting me wrong: I don't expect anyone to create the documentation for me. Definitively not. But if disk header corruption can be a problem on Openbsd, too, then this very possibility helps at least to understand the point the OP was trying to make when starting this thread.

Regards,
Wolfgang

> then what are you doing using a device which *by design* can irrevocably
> trash its contents in an instant?
Softraid crypto metadata backup
The way I see it, this depends on one's use case. There certainly are cases where it is important to be able to irrevocably destroy all data in an instant. But there are also use cases where one is only interested in making sure that the average person couldn’t access one’s data if one lost one’s laptop/external drive. I still think that anyone with the second use case could benefit from more documentation as I suggested, but I get the feeling this opinion is in the distinct minority here. So — thanks to everyone for the answers, I’m signing off this question now. Take care and stay secure, Nathan >Nathan Carruth writes: >> permanently and irrevocably destroy all data on your entire disk”. > >This is a feature. More so, it's the very point in an encrypted >filesystem. If you haven't planned for this failure scenario then >what are you doing using a device which *by design* can irrevocably >trash its contents in an instant? > >Matthew
Re: Re: Re: Re: Re: Softraid crypto metadata backup
Nathan Carruth writes: > permanently and irrevocably destroy all data on your entire disk”. This is a feature. More so, it's the very point in an encrypted filesystem. If you haven't planned for this failure scenario then what are you doing using a device which *by design* can irrevocably trash its contents in an instant? Matthew
Re: Re: Re: Re: Softraid crypto metadata backup
None of those issues are of the form “a hundred bad bytes will permanently and irrevocably destroy all data on your entire disk”. Unless I am mistaken, crypto header corruption is.

On Jan 05 22:22:44, n.carr...@alum.utoronto.ca wrote:
> Given that one of the goals of the OpenBSD project is to produce
> reliable documentation, I would have expected that this kind of potential
> corruption would have been at least mentioned somewhere. Surely we don’t
> expect every user to read the code for all the software they use to be
> sure there are no well-known but undocumented data holes?

If an ffs's superblocks get corrupted, the fs will be unusable. If a file's inode gets corrupted, the file will be unusable. Should this be mentioned in the respective manpages?

Also, libc.so corruption will break all dynamically linked binaries. And if /bsd gets corrupted, the system will be unbootable. Are these undocumented data holes? Are you distressed to find so potentially huge an issue completely undocumented?

Jan

> Even just a line like this would be useful:
>
> “Note: bioctl(8) writes header information (such as salt values for
> crypto volumes) at the start of the original partition. See [relevant
> source file] for details. If this information should become corrupted,
> the softraid(4) volume will become unusable.”
>
> Thanks!
> Nathan
>
> PS I have been using OpenBSD since 2010. I like it very much in many
> ways, but I am distressed to find so potentially huge an issue completely
> undocumented.
Re: Re: Re: Re: Softraid crypto metadata backup
On Jan 05 22:22:44, n.carr...@alum.utoronto.ca wrote:
> Given that one of the goals of the OpenBSD project is to produce
> reliable documentation, I would have expected that this kind of potential
> corruption would have been at least mentioned somewhere. Surely we don’t
> expect every user to read the code for all the software they use to be
> sure there are no well-known but undocumented data holes?

If an ffs's superblocks get corrupted, the fs will be unusable. If a file's inode gets corrupted, the file will be unusable. Should this be mentioned in the respective manpages?

Also, libc.so corruption will break all dynamically linked binaries. And if /bsd gets corrupted, the system will be unbootable. Are these undocumented data holes? Are you distressed to find so potentially huge an issue completely undocumented?

Jan

> Even just a line like this would be useful:
>
> “Note: bioctl(8) writes header information (such as salt values for
> crypto volumes) at the start of the original partition. See [relevant
> source file] for details. If this information should become corrupted,
> the softraid(4) volume will become unusable.”
>
> Thanks!
> Nathan
>
> PS I have been using OpenBSD since 2010. I like it very much in many
> ways, but I am distressed to find so potentially huge an issue completely
> undocumented.
Re: Re: Re: Softraid crypto metadata backup
> On 2023-01-05, Nathan Carruth wrote: >> Thank you for your response. >> >> To clarify: I am not asking about backups proper >> (though I appreciate the suggestions). My only >> question is how to make a copy of the crypto metadata. > >dd the start of the partition, it's stored 16 blocks (8k) into the partition >and for the current version of softraid it's 64 blocks (32k) long. > >But it's useless without the data so unless you are doing unsupported things >like poking at the softraid partition size, etc, and want to make a backup >before doing that then I don't see how it helps you. (And if you *are* doing >that then I'd hope you don't have to ask how to back it up first). > >And unless you detach the softraid device first (or don't attach in the first >place) it will be marked dirty. Thank you, this is exactly what I was looking for. For the record: I want a way to save the metadata for restoration in case of accidental corruption. Security concerns aside, I don’t see why this is any different from backing up partition and disklabel information as Nick suggested. I understand both GELI and cgd provide standard and documented ways of doing this. When I first learned about header corruption in LUKS I was relieved that it wasn’t an issue in OpenBSD. Then a year later I suddenly learned otherwise — from a non-OpenBSD source. Given that one of the goals of the OpenBSD project is to produce reliable documentation, I would have expected that this kind of potential corruption would have been at least mentioned somewhere. Surely we don’t expect every user to read the code for all the software they use to be sure there are no well-known but undocumented data holes? Even just a line like this would be useful: “Note: bioctl(8) writes header information (such as salt values for crypto volumes) at the start of the original partition. See [relevant source file] for details. If this information should become corrupted, the softraid(4) volume will become unusable.” Thanks! 
Nathan PS I have been using OpenBSD since 2010. I like it very much in many ways, but I am distressed to find so potentially huge an issue completely undocumented.
Re: Probable error in softraid(4) documentation
Namaste Stuart, Tobias, > Sent: Thursday, January 05, 2023 at 4:58 PM > From: "Tobias Fiebig" > To: misc@openbsd.org > Subject: Re: Probable error in softraid(4) documentation > > Heho, > > On Wed, 2023-01-04 at 00:04 +, Stuart Henderson wrote: > > stacking would refer to creating one softraid (say a raid1 mirror) > > and then creating a separate softraid device (say a crypto volume) > > using the first softraid disk as a component. > > Incidentally, if you happen to have a thing for doing _really_ stupid > things, you _can_ actually do that; I ran RAID10 that way for some > time... but then again I would not recommend anybody else doing it for > data they are in some form even mildly attached to... ;-) > > I think the important part here is the meaning of 'supported'; In the > context of the man page i read it as: > > 'By holding it a certain way, you _could_ make stacking "work". > However, it will not integrate well into the system, you will have to > jump through interesting hoops, there is not even a remote suggestion > of standard operations working (say, replacing a disk), and you are > more likely than not to encounter unexpected behavior which will > ultimately migrate the data stored on your disks to an unstructured and > unusable pile of mangled bits; Furthermore, in case that _does_ happen, > please do not annoy people with your stacking attempt; We told you not > to do it, we told you it would hurt, and you did it anyway.' > > So... i'd say there is no error in softraid(4), even if you can > technically make stacking "work". > > With best regards, > Tobias > > Dhanyavaad for helping me understand that RAID 1C is a single discipline. I had incorrectly understood what stacking meant. I apologize for my mistake. Dhanyavaad, Dharma Artha Kama Moksha
Re: Probable error in softraid(4) documentation
Heho,

On Wed, 2023-01-04 at 00:04 +, Stuart Henderson wrote:
> stacking would refer to creating one softraid (say a raid1 mirror)
> and then creating a separate softraid device (say a crypto volume)
> using the first softraid disk as a component.

Incidentally, if you happen to have a thing for doing _really_ stupid things, you _can_ actually do that; I ran RAID10 that way for some time... but then again I would not recommend anybody else doing it for data they are in some form even mildly attached to... ;-)

I think the important part here is the meaning of 'supported'; in the context of the man page I read it as:

'By holding it a certain way, you _could_ make stacking "work". However, it will not integrate well into the system, you will have to jump through interesting hoops, there is not even a remote suggestion of standard operations working (say, replacing a disk), and you are more likely than not to encounter unexpected behavior which will ultimately migrate the data stored on your disks to an unstructured and unusable pile of mangled bits; Furthermore, in case that _does_ happen, please do not annoy people with your stacking attempt; We told you not to do it, we told you it would hurt, and you did it anyway.'

So... I'd say there is no error in softraid(4), even if you can technically make stacking "work".

With best regards,
Tobias
Re: Re: Re: Softraid crypto metadata backup
On 2023-01-05, Nathan Carruth wrote: > Thank you for your response. > > To clarify: I am not asking about backups proper > (though I appreciate the suggestions). My only > question is how to make a copy of the crypto metadata. dd the start of the partition, it's stored 16 blocks (8k) into the partition and for the current version of softraid it's 64 blocks (32k) long. But it's useless without the data so unless you are doing unsupported things like poking at the softraid partition size, etc, and want to make a backup before doing that then I don't see how it helps you. (And if you *are* doing that then I'd hope you don't have to ask how to back it up first). And unless you detach the softraid device first (or don't attach in the first place) it will be marked dirty.
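The numbers given here (metadata 16 blocks in, 64 blocks long, 512-byte blocks) translate into a dd backup like the sketch below. It runs against a scratch file with a hypothetical METAHERE marker; against a real volume the input would be the raw RAID partition (e.g. /dev/rsd1a), and as noted above the volume should be detached first so the copy isn't marked dirty:

```shell
set -eu
part=$(mktemp)    # stand-in for the RAID partition, e.g. /dev/rsd1a
dd if=/dev/zero of="$part" bs=512 count=256 2>/dev/null
printf 'METAHERE' | dd of="$part" bs=512 seek=16 conv=notrunc 2>/dev/null

# Back up the metadata region: 16 blocks (8k) in, 64 blocks (32k) long.
dd if="$part" of="$part.meta" bs=512 skip=16 count=64 2>/dev/null
size=$(( $(wc -c < "$part.meta") ))

# Simulate corruption of the on-disk copy ...
dd if=/dev/zero of="$part" bs=512 seek=16 count=64 conv=notrunc 2>/dev/null
# ... and put the backup back in place.
dd if="$part.meta" of="$part" bs=512 seek=16 conv=notrunc 2>/dev/null

restored=$(dd if="$part" bs=512 skip=16 count=1 2>/dev/null | head -c 8)
rm -f "$part" "$part.meta"
```

The backup file comes out at exactly 32768 bytes, and the marker reads back after the restore. As the post warns, restoring to the wrong offset on a real disk can make things worse, so double-check skip/seek before pointing dd at a device.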
Re: Re: Softraid crypto metadata backup
Hi,

Please fix your email client to correctly attribute quotes in list mail that you reply to.

On Thu, Jan 05, 2023 at 02:13:53PM +, Nathan Carruth wrote:
> Thank you for your response (apologies that I just saw this).
>
> I will have a look at the file you mentioned.
>
> I am curious what you mean by this:
>
> “Backing up, restoring or otherwise messing with the softraid metadata
> without using the standard tools is an advanced subject”
>
> as far as I know there aren’t any standard tools for doing any of this?
> If there is, it is probably all I need.

The standard tools, (basically bioctl), allow you to create softraid volumes, change the passphrase and do a few other tasks. I published a separate program to resize crypto volumes. This is all that most users need.

You are asking about and trying to do something that is completely outside the scope of being 'supported'. It's not recommended, nor considered to be useful.
Re: Softraid crypto metadata backup
Thank you for your response (apologies that I just saw this). I will have a look at the file you mentioned. I am curious what you mean by this: “ Backing up, restoring or otherwise messing with the softraid metadata without using the standard tools is an advanced subject” as far as I know there aren’t any standard tools for doing any of this? If there is, it is probably all I need. On Thu, Jan 05, 2023 at 05:13:05AM +, Nathan Carruth wrote: > I presume that OpenBSD also writes on-disk metadata of the > same sort somewhere. Where? Look at /usr/src/sys/dev/softraidvar.h. The structures that contain the softraid metadata are defined there. There is general softraid metadata, and crypto specific metadata. These are stored near the beginning of the RAID partition as defined in the disklabel. In fact, they are SR_META_OFFSET blocks from the start, which is currently 8192 bytes. You can also look at this on your own disk with dd and hexdump to familiarise yourself with what the layout looks like, (useful for future reference). Or read my article about resizing softraid volumes for some examples. > I know I could dig this out of > the source code The source code is the definitive reference. And it can change. > As it stands, the documentation gives no hint that softraid > crypto gives any additional risk of data loss. Just about any additional layer on top of a storage volume increases the complexity of the system, which some people might regard as 'additional risk'. This is in no way specific to softraid crypto. > If there are in > fact e.g. salt values written in an unknown location on the > disk It's not unknown, it's documented quite clearly in the source code. > whose loss renders THE ENTIRE DISK cryptographically > inaccessible, surely this ought to be documented somewhere? By definition, losing the salt value used with any effective crypto system _should_ make it inaccessible! 
This is even considered a feature, because you can effectively erase the disk just by destroying the metadata. > While I agree with you that there are > definite security risks in backing up such metadata, surely > the decision as to what to do ought to be left to the end user, > rather than being enforced by lack of documentation? The source code is the definitive documentation. Backing up, restoring or otherwise messing with the softraid metadata without using the standard tools is an advanced subject, so it's quite reasonable to expect anybody wanting to do this to read and understand the source rather than having it spelt out in a manual page or other documentation. If it was documented elsewhere, that documentation would have to be kept up to date with the current source, otherwise it could end up causing more problems than it solves. In any case, what you are proposing to do, (back up the softraid crypto metadata), is almost certainly a waste of time, as it is extremely unlikely that you will ever be in a situation where such a backup would be useful. Additionally, if you _do_ decide to go ahead with this, then in the very unlikely event that you corrupt the metadata on the main disk and want to restore it from a backup, please do your research _before_ trying to restore it. It would be very easy to corrupt the disk further by dd'ing the wrong data to the wrong place. There have been a lot of posts to the mailing lists in the past by people who have tried to fix disk partitioning problems by themselves and made the situation worse. What you are proposing sounds to me like a foot gun.
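The dd-and-hexdump inspection suggested above looks roughly like the sketch below. It uses a scratch file with a made-up METAMARK string at the documented offset (SR_META_OFFSET blocks, currently 8192 bytes, from the start of the partition); on a real system the input would be the raw RAID partition, and od -c stands in here as a portable equivalent of hexdump:

```shell
set -eu
part=$(mktemp)   # stand-in for the RAID partition
dd if=/dev/zero of="$part" bs=512 count=32 2>/dev/null
printf 'METAMARK' | dd of="$part" bs=512 seek=16 conv=notrunc 2>/dev/null

# The metadata sits 16 512-byte sectors (8192 bytes) into the partition.
# Dump one sector starting there and eyeball it:
dd if="$part" bs=512 skip=16 count=1 2>/dev/null | od -c | head -n 2

marker=$(dd if="$part" bs=512 skip=16 count=1 2>/dev/null | head -c 8)
rm -f "$part"
```

Reading this region on your own disk (read-only, with if= pointing at the partition and no of=) is a harmless way to see the layout before ever considering a restore.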
Re: Re: Softraid crypto metadata backup
Thank you for your response. To clarify: I am not asking about backups proper (though I appreciate the suggestions). My only question is how to make a copy of the crypto metadata.

On 2023-01-03, Nathan Carruth wrote:

> I am with you 100% on backups. My real question was, How
> does one backup crypto volume metadata? Given that
> it can be backed up, clearly it should be, but there is no
> information in any of the cited documentation as to where
> the metadata is or how to back it up.

There doesn't seem to be much point in backing up *just* the metadata, how will that help you get data back?

I think your options to backup the encrypted data are:

- backup from the mounted filesystem using some backup tool that has a way to encrypt. There's a wide choice of backup software, including borg and restic, that do this in a nice way, that will also allow cross-OS restores if needed, restoring individual files, etc. You can also use e.g. dump or tar, piping the output through an encryption tool before storing.

- backup the whole disk partition/s e.g. with dd. This might be a bit impractical as to keep things consistent I think you'll want to unmount, detach the softraid device, backup, reattach, remount. Restoration is much more fiddly too. Additionally this isn't designed as an archival format; it would seem that it will be much more fragile.
Re: Re: Softraid crypto metadata backup
On 2023-01-03, Nathan Carruth wrote:

> I am with you 100% on backups. My real question was, How
> does one backup crypto volume metadata? Given that
> it can be backed up, clearly it should be, but there is no
> information in any of the cited documentation as to where
> the metadata is or how to back it up.

There doesn't seem to be much point in backing up *just* the metadata, how will that help you get data back?

I think your options to backup the encrypted data are:

- backup from the mounted filesystem using some backup tool that has a way to encrypt. There's a wide choice of backup software, including borg and restic, that do this in a nice way, that will also allow cross-OS restores if needed, restoring individual files, etc. You can also use e.g. dump or tar, piping the output through an encryption tool before storing.

- backup the whole disk partition/s e.g. with dd. This might be a bit impractical as to keep things consistent I think you'll want to unmount, detach the softraid device, backup, reattach, remount. Restoration is much more fiddly too. Additionally this isn't designed as an archival format; it would seem that it will be much more fragile.
Re: Softraid crypto metadata backup
On Thu, Jan 05, 2023 at 05:13:05AM +, Nathan Carruth wrote:

> Perhaps I should have clarified my use case. I have data which
> is potentially legally privileged and which I also cannot afford
> to lose. Thus an unencrypted backup is out of the question, and
> my first thought was to use full-disk encryption for the backup.

For this particular use-case, you can just pipe tar through the openssl command line utility and write the backup to removable media either directly or as a file on a non-encrypted filesystem. I wrote in detail about this method in an article about doing encrypted backups to blu-ray disc:

https://research.exoticsilicon.com/articles/crystal_does_optical
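The tar-through-openssl approach described above can be sketched as follows. The directory names, the demo passphrase, and the choice of aes-256-cbc are illustrative only, not from the original post, and -pbkdf2 requires a reasonably recent openssl/LibreSSL:

```shell
# Sketch: encrypted backup by piping tar straight into openssl.
# All paths and the passphrase here are demo values.
SRC=/tmp/backup_demo/src
mkdir -p "$SRC" && echo "important data" > "$SRC/file.txt"

# Encrypt: tar the tree, pipe it through openssl, write one opaque file.
# (Outside of a demo, let openssl prompt for the passphrase instead of
#  passing it on the command line with -pass.)
tar -C /tmp/backup_demo -czf - src | \
    openssl enc -aes-256-cbc -pbkdf2 -salt -pass pass:demo \
    -out /tmp/backup_demo/backup.tar.gz.enc

# Decrypt and restore into a separate directory.
mkdir -p /tmp/backup_demo/restore
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:demo \
    -in /tmp/backup_demo/backup.tar.gz.enc | \
    tar -xzf - -C /tmp/backup_demo/restore
```

The resulting file can be written to removable media directly or stored on any unencrypted filesystem, since everything inside it is ciphertext.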
Re: Softraid crypto metadata backup
On Thu, Jan 05, 2023 at 05:13:05AM +, Nathan Carruth wrote:

> I presume that OpenBSD also writes on-disk metadata of the
> same sort somewhere. Where?

Look at /usr/src/sys/dev/softraidvar.h. The structures that contain the softraid metadata are defined there. There is general softraid metadata, and crypto specific metadata. These are stored near the beginning of the RAID partition as defined in the disklabel. In fact, they are SR_META_OFFSET blocks from the start, which is currently 8192 bytes. You can also look at this on your own disk with dd and hexdump to familiarise yourself with what the layout looks like, (useful for future reference). Or read my article about resizing softraid volumes for some examples.

> I know I could dig this out of
> the source code

The source code is the definitive reference. And it can change.

> As it stands, the documentation gives no hint that softraid
> crypto gives any additional risk of data loss.

Just about any additional layer on top of a storage volume increases the complexity of the system, which some people might regard as 'additional risk'. This is in no way specific to softraid crypto.

> If there are in
> fact e.g. salt values written in an unknown location on the
> disk

It's not unknown, it's documented quite clearly in the source code.

> whose loss renders THE ENTIRE DISK cryptographically
> inaccessible, surely this ought to be documented somewhere?

By definition, losing the salt value used with any effective crypto system _should_ make it inaccessible! This is even considered a feature, because you can effectively erase the disk just by destroying the metadata.

> While I agree with you that there are
> definite security risks in backing up such metadata, surely
> the decision as to what to do ought to be left to the end user,
> rather than being enforced by lack of documentation?

The source code is the definitive documentation. Backing up, restoring or otherwise messing with the softraid metadata without using the standard tools is an advanced subject, so it's quite reasonable to expect anybody wanting to do this to read and understand the source rather than having it spelt out in a manual page or other documentation. If it was documented elsewhere, that documentation would have to be kept up to date with the current source, otherwise it could end up causing more problems than it solves.

In any case, what you are proposing to do, (back up the softraid crypto metadata), is almost certainly a waste of time, as it is extremely unlikely that you will ever be in a situation where such a backup would be useful.

Additionally, if you _do_ decide to go ahead with this, then in the very unlikely event that you corrupt the metadata on the main disk and want to restore it from a backup, please do your research _before_ trying to restore it. It would be very easy to corrupt the disk further by dd'ing the wrong data to the wrong place. There have been a lot of posts to the mailing lists in the past by people who have tried to fix disk partitioning problems by themselves and made the situation worse. What you are proposing sounds to me like a foot gun.
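To illustrate the offset described above, here is a sketch that uses a scratch image file rather than a real disk. Reading SR_META_OFFSET as 16 512-byte blocks (8192 bytes) is my interpretation of softraidvar.h, and the device name in the comment is an example only:

```shell
# Demo of the 8192-byte (16 x 512-byte block) metadata offset, using a
# scratch image instead of a real RAID partition.
IMG=/tmp/sr_offset_demo.img
dd if=/dev/zero of="$IMG" bs=512 count=32 2>/dev/null

# Write pretend-metadata where softraid would keep its header.
printf 'SR-METADATA' | dd of="$IMG" bs=512 seek=16 conv=notrunc 2>/dev/null

# Inspect it the same way you would on a real RAID partition, e.g.:
#   dd if=/dev/rsd0a bs=512 skip=16 count=4 | hexdump -C | head
# (od -c is used below only because it is available everywhere.)
dd if="$IMG" bs=512 skip=16 count=1 2>/dev/null | od -c | head -2
```

On a live system you would point dd at the RAID partition from the disklabel, read-only, and never write anywhere near that region unless you know exactly what you are doing.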
Re: Re: Softraid crypto metadata backup
Thank you for your response. Perhaps I should have clarified my use case. I have data which is potentially legally privileged and which I also cannot afford to lose. Thus an unencrypted backup is out of the question, and my first thought was to use full-disk encryption for the backup.

Perhaps header is the correct term rather than metadata. Both LUKS (see, e.g., https://gitlab.com/cryptsetup/cryptsetup/-/wikis/LUKS-standard/on-disk-format.pdf, p. 2) and FreeBSD’s GELI (see the man page, e.g. https://www.freebsd.org/cgi/man.cgi?query=geli&apropos=0&sektion=0&manpath=FreeBSD+13.1-RELEASE+and+Ports&arch=default&format=html) use on-disk metadata (for LUKS, I understand this includes e.g. salt values; I presume it is similar for GELI), and both systems provide explicit methods for backing up this metadata. I presume that OpenBSD also writes on-disk metadata of the same sort somewhere. Where? I know I could dig this out of the source code but I don’t have time right now.

As it stands, the documentation gives no hint that softraid crypto gives any additional risk of data loss. If there are in fact e.g. salt values written in an unknown location on the disk, whose loss renders THE ENTIRE DISK cryptographically inaccessible, surely this ought to be documented somewhere? I understand that there are use cases where it would be better for the data to be lost entirely than to be disclosed to the wrong people. But this is not true in every use case. While I agree with you that there are definite security risks in backing up such metadata, surely the decision as to what to do ought to be left to the end user, rather than being enforced by lack of documentation?

Thanks!
Nathan

On 1/2/23 23:54, Nathan Carruth wrote:

> Thank you for the response.
>
> I am with you 100% on backups. My real question was, How
> does one backup crypto volume metadata?
> Given that
> it can be backed up, clearly it should be, but there is no
> information in any of the cited documentation as to where
> the metadata is or how to back it up.

There appears to be no intended way to backup this crypto metadata you are worried about. Not that I'd really want extra copies of anything related to a crypto disk floating around anywhere if I could help it. Not sure what you are hoping to get "backed up", but it sure sounds like something useful to the wrong people. Encrypted disks are supposed to "fail closed". If that scares you, your backup sucks or you shouldn't be running encrypted drives.

(well...you COULD

  # dd bs=1m if=/dev/rsd0c of=/mnt/someotherdevice/disk.img

and that would get your meta data, your data, your microdata, your macrodata and possibly your first born).

So let me offer you this, instead. A backup of potentially someday useful disk data:

  for DISK in $(sysctl -n hw.disknames | tr ',' ' '); do
      D=$(echo $DISK | cut -f1 -d:)
      print
      print "=== $DISK ==="
      fdisk $D
      disklabel $D
  done

(note: this script is surely missing edge and special cases. It has been run on three different machines. I do not wish to talk about how much time I've spent making it look prettier. I guarantee it is worth about what you paid for it and nothing more).

Run that periodically, redirect the output to a file, get that file to another place, and you have full info about your disk partitions, both fdisk and disklabel, in case you overwrite them someday. Far more likely than a crypto failure that can be recovered by some crypto metadata backup. And the cool thing: since the prefix "meta" basically boils down to "sounds cool, no idea what it means", we can call this metadata. :)

(Yes, the disklabel info is stored by security(8), ... kinda. Spot checking two of my systems right now, I see both are missing drives...and I'm not sure why; I suspect there's a good reason. But fdisk output is NOT there, and I'd rather prefer it be there too on fdisk platforms).

Nick.
> Thanks!
> Nathan
>
>> Does a softraid(4) crypto volume require metadata backup? (I am
>> running amd64 OpenBSD 6.9 if it is relevant, will probably
>> upgrade in the next few months.)
>>
>> I understand FreeBSD GELI (e.g.) requires such a backup to protect
>> against crypto-related metadata corruption rendering the encrypted
>> volume inaccessible.
>>
>> Neither the OpenBSD disk FAQ nor the man pages for softraid(4) or
>> bioctl(8) have anything to say about the matter. Web searches also
>> turn up no relevant information.
>
> Storage requires backup.
> Encrypted storage is (by design) more fragile than unencrypted storage.
> Sounds like you are trying to protect against ONE form of storage
> failure and avoid the solution you really need to have: a good backup
> system, to deal with *all* forms of storage failure.
Re: Re: Softraid crypto metadata backup
On 1/2/23 23:54, Nathan Carruth wrote:

> Thank you for the response.
>
> I am with you 100% on backups. My real question was, How
> does one backup crypto volume metadata? Given that
> it can be backed up, clearly it should be, but there is no
> information in any of the cited documentation as to where
> the metadata is or how to back it up.

There appears to be no intended way to backup this crypto metadata you are worried about. Not that I'd really want extra copies of anything related to a crypto disk floating around anywhere if I could help it. Not sure what you are hoping to get "backed up", but it sure sounds like something useful to the wrong people. Encrypted disks are supposed to "fail closed". If that scares you, your backup sucks or you shouldn't be running encrypted drives.

(well...you COULD

  # dd bs=1m if=/dev/rsd0c of=/mnt/someotherdevice/disk.img

and that would get your meta data, your data, your microdata, your macrodata and possibly your first born).

So let me offer you this, instead. A backup of potentially someday useful disk data:

  for DISK in $(sysctl -n hw.disknames | tr ',' ' '); do
      D=$(echo $DISK | cut -f1 -d:)
      print
      print "=== $DISK ==="
      fdisk $D
      disklabel $D
  done

(note: this script is surely missing edge and special cases. It has been run on three different machines. I do not wish to talk about how much time I've spent making it look prettier. I guarantee it is worth about what you paid for it and nothing more).

Run that periodically, redirect the output to a file, get that file to another place, and you have full info about your disk partitions, both fdisk and disklabel, in case you overwrite them someday. Far more likely than a crypto failure that can be recovered by some crypto metadata backup. And the cool thing: since the prefix "meta" basically boils down to "sounds cool, no idea what it means", we can call this metadata. :)

(Yes, the disklabel info is stored by security(8), ... kinda.
Spot checking two of my systems right now, I see both are missing drives...and I'm not sure why; I suspect there's a good reason. But fdisk output is NOT there, and I'd rather prefer it be there too on fdisk platforms).

Nick.

Thanks!
Nathan

Does a softraid(4) crypto volume require metadata backup? (I am running amd64 OpenBSD 6.9 if it is relevant, will probably upgrade in the next few months.)

I understand FreeBSD GELI (e.g.) requires such a backup to protect against crypto-related metadata corruption rendering the encrypted volume inaccessible.

Neither the OpenBSD disk FAQ nor the man pages for softraid(4) or bioctl(8) have anything to say about the matter. Web searches also turn up no relevant information.

Storage requires backup. Encrypted storage is (by design) more fragile than unencrypted storage. Sounds like you are trying to protect against ONE form of storage failure and avoid the solution you really need to have: a good backup system, to deal with *all* forms of storage failure.

I'd suggest a good backup system...to deal with ALL forms of data loss. Yes, encrypted storage implies a certain care has to be taken with the backups as well, you need to pick a solution that is appropriate for your needs -- or accept that yeah, stuff will go bye-bye someday. I don't see a benefit to trying to protect against some single failure mode when all the other failure modes still exist. If you have good backups, you are good. If you don't, dealing with a 1% problem isn't going to change much.

Nick.
Re: Probable error in softraid(4) documentation
On 2023-01-03, Puru Shartha wrote:

> Namaste misc,
>
> The softraid(4) documentation states the following in the CAVEAT
> section [1]:
>
> "Stacking disciplines (CRYPTO on top of RAID 1, for example) is not
> supported at this time."
>
> Based on my limited understanding, RAID 1C seems to allow the above.

raid1c is a _single_ discipline combining mirroring and crypto. Stacking would refer to creating one softraid (say a raid1 mirror) and then creating a separate softraid device (say a crypto volume) using the first softraid disk as a component.

-- 
Please keep replies on the mailing list.
Probable error in softraid(4) documentation
Namaste misc,

The softraid(4) documentation states the following in the CAVEAT section [1]:

"Stacking disciplines (CRYPTO on top of RAID 1, for example) is not supported at this time."

Based on my limited understanding, RAID 1C seems to allow the above.

Also, Dhanyavaad Stefan for RAID 1C.

Dhanyavaad
Dharma Artha Kama Moksha

[1] - https://man.openbsd.org/softraid.4#CAVEATS
Re: Softraid crypto metadata backup
Thank you for the response.

I am with you 100% on backups. My real question was, How does one backup crypto volume metadata? Given that it can be backed up, clearly it should be, but there is no information in any of the cited documentation as to where the metadata is or how to back it up.

Thanks!
Nathan

> Does a softraid(4) crypto volume require metadata backup? (I am
> running amd64 OpenBSD 6.9 if it is relevant, will probably
> upgrade in the next few months.)
>
> I understand FreeBSD GELI (e.g.) requires such a backup to protect
> against crypto-related metadata corruption rendering the encrypted
> volume inaccessible.
>
> Neither the OpenBSD disk FAQ nor the man pages for softraid(4) or
> bioctl(8) have anything to say about the matter. Web searches also
> turn up no relevant information.

Storage requires backup. Encrypted storage is (by design) more fragile than unencrypted storage. Sounds like you are trying to protect against ONE form of storage failure and avoid the solution you really need to have: a good backup system, to deal with *all* forms of storage failure.

I'd suggest a good backup system...to deal with ALL forms of data loss. Yes, encrypted storage implies a certain care has to be taken with the backups as well, you need to pick a solution that is appropriate for your needs -- or accept that yeah, stuff will go bye-bye someday. I don't see a benefit to trying to protect against some single failure mode when all the other failure modes still exist. If you have good backups, you are good. If you don't, dealing with a 1% problem isn't going to change much.

Nick.
Re: Softraid crypto metadata backup
On 1/2/23 22:22, Nathan Carruth wrote:

> Does a softraid(4) crypto volume require metadata backup? (I am
> running amd64 OpenBSD 6.9 if it is relevant, will probably
> upgrade in the next few months.)
>
> I understand FreeBSD GELI (e.g.) requires such a backup to protect
> against crypto-related metadata corruption rendering the encrypted
> volume inaccessible.
>
> Neither the OpenBSD disk FAQ nor the man pages for softraid(4) or
> bioctl(8) have anything to say about the matter. Web searches also
> turn up no relevant information.

Storage requires backup. Encrypted storage is (by design) more fragile than unencrypted storage. Sounds like you are trying to protect against ONE form of storage failure and avoid the solution you really need to have: a good backup system, to deal with *all* forms of storage failure.

I'd suggest a good backup system...to deal with ALL forms of data loss. Yes, encrypted storage implies a certain care has to be taken with the backups as well, you need to pick a solution that is appropriate for your needs -- or accept that yeah, stuff will go bye-bye someday. I don't see a benefit to trying to protect against some single failure mode when all the other failure modes still exist. If you have good backups, you are good. If you don't, dealing with a 1% problem isn't going to change much.

Nick.
Softraid crypto metadata backup
Does a softraid(4) crypto volume require metadata backup? (I am running amd64 OpenBSD 6.9 if it is relevant, will probably upgrade in the next few months.) I understand FreeBSD GELI (e.g.) requires such a backup to protect against crypto-related metadata corruption rendering the encrypted volume inaccessible. Neither the OpenBSD disk FAQ nor the man pages for softraid(4) or bioctl(8) have anything to say about the matter. Web searches also turn up no relevant information. Thanks, Nathan Carruth
Re: softraid disk read error
On 10/18/22 09:35, se...@0x.su wrote:

> I have raid1 volume (one of two on PC) with 2 disks.
>
> # disklabel sd5
> # /dev/rsd5c:
> type: SCSI
> disk: SCSI disk
> label: SR RAID 1
> duid: 7a03a84165b3d165
> flags:
> bytes/sector: 512
> sectors/track: 63
> tracks/cylinder: 255
> sectors/cylinder: 16065
> cylinders: 243201
> total sectors: 3907028640
> boundstart: 0
> boundend: 3907028640
> drivedata: 0
>
> 16 partitions:
> #          size     offset  fstype [fsize bsize   cpg]
>   a: 3907028608          0  4.2BSD   8192 65536 52270 # /home/vmail
>   c: 3907028640          0  unused
>
> Recently I got an error in dmesg
>
> mail# dmesg | grep retry
> sd5: retrying read on block 767483392
>
> (This happened during a copying process) and the system marked the volume as degraded
>
> mail# bioctl sd5
> Volume      Status             Size Device
> softraid0 1 Degraded  2000398663680 sd5     RAID1
>           0 Online    2000398663680 1:0.0   noencl
>           1 Offline   2000398663680 1:1.0   noencl
>
> I tried to reread this sector (and a couple around it) with dd to make sure the sector is unreadable:
>
> mail# dd if=/dev/rsd3c of=/dev/null bs=512 count=16 skip=767483384
> 16+0 records in
> 16+0 records out
> 8192 bytes transferred in 0.025 secs (316536 bytes/sec)
> mail# dd if=/dev/rsd5c of=/dev/null bs=512 count=16 skip=767483384
> 16+0 records in
> 16+0 records out
> 8192 bytes transferred in 0.050 secs (161303 bytes/sec)
>
> but the error did not appear. Are there any methods to check if the sector is bad (preferably on the fly)? If this is not a disk error (I'm going to replace cables just in case) should I just bring the disk back online with bioctl -R /dev/sd3a sd5?

You made some assumptions about the math that the disk uses vs. the math dd uses, and I'm not sure I agree with them. I'd suggest doing a dd read of the entire disk (rsd3c), rather than trying to read just the one sector.

Remember, there's an offset between the sectors of sd5 (the softraid drive) and sd2 & sd3 where sd5 lives. So I'd kinda expect your sd3 check to pass because you missed the bad spot, and I'd expect your sd5 check to pass because the bad drive is locked out of the array and no longer a problem.

IF you are a cheap *** or the machine is in another country, you might want to try dd'ing zeros and 0xff's over the entire disk before putting it back in the array. That sometimes triggers a discovery of a bad spot and locks it out and replaces it with a spare. I've had some success with this process, actually, though it's a bad idea. :)

Nick.
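The whole-disk read test Nick suggests can be sketched as a small helper. The device names are examples, and the throttling advice is mine, not from the original post:

```shell
# Sketch: sequential read test of an entire disk. Pass the raw device of
# each underlying chunk, e.g. read_test /dev/rsd3c (example device name).
# dd reports read errors on stderr, and the kernel logs them in dmesg.
read_test() {
    # bs is one megabyte, spelled out in bytes so it works with any dd
    dd if="$1" of=/dev/null bs=1048576 2>&1 | tail -1
}
# Note: a per-sector spot check on the underlying disk must also account
# for the offset between softraid volume blocks and the blocks of the
# chunk it lives on, which is one reason the spot checks above can miss.
```

Running `read_test /dev/rsd3c` and then checking dmesg for new retry/error lines is the idea; on a busy box, read in chunks with sleeps in between so the test doesn't kill interactive performance.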
softraid disk read error
I have raid1 volume (one of two on PC) with 2 disks.

# disklabel sd5
# /dev/rsd5c:
type: SCSI
disk: SCSI disk
label: SR RAID 1
duid: 7a03a84165b3d165
flags:
bytes/sector: 512
sectors/track: 63
tracks/cylinder: 255
sectors/cylinder: 16065
cylinders: 243201
total sectors: 3907028640
boundstart: 0
boundend: 3907028640
drivedata: 0

16 partitions:
#          size     offset  fstype [fsize bsize   cpg]
  a: 3907028608          0  4.2BSD   8192 65536 52270 # /home/vmail
  c: 3907028640          0  unused

Recently I got an error in dmesg

mail# dmesg | grep retry
sd5: retrying read on block 767483392

(This happened during a copying process) and the system marked the volume as degraded

mail# bioctl sd5
Volume      Status             Size Device
softraid0 1 Degraded  2000398663680 sd5     RAID1
          0 Online    2000398663680 1:0.0   noencl
          1 Offline   2000398663680 1:1.0   noencl

I tried to reread this sector (and a couple around it) with dd to make sure the sector is unreadable:

mail# dd if=/dev/rsd3c of=/dev/null bs=512 count=16 skip=767483384
16+0 records in
16+0 records out
8192 bytes transferred in 0.025 secs (316536 bytes/sec)
mail# dd if=/dev/rsd5c of=/dev/null bs=512 count=16 skip=767483384
16+0 records in
16+0 records out
8192 bytes transferred in 0.050 secs (161303 bytes/sec)

but the error did not appear. Are there any methods to check if the sector is bad (preferably on the fly)? If this is not a disk error (I'm going to replace cables just in case) should I just bring the disk back online with bioctl -R /dev/sd3a sd5?
Re: Swap on SSD's (with softraid 1+C)
On 9/7/22 09:05, Erling Westenvik wrote:

> Hello, ... My question is: Should I let swap be outside RAID
> altogether? Like "directly" on the physical disks as in sd0b and
> sd1b? I mean, why have softraid waste CPU cycles making swap content
> (if any) redundant? What do you people do?

1) if you are using swap, you are doing it wrong. The additional processor load of encrypting swap twice is going to be lost in the screams of horror and malcontent from your users and you. Really is a case of optimizing the positioning of the deck chairs on the Titanic. Things are going down, people are unhappy. They won't notice the tiny difference.

2) Swap on softraid means you can re-use the swap space for other things when you decide "I never use swap, but I wish I made my /var partition a bit bigger". It is difficult to now create a new softraided space on an unencrypted part of the drive.

3) you said "Softraid 1+C", so having non-redundant swap isn't going to accomplish what you want when a disk fails. IF you are using swap and the swap disk fails, I'm pretty sure your system is going to have a bad day.

> (Follow up question as for swap sizing: In the age of 32+ GB RAM, do
> you people really follow the recommendations on having swap at least
> twice the amount of RAM? I'm hoping for 72GB RAM and that would steal
> 144GB of my 525GB disks, something that seems ridiculous.)

depends. Using 525GB of disk if you are building a firewall system and only need 20G is also ridiculous. But yeah, 2xRAM dates back to ...well, I can't think of a time when it was ever a good idea (well, in an academic environment, I once used an IBM mainframe with 16MB RAM and two 16MB RAM disks for swap; the swap was ALMOST like regular RAM. That might have made some sense, but I never got to see how the swap actually was used and worked on the thing).

As mentioned above, the advantage of allocating 144G of disk to swap is you now have 140G you could reallocate to something else if you later decide 144G was massively overkill for swap, but you didn't make a big enough /tmp or /var partition or need a separate /var/www. If you need all the 500GB of SSD, you probably should get a bigger disk. If you don't need it, leave a good chunk unallocated. Swap is kinda unallocated, right? :)

So in short: I see no real disadvantage to swap on RAID1+C, and some potential advantage. You might wish you did, you are unlikely to wish you didn't.

Nick.
Re: Swap on SSD's (with softraid 1+C)
> (Follow up question as for swap sizing: In the age of 32+ GB RAM, do
> you people really follow the recommendations on having swap at least
> twice the amount of RAM? I'm hoping for 72GB RAM and that would steal
> 144GB of my 525GB disks, something that seems ridiculous.)

That advice is ridiculous for such a machine, yes. Depending on whether you want a full crash dump done to swap and/or hibernate to swap, you might be forced to have it at RAM-size plus some extra, but for the ordinary run of the machine there should be no need for a large swap at all, unless you run 40+G worth of applications all the time. If you did have 72G of swap and actually used half of it, waiting for a normal drive to un-swap that amount would be sad and boring.

-- 
May the most significant bit of your life be positive.
Swap on SSD's (with softraid 1+C)
Hello,

I'm making the transition from SATA to SSD. A late bloomer..

My setup for years has been a semi-FDE softraid on two physical disks, sd0 and sd1, where the sd0a and sd1a chunks make up a RAID 1 volume sd2. sd2 contains an unencrypted root partition, sd2a, and the remainder of the filesystems reside in sd2p -- a CRYPTO partition that decrypts to sd3 and where sd3b constitutes swap (sysctl vm.swapencrypt.enable=0).

(I don't encrypt root because I need to be able to reboot and decrypt the machine from remote locations but lack any sort of KVM, thus I have a somewhat elaborate setup involving a statically compiled sshd "daemon" that is invoked from /etc/rc.)

My question is: Should I let swap be outside RAID altogether? Like "directly" on the physical disks as in sd0b and sd1b? I mean, why have softraid waste CPU cycles making swap content (if any) redundant? What do you people do?

(Follow up question as for swap sizing: In the age of 32+ GB RAM, do you people really follow the recommendations on having swap at least twice the amount of RAM? I'm hoping for 72GB RAM and that would steal 144GB of my 525GB disks, something that seems ridiculous.)

Kind regards
Erling
Re: Softraid on NVMe
On 5/6/22 9:03 AM, Proton wrote:

> Hi,
>
> I'm using softraid 1C on my remote dedicated server, built on two NVMe
> disks. It works really well from a performance perspective and provides
> some data protection, but there is no way to check device health status
> because SMART doesn't work. I guess bioctl will tell me only if devices
> are 'online', but nothing more?

well....a softraid device isn't a physical device, so I'm not sure what you would get that you couldn't get out of bioctl. I have:

  bioctl softraid0

in my /etc/daily.local, and I also have a backup system that checks softraid status on all systems (hey, as long as I'm in the neighborhood and doing stuff as root...)

You can look at the SMART status of the underlying physical devices in the softraid set exactly as you would non-softraid drives. So, if you put a lot of faith in SMART (I don't), what are you missing?

> Are there any "poor man's" methods for checking the state of devices
> you would suggest performing periodically, like
> 'cat /dev/rsd0c > /dev/null' + 'cat /dev/rsd1c > /dev/null'? Will
> potential I/O errors or timeouts be reported to stderr or to some
> system log file?

doing read tests like that over the entire underlying drives seems like a good idea to me. Haven't implemented it so I can't say how it would respond to real problems, but I can think of only one good way to find out. (from experience: how things act when a drive fails is hard to predict and really hard to test. So even a dozen "this is how it behaved" results doesn't tell you what happens for the NEXT failure.) I would definitely want to put some rate limiting on it so you don't kill performance overall.

> As a last method I can reboot to a linux rescue image from time to
> time, but this would be not very convenient. Should I forget about
> NVMe and use the other option, LSI MegaRaid HW with SSD disks attached?

what would you gain there? Now you could only access what the controller thinks of the drive's state through bioctl (which you seemed to think was inadequate for softraid). In the HW vs. SW RAID argument, I'm firmly in the "either way" camp, but if I understand your query, you are LOSING info here. (I've also heard stories about SSDs and HW RAID not playing well together, but I'm not prepared to defend or refute that statement. On the other hand, I've seen SSDs work differently enough from what HW and SW expect that ... nothing would surprise me.)

Nick.
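The daily bioctl check mentioned above could be automated with something like the following sketch. The column positions are assumed from the `Volume Status Size Device` output shown elsewhere in this thread, so verify them against your own system:

```shell
# Sketch: report any softraid volume that is not Online. Volume rows in
# bioctl output are assumed to start with the controller name (e.g.
# "softraid0") with the status in the third column; chunk rows, which
# begin with a bare number, are ignored.
check_softraid() {
    # reads bioctl output on stdin, prints non-Online volume lines
    awk '$1 ~ /^softraid/ && $3 != "Online" { print }'
}

# In /etc/daily.local you might then use something like (example only):
#   bioctl softraid0 | check_softraid
```

Since daily(8) mails its output, a degraded volume would then show up in the daily insecurity/maintenance mail with no extra plumbing.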
Re: openbsd, softraid recovery (I have password)
On 2022-04-03, Nick Holland wrote:

> If you are going to find your data, you need to recreate the disklabel
> partitions exactly as they were on the encrypted FFS from OpenBSD.
> scan_ffs(8) may help.

OpenBSD's scan_ffs only supports FFS1; the OS defaults to FFS2.
Re: openbsd, softraid recovery (I have password)
On 02.04.22 at 18:56, harold wrote:

> Hello,
>
> Today I take a little breath to try to get some help about a little
> problem I have had for weeks. I lost data due to a misunderstanding
> while formatting the rdsc1 softraid partition on openbsd.
>
> I tell you my little story in the attached document, because I lost
> data and would really like to recover it..
>
> could you help me please?

From my experience with this list, I am sure you can get help on such a topic if you respect https://www.openbsd.org/mail.html
Re: openbsd, softraid recovery (I have password)
Well, I am very surprised to see an email on the list pitting OpenBSD against everything else, and even more surprised to see its author is a good veteran of OpenBSD. But I guess things change with time ...

Yes, attempting multiboot without backups is asking for trouble. Most of the email is pure hype. Not a single OS to this day is able to handle multiboot correctly. Not a single OS to this day is able to handle any disk correctly. And this is not because of the OS alone; there are too damn many standards.
Re: openbsd, softraid recovery (I have password)
On 4/3/22 9:44 AM, harold wrote: ... Hi everybody, good day everyone, ladies and gentlemen.. I’m looking for a way to retrieve back my data vanished accidentally .. I tell you more : a/ I had windows and linux mint 18 (gpt/efi) b/ I add openbsd to these double systems. Now three. Grub2 manages it. "I have a problem, I'll use grub to fix it" ...now you have two problems. [various thrashings I couldn't reproduce with the info given, but our problem count is much more than two now] L/ I just think I just have to reinstall openbsd. I do it, with the wise attention to do not affect mounting point to the raid/home slice, then I can easily get back my data. A bit like not formatting /home under linux. I install openbsd. Hint: when you say "like ... linux", you are almost certainly doing (or about to do) something wrong. M/ like a newb after installation I start openbsd and mount back my softraid partition. It asks for my password, recognize it. Slice looks empty. Df shows only few kb files. Data is gone. No backup. Two points : *looks like it’s not possible to reinsall openbsd without formatting everything. A bit sad, where it’s not said, and I carefuly took attention to not format the softraid slice. But it did anyway. No. That's not true. Don't define a mount point for a file system on install, it won't get formatted. Add it to fstab post-install. Trivial. Not done like Linux -- which is wonderful. OpenBSD has done it this way for at least 25 years now. Linux can't say that about much of anything. *how a softraid slice could be formatted and my password recognized ? If you understand how this works, your question becomes meaningless. Softraid creates imaginary DISKS. -> Unlocking a Softraid encrypted DISK requires a password. Disks have partitions. Partitions hold file systems. File systems are formatted. Not "disks". Got it? If not, read through https://www.openbsd.org/faq/faq14.html top-to-bottom until it's understood. 
* I tried testdisk, then photorec; impossible to get hold of my data. I haven't used OpenBSD since, in the hope of recovering my data. I have not written to it since, to avoid overwriting/erasing anything.

Well, since BSD FFS has almost zero support in the Linux world (I'm presuming that's because a solid, stable guest file system with 30 years of history would make Linux's "File System of the Month" look bad), yeah, I don't think you will see anything. And think about it a little bit... If a Linux tool could find data on an OpenBSD encrypted disk, pretty sure that would be an indicator of a really big flaw in the encryption.

If you are going to find your data, you need to recreate the disklabel partitions exactly as they were on the encrypted FFS from OpenBSD. scan_ffs(8) may help. Boot from OpenBSD, unlock your encrypted disk and dig around in it. But if you formatted the partitions, you have made quite a mess of the file system. Blocks of data probably exist, but reassembling them into useful files will be difficult.

If somebody were able to help me to recover my data..

Honestly, not likely to happen. You aren't providing any hard information, just a lot of vague and "subject to interpretation" statements. I'm going to assume you have written too much where the old file systems were to pull 'em back from the format.

Here's the thing -- Multibooting is COMPLICATED. You have to have a mastery of the boot process of ALL the OSs involved AND master all the tools used to accomplish multibooting. This is NOT a good way to "learn a new OS". It is a great way to lose all the data on a multi-booting machine. You really can't trust Linux tools for multibooting. They basically pretend nothing other than Linux and Windows exists. The more hand-holding a Linux system does, the more likely it is to make bad assumptions about what you are doing, and to assume you are running Windows+Linux.

Also... by design, real encrypted disks are fragile.
Easy to mess up, almost impossible to recover if messed up. That's a feature, not a flaw. So you have a doubly unstable system. Backups are critical. You didn't have 'em. Nick.
Re: openbsd, softraid recovery (I have password)
On Sun, Apr 3, 2022 at 15:58, harold wrote:

For anyone else who wants to experiment with dual/triple-booting:

> I lost data due to misunderstanding
> I tell you more :
> a/ I had windows and linux mint 18 (gpt/efi)
> b/ I add openbsd to these double systems. Now three. Grub2 manages it.

[ skipping a bit in the middle ]

> password, recognize it. Slice looks empty. Df shows only few kb files.
> Data is gone. No backup.

If you are doing weird triple-OS-on-same-harddrive experiments, either 1) do not put any important data on any of them and just use the machine to learn something, or 2) make very sure you have working backups of everything important to you. There is very little in between, apart from tears when people skip this advice. 8-(

No, I can't help get this data back, but I can at least hope to tell at least one more user that tested backups are very important, *especially* when doing experimental setups with the disk and the partitions on it.

--
May the most significant bit of your life be positive.
openbsd, softraid recovery (I have password)
Hello, Today I take a little breath to try to get some help with a little problem I have had for weeks. I lost data due to a misunderstanding, formatting the rdsc1 softraid partition on OpenBSD. I tell you my little story in the attached document, because I lost data and would really like to recover it.. could you help me please? I thank you very much, best regards, harold

sorry if it's a bit confusing... What follows is a copy/paste of the previously attached PDF:

Hi everybody, good day everyone, ladies and gentlemen.. I'm looking for a way to retrieve my data, which vanished accidentally.. I tell you more:

a/ I had Windows and Linux Mint 18 (GPT/EFI).
b/ I added OpenBSD to these two systems. Now three. Grub2 manages it.
c/ After a problem with suspend and poweroff under OpenBSD (which always powered the computer back on after a second), I'm told I need to update my BIOS. I see that an update is available, which I had totally forgotten about.
d/ I update the BIOS/EFI via Windows.
e/ I restart, the BIOS updates, then it panics about keys and Secure Boot (in EFI); I go straight back into the BIOS to disable Secure Boot.
f/ The computer then starts only into OpenBSD. Grub2 has totally vanished. No way to get back into Windows or Mint.
g/ I decide to launch a live USB of refind (or refi), which is able to launch GRUB back via EFI. Windows is still missing.
h/ So I start Ubuntu/Mint; the system crashes, seems broken. A little bit -- definitely hugely -- angry about what has been happening for hours, I then start a crusade over all partitions with e2fsck. Anyway, that tool knows not to act on non-ext partitions. Or so I believed..
i/ At restart, OpenBSD doesn't start anymore (imagine my peace of mind..): I guess fsck ate a bit of the OpenBSD area where the bootup files were stored. So it crashes immediately, not loading anything.
j/ In a fury, I take a Mint 18 live USB and download LMDE 5, which I install after backing up my /home.
k/ LMDE starts well, and I can add Windows. OpenBSD is still missing.
l/ I just think I have to reinstall OpenBSD. I do it, taking care not to assign a mount point to the raid/home slice, so that I can easily get my data back. A bit like not formatting /home under Linux. I install OpenBSD.
m/ Like a newb, after installation I start OpenBSD and mount my softraid partition again. It asks for my password and recognizes it. The slice looks empty. df shows only a few kB of files. Data is gone. No backup.

Two points:
* It looks like it's not possible to reinstall OpenBSD without formatting everything. A bit sad, since it's not documented, and I carefully took care not to format the softraid slice. But it was formatted anyway.
* How could a softraid slice be formatted while my password is still recognized?
* I tried testdisk, then photorec; impossible to get hold of my data. I haven't used OpenBSD since, hoping to recover my data. I have not written to it since, to avoid overwriting/erasing anything.

If somebody were able to help me recover my data.. maybe with the offset option of vnconfig? But I don't know how to use those tools.. thank you for reading and for your help!

Additional links:
http://daemonforums.org/showthread.php?t=12034
https://si3t.ch/Logiciel-libre/OpenBSD/Chiffrer-home.xhtml
Re: openbsd, softraid recovery (I have password)
On 4/2/22 12:56 PM, harold wrote: ... I tell you my little story in the attached document, Just a thought, but you might want to reconsider your means of telling your story. For example, I am not in the habit of opening unsolicited PDF documents... Nick.
openbsd, softraid recovery (I have password)
Hello, Today I take a little breath to try to get some help with a little problem I have had for weeks. I lost data due to a misunderstanding, formatting the rdsc1 softraid partition on OpenBSD. I tell you my little story in the attached document, because I lost data and would really like to recover it.. could you help me please? I thank you very much, best regards, harold

sosdata_eng.pdf Description: Adobe PDF document
Re: Question how to delete somewhat encrypted partitions / softraid?
Thanks to all who responded. The advice was accurate; dd solved the problem.

On Fri, Mar 25, 2022 at 7:13 PM pascal wrote:
>
> did you try to encrypt and install with MBR boot?
>
> Mar 25, 2022, 09:28 by soko.t...@gmail.com:
>
> > Hello list,
> >
> > I have tried to encrypt disk before the installation of OpenBSD 7.0
> > according to the instructions here
> > https://www.openbsd.org/faq/faq14.html#softraid and managed to mess it.
> >
> > I have performed
> >
> > # cd /dev && sh MAKEDEV sd0
> > # fdisk -iy -g -b 960 sd0
> > # disklabel -E sd0
> > Label editor (enter '?' for help at any prompt)
> > sd0> a a
> > offset: [64]
> > size: [39825135] *
> > FS type: [4.2BSD] RAID
> > sd0*> w
> > sd0> q
> > No label changes.
> > # bioctl -c C -l sd0a softraid0
> >
> > But I have failed to proceed before the installation with
> >
> > # cd /dev && sh MAKEDEV sd1
> > # dd if=/dev/zero of=/dev/rsd1c bs=1m count=1
> >
> > So I ended up with unbootable install.
> >
> > The disk is shown
> >
> > # disklabel sd0
> > # /dev/rsd0c:
> > type: SCSI
> > disk: SCSI disk
> > label: HGST HTS725050A7
> > duid: f62d9ae29f67d326
> > flags:
> > bytes/sector: 512
> > sectors/track: 63
> > tracks/cylinder: 255
> > sectors/cylinder: 16065
> > cylinders: 60801
> > total sectors: 976773168
> > boundstart: 1024
> > boundend: 976773135
> > drivedata: 0
> >
> > 16 partitions:
> > #        size   offset  fstype [fsize bsize cpg]
> >  a: 976772111     1024    RAID
> >  c: 976773168        0  unused
> >  i:       960       64   MSDOS
> >
> > # fdisk sd0
> > Disk: sd0       Usable LBA: 34 to 976773134 [976773168 Sectors]
> >  #: type       [ start:      size ]
> >  0: EFI Sys    [     64:       960 ]
> >  1: OpenBSD    [   1024: 976772111 ]
> >
> > Is it safe to delete all somewhat encrypted partitions by
> > # fdisk -iy sd0
> > ?
> >
> > Should I perhaps first delete somewhat encrypted partitions by
> >
> > # disklabel -E sd0
> > d a
> > d i
> > ?
> >
> > Thank you in advance for your answers.
Re: Question how to delete somewhat encrypted partitions / softraid?
On 3/25/22 5:28 AM, soko.tica wrote:

> Hello list,
> ...
> But I have failed to proceed before the installation with
>
> # cd /dev && sh MAKEDEV sd1
> # dd if=/dev/zero of=/dev/rsd1c bs=1m count=1
>
> So I ended up with unbootable install.

I don't think that is cause and effect.

If you want to start over from scratch (and I agree with others that this would be a good starting point), I'd just suggest zeroing the first 1MB of the physical disk. That will clear all OpenBSD structures from the physical disk, the softraid encrypted disk, and any (important) evidence there was a softraid disk there.

I always recommend clearing the start of the physical disk whenever dealing with RAID because... well, while deleting the fdisk and disklabel tables looks good, there's often a lot of "structure" left on the disk which can sometimes be confusing to the user (or the OS!) when things suddenly pop back from the seeming dead.

So... dd /dev/zero over the first 1MB of sd0, start over and see what you get. But I think your real problem is that the installation didn't go right for unknown reasons. You MAY want to start with a simple install, to make sure your machine handles OpenBSD well without the encrypted disk, before jumping into full disk encryption (OpenBSD installs are so fast and relatively painless, there's no reason to fret about getting everything "just so" on the first install!).

Nick.
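Nick's suggestion boils down to a single dd invocation. A minimal sketch follows, shown against a scratch file so it is harmless to run as-is; on the real system the target would be /dev/rsd0c, the raw device covering the whole physical disk, run from the install media's shell:

```shell
# DESTRUCTIVE on real hardware: zeroing the first 1MB wipes the MBR/GPT,
# the OpenBSD disklabel, and the softraid metadata at the start of the disk.
# Here the target is a scratch file; substitute /dev/rsd0c to wipe disk sd0.
dd if=/dev/zero of=/tmp/scratch-disk.img bs=1024k count=1
```

(`bs=1024k` is spelled portably here; OpenBSD's dd also accepts `bs=1m`.)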
Re: Question how to delete somewhat encrypted partitions / softraid?
On Fri, Mar 25, 2022 at 10:28:55AM +0100, soko.tica wrote:
> Hello list,
>
> I have tried to encrypt disk before the installation of OpenBSD 7.0
> according to the instructions here
> https://www.openbsd.org/faq/faq14.html#softraid and managed to mess it.

First of all, if this is a fresh install onto an otherwise clean disk, I see no reason not to restart everything from scratch.

> I have performed
>
> # cd /dev && sh MAKEDEV sd0
> # fdisk -iy -g -b 960 sd0
> # disklabel -E sd0
> Label editor (enter '?' for help at any prompt)
> sd0> a a
> offset: [64]
> size: [39825135] *
> FS type: [4.2BSD] RAID
> sd0*> w
> sd0> q
> No label changes.
> # bioctl -c C -l sd0a softraid0
>
> But I have failed to proceed before the installation with
>
> # cd /dev && sh MAKEDEV sd1
> # dd if=/dev/zero of=/dev/rsd1c bs=1m count=1
>
> So I ended up with unbootable install.

There is some missing information here. How did the installation proceed? Did it go all the way to the end? Did the installer create a disklabel? And if so, on which disk?

If you boot the RAMDISK, exit to a shell, build the crypto volume (bioctl), create the dev node (MAKEDEV sd1) and check its disklabel (disklabel sd1), does it show anything? If there are partitions on the sd1 disklabel, can they be mounted, and do they have anything in them?

Again, if the disk was empty to begin with (and the information below seems to indicate so), there is nothing that needs to be salvaged; just restart the whole process.

> The disk is shown
>
> # disklabel sd0
> # /dev/rsd0c:
> type: SCSI
> disk: SCSI disk
> label: HGST HTS725050A7
> duid: f62d9ae29f67d326
> flags:
> bytes/sector: 512
> sectors/track: 63
> tracks/cylinder: 255
> sectors/cylinder: 16065
> cylinders: 60801
> total sectors: 976773168
> boundstart: 1024
> boundend: 976773135
> drivedata: 0
>
> 16 partitions:
> #        size   offset  fstype [fsize bsize cpg]
>  a: 976772111     1024    RAID
>  c: 976773168        0  unused
>  i:       960       64   MSDOS
>
> # fdisk sd0
> Disk: sd0       Usable LBA: 34 to 976773134 [976773168 Sectors]
>  #: type       [ start:      size ]
>  0: EFI Sys    [     64:       960 ]
>  1: OpenBSD    [   1024: 976772111 ]
>
> Is it safe to delete all somewhat encrypted partitions by
> # fdisk -iy sd0
> ?
>
> Should I perhaps first delete somewhat encrypted partitions by
>
> # disklabel -E sd0
> d a
> d i
> ?
>
> Thank you in advance for your answers.

--
Question how to delete somewhat encrypted partitions / softraid?
Hello list,

I have tried to encrypt the disk before the installation of OpenBSD 7.0 according to the instructions here https://www.openbsd.org/faq/faq14.html#softraid and managed to mess it up.

I performed

# cd /dev && sh MAKEDEV sd0
# fdisk -iy -g -b 960 sd0
# disklabel -E sd0
Label editor (enter '?' for help at any prompt)
sd0> a a
offset: [64]
size: [39825135] *
FS type: [4.2BSD] RAID
sd0*> w
sd0> q
No label changes.
# bioctl -c C -l sd0a softraid0

But I failed to proceed before the installation with

# cd /dev && sh MAKEDEV sd1
# dd if=/dev/zero of=/dev/rsd1c bs=1m count=1

So I ended up with an unbootable install.

The disk is shown as

# disklabel sd0
# /dev/rsd0c:
type: SCSI
disk: SCSI disk
label: HGST HTS725050A7
duid: f62d9ae29f67d326
flags:
bytes/sector: 512
sectors/track: 63
tracks/cylinder: 255
sectors/cylinder: 16065
cylinders: 60801
total sectors: 976773168
boundstart: 1024
boundend: 976773135
drivedata: 0

16 partitions:
#        size   offset  fstype [fsize bsize cpg]
 a: 976772111     1024    RAID
 c: 976773168        0  unused
 i:       960       64   MSDOS

# fdisk sd0
Disk: sd0       Usable LBA: 34 to 976773134 [976773168 Sectors]
 #: type       [ start:      size ]
 0: EFI Sys    [     64:       960 ]
 1: OpenBSD    [   1024: 976772111 ]

Is it safe to delete all the somewhat-encrypted partitions with

# fdisk -iy sd0

?

Should I perhaps first delete the somewhat-encrypted partitions with

# disklabel -E sd0
d a
d i

?

Thank you in advance for your answers.
softraid crypto header backup
Hi, I did an encrypted install as per https://www.openbsd.org/faq/faq14.html#softraidFDE

From previously using Linux with FDE, I remember the recommendation to afterwards run something along the lines of

cryptsetup luksHeaderBackup /dev/nvme0n1p3 --header-backup-file

to recover in case the header gets corrupted somehow. Neither the manpages softraid(4) and bioctl(8) nor a Google search mention anything like that. Is there a reason why this wouldn't be necessary on OpenBSD, or did I just not read the documentation thoroughly enough? Thanks in advance. Best regards, Alexander
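There is no documented softraid equivalent of luksHeaderBackup, but softraid(4) keeps its metadata (including the passphrase-protected key material of a CRYPTO volume) at the start of each chunk partition, so an unofficial backup can be taken with dd. Everything below is an assumption-laden sketch, not a documented procedure: the chunk name /dev/rsd0a is hypothetical, and 1MB is a deliberately generous guess at how much of the partition's head to save. The demo copies from a scratch stand-in file so it is safe to run:

```shell
# Scratch stand-in for the chunk partition, so this sketch is harmless:
dd if=/dev/zero of=/tmp/fake-chunk.img bs=1024k count=4
# Unofficial metadata backup: save the first 1MB of the chunk.
# On real hardware this would be something like (assumed device name):
#   dd if=/dev/rsd0a of=softraid-meta.bak bs=1024k count=1
dd if=/tmp/fake-chunk.img of=/tmp/softraid-meta.bak bs=1024k count=1
```

Whether restoring such an image after corruption actually revives the volume is untested here; the usual OpenBSD answer is that backups of the data, not of the header, are what save you.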
Re: proper way to grow softraid partition
On 29.10.2021 15:33, Nick Holland wrote:
> On 10/27/21 1:11 PM, kasak wrote:
>> Hello misc! I want to replace my two 2TB hdd, joined in raid1. I have two
>> 4TB drives, and I want to replace smaller drives with them. it wouldn't
>> be a problem, if i had some spare sata ports, but in my pc i have only
>> one left. So, I can attach only one of this 4 tb drives at the same time.
>> I think, maybe I can attach new 4 tb drive to old raid as a third volume,
>> wait for it "repair",
>
> Unfortunately, unless something changed when I wasn't looking, you can't
> change the number of drives in a softraid RAID1 after creation. I really
> wish you could.
>
>> and then remove 2 tb drives, add one more 4 tb and "repair" raid again.
>> I don't know, will this operation actually grow my partition, or it is a
>> bad idea from the beginning?
>
> Nope, you would end up with a 2T RAID partition on a 4T drive. Which is
> fine, except you didn't achieve your goal.
>
>> Alternate, can i create raid 1 volume from just one drive, rsync files
>> between raids and after add another disk?
>
> Again, you can't change the number of drives in a softraid RAID1 set after
> creation. And you can't change the size of a softraid partition.
>
> What I would (and have) done is this, assuming this is your only computer
> available:
> * Extract both your 2T drives.
> * Insert both 4T drives, build a RAID1 set.
> * Insert ONE of the old 2T drives and ONE of the 4T drives into your
>   system. On boot, you end up with two degraded arrays...but that will
>   work for your purposes!
> * Copy the data from the old disks to the new disks
> * Change fstab
> * Remove the old 2T disk, and replace with the 4T disk left over, rebuild
>   the degraded array onto the 4T disk.
> * DONE!
>
> Now...since you have ONE spare port still, I'd actually cheat and remove
> one 2T disk, and put both new disks in place, build the array, and copy
> over. Fix fstab, remove the old 2T disk, done.

Thank you very much for the detailed explanation! I will go this way!
HOWEVER, something else to consider -- from later messages, sounds like you have a non-RAID boot drive and RAID data drives. I SUSPECT you could build out your new 4T array as a bootable softraid and move your boot drive data AND the 2T of old data all to the one 4T array and still have a lot of new space (a basic OpenBSD install is barely noticeable in a 4T disk!). Now you have redundancy in both boot and data, and one less disk, which will be a small power reduction, and one less point of failure. Nick.
Re: proper way to grow softraid partition
On 10/27/21 1:11 PM, kasak wrote:
> Hello misc! I want to replace my two 2TB hdd, joined in raid1. I have two
> 4TB drives, and I want to replace smaller drives with them. it wouldn't
> be a problem, if i had some spare sata ports, but in my pc i have only
> one left. So, I can attach only one of this 4 tb drives at the same time.
> I think, maybe I can attach new 4 tb drive to old raid as a third volume,
> wait for it "repair",

Unfortunately, unless something changed when I wasn't looking, you can't change the number of drives in a softraid RAID1 after creation. I really wish you could.

> and then remove 2 tb drives, add one more 4 tb and "repair" raid again. I
> don't know, will this operation actually grow my partition, or it is a
> bad idea from the beginning?

Nope, you would end up with a 2T RAID partition on a 4T drive. Which is fine, except you didn't achieve your goal.

> Alternate, can i create raid 1 volume from just one drive, rsync files
> between raids and after add another disk?

Again, you can't change the number of drives in a softraid RAID1 set after creation. And you can't change the size of a softraid partition.

What I would (and have) done is this, assuming this is your only computer available:
* Extract both your 2T drives.
* Insert both 4T drives, build a RAID1 set.
* Insert ONE of the old 2T drives and ONE of the 4T drives into your system. On boot, you end up with two degraded arrays...but that will work for your purposes!
* Copy the data from the old disks to the new disks
* Change fstab
* Remove the old 2T disk, and replace with the 4T disk left over, rebuild the degraded array onto the 4T disk.
* DONE!

Now...since you have ONE spare port still, I'd actually cheat and remove one 2T disk, and put both new disks in place, build the array, and copy over. Fix fstab, remove the old 2T disk, done.

HOWEVER, something else to consider -- from later messages, it sounds like you have a non-RAID boot drive and RAID data drives.
I SUSPECT you could build out your new 4T array as a bootable softraid and move your boot drive data AND the 2T of old data all to the one 4T array and still have a lot of new space (a basic OpenBSD install is barely noticeable in a 4T disk!). Now you have redundancy in both boot and data, and one less disk, which will be a small power reduction, and one less point of failure. Nick.
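The "copy the data" step in Nick's procedure can be sketched with a tar pipe, which preserves ownership, modes and symlinks. The mount points are hypothetical names for wherever the old degraded array and the new one are mounted; the demo below runs the same pipe on scratch directories so it is safe to execute as-is:

```shell
# Copy everything under the "old" tree into the "new" tree, preserving
# permissions. On the real system the two directories would be the mount
# points of the old and new arrays (e.g. /mnt/old and /mnt/new).
mkdir -p /tmp/demo-old /tmp/demo-new
echo "some data" > /tmp/demo-old/file.txt
(cd /tmp/demo-old && tar cf - .) | (cd /tmp/demo-new && tar xpf -)
```

dump(8)/restore(8) are the other traditional OpenBSD choice for copying whole FFS file systems.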
Re: proper way to grow softraid partition
On 27.10.2021 20:41, cho...@jtan.com wrote:
> It's easier by far not to muck about trying to resize partitions. If you
> can mount each drive (old and new) in an operating system that isn't
> using them, then that's your best bet, and that's not so hard to arrange.
> Mount the old partition structure in /old, create new larger partitions
> on the new drive mounted on /new and rsync -a /old/ /new/ (note the
> trailing /).
>
> After that you will need to install the boot code if the drive is used
> for booting, which it probably is. I can't remember how to do that but
> it's no doubt performed in
> https://cvsweb.openbsd.org/src/distrib/miniroot/install.sub or its
> machine-dependent counterpart install.md (location varies). It's also in
> the manpages. installboot(8) looks promising. Sorry, but I'm not going to
> provide instructions to do something I don't remember how to do and
> haven't tested.
>
> If you want to set up RAID or don't want to figure out how to install the
> boot blocks, install anew on the new larger possibly-RAIDed drives,
> install the same set of packages and copy /home and a few files from /etc
> to get a practically-identical installation.
>
> Matthew

The raid is not used for booting, and the main goal is that the raid must stay. I think I have a third option: remove one old drive, make a raid from the two new drives, and rsync the data from the old drive. Is this what you suggest I should do?
Re: proper way to grow softraid partition
It's easier by far not to muck about trying to resize partitions. If you can mount each drive (old and new) in an operating system that isn't using them then that's your best bet and that's not so hard to arrange. Mount the old partition structure in /old, create new larger partitions on the new drive mounted on /new and rsync -a /old/ /new/ (note the trailing /). After that you will need to install the boot code if the drive is used for booting which it probably is. I can't remember how to do that but it's no doubt performed in https://cvsweb.openbsd.org/src/distrib/miniroot/install.sub or its machine-dependent counterpart install.md (location varies). It's also in the manpages. installboot(8) looks promising. Sorry but I'm not going to provide instructions to do something I don't remember how to do and haven't tested. If you want to set up RAID or don't want to figure out how to install the boot blocks, install anew on the new larger possibly-RAIDed drives, install the same set of packages and copy /home and a few files from /etc to get a practically-identical installation. Matthew
proper way to grow softraid partition
Hello misc! I want to replace my two 2TB HDDs, joined in RAID1. I have two 4TB drives, and I want to replace the smaller drives with them. It wouldn't be a problem if I had some spare SATA ports, but in my PC I have only one left, so I can attach only one of these 4TB drives at a time. I think maybe I can attach a new 4TB drive to the old raid as a third volume, wait for it to "repair", then remove the 2TB drives, add one more 4TB drive and "repair" the raid again. I don't know: will this operation actually grow my partition, or is it a bad idea from the beginning? Alternatively, can I create a RAID1 volume from just one drive, rsync files between the raids, and afterwards add another disk?
Re: Inconsistent two-level write speed bouncing on softraid RAID1 SSDs
I decided to talk to the manufacturer's support (Crucial by Micron) about my performance issue. I convinced them that the disks had a problem, so they offered an RMA for my two disks and initiated the procedure from their side. Hoping this helps someone facing a similar situation. Thanks all for your replies. Cheers

PS: I was pleasantly surprised that Crucial's support did not force me to install Windows to run their diag tool, and said they "understood" I was running OpenBSD.

On Wed, 2021-06-09 at 03:45 +0200, xavie...@mailoo.org wrote:
> Hello, There's a strange write speed bounce behavior on my SATA softraid
> RAID1 SSD (Crucial BX500 480GB 3D NAND). Sequential writes start high
> (~450MB/s with dd and a bs of 1M), then after about 30s to 1:30 minutes
> it falls to a low ~7MB/s for one minute, then bounces back to the high
> speed of 450MB/s, and so forth.
>
> Maybe the problem comes from my Crucial BX500 480GB 3D NAND SATA 2.5-inch
> SSDs, which are new. But I'm not 100% sure what's happening really. Maybe
> this would help someone facing a similar situation with these particular
> high / low write speed bounces. I also tried with a second softraid on
> the same machine but with spinning USB disks. No problems so far; the
> write speed is constant. Read speeds are fine and constant on the SSDs as
> well.
>
> Please let me know if there is something I should try to work around or
> identify this problem.
> Reproduction scenario:
>
> note: The test I made to show you used the default 512B block size with
> dd (so the high speed is limited to ~130MB/s and the low speed remains
> around 7MB/s)
>
> - disabled pf and system logs
> - dd if=/dev/zero of=testfile # on /home
> - iostat -w1 sd0 sd1 sd6 # chunk0 chunk1 softraid_volume
>
> See iostat for results
>
> mount:
> /dev/sd6a on / type ffs (local, softdep)
> /dev/sd6h on /home type ffs (local, nodev, nosuid, softdep)
> /dev/sd6e on /tmp type ffs (local, nodev, nosuid, softdep)
> /dev/sd6f on /usr type ffs (local, nodev, softdep)
> /dev/sd6g on /var type ffs (local, nodev, nosuid, softdep)
>
> disklabel:
> # /dev/rsd0c:
> type: SCSI
> disk: SCSI disk
> label: CT480BX500SSD1
> duid: 808fe38d1751a671
> flags:
> bytes/sector: 512
> sectors/track: 63
> tracks/cylinder: 255
> sectors/cylinder: 16065
> cylinders: 58369
> total sectors: 937703088
> boundstart: 64
> boundend: 937697985
> drivedata: 0
>
> 16 partitions:
> #          size     offset  fstype [fsize bsize cpg]
>  a:   937697921         64    RAID
>  c:   937703088          0  unused
>
> # /dev/rsd1c:
> type: SCSI
> disk: SCSI disk
> label: CT480BX500SSD1
> duid: 33c950831897af57
> flags:
> bytes/sector: 512
> sectors/track: 63
> tracks/cylinder: 255
> sectors/cylinder: 16065
> cylinders: 58369
> total sectors: 937703088
> boundstart: 64
> boundend: 937697985
> drivedata: 0
>
> 16 partitions:
> #          size     offset  fstype [fsize bsize cpg]
>  a:   937697921         64    RAID
>  c:   937703088          0  unused
>
> # /dev/rsd6c:
> type: SCSI
> disk: SCSI disk
> label: SR RAID 1
> duid: 1266e4d9a58f149d
> flags:
> bytes/sector: 512
> sectors/track: 63
> tracks/cylinder: 255
> sectors/cylinder: 16065
> cylinders: 58368
> total sectors: 937697393
> boundstart: 64
> boundend: 937681920
> drivedata: 0
>
> 16 partitions:
> #          size     offset  fstype [fsize bsize cpg]
>  a:     2104448         64  4.2BSD   2048 16384 12960 # /
>  b:    33768633    2104512    swap                    # none
>  c:   937697393          0  unused
>  d:     2104480   35873152  4.2BSD   2048 16384 12960
>  e:     8402016   37977632  4.2BSD   2048 16384 12960 # /tmp
>  f:    62926592   46379648  4.2BSD   2048 16384 12960 # /usr
>  g:    62926624  109306240  4.2BSD   2048 16384 12960 # /var
>  h:   765449024  172232896  4.2BSD   4096 32768 26062 # /home
>
> bioctl:
> Volume      Status               Size Device
> softraid0 1 Online      1000170315776 sd7     RAID1
>           0 Online      1000170315776 1:0.0   noencl
>           1 Online      1000170315776 1:1.0   noencl
>
> dd:
> 23679552+0 records in
> 679551+0 records out
> 123930112 bytes transferred in 177.691 secs (68230103 bytes/sec)
>
> corresponding iostat:
>        sd0              sd1              sd6
>  KB/t  t/s  MB/s  KB/t  t/s  MB/s  KB/t  t/s  MB/s
> 30.06   31  0.92 30.12   31  0.92 29.81   32  0.95
> 14.47   17  0.24 14.47   17  0.24 14.47   17  0.24
>  0.00    0  0.00  0.00    0  0.00  0.00    0  0.00
>  0.00    0  0.00  0.00    0  0.00  0.00    0  0.00
>  0.00    0  0.00  0.00    0  0.00  0.00    0  0.00
>  2.00    2  0.00  2.00    2  0.00  2.00    2  0.00
> 16.00    1  0.02 16.00    1  0.02 16.00    1  0.02
> 16.00    1  0.02 16.00    1  0.02 16.00    1  0.02
>  0.00    0  0.00  0.00    0  0.00  0.00    0
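The note above about dd's default 512-byte block size matters when interpreting the numbers: with bs=512, every mebibyte costs about 2000 write(2) calls, so per-call overhead caps the apparent speed well below what bs=1M achieves. A harmless illustration against scratch files (not the SSD), writing the same 1MiB both ways:

```shell
# Same amount of data, very different syscall counts: 2048 small writes
# versus a single 1MiB write. On a real disk the second form usually shows
# far higher sequential throughput.
dd if=/dev/zero of=/tmp/small-bs.bin bs=512   count=2048
dd if=/dev/zero of=/tmp/large-bs.bin bs=1024k count=1
cmp /tmp/small-bs.bin /tmp/large-bs.bin && echo "identical contents"
```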
Re: Inconsistent two-level write speed bouncing on softraid RAID1 SSDs
I don't see how an SSD can be SMR or CMR, as it has no spinning platters. But I can understand that those SSDs' quality can be part of the problem.

On Thu, 2021-06-10 at 12:50 +, Kent Watsen wrote:
> The Crucial BX500 SSD uses SMR technology, which is best used for
> infrequent-write applications.
>
> For general-purpose, and especially NAS, applications, CMR technology
> should be used.
>
> K.
>
>> On Jun 10, 2021, at 6:20 AM, Xavier Sanchez wrote:
>>
>> Hi! Not so surprising news: hardware is the problem.
>>
>> I managed to get one of the two disks apart yesterday and I figured out
>> that those disks were the cause. (both of them)
>>
>> Written from my laptop directly to the device:
>> - good and constant read speed
>> - bouncing 7MB/s to high write speed
>>
>> I did look at the serial numbers; they're the same.
>>
>> Manufacturer's support suggests that if there's no trim, write speed may
>> be impacted (but that much?) and told me to let the disk idle for 6 to 8
>> hours so the internal garbage collector could clean it.
>>
>> I tried that with no luck as well.
>>
>> I read somewhere that issuing a security erase could also help. So I
>> tried issuing the following:
>>
>> # atactl sd0c secsetpass user high
>> User password:
>> Retype user password:
>> atactl: ATA device returned error register 0
>>
>> But any sec* command returned:
>> atactl: ATA device returned error register 0
>>
>> even after a coldboot (non-frozen), despite the device supporting the
>> Security Mode feature set.
>>
>> - Am I attempting to issue the security erase the wrong way?
>>
>> To me it was 0) check if not frozen 2) set user pass 3) issue security
>> erase command with password.
> > > > # atactl sd0c > > Model: CT480BX500SSD1, Rev: M6CR022, Serial #: 2030E408CA88 > > Device type: ATA, fixed > > Cylinders: 16383, heads: 16, sec/track: 63, total sectors: > > 937703088 > > Device capabilities: > > ATA standby timer values > > IORDY operation > > IORDY disabling > > Device supports the following standards: > > ATA-3 ATA-4 ATA-5 ATA-6 ATA-7 ATA-8 ATA-9 ATA-10 > > Master password revision code 0xfffe > > Device supports the following command sets: > > NOP command > > READ BUFFER command > > WRITE BUFFER command > > Host Protected Area feature set > > Read look-ahead > > Write cache > > Power Management feature set > > Security Mode feature set > > SMART feature set > > Flush Cache Ext command > > Flush Cache command > > 48bit address feature set > > Advanced Power Management feature set > > DOWNLOAD MICROCODE command > > Device has enabled the following command sets/features: > > NOP command > > READ BUFFER command > > WRITE BUFFER command > > Host Protected Area feature set > > Read look-ahead > > Write cache > > Power Management feature set > > SMART feature set > > Flush Cache Ext command > > Flush Cache command > > 48bit address feature set > > DOWNLOAD MICROCODE command > > > > > > > On Wed, 2021-06-09 at 03:45 +0200, xavie...@mailoo.org wrote: > > > Hello, There's a strange write speed bounce behavior on my SATA > > > softraid > > > RAID1 SSD (Crucial BX500 480GB 3D NAND). Sequential writes starts > > > high > > > (~450MB/s with dd and a bs of 1M) then after about 30s to 1:30 > > > minute > > > it > > > falls to a low ~7MB/s for one minute, then bounce back to the > > > high > > > speed > > > of 450MB/s and so forth. > > > > > > Maybe the problem come from my Crucial BX500 480GB 3D NAND SATA > > > 2.5- > > > inch > > > SSD which are new. But I'm not 100% sure what's happening really. > > > Maybe > > > this would help someone facing a similar situation with this > > > particular > > > high / low write speed bounces. 
> > > I also tried with a second softraid on the same machine but with
> > > spinning USB disks. No problems so far,
Re: Inconsistent two-level write speed bouncing on softraid RAID1 SSDs
All right, thanks for pointing out the details and the procedure, seems legit secfreeze is issued by default. On Thu, 2021-06-10 at 07:08 -0700, Bryan Linton wrote: > On 2021-06-10 11:49:59, Xavier Sanchez wrote: > > > > Read somewhere that issuing a security erase could also help. So I > > tried issuing the following: > > > > # atactl sd0c secsetpass user high > > User password: > > Retype user password: > > atactl: ATA device returned error register 0 > > > > But any sec* command returned: > > atactl: ATA device returned error register 0 > > > > even after a coldboot ( non-frozen ), despite the devices supports > > the > > Security Mode feature set > > > > - Am I attempting to issue the security erase the wrong way ? > > > > This is not possible on OpenBSD. It's actually a feature, not a > bug. OpenBSD issues the secfreeze command at the driver level > when disks attach. > > From atactl(8): > > secfreeze > Prevents changes to passwords until a following power > cycle. > The purpose of this command is to prevent password > setting > attacks on the security system. After command > completion any > other commands that update the device lock mode will be > aborted. > > > You can see in src/sys/dev/ata/atascsi.c:408 and > src/sys/dev/ata/wd.c:305 that the same command is issued to all > sd(4) and wd(4) drives as a security measure. > > You're going to need to boot from a live CD/USB in order to set a > password on the drive. > > You should also double-check that your BIOS doesn't have a setting > to disable this too. I've heard that some BIOSes have a toggle > for this to help mitigate the above-mentioned password setting > attacks. > > Also, another poster mentioned that these are SMR drives. If > that's the case, then the "stuttering" speeds you described is > normal for them. SMR drives are good for storing infrequently > accessed files. They're big and they're cheap, but they're not > always very fast. 
> > Like the old saying goes when it comes to hard drives, "Pick any > two: cheap, fast, big". SMR drives write data in "stripes". If > you change even one bit of one byte anywhere in that stripe, the > drive has to read the entire stripe into memory, change what was > changed, then re-write the entire stripe. > > This is a limitation of the technology they use. It allows very > high density drives, but has the drawback of slowing things down a > lot whenever the drive has to re-write a stripe of data. > > > I've personally found that SMR drives are good enough for my use > case, but I wouldn't recommend them for a live database where > latency is much more critical. > > It seems like the new hierarchy is now: > > SSD >> PMR > SMR > > when it comes to speed. The inverse is true when it comes to > capacity. > > So to summarize, your drive may be working exactly as intended. >
Re: Inconsistent two-level write speed bouncing on softraid RAID1 SSDs
>> The Crucial BX500 SSD uses SMR technology, which is best used for >> infrequent-write applications. >> For general-purpose, and especially NAS, applications, CMR technology should >> be used. > > hmm, does SMR stand for something other than "shingled magnetic recording" > related to storage? that only relates to HD not SSD. You're right. I was confused because I was recently burned by both SMR and MX500 issues, and hence conflated them after a quick "BX500 SMR" search seemed to return hits. I recall now that the MX500 SSDs were really quite amazing, but I couldn't use them because they don't report ATA TRIM in a way that is understood by the LSI HBAs I have. K.
Re: Inconsistent two-level write speed bouncing on softraid RAID1 SSDs
On 2021-06-10, Kent Watsen wrote: > The Crucial BX500 SSD uses SMR technology, which is best used for > infrequent-write applications. > For general-purpose, and especially NAS, applications, CMR technology should > be used. hmm, does SMR stand for something other than "shingled magnetic recording" related to storage? that only relates to HD not SSD. >> On Jun 10, 2021, at 6:20 AM, Xavier Sanchez wrote: >> >> Written from my laptop directly to the device and >> - good and constant read speed >> - bouncing 7MB/s to high write speed Bouncing between speeds is not impossible, SSDs often have faster cache and do flash erase/programming in the background, until the cache is full. But 7MB/s seems a bit too slow even then.
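Stuart's point about SSD write caches can be put into a toy model: writes proceed at the fast cache speed until the cache fills, then drop to the sustained flash-programming speed, so long sequential writes bounce between the two rates. All figures below (cache size, speeds) are hypothetical illustrations, not measurements of the BX500 drives in this thread.

```python
# Toy model of an SSD with a fast write cache: writes run at cache speed
# until the cache fills, then at the sustained programming speed.
# Numbers are hypothetical, chosen only to echo the ~450 MB/s fast and
# ~7 MB/s slow phases reported in the thread.

def avg_write_speed(total_mb, cache_mb, fast_mbps, slow_mbps):
    """Average MB/s for a sequential write of total_mb megabytes."""
    if total_mb <= cache_mb:
        return fast_mbps                            # whole write fits in cache
    fast_time = cache_mb / fast_mbps                # seconds spent at cache speed
    slow_time = (total_mb - cache_mb) / slow_mbps   # remainder at sustained speed
    return total_mb / (fast_time + slow_time)

# A 10 GB write against a hypothetical 4 GB cache:
speed = avg_write_speed(10_000, 4_000, 450, 7)
```

With numbers like these, any write much larger than the cache averages barely above the sustained speed, which is why a long dd run spends most of its time in the slow phase.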
Re: Inconsistent two-level write speed bouncing on softraid RAID1 SSDs
On 2021-06-10 11:49:59, Xavier Sanchez wrote: > > Read somewhere that issuing a security erase could also help. So I > tried issuing the following: > > # atactl sd0c secsetpass user high > User password: > Retype user password: > atactl: ATA device returned error register 0 > > But any sec* command returned: > atactl: ATA device returned error register 0 > > even after a coldboot ( non-frozen ), despite the devices supports the > Security Mode feature set > > - Am I attempting to issue the security erase the wrong way ? > This is not possible on OpenBSD. It's actually a feature, not a bug. OpenBSD issues the secfreeze command at the driver level when disks attach. From atactl(8): secfreeze Prevents changes to passwords until a following power cycle. The purpose of this command is to prevent password setting attacks on the security system. After command completion any other commands that update the device lock mode will be aborted. You can see in src/sys/dev/ata/atascsi.c:408 and src/sys/dev/ata/wd.c:305 that the same command is issued to all sd(4) and wd(4) drives as a security measure. You're going to need to boot from a live CD/USB in order to set a password on the drive. You should also double-check that your BIOS doesn't have a setting to disable this too. I've heard that some BIOSes have a toggle for this to help mitigate the above-mentioned password setting attacks. Also, another poster mentioned that these are SMR drives. If that's the case, then the "stuttering" speeds you described are normal for them. SMR drives are good for storing infrequently accessed files. They're big and they're cheap, but they're not always very fast. Like the old saying goes when it comes to hard drives, "Pick any two: cheap, fast, big". SMR drives write data in "stripes". If you change even one bit of one byte anywhere in that stripe, the drive has to read the entire stripe into memory, change what was changed, then re-write the entire stripe.
This is a limitation of the technology they use. It allows very high density drives, but has the drawback of slowing things down a lot whenever the drive has to re-write a stripe of data. I've personally found that SMR drives are good enough for my use case, but I wouldn't recommend them for a live database where latency is much more critical. It seems like the new hierarchy is now: SSD >> PMR > SMR when it comes to speed. The inverse is true when it comes to capacity. So to summarize, your drive may be working exactly as intended. -- Bryan
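Bryan's read-modify-write description can be put into rough numbers. The sketch below models the worst case, where every changed megabyte lands in a different shingled zone, so each zone must be read in full and rewritten in full. The 256 MB zone size is a commonly cited figure but is hypothetical here, and, as noted elsewhere in the thread, this applies to SMR hard disks, not to the SSDs actually being debugged.

```python
import math

# Worst-case write amplification on an SMR drive: each dirtied zone
# ("stripe" in Bryan's wording) is read entirely and written back
# entirely, so 2 * zone_mb of data moves per touched zone.

def write_amplification(changed_mb, zone_mb):
    """Data moved per MB of changes, assuming each change hits its own zone."""
    zones_touched = math.ceil(changed_mb / zone_mb)
    data_moved_mb = zones_touched * 2 * zone_mb
    return data_moved_mb / changed_mb

# Changing 1 MB inside a 256 MB zone moves 512 MB of data.
worst = write_amplification(1, 256)
```

Large sequential writes that fill whole zones amortize this down to 2x or less, which is why SMR drives handle bulk archival writes far better than small random updates.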
Re: Inconsistent two-level write speed bouncing on softraid RAID1 SSDs
The Crucial BX500 SSD uses SMR technology, which is best used for infrequent-write applications. For general-purpose, and especially NAS, applications, CMR technology should be used. K. > On Jun 10, 2021, at 6:20 AM, Xavier Sanchez wrote: > > Hi ! not so surprising news: hardware is the problem > > I managed to get one of the two disks apart yesterday and I figured out > that those disks was in cause. (both of them) > > Written from my laptop directly to the device and > - good and constant read speed > - bouncing 7MB/s to high write speed > > I did looked at the serial number, they're the same. > > Manufacturer's support suggests that if there's no trim, write speed > may be impacted ( but so much ? ) and told to let the disk idle for 6 > to 8 hours so the internal garbage collector could clean it. > > I tried that with no luck as well. > > Read somewhere that issuing a security erase could also help. So I > tried issuing the following: > > # atactl sd0c secsetpass user high > User password: > Retype user password: > atactl: ATA device returned error register 0 > > But any sec* command returned: > atactl: ATA device returned error register 0 > > even after a coldboot ( non-frozen ), despite the devices supports the > Security Mode feature set > > - Am I attempting to issue the security erase the wrong way ? > > To me it was 0) check if not frozen 2) set user pass 3) issue security > erase command with password. 
> > # atactl sd0c > Model: CT480BX500SSD1, Rev: M6CR022, Serial #: 2030E408CA88 > Device type: ATA, fixed > Cylinders: 16383, heads: 16, sec/track: 63, total sectors: 937703088 > Device capabilities: >ATA standby timer values >IORDY operation >IORDY disabling > Device supports the following standards: > ATA-3 ATA-4 ATA-5 ATA-6 ATA-7 ATA-8 ATA-9 ATA-10 > Master password revision code 0xfffe > Device supports the following command sets: >NOP command >READ BUFFER command >WRITE BUFFER command >Host Protected Area feature set >Read look-ahead >Write cache >Power Management feature set >Security Mode feature set >SMART feature set >Flush Cache Ext command >Flush Cache command >48bit address feature set >Advanced Power Management feature set >DOWNLOAD MICROCODE command > Device has enabled the following command sets/features: >NOP command >READ BUFFER command >WRITE BUFFER command >Host Protected Area feature set >Read look-ahead >Write cache >Power Management feature set >SMART feature set >Flush Cache Ext command >Flush Cache command >48bit address feature set >DOWNLOAD MICROCODE command > > >> On Wed, 2021-06-09 at 03:45 +0200, xavie...@mailoo.org wrote: >> Hello, There's a strange write speed bounce behavior on my SATA >> softraid >> RAID1 SSD (Crucial BX500 480GB 3D NAND). Sequential writes starts >> high >> (~450MB/s with dd and a bs of 1M) then after about 30s to 1:30 minute >> it >> falls to a low ~7MB/s for one minute, then bounce back to the high >> speed >> of 450MB/s and so forth. >> >> Maybe the problem come from my Crucial BX500 480GB 3D NAND SATA 2.5- >> inch >> SSD which are new. But I'm not 100% sure what's happening really. >> Maybe >> this would help someone facing a similar situation with this >> particular >> high / low write speed bounces. I also tried with a second softraid >> on >> the same machine but with spinning USB disks. No problems so far, the >> write speed is constant. Read speed are fine and constant on SSD as >> well. 
>> >> Please let me know if there is something I should try to work around or >> identify this >> problem. >> >> Reproduction scenario: >> >> note: The test I made to show you used the default 512B block size >> with dd (so >> the high speed is limited to ~130MB/s and the low speed remains >> around 7MB/s) >> >> - disabled pf and system logs >> - dd if=/dev/zero of=testfile # on /home >> - iostat -w1 sd0 sd1 sd6 # chunk0 chunk1 softraid_volume >> >> See iostat: for results >> >> mount: >> /dev/sd6a on / type ffs (local, softdep) >> /dev/sd6h on /home type ffs (local, nodev, nosuid, softdep) >> /dev/sd6e on /tmp type ffs (local, nodev, nosuid, softdep) >> /dev
Re: Inconsistent two-level write speed bouncing on softraid RAID1 SSDs
Hi! Not so surprising news: hardware is the problem. I managed to get one of the two disks apart yesterday and figured out that those disks were the cause (both of them). Writing from my laptop directly to the device gave - good and constant read speed - bouncing between 7MB/s and high write speed I did look at the serial number; they're the same. Manufacturer's support suggests that if there's no trim, write speed may be impacted ( but so much ? ) and told me to let the disk idle for 6 to 8 hours so the internal garbage collector could clean it. I tried that with no luck as well. Read somewhere that issuing a security erase could also help. So I tried issuing the following: # atactl sd0c secsetpass user high User password: Retype user password: atactl: ATA device returned error register 0 But any sec* command returned: atactl: ATA device returned error register 0 even after a coldboot ( non-frozen ), despite the device supporting the Security Mode feature set - Am I attempting to issue the security erase the wrong way ? To me it was 1) check if not frozen, 2) set user pass, 3) issue the security erase command with the password.
# atactl sd0c Model: CT480BX500SSD1, Rev: M6CR022, Serial #: 2030E408CA88 Device type: ATA, fixed Cylinders: 16383, heads: 16, sec/track: 63, total sectors: 937703088 Device capabilities: ATA standby timer values IORDY operation IORDY disabling Device supports the following standards: ATA-3 ATA-4 ATA-5 ATA-6 ATA-7 ATA-8 ATA-9 ATA-10 Master password revision code 0xfffe Device supports the following command sets: NOP command READ BUFFER command WRITE BUFFER command Host Protected Area feature set Read look-ahead Write cache Power Management feature set Security Mode feature set SMART feature set Flush Cache Ext command Flush Cache command 48bit address feature set Advanced Power Management feature set DOWNLOAD MICROCODE command Device has enabled the following command sets/features: NOP command READ BUFFER command WRITE BUFFER command Host Protected Area feature set Read look-ahead Write cache Power Management feature set SMART feature set Flush Cache Ext command Flush Cache command 48bit address feature set DOWNLOAD MICROCODE command On Wed, 2021-06-09 at 03:45 +0200, xavie...@mailoo.org wrote: > Hello, There's a strange write speed bounce behavior on my SATA > softraid > RAID1 SSD (Crucial BX500 480GB 3D NAND). Sequential writes starts > high > (~450MB/s with dd and a bs of 1M) then after about 30s to 1:30 minute > it > falls to a low ~7MB/s for one minute, then bounce back to the high > speed > of 450MB/s and so forth. > > Maybe the problem come from my Crucial BX500 480GB 3D NAND SATA 2.5- > inch > SSD which are new. But I'm not 100% sure what's happening really. > Maybe > this would help someone facing a similar situation with this > particular > high / low write speed bounces. I also tried with a second softraid > on > the same machine but with spinning USB disks. No problems so far, the > write speed is constant. Read speed are fine and constant on SSD as > well. > > Please let me know if there something I should try to workaroud or > identify this > problem. 
> > Reproduction scenario: > > note: The test I made to show you used the default 512B block size > with dd (so > the high speed is limited to ~130MB/s and the low speed remains > around 7MB/s) > > - disabled pf and system logs > - dd if=/dev/zero of=testfile # on /home > - iostat -w1 sd0 sd1 sd6 # chunk0 chunk1 softraid_volume > > See iostat: for results > > mount: > /dev/sd6a on / type ffs (local, softdep) > /dev/sd6h on /home type ffs (local, nodev, nosuid, softdep) > /dev/sd6e on /tmp type ffs (local, nodev, nosuid, softdep) > /dev/sd6f on /usr type ffs (local, nodev, softdep) > /dev/sd6g on /var type ffs (local, nodev, nosuid, softdep) > > disklabel: > # /dev/rsd0c: > type: SCSI > disk: SCSI disk > label: CT480BX500SSD1 > duid: 808fe38d1751a671 > flags: > bytes/sector: 512 > sectors/track: 63 > tracks/cylinder: 255 > sectors/cylinder: 16065 > cylinders: 58369 > total sectors: 937703088 > boundstart: 64 > boundend: 937697985 > drivedata: 0 > > 16 partitions: > # size offset fstype [fsize bsize cpg] > a: 937697921 64 RAID > c: 937703088 0 unused > # /dev/rsd1c: > type: SCSI > disk: SCSI disk > label: CT480BX500SSD1 > duid: 33c950831897af57 > flags: > bytes/sector: 512 > sectors/track: 63 > tracks/cylinder: 25
Inconsistent two-level write speed bouncing on softraid RAID1 SSDs
Hello, There's a strange write speed bounce behavior on my SATA softraid RAID1 SSD (Crucial BX500 480GB 3D NAND). Sequential writes start high (~450MB/s with dd and a bs of 1M), then after about 30s to 1:30 they fall to a low ~7MB/s for one minute, then bounce back to the high speed of 450MB/s, and so forth. Maybe the problem comes from my Crucial BX500 480GB 3D NAND SATA 2.5-inch SSDs, which are new. But I'm not 100% sure what's happening really. Maybe this will help someone facing a similar situation with these particular high / low write speed bounces. I also tried with a second softraid on the same machine but with spinning USB disks. No problems so far; the write speed is constant. Read speeds are fine and constant on the SSDs as well. Please let me know if there is something I should try to work around or identify this problem. Reproduction scenario: note: The test I made to show you used the default 512B block size with dd (so the high speed is limited to ~130MB/s and the low speed remains around 7MB/s) - disabled pf and system logs - dd if=/dev/zero of=testfile # on /home - iostat -w1 sd0 sd1 sd6 # chunk0 chunk1 softraid_volume See iostat: for results mount: /dev/sd6a on / type ffs (local, softdep) /dev/sd6h on /home type ffs (local, nodev, nosuid, softdep) /dev/sd6e on /tmp type ffs (local, nodev, nosuid, softdep) /dev/sd6f on /usr type ffs (local, nodev, softdep) /dev/sd6g on /var type ffs (local, nodev, nosuid, softdep) disklabel: # /dev/rsd0c: type: SCSI disk: SCSI disk label: CT480BX500SSD1 duid: 808fe38d1751a671 flags: bytes/sector: 512 sectors/track: 63 tracks/cylinder: 255 sectors/cylinder: 16065 cylinders: 58369 total sectors: 937703088 boundstart: 64 boundend: 937697985 drivedata: 0 16 partitions: # size offset fstype [fsize bsize cpg] a: 937697921 64 RAID c: 937703088 0 unused # /dev/rsd1c: type: SCSI disk: SCSI disk label: CT480BX500SSD1 duid: 33c950831897af57 flags: bytes/sector: 512 sectors/track: 63 tracks/cylinder: 255 sectors/cylinder: 
16065 cylinders: 58369 total sectors: 937703088 boundstart: 64 boundend: 937697985 drivedata: 0 16 partitions: # size offset fstype [fsize bsize cpg] a: 937697921 64 RAID c: 937703088 0 unused # /dev/rsd6c: type: SCSI disk: SCSI disk label: SR RAID 1 duid: 1266e4d9a58f149d flags: bytes/sector: 512 sectors/track: 63 tracks/cylinder: 255 sectors/cylinder: 16065 cylinders: 58368 total sectors: 937697393 boundstart: 64 boundend: 937681920 drivedata: 0 16 partitions: # size offset fstype [fsize bsize cpg] a: 2104448 64 4.2BSD 2048 16384 12960 # / b: 33768633 2104512 swap # none c: 937697393 0 unused d: 2104480 35873152 4.2BSD 2048 16384 12960 e: 8402016 37977632 4.2BSD 2048 16384 12960 # /tmp f: 62926592 46379648 4.2BSD 2048 16384 12960 # /usr g: 62926624 109306240 4.2BSD 2048 16384 12960 # /var h: 765449024 172232896 4.2BSD 4096 32768 26062 # /home bioctl: Volume Status Size Device softraid0 1 Online 1000170315776 sd7 RAID1 0 Online 1000170315776 1:0.0 noencl 1 Online 1000170315776 1:1.0 noencl dd: 23679552+0 records in 23679551+0 records out 12123930112 bytes transferred in 177.691 secs (68230103 bytes/sec) corresponding iostat: sd0 sd1 sd6 KB/t t/s MB/s KB/t t/s MB/s KB/t t/s MB/s 30.06 31 0.92 30.12 31 0.92 29.81 32 0.95 14.47 17 0.24 14.47 17 0.24 14.47 17 0.24 0.00 0 0.00 0.00 0 0.00 0.00 0 0.00 0.00 0 0.00 0.00 0 0.00 0.00 0 0.00 0.00 0 0.00 0.00 0 0.00 0.00 0 0.00 2.00 2 0.00 2.00 2 0.00 2.00 2 0.00 16.00 1 0.02 16.00 1 0.02 16.00 1 0.02 16.00 1 0.02 16.00 1 0.02 16.00 1 0.02 0.00 0 0.00 0.00 0 0.00 0.00 0 0.00 0.00 0 0.00 0.00 0 0.00 0.00 0 0.00 32.00 250 7.80 32.00 250 7.80 32.00 250 7.80 DD START 32.00 5116 159.88 32.00 5116 159.88 32.00 5116 159.88 31.95 4656 145.30 31.95 4655 145.27 31.95 4655 145.27 31.99 4501 140.60 31.99 4502 140.63 31.99 4502 140.63 32.00 4446 138.94 32.00 4446 138.94 32.00 4446 138.94 32.00 4303 134.47 32.00 4302 134.44 32.00 4303 
134.47 32.00 4313 134.77 32.00 4313 134.77 32.00 4313 134.77 32.00 4380 136.88 32.00 4380 136.88 32.00 4380 136.88 32.00 4316 134.87 32.00 4316 134.87 32.00 4316 134.87 32.00 4251 132.84 32.00 4252 132.87 32.00 4252 132.87 sd0 sd1 sd6 KB/t t/s MB/s KB/t t/s MB/s KB/t t/s MB/s 32.00 4185 130.79 32.00 4185 130.79 32.00 4185 130.79 32.00 4289 134.02 32.00 4289 134.02 32.00 4289 134.02 32.00 4304 134.50 32.00 4303 134.47 32.00 4304 134.50 32.00 4261 133.17 32.00 4261 133.17 32.00 4261 133.17 31.98 4264 133.19 31.98 4264 133.19 31.98 4264 133.19 31.95 4193 130.85 31.95 4193 130.85 31.95 4193 130.85 31.99 4227 132.06 31.99 4228 132.10 31.99 4228 132.10 32.00 4270 133.44 32.00 4270 133.44 32.00 4270 133.44 31.99 4192 130.96 31.99 4192 130.96 31.99 4192 130.96 32.00 4221 131.91 32.00 4221 131.91 32.00 4221 131.91 32.00 4058 126.81 32.00 4057 126.78 32.00 4058 126.81 31.99 4190 130.91 31.99 4190 130.91 31.99 4190 130.91 31.99 4204 131.32 31.99 4204 131.32 31.99 42
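As a sanity check on the iostat transcript above: the MB/s column is simply KB-per-transfer times transfers-per-second, divided by 1024, and this reproduces the posted samples exactly.

```python
# Cross-check iostat's MB/s column against its KB/t and t/s columns
# (iostat counts a megabyte as 1024 KB).

def iostat_mbps(kb_per_transfer, transfers_per_sec):
    return kb_per_transfer * transfers_per_sec / 1024

fast = iostat_mbps(32.00, 4446)   # matches the reported 138.94 MB/s
slow = iostat_mbps(32.00, 250)    # the ~7.8 MB/s slow phase
```

The constant 32 KB transfer size also suggests the 512B dd writes are being clustered by the buffer cache before reaching the disks, so the dd block size is not what limits the slow phase.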
Re: OpenBSD 6.9 RAID 1C (encrypted raid1) softraid discipline can't boot
On Wed, Apr 28, 2021 at 04:38:35PM +0800, Fung wrote: > OpenBSD 6.9 RAID 1C (encrypted raid1) softraid discipline can't boot > > OpenBSD 6.9 (GENERIC.MP) #473: Mon Apr 19 10:40:28 MDT 2021 > > one disk, shell create RAID CRYPTO, install system ok, boot ok > two disk, shell create RAID 1, install system ok, boot ok > > two disk, shell create RAID 1C ok, install system ok, but boot failed in > starting There is no boot support for RAID 1C yet.
OpenBSD 6.9 RAID 1C (encrypted raid1) softraid discipline can't boot
OpenBSD 6.9 RAID 1C (encrypted raid1) softraid discipline can't boot OpenBSD 6.9 (GENERIC.MP) #473: Mon Apr 19 10:40:28 MDT 2021 one disk, shell create RAID CRYPTO, install system ok, boot ok two disks, shell create RAID 1, install system ok, boot ok two disks, shell create RAID 1C ok, install system ok, but boot fails at startup boot messages: Using drive 0, partition 3. Loading.. probing: pc0 com0 com1 mem... disk: hd0+ hd1+ >> OpenBSD/amd64 BOOT 3.53 open(sr0a:/etc/boot.conf): can't read disk label boot> cannot open sr0a:/etc/random.seed: can't read disk label booting sr0a:/bsd: open sr0a:/bsd: can't read disk label failed(100). will try /bsd