On Mon, Sep 28, 2020 at 08:25:34AM -0600, Theo de Raadt wrote:
> ...
> So we are at an impasse. The recommended solution is for people to stop
> making sysupgrade-incompatible layouts in the future, and to consider
> repairing their incompatible layouts from the past.
>
> if sysupgrade doesn't work, people have the old ways of doing things.
> doctor doctor it hurts when i layout my disk strangely...

Hi there,
So, I think I have a workaround for my issue with sysupgrade and, from my
side, everything is more or less hunky dory ... but as Theo wrote, I now
have "consider repairing" in the back of my mind.

So I just have to ask ... what would be the supported/approved disk layout
for OpenBSD 6.8 on my Intel 8i5 NUC with the following storage?

1. A 2TB Samsung SSD, currently identified as:

   sd0 at scsibus1 targ 2 lun 0: <ATA, Samsung SSD 860, RVM0> naa.5002538e4109632a
   sd0: 1953514MB, 512 bytes/sector, 4000797360 sectors, thin

2. A 512GB Samsung M.2 NVMe device, currently identified as:

   sd1 at scsibus2 targ 1 lun 0: <NVMe, Samsung SSD 970, 1B2Q>
   sd1: 476940MB, 512 bytes/sector, 976773168 sectors

It's my main desktop system, running XFCE. Currently df shows:

Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/sd1a     1005M    314M    640M    33%    /
mfs:6361       7.7G    331M    7.0G     4%    /tmp
/dev/sd1e     58.3G   91.3M   55.3G     0%    /var
/dev/sd1f      2.0G    1.2G    686M    64%    /usr
/dev/sd1g     1005M    251M    703M    26%    /usr/X11R6
/dev/sd1h     19.7G   11.0G    7.7G    59%    /usr/local
/dev/sd1k      5.9G    2.0K    5.6G     0%    /usr/obj
/dev/sd1j      2.0G    2.0K    1.9G     0%    /usr/src
/dev/sd1l      295G   10.0G    271G     4%    /fast
/dev/sd0h      1.8T    964G    758G    56%    /space

(Yeah, yeah, when I installed I made "/var" way too big for some reason.)

There is a swap area on sd1b of 64GB (twice the size of the RAM). At
install time I thought about not allocating any swap at all, but I wasn't
sure whether that was a good idea.

The "/space" mount contains essentially all the non-OS stuff in
subdirectories, e.g. "home", "images", "videos", "music", "netapp". It will
eventually be just over 1TB (and then keep growing :). It's too big to fit
on the NVMe stick. The "/fast" mount is used for working/output data from
apps, e.g. Wireshark, InfluxDB, Telegraf, Grafana, NetApp.

How would 6.8 lay out these drives differently if I were to install it from
scratch, for example? Output of disklabel is below.

Feel free to ignore this email since, if I am honest, I am unlikely to
start moving >1TB of data around for fun (maybe with the next hardware
refresh). But I would still be interested to hear how it would be done
differently.

Cheers,
Robb.

disklabel sd0

# /dev/rsd0c:
type: SCSI
disk: SCSI disk
label: Samsung SSD 860
duid: 7a1775fef773535e
flags:
bytes/sector: 512
sectors/track: 63
tracks/cylinder: 255
sectors/cylinder: 16065
cylinders: 249038
total sectors: 4000797360
boundstart: 64
boundend: 4000797297
drivedata: 0

16 partitions:
#                size           offset  fstype [fsize bsize   cpg]
  a:          2097152             1024  4.2BSD   2048 16384 12958
  c:       4000797360                0  unused
  h:       3998699008          2098176  4.2BSD   8192 65536 52270 # /space

disklabel sd1

# /dev/rsd1c:
type: SCSI
disk: SCSI disk
label: Samsung SSD 970
duid: 281ef747da03afe7
flags:
bytes/sector: 512
sectors/track: 63
tracks/cylinder: 255
sectors/cylinder: 16065
cylinders: 60801
total sectors: 976773168
boundstart: 1024
boundend: 976773105
drivedata: 0

16 partitions:
#                size           offset  fstype [fsize bsize   cpg]
  a:          2097152             1024  4.2BSD   2048 16384 12958 # /
  b:         67324128          2098176    swap                    # none
  c:        976773168                0  unused
  d:          8388608         69422304  4.2BSD   2048 16384 12958
  e:        124326848         77810912  4.2BSD   2048 16384 12958 # /var
  f:          4194304        202137760  4.2BSD   2048 16384 12958 # /usr
  g:          2097152        206332064  4.2BSD   2048 16384 12958 # /usr/X11R6
  h:         41943040        208429216  4.2BSD   2048 16384 12958 # /usr/local
  i:              960               64   MSDOS
  j:          4194304        250372256  4.2BSD   2048 16384 12958 # /usr/src
  k:         12582912        254566560  4.2BSD   2048 16384 12958 # /usr/obj
  l:        629145536        267149504  4.2BSD   4096 32768 26062 # /fast
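
P.S. If I am reading disklabel(8) right, I could probably preview what the
installer's automatic allocation would propose for the NVMe disk myself,
without writing anything, by running the label editor in no-change mode.
The -n flag and the editor commands below are just my reading of the man
page, so treat this as a sketch rather than gospel:

  $ doas disklabel -E -n sd1
  > A      # auto-allocate the recommended layout (in memory only, due to -n)
  > p g    # print the proposed partitions in gigabytes
  > x      # exit the editor without saving anything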