Re: automounter (amd) local file system issue

2020-01-12 Thread Nick Holland
On 2020-01-12 15:39, Antoine Jacoutot wrote:
> Sounds like something is keeping your fs busy. Could be gio-kqueue, do you 
> have glib2 installed?

That would be my first guess, too -- it's not unmounting because it
shouldn't.  But ... this is a VERY single purpose machine (backups
via rsync --link-dest), and the only third party package is rsync
and my scripts to do the backups.  X is installed, but not running.

$ pkg_info
intel-firmware-20191115p0v0 microcode update binaries for Intel CPUs
inteldrm-firmware-20181218 firmware binary images for inteldrm(4) driver
quirks-3.216 exceptions to pkg_add rules
rsync-3.1.3 mirroring/synchronization over low bandwidth links
vmm-firmware-1.11.0p2 firmware binary images for vmm(4) driver

I was careful to access the amd mounts with ls while sitting in my
home directory, which is NOT part of the amd setup, so I didn't have
a task under a doas or su camped out on the amd volumes.

I've tested a lot of ways, but I just did an upgrade to -current and
immediately "looked" at the amd mount, so even my backup scripts
haven't run.

Plus -- as a control, /v/2 has absolutely nothing on it, and it
behaves the same way.  Not that something couldn't camp out on the
empty file system, but not much reason for something to do so.
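(For anyone else chasing a busy-filesystem suspect: fstat(1) in base can
show which processes hold files open on a given mount.  A sketch, assuming
the amd mount points above:)

```
# list processes with files open on the filesystems behind the amd mounts;
# -f restricts fstat(1) output to the filesystems containing the named files
fstat -f /v/1 /v/2
```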

Thanks for looking!

Nick.

 
> —
> Antoine
> 
>> On 13 Jan 2020, at 06:01, Nick Holland  wrote:
>> 
>> Hiya.
>> 
>> I'd like to use amd(8) to automatically mount and dismount local file
>> systems.  The file systems in question are big, lots of complicated
>> links, lots of files, and take a while to fsck if the power goes out
>> unexpectedly, and are used relatively rarely (maybe an hour a day).
>> Sounds like a perfect job for amd(8)!
>> 
>> The file systems in question are mounted to /v/1 and /v/2
>> 
>> I've got the following set up:
>> 
>>  $ cat /etc/rc.conf.local
>>  amd_flags=-l syslog -x all -c 10 -w 10
>>  lockd_flags=
>>  portmap_flags=
>> 
>>  $ cat /etc/amd/master   
>>  /v  amd.v
>> 
>>  $ cat /etc/amd/amd.v   
>>  1   type:=ufs;dev:=/dev/sd2i
>>  2   type:=ufs;dev:=/dev/sd2j
>> 
>> 
>> AND it works!
>> 
>> Start the system up, and I get this:
>> 
>>  $ df
>>  Filesystem  512-blocks  Used Avail Capacity  Mounted on
>>  /dev/sd2a  101167620381275728421%/
>>  /dev/sd2h 1031983648   9803800 0%/home
>>  /dev/sd2f  413682820   3929968 0%/tmp
>>  /dev/sd2d  8264188   2369920   548106030%/usr
>>  /dev/sd2e  2065116  2104   1959760 0%/usr/local
>>  /dev/sd2g  4136828 64920   3865068 2%/var
>>  amd:365830 0 0   100%/v
>> 
>>  $ ls /v/1/
>> [...expected output from files and directories on that file system...]
>> 
>>  $ df
>>  Filesystem  1K-blocks  Used Avail Capacity  Mounted on
>>  /dev/sd2a  505838 8360239694617%/
>>  /dev/sd2h 515991824   4901900 0%/home
>>  /dev/sd2f 206841410   1964984 0%/tmp
>>  /dev/sd2d 4132094   1280264   264522633%/usr
>>  /dev/sd2e 1032558  1052979880 0%/usr/local
>>  /dev/sd2g 2068414 32572   1932422 2%/var
>>  amd:92953   0 0 0   100%/v
>>  /dev/sd2i   2106117872 298739480 170207250415%/tmp_mnt/dbu/v/1
>> 
>> Success!!
>> well...no.  It seems it never unmounts the amd file systems.  And that is
>> basically the point of this exercise -- to increase the odds that a FS
>> isn't mounted when the power goes out.
>> 
>> Am I doing something wrong?  Do I have inaccurate expectations of
>> what amd(8) does with local file systems? 
>> 
>> Nick.
>> 
>> OpenBSD 6.6-current (GENERIC.MP) #599: Sat Jan 11 18:52:00 MST 2020
>>dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP
>> real mem = 2038652928 (1944MB)
>> avail mem = 1964462080 (1873MB)
>> mpath0 at root
>> scsibus0 at mpath0: 256 targets
>> mainbus0 at root
>> bios0 at mainbus0: SMBIOS rev. 2.8 @ 0xebd30 (52 entries)
>> bios0: vendor American Megatrends Inc. version "1020" date 12/15/2014
>> bios0: PowerSpec V400
>> acpi0 at bios0: ACPI 5.0
>> acpi0: sleep states S0 S3 S4 S5
>> acpi0: tables DSDT FACP APIC FPDT MSDM MCFG LPIT SLIC HPET SSDT SSDT SSDT 
>> UEFI
>> acpi0: wakeup devices XHC1(S3) PXSX(S4) PXSX(S4) PXSX(S4) PXSX(S4) PWRB(S0)
>> acpitimer0 at acpi0: 3579545 Hz, 24 bits
>> acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
>> cpu0 at mainbus0: apid 0 (boot processor)
>> cpu0: Intel(R) Pentium(R) CPU J2900 @ 2.41GHz, 2417.12 MHz, 06-37-08
>> cpu0: 
>> 

Re: What is you motivational to use OpenBSD

2020-01-12 Thread Aaron Mason
On Thu, Aug 29, 2019 at 12:40 AM Mohamed salah wrote:
>
> I want to put something up for discussion: what's your motivation to use
> OPENBSD and not other BSDs or GNU/Linux? If something doesn't work
> fine on OpenBSD and you love this OS so much, what will you do?

For most of my purposes, it Just Works(TM). The firewall rules are
user readable and easy to understand, most of the out of the box
software with configs follows the same easy-to-read scheme, and it
doesn't load anything out of the box that I don't need, it leaves that
decision to me and never insults my intelligence.  And those man
pages...

When I changed jobs and needed a service desk suite, I opted for
Request Tracker and rolled up a Hyper-V VM running OpenBSD 6.1.  Even
though no doco exists for this, I was able to make my way well enough
that I started to document my process as best I could on the httpd
GitHub repo wiki.

As a case study, at previous jobs I ran ManageEngine ServiceDesk Plus
on Windows Server, and the whole thing required 2GB of RAM minimum.  My
pokey little RT server has 512MB of RAM, and that's all it has ever
needed.

-- 
Aaron Mason - Programmer, open source addict
I've taken my software vows - for beta or for worse



Re: Userland PCI drivers possible in OpenBSD?

2020-01-12 Thread Andrew Tipton
Joseph Mayer wrote:
> Is there some way I can implement PCI drivers in userland in OpenBSD?
> 
> On a quick Internet search, see some discussion for Linux and NetBSD
> e.g. [1] however nothing in OpenBSD.
> 
> I may be interested in operating some PCI device manually from my own
> program (run as root or user) in OpenBSD, and I can see this being of
> interest to others also, asking therefore.

As others have mentioned, poking at PCI configuration space and raw
physical memory from userspace is wildly insecure.

However, contrary to popular belief, you can in fact poke at devices
from userspace on OpenBSD.  The primary user of this special ability is
the X server, which has a plethora of userspace drivers for
graphics cards.  (It's almost as dangerous as the inteldrm(4) mess!)

While you really really don't want to do this in production, it's handy
for experimenting with PCI devices on a development machine.  Without
further ado, here's how to do it:

  1. Use the 'machdep memory' command in the bootloader to carve out a
 hole in the system's physical memory map.
  2. Set the kern.securelevel sysctl to -1.  (Told you it's a bad idea.)
  3. Set the machdep.allowaperture sysctl to 2.
  4. Become root.
  5. Open /dev/pci%d to access PCI bus number %d, and issue PCIOCREAD/
 PCIOCWRITE ioctls to access PCI configuration space.  See pci(4)
 for details and the pcidump(8) source for usage examples.
  6. Map the device's base address register(s) to somewhere in physical
 memory space that isn't in use, such as your memory hole.
  7. Open /dev/xf86 and mmap() the section of physical address space
 that you have mapped your device at.  As long as the kernel hasn't
 "claimed" those addresses (i.e. you're mapping the memory hole that
 you created at boot time) the mmap() will succeed.  See xf86(4) for
 a bit more explanation.
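As a concrete (and hedged) sketch of steps 2-5 on an assumed amd64 box --
note that kern.securelevel cannot be lowered on a running system, so it
has to be set before the kernel raises it at boot:

```
# /etc/rc.securelevel (read at boot; securelevel can only be raised at runtime):
#   securelevel=-1
# /etc/sysctl.conf:
#   machdep.allowaperture=2
# then, after a reboot, as root, dump PCI configuration space of
# bus 0, device 2, function 0 with pcidump(8), which issues the
# PCIOCREAD ioctl against /dev/pci0 under the hood:
pcidump -xxx 0:2:0
```

The bus:dev:func triple is made up; pick a real device from an
un-flagged pcidump(8) listing first.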

Once you've been successful at exploring your shiny new PCI device, and
understand how it works, you can write a proper kernel driver for it so
that it can actually be used on a normal system and by non-root users.

Normal systems run at securelevel=1 (or 2) for good reason, and ideally
are also running with machdep.allowaperture=0.

(I shall now don my flameproof suit.)


Cheers
-Andrew



Re: softraid(4) RAID1 tools or experimental patches for consistency checking

2020-01-12 Thread Karel Gardas



Tried something like that in the past:
https://marc.info/?l=openbsd-tech&m=144217941801350&w=2


It worked kind of OK except for the performance.  The problem is that the
data layout turns a read op into 2x read ops, and a write op into a read
op + 2x write ops, which is not a speed winner.  Caching of checksum
blocks helped a lot in some cases, but was not submitted since you would
ideally also need readahead, and that was not done at all.  The other perf
issue is that putting this slow virtual drive implementation under the
already slow FFS is a recipe for disappointment from the perf point of
view.  Certainly no speed demon, and certainly in a completely different
league than the checksumming-capable filesystems of the open-source world
(ZFS, btrfs, bcachefs -- no, HAMMER2 is not there, since it checksums only
metadata and not user data, and can't self-heal).


Yes, you are right that ideally the drive would be fs-aware to optimize
rebuilds, but this may be worked around by a more clever layout that also
marks used blocks.  Anyway, that (and the above) are IMHO the reasons why
development is done on checksumming filesystems instead of checksumming
software RAIDs.  I read a paper somewhere about Linux's mdadm hacked to do
checksums, and the result was pretty much the same (IIRC!), i.e. perf
disappointment.  If you are curious, google for it.


So, work on it if you can tolerate the speed...

On 1/12/20 6:46 AM, Constantine A. Murenin wrote:

Dear misc@,

I'm curious if anyone has any sort of tools / patches to verify the consistency 
of softraid(4) RAID1 volumes?


If one adds a new disc (i.e. chunk) to a volume with the RAID1 discipline, the 
resilvering process of softraid(4) will read data from one of the existing 
discs, and write it back to all the discs, ridding you of the artefacts that 
could potentially be used to reconstruct the flipped bits correctly.

Additionally, this resilvering process is also really slow.  Per my notes from 
a few years ago, softraid has a fixed block size of 64KB (MAXPHYS); if we're 
talking about spindle-based HDDs, they only support like 80 random IOPS at 7,2k 
RPM, half of which we gotta use for reads, half for writes; this means it'll 
take (1TB/64KB/(80/s/2)) = 4,5 days to resilver each 1TB of an average 7,2k RPM 
HDD; compare this with sequential resilvering, which will take (1TB/120MB/s) = 
2,3 hours; the reality may vary from these imprecise calculations, but these 
numbers do seem representative of the experience.

The above behaviour is defined here:

http://bxr.su/o/sys/dev/softraid_raid1.c#sr_raid1_rw

369			} else {
370				/* writes go on all working disks */
371				chunk = i;
372				scp = sd->sd_vol.sv_chunks[chunk];
373				switch (scp->src_meta.scm_status) {
374				case BIOC_SDONLINE:
375				case BIOC_SDSCRUB:
376				case BIOC_SDREBUILD:
377					break;
378
379				case BIOC_SDHOTSPARE: /* should never happen */
380				case BIOC_SDOFFLINE:
381					continue;
382
383				default:
384					goto bad;
385				}
386			}


What we could do is something like the following, to pretend that any online 
volume is not available for writes when the wu (Work Unit) we're handling is 
part of the rebuild process from http://bxr.su/o/sys/dev/softraid.c#sr_rebuild, 
mimicking the BIOC_SDOFFLINE behaviour for BIOC_SDONLINE chunks (discs) when 
the SR_WUF_REBUILD flag is set for the workunit:

		switch (scp->src_meta.scm_status) {
		case BIOC_SDONLINE:
+			if (wu->swu_flags & SR_WUF_REBUILD)
+				continue;	/* must be same as BIOC_SDOFFLINE case */
+			/* FALLTHROUGH */
		case BIOC_SDSCRUB:
		case BIOC_SDREBUILD:


Obviously, there are both pros and cons to such an approach; I've tested a 
variation of the above in production (not a fan of weeks-long 
random-read/write rebuilds); but use this at your own risk, obviously.

...

But back to the original problem, this consistency check would have to be 
file-system-specific, because we gotta know which blocks of softraid have and 
have not been used by the filesystem, as softraid itself is 
filesystem-agnostic.  I'd imagine it'll be somewhat similar in concept to the 
fstrim(8) utility on GNU/Linux -- 
http://man7.org/linux/man-pages/man8/fstrim.8.html -- and would also open the 
door for the cron-based TRIM support as well (it would also have to know the 
softraid format itself, too).  Any pointers or hints where to get started, or 
whether anyone has worked on this in the past?


Cheers,
Constantine.
http://cm.su/





Re: automounter (amd) local file system issue

2020-01-12 Thread Antoine Jacoutot
Sounds like something is keeping your fs busy. Could be gio-kqueue, do you have 
glib2 installed?


—
Antoine

> On 13 Jan 2020, at 06:01, Nick Holland  wrote:
> 
> Hiya.
> 
> I'd like to use amd(8) to automatically mount and dismount local file
> systems.  The file systems in question are big, lots of complicated
> links, lots of files, and take a while to fsck if the power goes out
> unexpectedly, and are used relatively rarely (maybe an hour a day).
> Sounds like a perfect job for amd(8)!
> 
> The file systems in question are mounted to /v/1 and /v/2
> 
> I've got the following set up:
> 
>  $ cat /etc/rc.conf.local
>  amd_flags=-l syslog -x all -c 10 -w 10
>  lockd_flags=
>  portmap_flags=
> 
>  $ cat /etc/amd/master   
>  /v  amd.v
> 
>  $ cat /etc/amd/amd.v   
>  1   type:=ufs;dev:=/dev/sd2i
>  2   type:=ufs;dev:=/dev/sd2j
> 
> 
> AND it works!
> 
> Start the system up, and I get this:
> 
>  $ df
>  Filesystem  512-blocks  Used Avail Capacity  Mounted on
>  /dev/sd2a  101167620381275728421%/
>  /dev/sd2h 1031983648   9803800 0%/home
>  /dev/sd2f  413682820   3929968 0%/tmp
>  /dev/sd2d  8264188   2369920   548106030%/usr
>  /dev/sd2e  2065116  2104   1959760 0%/usr/local
>  /dev/sd2g  4136828 64920   3865068 2%/var
>  amd:365830 0 0   100%/v
> 
>  $ ls /v/1/
> [...expected output from files and directories on that file system...]
> 
>  $ df
>  Filesystem  1K-blocks  Used Avail Capacity  Mounted on
>  /dev/sd2a  505838 8360239694617%/
>  /dev/sd2h 515991824   4901900 0%/home
>  /dev/sd2f 206841410   1964984 0%/tmp
>  /dev/sd2d 4132094   1280264   264522633%/usr
>  /dev/sd2e 1032558  1052979880 0%/usr/local
>  /dev/sd2g 2068414 32572   1932422 2%/var
>  amd:92953   0 0 0   100%/v
>  /dev/sd2i   2106117872 298739480 170207250415%/tmp_mnt/dbu/v/1
> 
> Success!!
> well...no.  It seems it never unmounts the amd file systems.  And that is
> basically the point of this exercise -- to increase the odds that a FS
> isn't mounted when the power goes out.
> 
> Am I doing something wrong?  Do I have inaccurate expectations of
> what amd(8) does with local file systems? 
> 
> Nick.
> 
> OpenBSD 6.6-current (GENERIC.MP) #599: Sat Jan 11 18:52:00 MST 2020
>dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP
> real mem = 2038652928 (1944MB)
> avail mem = 1964462080 (1873MB)
> mpath0 at root
> scsibus0 at mpath0: 256 targets
> mainbus0 at root
> bios0 at mainbus0: SMBIOS rev. 2.8 @ 0xebd30 (52 entries)
> bios0: vendor American Megatrends Inc. version "1020" date 12/15/2014
> bios0: PowerSpec V400
> acpi0 at bios0: ACPI 5.0
> acpi0: sleep states S0 S3 S4 S5
> acpi0: tables DSDT FACP APIC FPDT MSDM MCFG LPIT SLIC HPET SSDT SSDT SSDT UEFI
> acpi0: wakeup devices XHC1(S3) PXSX(S4) PXSX(S4) PXSX(S4) PXSX(S4) PWRB(S0)
> acpitimer0 at acpi0: 3579545 Hz, 24 bits
> acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
> cpu0 at mainbus0: apid 0 (boot processor)
> cpu0: Intel(R) Pentium(R) CPU J2900 @ 2.41GHz, 2417.12 MHz, 06-37-08
> cpu0: 
> FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,SSE4.1,SSE4.2,MOVBE,POPCNT,DEADLINE,RDRAND,NXE,RDTSCP,LONG,LAHF,3DNOWP,PERF,ITSC,TSC_ADJUST,SMEP,ERMS,MD_CLEAR,IBRS,IBPB,STIBP,SENSOR,ARAT,MELTDOWN
> cpu0: 1MB 64b/line 16-way L2 cache
> cpu0: smt 0, core 0, package 0
> mtrr: Pentium Pro MTRR support, 8 var ranges, 88 fixed ranges
> cpu0: apic clock running at 83MHz
> cpu0: mwait min=64, max=64, C-substates=0.2.0.0.0.0.3.3, IBE
> cpu1 at mainbus0: apid 2 (application processor)
> cpu1: Intel(R) Pentium(R) CPU J2900 @ 2.41GHz, 2416.67 MHz, 06-37-08
> cpu1: 
> FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,SSE4.1,SSE4.2,MOVBE,POPCNT,DEADLINE,RDRAND,NXE,RDTSCP,LONG,LAHF,3DNOWP,PERF,ITSC,TSC_ADJUST,SMEP,ERMS,MD_CLEAR,IBRS,IBPB,STIBP,SENSOR,ARAT,MELTDOWN
> cpu1: 1MB 64b/line 16-way L2 cache
> cpu1: smt 0, core 1, package 0
> cpu2 at mainbus0: apid 4 (application processor)
> cpu2: Intel(R) Pentium(R) CPU J2900 @ 2.41GHz, 2416.69 MHz, 06-37-08
> cpu2: 
> FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,SSE4.1,SSE4.2,MOVBE,POPCNT,DEADLINE,RDRAND,NXE,RDTSCP,LONG,LAHF,3DNOWP,PERF,ITSC,TSC_ADJUST,SMEP,ERMS,MD_CLEAR,IBRS,IBPB,STIBP,SENSOR,ARAT,MELTDOWN
> cpu2: 1MB 64b/line 16-way L2 cache
> cpu2: 

automounter (amd) local file system issue

2020-01-12 Thread Nick Holland
Hiya.

I'd like to use amd(8) to automatically mount and dismount local file
systems.  The file systems in question are big, lots of complicated
links, lots of files, and take a while to fsck if the power goes out
unexpectedly, and are used relatively rarely (maybe an hour a day).
Sounds like a perfect job for amd(8)!

The file systems in question are mounted to /v/1 and /v/2

I've got the following set up:

  $ cat /etc/rc.conf.local
  amd_flags=-l syslog -x all -c 10 -w 10
  lockd_flags=
  portmap_flags=

  $ cat /etc/amd/master   
  /v  amd.v

  $ cat /etc/amd/amd.v   
  1   type:=ufs;dev:=/dev/sd2i
  2   type:=ufs;dev:=/dev/sd2j


AND it works!

Start the system up, and I get this:

  $ df
  Filesystem  512-blocks  Used Avail Capacity  Mounted on
  /dev/sd2a  101167620381275728421%/
  /dev/sd2h 1031983648   9803800 0%/home
  /dev/sd2f  413682820   3929968 0%/tmp
  /dev/sd2d  8264188   2369920   548106030%/usr
  /dev/sd2e  2065116  2104   1959760 0%/usr/local
  /dev/sd2g  4136828 64920   3865068 2%/var
  amd:365830 0 0   100%/v

  $ ls /v/1/
[...expected output from files and directories on that file system...]

  $ df
  Filesystem  1K-blocks  Used Avail Capacity  Mounted on
  /dev/sd2a  505838 8360239694617%/
  /dev/sd2h 515991824   4901900 0%/home
  /dev/sd2f 206841410   1964984 0%/tmp
  /dev/sd2d 4132094   1280264   264522633%/usr
  /dev/sd2e 1032558  1052979880 0%/usr/local
  /dev/sd2g 2068414 32572   1932422 2%/var
  amd:92953   0 0 0   100%/v
  /dev/sd2i   2106117872 298739480 170207250415%/tmp_mnt/dbu/v/1

Success!!
well...no.  It seems it never unmounts the amd file systems.  And that is
basically the point of this exercise -- to increase the odds that a FS
isn't mounted when the power goes out.

Am I doing something wrong?  Do I have inaccurate expectations of
what amd(8) does with local file systems? 

Nick.

OpenBSD 6.6-current (GENERIC.MP) #599: Sat Jan 11 18:52:00 MST 2020
dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP
real mem = 2038652928 (1944MB)
avail mem = 1964462080 (1873MB)
mpath0 at root
scsibus0 at mpath0: 256 targets
mainbus0 at root
bios0 at mainbus0: SMBIOS rev. 2.8 @ 0xebd30 (52 entries)
bios0: vendor American Megatrends Inc. version "1020" date 12/15/2014
bios0: PowerSpec V400
acpi0 at bios0: ACPI 5.0
acpi0: sleep states S0 S3 S4 S5
acpi0: tables DSDT FACP APIC FPDT MSDM MCFG LPIT SLIC HPET SSDT SSDT SSDT UEFI
acpi0: wakeup devices XHC1(S3) PXSX(S4) PXSX(S4) PXSX(S4) PXSX(S4) PWRB(S0)
acpitimer0 at acpi0: 3579545 Hz, 24 bits
acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
cpu0 at mainbus0: apid 0 (boot processor)
cpu0: Intel(R) Pentium(R) CPU J2900 @ 2.41GHz, 2417.12 MHz, 06-37-08
cpu0: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,SSE4.1,SSE4.2,MOVBE,POPCNT,DEADLINE,RDRAND,NXE,RDTSCP,LONG,LAHF,3DNOWP,PERF,ITSC,TSC_ADJUST,SMEP,ERMS,MD_CLEAR,IBRS,IBPB,STIBP,SENSOR,ARAT,MELTDOWN
cpu0: 1MB 64b/line 16-way L2 cache
cpu0: smt 0, core 0, package 0
mtrr: Pentium Pro MTRR support, 8 var ranges, 88 fixed ranges
cpu0: apic clock running at 83MHz
cpu0: mwait min=64, max=64, C-substates=0.2.0.0.0.0.3.3, IBE
cpu1 at mainbus0: apid 2 (application processor)
cpu1: Intel(R) Pentium(R) CPU J2900 @ 2.41GHz, 2416.67 MHz, 06-37-08
cpu1: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,SSE4.1,SSE4.2,MOVBE,POPCNT,DEADLINE,RDRAND,NXE,RDTSCP,LONG,LAHF,3DNOWP,PERF,ITSC,TSC_ADJUST,SMEP,ERMS,MD_CLEAR,IBRS,IBPB,STIBP,SENSOR,ARAT,MELTDOWN
cpu1: 1MB 64b/line 16-way L2 cache
cpu1: smt 0, core 1, package 0
cpu2 at mainbus0: apid 4 (application processor)
cpu2: Intel(R) Pentium(R) CPU J2900 @ 2.41GHz, 2416.69 MHz, 06-37-08
cpu2: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,SSE4.1,SSE4.2,MOVBE,POPCNT,DEADLINE,RDRAND,NXE,RDTSCP,LONG,LAHF,3DNOWP,PERF,ITSC,TSC_ADJUST,SMEP,ERMS,MD_CLEAR,IBRS,IBPB,STIBP,SENSOR,ARAT,MELTDOWN
cpu2: 1MB 64b/line 16-way L2 cache
cpu2: smt 0, core 2, package 0
cpu3 at mainbus0: apid 6 (application processor)
cpu3: Intel(R) Pentium(R) CPU J2900 @ 2.41GHz, 2416.68 MHz, 06-37-08
cpu3: 

softraid(4) RAID1 tools or experimental patches for consistency checking

2020-01-12 Thread Constantine A. Murenin
Dear misc@,

I'm curious if anyone has any sort of tools / patches to verify the consistency 
of softraid(4) RAID1 volumes?


If one adds a new disc (i.e. chunk) to a volume with the RAID1 discipline, the 
resilvering process of softraid(4) will read data from one of the existing 
discs, and write it back to all the discs, ridding you of the artefacts that 
could potentially be used to reconstruct the flipped bits correctly.

Additionally, this resilvering process is also really slow.  Per my notes from 
a few years ago, softraid has a fixed block size of 64KB (MAXPHYS); if we're 
talking about spindle-based HDDs, they only support like 80 random IOPS at 7,2k 
RPM, half of which we gotta use for reads, half for writes; this means it'll 
take (1TB/64KB/(80/s/2)) = 4,5 days to resilver each 1TB of an average 7,2k RPM 
HDD; compare this with sequential resilvering, which will take (1TB/120MB/s) = 
2,3 hours; the reality may vary from these imprecise calculations, but these 
numbers do seem representative of the experience.
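The back-of-the-envelope numbers above can be checked mechanically; a
small shell sketch with the same assumptions (64KB blocks, 80 random IOPS
split evenly between reads and writes, 120MB/s sequential):

```shell
# back-of-envelope resilver estimates for 1 TB
blocks=$((1000000000000 / 65536))            # 64KB blocks per TB, ~15.3M
random_s=$((blocks / 40))                    # 40 effective write IOPS; ~4.4 days
seq_s=$((1000000000000 / 120000000))         # sequential at 120 MB/s; ~2.3 hours
echo "random rebuild:     ${random_s} s"
echo "sequential rebuild: ${seq_s} s"
```

The floor-division results land close to the "4,5 days" and "2,3 hours"
figures quoted above.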

The above behaviour is defined here:

http://bxr.su/o/sys/dev/softraid_raid1.c#sr_raid1_rw

369			} else {
370				/* writes go on all working disks */
371				chunk = i;
372				scp = sd->sd_vol.sv_chunks[chunk];
373				switch (scp->src_meta.scm_status) {
374				case BIOC_SDONLINE:
375				case BIOC_SDSCRUB:
376				case BIOC_SDREBUILD:
377					break;
378
379				case BIOC_SDHOTSPARE: /* should never happen */
380				case BIOC_SDOFFLINE:
381					continue;
382
383				default:
384					goto bad;
385				}
386			}


What we could do is something like the following, to pretend that any online 
volume is not available for writes when the wu (Work Unit) we're handling is 
part of the rebuild process from http://bxr.su/o/sys/dev/softraid.c#sr_rebuild, 
mimicking the BIOC_SDOFFLINE behaviour for BIOC_SDONLINE chunks (discs) when 
the SR_WUF_REBUILD flag is set for the workunit:

		switch (scp->src_meta.scm_status) {
		case BIOC_SDONLINE:
+			if (wu->swu_flags & SR_WUF_REBUILD)
+				continue;	/* must be same as BIOC_SDOFFLINE case */
+			/* FALLTHROUGH */
		case BIOC_SDSCRUB:
		case BIOC_SDREBUILD:


Obviously, there are both pros and cons to such an approach; I've tested a 
variation of the above in production (not a fan of weeks-long 
random-read/write rebuilds); but use this at your own risk, obviously.

...

But back to the original problem, this consistency check would have to be 
file-system-specific, because we gotta know which blocks of softraid have and 
have not been used by the filesystem, as softraid itself is 
filesystem-agnostic.  I'd imagine it'll be somewhat similar in concept to the 
fstrim(8) utility on GNU/Linux -- 
http://man7.org/linux/man-pages/man8/fstrim.8.html -- and would also open the 
door for the cron-based TRIM support as well (it would also have to know the 
softraid format itself, too).  Any pointers or hints where to get started, or 
whether anyone has worked on this in the past?


Cheers,
Constantine.
http://cm.su/



Re: dhcpd and unbound on a small LAN

2020-01-12 Thread Marcus MERIGHI
Morning!

What I have not seen mentioned:

dhcpd.conf -> "deny unknown-clients;"

Beware if you use static leases as already mentioned: dhcpd does
*not* feed the IPs into its PF tables when it hands the IP out to the
client.

If you do:

host foobar { hardware ethernet a8:34:6a:e1:1d:1c; }

with "deny unknown-clients" directive, then the IP is taken from the
"range" pool but only for known MACs.

See net/arpd and net/arpwatch packages(7)!

As for your hosts(5) versus unbound(8) problem, I've the following:

$ whence vihosts
'doas vi /etc/hosts; hoststounbound'

$ whence hoststounbound
'grep -v -e ^# -e ^$ /etc/hosts | hoststounbound.sh hosts > \
  /var/unbound/etc/localzone.hosts.conf; reload-unbound'

$ whence reload-unbound
'doas unbound-control -c /var/unbound/etc/unbound.conf reload'

"hoststounbound.sh" is a script that parses hosts(5) lines and outputs a
valid unbound.conf(5) config. feedback, improvements, all welcome:

#!/bin/sh -eu
_zone=${1:-"hosts"}
_ttl=${2:-"3600"}

_ip=""
_names=""
_name=""
_line=""
_word=""

print "server:\n"
print "local-zone: \"${_zone}\" transparent\n"

while read _line; do
_ip=""
_names=""
for _word in $_line; do
if [[ "X${_word}" == X"#"* ]]; then
break
elif [[ -z $_ip ]]; then
_ip="${_word}"
else
_names="${_names}${_word} "
fi
done
#[[ "X${_ip}" == X"127.0.0.1" || "X${_ip}" == X"::1" ]] && continue
a="A"
[[ "X${_ip}" == X*":"* ]] && a=""
for _name in ${_names}; do
[[ ${_name%%.*} == "*" ]] && { _name=${_name#*.}; \
  print "local-zone: \"${_name}.\" redirect"; }
print "local-data: \"${_name}. ${_ttl} ${a} ${_ip}\""
[[ "X${_ip}" == X"0.0.0.0" ]] || \
  print "local-data-ptr: \"${_ip} ${_ttl} ${_name}\"\n"
done
done
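A much smaller (and much less complete) portable sketch of the same
hosts(5)-to-unbound idea, assuming plain IPv4 entries and skipping the
wildcard, IPv6, and PTR handling the script above does:

```shell
# minimal /etc/hosts -> unbound local-data converter (IPv4 A records only;
# a sketch -- comment lines and IPv6 entries are simply skipped)
awk '/^[0-9]/ && NF >= 2 {
	for (i = 2; i <= NF; i++)
		printf "local-data: \"%s. 3600 A %s\"\n", $i, $1
}' /etc/hosts
```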

Marcus

pipat...@gmail.com (Anders Andersson), 2020.01.06 (Mon) 13:24 (CET):
> I'm in the process of replacing an aging OpenWRT device on my home LAN
> with an apu4d4 running OpenBSD as my personal router.
> 
> I would like to use unbound as a caching DNS server for my local
> hosts, but I'm trying to figure out how to handle local hostnames. It
> seems like a common scenario but I can't find a solution that feels
> like the "right" way. I have two problems, one is trivial compared to
> the other.
> 
> 
> My first and very minor issue is that I would like to register my
> static hosts in a more convenient way than what's currently offered by
> unbound. From what I understand you would configure your local hosts
> something like this:
> 
> local-zone: "home.lan." static
> local-data: "laptop.home.lan. IN A 10.0.0.2"
> local-data-ptr: "10.0.0.2  laptop.home.lan"
> 
> Every time information has to be entered twice there is room for error
> and inconsistencies, so preferably this list should be automatically
> generated from a simpler file, maybe /etc/hosts. I can of course
> easily write such a script, but I'm wondering if there might be a
> standard, go-to way of doing this.
> 
> 
> 
> My second and more difficult issue is that I can't seem to find a way
> to feed information from the DHCP server into unbound, so that locally
> assigned hosts can be queried by their hostnames. To clarify with an
> example:
> 
> 1. I install a new system and in the installation procedure I name it "alice".
> 2. "alice" asks for and receives an IP number from my DHCP server.
> 3. Every other machine can now connect to "alice" by name, assuming
> that "alice" informed the DHCP server of its name when asking for an
> address.
> 
> Currently this works because OpenWRT is using dnsmasq which is both a
> caching DNS server and a DHCP server, so the left hand knows what the
> right hand is doing. How can I solve this in OpenBSD base without
> jumping through hoops?
> 
> Right now I'm considering something that monitors dhcpd.leases for
> changes and updates a running unbound using unbound-control(8) but I
> don't feel confident enough writing such a tool that does not miss a
> lot of corner cases and handle startup/shutdown gracefully. I'm also
> thinking that it can't be such an unusual use case, so someone surely
> must have written such a tool already. I just haven't found any in my
> search.
> 
> Or am I doing this the wrong way? I've now read about things like mDNS
> and Zeroconf and Avahi and I'm just getting more and more confused.
> Ideas are welcome!