Re: mkuboot.8: Add missing arm64 architecture

2021-05-31 Thread Jonathan Gray
On Tue, Jun 01, 2021 at 01:43:32AM +0200, Leon Fischer wrote:
> The mkuboot(8) man page was not updated after this commit:
> 
> RCS file: /cvs/src/usr.sbin/mkuboot/mkuboot.c,v
> 
> revision 1.7
> date: 2016/12/20 11:27:11;  author: jsg;  state: Exp;  lines: +3 -1;  
> commitid: fELL92HBrGHkYVvc;
> Add the u-boot arm64 architecture number and map it to "aarch64" to
> match OpenBSD/arm64 MACHINE_ARCH.
> 
> ok patrick@
> 
> 
> Here's a patch to add it to the supported architectures list.

thanks, committed

> 
> This man page is still missing a few things: what is "infile", what do
> the types mean, an example for installing the image, etc.

Before there were armv7/arm64 EFI bootloaders, kernels used to require
a U-Boot header and had to reside on a FAT/ext filesystem, with boot
scripts created by something like
'mkuboot -t script -a arm -o linux boot.cmd boot.scr', boot.scr also
being placed on the FAT/ext filesystem and boot.cmd being a text file
with U-Boot commands.

Now that U-Boot supports UEFI and generic distro boot there isn't a
need to use mkuboot for the most part.
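For reference, the 64-byte legacy image header that mkuboot writes can be
sketched in Python. The field layout and constants below follow U-Boot's
image.h as I recall it (the arm64 architecture number, 22, is the one the
commit above maps to "aarch64"), so treat the specifics as assumptions
rather than mkuboot's exact output:

```python
import struct
import zlib

IH_MAGIC = 0x27051956    # U-Boot legacy image magic
IH_OS_OPENBSD = 1
IH_ARCH_ARM64 = 22       # the value mapped to "aarch64"
IH_TYPE_KERNEL = 2       # 'mkuboot -t script' would use 6 instead
IH_COMP_NONE = 0

def mkuboot_header(data, load=0, ep=0, os_=IH_OS_OPENBSD,
                   arch=IH_ARCH_ARM64, typ=IH_TYPE_KERNEL, name=b"bsd"):
    """Build a 64-byte U-Boot legacy image header (fields big-endian)."""
    dcrc = zlib.crc32(data) & 0xffffffff
    # magic, hcrc (zero for now), time, size, load, ep, dcrc,
    # then os/arch/type/comp bytes and a 32-byte name.
    hdr = struct.pack(">7I4B32s", IH_MAGIC, 0, 0, len(data), load, ep,
                      dcrc, os_, arch, typ, IH_COMP_NONE,
                      name.ljust(32, b"\0"))
    # header CRC is computed with the hcrc field itself zeroed
    hcrc = zlib.crc32(hdr) & 0xffffffff
    return hdr[:4] + struct.pack(">I", hcrc) + hdr[8:]
```

A boot script image would carry the same header with the type byte set to
script instead of kernel.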

> 
> Index: mkuboot.8
> ===
> RCS file: /cvs/src/usr.sbin/mkuboot/mkuboot.8,v
> retrieving revision 1.1
> diff -u -p -r1.1 mkuboot.8
> --- mkuboot.8 30 May 2013 19:17:15 -  1.1
> +++ mkuboot.8 31 May 2021 23:30:54 -
> @@ -65,6 +65,7 @@ The following arguments are valid as the
>  .Ar arch
>  parameter:
>  .Bd -unfilled -offset indent -compact
> +aarch64
>  alpha
>  amd64
>  arm
> 
> 



pcidevs + azalia: patch for new intel audio

2021-05-31 Thread Ashton Fagg
My new Intel Z590-based machine seems to have some different kind of
Intel audio device onboard.

I couldn't find very much online about it (none of the usual PCI ID
databases seem to have it yet). The only really useful thing I
found was this:

https://github.com/torvalds/linux/commit/f84d3a1ec375e46a55cc3ba85c04272b24bd3921#diff-bfe681fff464a07274400d493ba696cc6e10649a993ae7c1cfc1c29a106feda0

This doesn't give much info but seems to indicate that it's a variant of
some existing chip.

I gave this the not very descriptive name
"PCI_PRODUCT_INTEL_500SERIES_HDA_2", since that's about all I could come
up with; I'm assuming it's a variant of
"PCI_PRODUCT_INTEL_500SERIES_HDA".

A patch which defines the device in pcidevs and tells azalia how to
configure it is attached. Playback was the only thing I could readily
test, and that's working - so my itch here has been scratched.

I've also attached a dmesg output and a pcidump output (dmesg has also
been sent to dmesg@).

Feedback greatly welcomed.

Before:

azalia0 at pci0 dev 31 function 3 vendor "Intel", unknown product 0xf0c8 rev 
0x11: msi
azalia0: codecs: Realtek/0x0897, 0x/0x, using Realtek/0x0897
audio0 at azalia0

After:

azalia0 at pci0 dev 31 function 3 "Intel 500 Series HD Audio" rev 0x11: msi
azalia0: codecs: Realtek/0x0897, 0x/0x, using Realtek/0x0897
audio0 at azalia0


Index: sys/dev/pci/azalia.c
===
RCS file: /cvs/src/sys/dev/pci/azalia.c,v
retrieving revision 1.262
diff -u -p -u -p -r1.262 azalia.c
--- sys/dev/pci/azalia.c	30 May 2021 02:54:36 -	1.262
+++ sys/dev/pci/azalia.c	1 Jun 2021 01:20:42 -
@@ -470,6 +470,7 @@ azalia_configure_pci(azalia_t *az)
 	case PCI_PRODUCT_INTEL_400SERIES_LP_HDA:
 	case PCI_PRODUCT_INTEL_495SERIES_LP_HDA:
 	case PCI_PRODUCT_INTEL_500SERIES_HDA:
+	case PCI_PRODUCT_INTEL_500SERIES_HDA_2:
 	case PCI_PRODUCT_INTEL_500SERIES_LP_HDA:
 	case PCI_PRODUCT_INTEL_C600_HDA:
 	case PCI_PRODUCT_INTEL_C610_HDA_1:
Index: sys/dev/pci/pcidevs
===
RCS file: /cvs/src/sys/dev/pci/pcidevs,v
retrieving revision 1.1970
diff -u -p -u -p -r1.1970 pcidevs
--- sys/dev/pci/pcidevs	19 May 2021 05:20:48 -	1.1970
+++ sys/dev/pci/pcidevs	1 Jun 2021 01:20:42 -
@@ -5371,6 +5371,7 @@ product INTEL 500SERIES_PCIE_22	0x43c5	5
 product INTEL 500SERIES_PCIE_23	0x43c6	500 Series PCIE
 product INTEL 500SERIES_PCIE_24	0x43c7	500 Series PCIE
 product INTEL 500SERIES_HDA	0x43c8	500 Series HD Audio
+product INTEL 500SERIES_HDA_2	0xf0c8	500 Series HD Audio
 product INTEL 500SERIES_THC_0	0x43d0	500 Series THC
 product INTEL 500SERIES_THC_1	0x43d1	500 Series THC
 product INTEL 500SERIES_AHCI_1	0x43d2	500 Series AHCI
Index: sys/dev/pci/pcidevs.h
===
RCS file: /cvs/src/sys/dev/pci/pcidevs.h,v
retrieving revision 1.1964
diff -u -p -u -p -r1.1964 pcidevs.h
--- sys/dev/pci/pcidevs.h	19 May 2021 05:21:24 -	1.1964
+++ sys/dev/pci/pcidevs.h	1 Jun 2021 01:20:42 -
@@ -5376,6 +5376,7 @@
 #define	PCI_PRODUCT_INTEL_500SERIES_PCIE_23	0x43c6		/* 500 Series PCIE */
 #define	PCI_PRODUCT_INTEL_500SERIES_PCIE_24	0x43c7		/* 500 Series PCIE */
 #define	PCI_PRODUCT_INTEL_500SERIES_HDA	0x43c8		/* 500 Series HD Audio */
+#define	PCI_PRODUCT_INTEL_500SERIES_HDA_2	0xf0c8		/* 500 Series HD Audio */
 #define	PCI_PRODUCT_INTEL_500SERIES_THC_0	0x43d0		/* 500 Series THC */
 #define	PCI_PRODUCT_INTEL_500SERIES_THC_1	0x43d1		/* 500 Series THC */
 #define	PCI_PRODUCT_INTEL_500SERIES_AHCI_1	0x43d2		/* 500 Series AHCI */
Index: sys/dev/pci/pcidevs_data.h
===
RCS file: /cvs/src/sys/dev/pci/pcidevs_data.h,v
retrieving revision 1.1959
diff -u -p -u -p -r1.1959 pcidevs_data.h
--- sys/dev/pci/pcidevs_data.h	19 May 2021 05:21:24 -	1.1959
+++ sys/dev/pci/pcidevs_data.h	1 Jun 2021 01:20:43 -
@@ -18912,6 +18912,10 @@ static const struct pci_known_product pc
 	"500 Series HD Audio",
 	},
 	{
+	PCI_VENDOR_INTEL, PCI_PRODUCT_INTEL_500SERIES_HDA_2,
+	"500 Series HD Audio",
+	},
+	{
 	PCI_VENDOR_INTEL, PCI_PRODUCT_INTEL_500SERIES_THC_0,
 	"500 Series THC",
 	},
OpenBSD 6.9-current (GENERIC.MP) #7: Mon May 31 21:13:25 EDT 2021
f...@elara.fagg.id.au:/usr/src/sys/arch/amd64/compile/GENERIC.MP
real mem = 42702798848 (40724MB)
avail mem = 41393143808 (39475MB)
random: good seed from bootblocks
mpath0 at root
scsibus0 at mpath0: 256 targets
mainbus0 at root
bios0 at mainbus0: SMBIOS rev. 3.3 @ 0x99d71000 (109 entries)
bios0: vendor American Megatrends Inc. version "0405" date 01/14/2021
bios0: ASUS PRIME Z590-V
acpi0 at bios0: ACPI 6.2
acpi0: sleep states S0 S3 S4 S5
acpi0: tables DSDT FACP MCFG FIDT FPDT SSDT SSDT SSDT HPET APIC SSDT SSDT NHLT 
SSDT LPIT SSDT SSDT DBGP DBG2 SSDT SSDT WPBT PTDT WSMT
acpi0: wakeup devices PEGP(S4) PEGP(S4) PEGP(S4) PEGP(S4) RP09(S4) PXSX(S4) 
RP10(S4) 

mkuboot.8: Add missing arm64 architecture

2021-05-31 Thread Leon Fischer
The mkuboot(8) man page was not updated after this commit:

RCS file: /cvs/src/usr.sbin/mkuboot/mkuboot.c,v

revision 1.7
date: 2016/12/20 11:27:11;  author: jsg;  state: Exp;  lines: +3 -1;  commitid: 
fELL92HBrGHkYVvc;
Add the u-boot arm64 architecture number and map it to "aarch64" to
match OpenBSD/arm64 MACHINE_ARCH.

ok patrick@


Here's a patch to add it to the supported architectures list.

This man page is still missing a few things: what is "infile", what do
the types mean, an example for installing the image, etc.

Index: mkuboot.8
===
RCS file: /cvs/src/usr.sbin/mkuboot/mkuboot.8,v
retrieving revision 1.1
diff -u -p -r1.1 mkuboot.8
--- mkuboot.8   30 May 2013 19:17:15 -  1.1
+++ mkuboot.8   31 May 2021 23:30:54 -
@@ -65,6 +65,7 @@ The following arguments are valid as the
 .Ar arch
 parameter:
 .Bd -unfilled -offset indent -compact
+aarch64
 alpha
 amd64
 arm



Re: nvme(4): fix prpl sync length

2021-05-31 Thread Jonathan Matthew
On Tue, Jun 01, 2021 at 08:24:10AM +1000, David Gwynne wrote:
> 
> 
> > On 1 Jun 2021, at 04:17, Patrick Wildt  wrote:
> > 
> > Hi,
> > 
> > this call to sync the DMA mem wants to sync N - 1 number of prpl
> > entries, as the first segment is configured regularly, while the
> > addresses for the following segments (if more than 2), are in a
> > special DMA memory.
> > 
> > The code currently removes a single byte, instead of an entry.
> > This just means that it is syncing more than it should.
> 
> nice.
> 
> > ok?
> 
> ok.

ok by me too.

> 
> > 
> > Patrick
> > 
> > diff --git a/sys/dev/ic/nvme.c b/sys/dev/ic/nvme.c
> > index 62b8e40c626..6db25260ef0 100644
> > --- a/sys/dev/ic/nvme.c
> > +++ b/sys/dev/ic/nvme.c
> > @@ -629,7 +629,7 @@ nvme_scsi_io(struct scsi_xfer *xs, int dir)
> > bus_dmamap_sync(sc->sc_dmat,
> > NVME_DMA_MAP(sc->sc_ccb_prpls),
> > ccb->ccb_prpl_off,
> > -   sizeof(*ccb->ccb_prpl) * dmap->dm_nsegs - 1,
> > +   sizeof(*ccb->ccb_prpl) * (dmap->dm_nsegs - 1),
> > BUS_DMASYNC_PREWRITE);
> > }
> > 
> > @@ -691,7 +691,7 @@ nvme_scsi_io_done(struct nvme_softc *sc, struct 
> > nvme_ccb *ccb,
> > bus_dmamap_sync(sc->sc_dmat,
> > NVME_DMA_MAP(sc->sc_ccb_prpls),
> > ccb->ccb_prpl_off,
> > -   sizeof(*ccb->ccb_prpl) * dmap->dm_nsegs - 1,
> > +   sizeof(*ccb->ccb_prpl) * (dmap->dm_nsegs - 1),
> > BUS_DMASYNC_POSTWRITE);
> > }
> > 
> > 
> 



Re: mcx(4): sync only received length on RX

2021-05-31 Thread Jonathan Matthew
On Tue, Jun 01, 2021 at 08:20:43AM +1000, David Gwynne wrote:
> 
> 
> > On 1 Jun 2021, at 04:15, Patrick Wildt  wrote:
> > 
> > Hi,
> > 
> > mcx(4) seems to sync the whole mapsize on processing a received packet.
> > As far as I know, we usually only sync the actual size that we have
> > received.  Noticed this when doing bounce buffer tests, seeing that
> > it copied a lot more data than is necessary.
> > 
> > That's because the RX buffer size is maximum supported MTU, which is
> > about 9500 bytes or so.  For small packets, or regular 1500 bytes,
> > this adds overhead.
> > 
> > This change should not change anything for ARM machines that have a
> > cache coherent PCIe bus or x86.
> > 
> > ok?
> 
> ok.

ok by me too.

> 
> > 
> > Patrick
> > 
> > diff --git a/sys/dev/pci/if_mcx.c b/sys/dev/pci/if_mcx.c
> > index 38437e54897..065855d46d3 100644
> > --- a/sys/dev/pci/if_mcx.c
> > +++ b/sys/dev/pci/if_mcx.c
> > @@ -6800,20 +6800,20 @@ mcx_process_rx(struct mcx_softc *sc, struct mcx_rx 
> > *rx,
> > {
> > struct mcx_slot *ms;
> > struct mbuf *m;
> > -   uint32_t flags;
> > +   uint32_t flags, len;
> > int slot;
> > 
> > +   len = bemtoh32(&cqe->cq_byte_cnt);
> > slot = betoh16(cqe->cq_wqe_count) % (1 << MCX_LOG_RQ_SIZE);
> > 
> > ms = &rx->rx_slots[slot];
> > -   bus_dmamap_sync(sc->sc_dmat, ms->ms_map, 0, ms->ms_map->dm_mapsize,
> > -   BUS_DMASYNC_POSTREAD);
> > +   bus_dmamap_sync(sc->sc_dmat, ms->ms_map, 0, len, BUS_DMASYNC_POSTREAD);
> > bus_dmamap_unload(sc->sc_dmat, ms->ms_map);
> > 
> > m = ms->ms_m;
> > ms->ms_m = NULL;
> > 
> > -   m->m_pkthdr.len = m->m_len = bemtoh32(&cqe->cq_byte_cnt);
> > +   m->m_pkthdr.len = m->m_len = len;
> > 
> > if (cqe->cq_rx_hash_type) {
> > m->m_pkthdr.ph_flowid = betoh32(cqe->cq_rx_hash);
> > 
> 



Re: nvme(4): fix prpl sync length

2021-05-31 Thread David Gwynne



> On 1 Jun 2021, at 04:17, Patrick Wildt  wrote:
> 
> Hi,
> 
> this call to sync the DMA mem wants to sync N - 1 number of prpl
> entries, as the first segment is configured regularly, while the
> addresses for the following segments (if more than 2), are in a
> special DMA memory.
> 
> The code currently removes a single byte, instead of an entry.
> This just means that it is syncing more than it should.

nice.

> ok?

ok.

> 
> Patrick
> 
> diff --git a/sys/dev/ic/nvme.c b/sys/dev/ic/nvme.c
> index 62b8e40c626..6db25260ef0 100644
> --- a/sys/dev/ic/nvme.c
> +++ b/sys/dev/ic/nvme.c
> @@ -629,7 +629,7 @@ nvme_scsi_io(struct scsi_xfer *xs, int dir)
>   bus_dmamap_sync(sc->sc_dmat,
>   NVME_DMA_MAP(sc->sc_ccb_prpls),
>   ccb->ccb_prpl_off,
> - sizeof(*ccb->ccb_prpl) * dmap->dm_nsegs - 1,
> + sizeof(*ccb->ccb_prpl) * (dmap->dm_nsegs - 1),
>   BUS_DMASYNC_PREWRITE);
>   }
> 
> @@ -691,7 +691,7 @@ nvme_scsi_io_done(struct nvme_softc *sc, struct nvme_ccb 
> *ccb,
>   bus_dmamap_sync(sc->sc_dmat,
>   NVME_DMA_MAP(sc->sc_ccb_prpls),
>   ccb->ccb_prpl_off,
> - sizeof(*ccb->ccb_prpl) * dmap->dm_nsegs - 1,
> + sizeof(*ccb->ccb_prpl) * (dmap->dm_nsegs - 1),
>   BUS_DMASYNC_POSTWRITE);
>   }
> 
> 



Re: Enable pool cache on knote_pool

2021-05-31 Thread David Gwynne



> On 1 Jun 2021, at 02:58, Visa Hankala  wrote:
> 
> This patch enables the pool cache feature on the knote pool to reduce
> the overhead of knote management.
> 
> Profiling done by mpi@ and bluhm@ indicate that the potentially needless
> allocation of knotes in kqueue_register() causes slowdown with
> kqueue-based poll(2) and select(2).
> 
> One approach to fix this is to reverse the function's initial guess
> about knote: Try without allocation first. Then allocate and retry if
> the knote is missing from the kqueue and EV_ADD is given.
> 
> Another option is to cache free knotes so that the shared knote pool
> would be accessed less frequently.
> 
> The following diff takes the second approach. The caching is implemented
> simply by enabling the pool cache feature. This makes use of existing
> code and does not complicate kqueue_register(). The feature also helps
> if there is heavy knote churn.
> 
> I think the most substantial part of the diff is that it extends pool
> cache usage beyond mbufs. Is this change acceptable?

absolutely.

> Note the cache is not particularly useful without kqueue-based poll(2)
> and select(2). The pool view of systat(1) shows that there are pools
> that would benefit more than knote_pool from caching, at least in terms
> of request frequencies. The relative frequencies are dependent on system
> workload, though. Kqpoll would definitely make knote pool more heavily
> used.

ok.

separate to this diff, at some point maybe we should have a task list/dohook 
thing for "per cpu init" like mountroot or startup?

> Index: kern/init_main.c
> ===
> RCS file: src/sys/kern/init_main.c,v
> retrieving revision 1.306
> diff -u -p -r1.306 init_main.c
> --- kern/init_main.c  8 Feb 2021 10:51:01 -   1.306
> +++ kern/init_main.c  31 May 2021 16:50:17 -
> @@ -71,6 +71,7 @@
> #include 
> #endif
> #include 
> +#include 
> #include 
> #include 
> #include 
> @@ -148,7 +149,6 @@ void  crypto_init(void);
> void  db_ctf_init(void);
> void  prof_init(void);
> void  init_exec(void);
> -void kqueue_init(void);
> void  futex_init(void);
> void  taskq_init(void);
> void  timeout_proc_init(void);
> @@ -432,7 +432,9 @@ main(void *framep)
>   prof_init();
> #endif
> 
> - mbcpuinit();/* enable per cpu mbuf data */
> + /* Enable per-CPU data. */
> + mbcpuinit();
> + kqueue_init_percpu();
>   uvm_init_percpu();
> 
>   /* init exec and emul */
> Index: kern/kern_event.c
> ===
> RCS file: src/sys/kern/kern_event.c,v
> retrieving revision 1.163
> diff -u -p -r1.163 kern_event.c
> --- kern/kern_event.c 22 Apr 2021 15:30:12 -  1.163
> +++ kern/kern_event.c 31 May 2021 16:50:17 -
> @@ -231,6 +231,12 @@ kqueue_init(void)
>   PR_WAITOK, "knotepl", NULL);
> }
> 
> +void
> +kqueue_init_percpu(void)
> +{
> + pool_cache_init(&knote_pool);
> +}
> +
> int
> filt_fileattach(struct knote *kn)
> {
> Index: sys/event.h
> ===
> RCS file: src/sys/sys/event.h,v
> retrieving revision 1.54
> diff -u -p -r1.54 event.h
> --- sys/event.h   24 Feb 2021 14:59:52 -  1.54
> +++ sys/event.h   31 May 2021 16:50:18 -
> @@ -292,6 +292,8 @@ extern void   knote_fdclose(struct proc *p
> extern void   knote_processexit(struct proc *);
> extern void   knote_modify(const struct kevent *, struct knote *);
> extern void   knote_submit(struct knote *, struct kevent *);
> +extern void  kqueue_init(void);
> +extern void  kqueue_init_percpu(void);
> extern intkqueue_register(struct kqueue *kq,
>   struct kevent *kev, struct proc *p);
> extern intkqueue_scan(struct kqueue_scan_state *, int, struct kevent *,
> 



Re: mcx(4): sync only received length on RX

2021-05-31 Thread David Gwynne



> On 1 Jun 2021, at 04:15, Patrick Wildt  wrote:
> 
> Hi,
> 
> mcx(4) seems to sync the whole mapsize on processing a received packet.
> As far as I know, we usually only sync the actual size that we have
> received.  Noticed this when doing bounce buffer tests, seeing that
> it copied a lot more data than is necessary.
> 
> That's because the RX buffer size is maximum supported MTU, which is
> about 9500 bytes or so.  For small packets, or regular 1500 bytes,
> this adds overhead.
> 
> This change should not change anything for ARM machines that have a
> cache coherent PCIe bus or x86.
> 
> ok?

ok.

> 
> Patrick
> 
> diff --git a/sys/dev/pci/if_mcx.c b/sys/dev/pci/if_mcx.c
> index 38437e54897..065855d46d3 100644
> --- a/sys/dev/pci/if_mcx.c
> +++ b/sys/dev/pci/if_mcx.c
> @@ -6800,20 +6800,20 @@ mcx_process_rx(struct mcx_softc *sc, struct mcx_rx 
> *rx,
> {
>   struct mcx_slot *ms;
>   struct mbuf *m;
> - uint32_t flags;
> + uint32_t flags, len;
>   int slot;
> 
> + len = bemtoh32(&cqe->cq_byte_cnt);
>   slot = betoh16(cqe->cq_wqe_count) % (1 << MCX_LOG_RQ_SIZE);
> 
>   ms = &rx->rx_slots[slot];
> - bus_dmamap_sync(sc->sc_dmat, ms->ms_map, 0, ms->ms_map->dm_mapsize,
> - BUS_DMASYNC_POSTREAD);
> + bus_dmamap_sync(sc->sc_dmat, ms->ms_map, 0, len, BUS_DMASYNC_POSTREAD);
>   bus_dmamap_unload(sc->sc_dmat, ms->ms_map);
> 
>   m = ms->ms_m;
>   ms->ms_m = NULL;
> 
> - m->m_pkthdr.len = m->m_len = bemtoh32(&cqe->cq_byte_cnt);
> + m->m_pkthdr.len = m->m_len = len;
> 
>   if (cqe->cq_rx_hash_type) {
>   m->m_pkthdr.ph_flowid = betoh32(cqe->cq_rx_hash);
> 



Re: [External] : factor out ipv4 and ipv6 initial packet sanity checks for bridges

2021-05-31 Thread Alexandr Nedvedicky
Hello,

On Mon, May 31, 2021 at 02:33:00PM +1000, David Gwynne wrote:
> if you're looking at an ip header, it makes sense to do some checks to
> make sure that the values and addresses make some sense. the canonical
> versions of these checks are in the ipv4 and ipv6 input paths, which
> makes sense. when bridge(4) is about to run packets through pf it makes
> sure the ip headers are sane before first, which i think also makes
> sense. veb and tpmr don't do these checks before they run pf, but i
> think they should. however, duplicating the code again doesn't appeal to
> me.
> 
> this factors the ip checks out in the ip_input path, and uses that code
> from bridge, veb, and tpmr.
> 
> this is mostly shuffling the deck chairs, but ipv6 is moved around a bit
> more than ipv4, so some eyes and tests would be appreciated.
> 
> in the future i think the ipv6 code should do length checks like the
> ipv4 code does too. this diff is big enough as it is though.
> 
> ok?
> 

no objection.

OK sashan
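The kind of initial sanity checks being factored out can be illustrated for
IPv4. This is a generic sketch of header validation (version, header
length, total length, checksum), not the actual ip_input code:

```python
def ipv4_header_sane(pkt: bytes) -> bool:
    """Rough sketch of the checks done before handing a packet to pf."""
    if len(pkt) < 20:
        return False                    # too short for a base header
    version = pkt[0] >> 4
    hlen = (pkt[0] & 0x0f) * 4
    if version != 4 or hlen < 20 or hlen > len(pkt):
        return False                    # wrong version or bogus header length
    total = int.from_bytes(pkt[2:4], "big")
    if total < hlen or total > len(pkt):
        return False                    # bogus total length
    # one's-complement sum over the header must come out as 0xffff
    s = sum(int.from_bytes(pkt[i:i + 2], "big") for i in range(0, hlen, 2))
    s = (s & 0xffff) + (s >> 16)
    s = (s & 0xffff) + (s >> 16)
    return s == 0xffff
```

The real code also checks addresses (e.g. multicast/broadcast sources);
this sketch stops at the basic header fields.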



Incorrect comment after changing value in another place

2021-05-31 Thread Reuven Plevinsky
There is an old fix that increased SB_MAX to 2MB (instead of 256K), which
makes the TCP scaling factor 6 (instead of 3).
But the comment next to the shift computation still has the old values.

Although it's just a comment, it might be very confusing.

diff --git a/sys/netinet/tcp_input.c b/sys/netinet/tcp_input.c
index cd0c12dcd3ba..74ce2621f762 100644
--- a/sys/netinet/tcp_input.c
+++ b/sys/netinet/tcp_input.c
@@ -3832,8 +3832,8 @@ syn_cache_add(struct sockaddr *src, struct sockaddr
*dst, struct tcphdr *th,
 * leading to serious problems when traversing these
 * broken firewalls.
 *
-* With the default sbmax of 256K, a scale factor
-* of 3 will be chosen by this algorithm.  Those who
+* With the default sbmax of 2MB, a scale factor
+* of 6 will be chosen by this algorithm.  Those who
 * choose a larger sbmax should watch out
 * for the compatibility problems mentioned above.
 *
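The updated numbers check out: one way to state the rule is that the
algorithm picks the smallest scale factor that fits sbmax into the 16-bit
window field (constants and loop shape assumed from tcp.h, as a sketch):

```python
TCP_MAXWIN = 65535        # largest value for an unscaled window
TCP_MAX_WINSHIFT = 14

def wscale_for(sb_max):
    """Smallest shift such that sb_max >> scale fits in 16 bits."""
    scale = 0
    while scale < TCP_MAX_WINSHIFT and (sb_max >> scale) > TCP_MAXWIN:
        scale += 1
    return scale
```

256K yields 3 and 2MB yields 6, matching the comment fix.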


Re: Ryzen 5800X hw.setperf vs hw.cpuspeed

2021-05-31 Thread Josh
thanks Otto for the dmesg.

I'd like to get one B550 mobo as well. Which version of Gigabyte B550
AORUS ELITE do you have exactly? ATX? mATX ?
Most of them listed here[1] have either RTL8118 or RTL8125 chipsets and
re(4) doesn't list them...

Can't find any reference to your model there[1] (RTL8168 chipset) "re0
at pci5 dev 0 function 0 "Realtek 8168" rev 0x06: RTL8168E/8111E
(0x2c00), msi, address 64:70:02:01:db:3c
rgephy0 at re0 phy 7: RTL8169S/8110S/8211 PHY, rev. 4"

Could it be this one[2]?

Cheers

[1] https://www.gigabyte.com/Motherboard/AORUS-Gaming
[2] https://www.gigabyte.com/Motherboard/B550-AORUS-ELITE-rev-10/sp#sp

On Fri, Nov 20, 2020 at 9:28 AM Otto Moerbeek  wrote:
>
> Hi,
>
> I got a new Ryzen machine, dmesg below. What I'm observing might be a
> issue with hw.setperf.
>
> On startsup it shows:
>
> hw.cpuspeed=3800
> hw.setperf=100
>
> If I lower hw.setperf to zero, the new state is reflect immediately in
> hw.cpuspeed:
>
> hw.cpuspeed=2200
> hw.setperf=0
>
> And also sha256 -t becomes slower as expected.
>
> But If I raise hw.setperf to 100 I'm seeing:
>
> hw.cpuspeed=2200
> hw.setperf=100
>
> and sha256 -t is still slow. Only after some time passes (lets say a
> couple of tens of seconds) it does show:
>
> hw.cpuspeed=3800
> hw.setperf=100
>
> and sha256 -t is fast again.
>
> This behaviour is different from my old machine, where setting
> hs.setperf was reflected in hs.cpuspeed immediately both ways
>
> Any clue?
>
> -Otto
>
> OpenBSD 6.8-current (GENERIC.MP) #1: Thu Nov 19 21:01:06 CET 2020
> o...@lou.intra.drijf.net:/usr/src/sys/arch/amd64/compile/GENERIC.MP
> real mem = 34286964736 (32698MB)
> avail mem = 33232543744 (31693MB)
> random: good seed from bootblocks
> mpath0 at root
> scsibus0 at mpath0: 256 targets
> mainbus0 at root
> bios0 at mainbus0: SMBIOS rev. 3.3 @ 0xe8d60 (55 entries)
> bios0: vendor American Megatrends Inc. version "F11d" date 10/29/2020
> bios0: Gigabyte Technology Co., Ltd. B550 AORUS ELITE
> acpi0 at bios0: ACPI 6.0
> acpi0: sleep states S0 S3 S4 S5
> acpi0: tables DSDT FACP SSDT SSDT SSDT SSDT FIDT MCFG HPET BGRT IVRS PCCT 
> SSDT CRAT CDIT SSDT SSDT SSDT SSDT WSMT APIC SSDT SSDT SSDT FPDT
> acpi0: wakeup devices GPP0(S4) GP12(S4) GP13(S4) XHC0(S4) GP30(S4) GP31(S4) 
> GPP2(S4) GPP3(S4) GPP8(S4) GPP1(S4)
> acpitimer0 at acpi0: 3579545 Hz, 32 bits
> acpimcfg0 at acpi0
> acpimcfg0: addr 0xf000, bus 0-127
> acpihpet0 at acpi0: 14318180 Hz
> acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
> cpu0 at mainbus0: apid 0 (boot processor)
> cpu0: AMD Ryzen 7 5800X 8-Core Processor, 3793.35 MHz, 19-21-00
> cpu0: 
> FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,HTT,SSE3,PCLMUL,MWAIT,SSSE3,FMA3,CX16,SSE4.1,SSE4.2,MOVBE,POPCNT,AES,XSAVE,AVX,F16C,RDRAND,NXE,MMXX,FFXSR,PAGE1GB,RDTSCP,LONG,LAHF,CMPLEG,SVM,EAPICSP,AMCR8,ABM,SSE4A,MASSE,3DNOWP,OSVW,IBS,SKINIT,TCE,TOPEXT,CPCTR,DBKP,PCTRL3,MWAITX,ITSC,FSGSBASE,BMI1,AVX2,SMEP,BMI2,ERMS,INVPCID,PQM,RDSEED,ADX,SMAP,CLFLUSHOPT,CLWB,SHA,UMIP,PKU,IBPB,IBRS,STIBP,SSBD,XSAVEOPT,XSAVEC,XGETBV1,XSAVES
> cpu0: 32KB 64b/line 8-way I-cache, 32KB 64b/line 8-way D-cache, 512KB 
> 64b/line 8-way L2 cache, 32MB 64b/line disabled L3 cache
> cpu0: ITLB 64 4KB entries fully associative, 64 4MB entries fully associative
> cpu0: DTLB 64 4KB entries fully associative, 64 4MB entries fully associative
> cpu0: smt 0, core 0, package 0
> mtrr: Pentium Pro MTRR support, 8 var ranges, 88 fixed ranges
> cpu0: apic clock running at 99MHz
> cpu0: mwait min=64, max=64, C-substates=1.1, IBE
> cpu1 at mainbus0: apid 2 (application processor)
> cpu1: AMD Ryzen 7 5800X 8-Core Processor, 3792.89 MHz, 19-21-00
> cpu1: 
> FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,HTT,SSE3,PCLMUL,MWAIT,SSSE3,FMA3,CX16,SSE4.1,SSE4.2,MOVBE,POPCNT,AES,XSAVE,AVX,F16C,RDRAND,NXE,MMXX,FFXSR,PAGE1GB,RDTSCP,LONG,LAHF,CMPLEG,SVM,EAPICSP,AMCR8,ABM,SSE4A,MASSE,3DNOWP,OSVW,IBS,SKINIT,TCE,TOPEXT,CPCTR,DBKP,PCTRL3,MWAITX,ITSC,FSGSBASE,BMI1,AVX2,SMEP,BMI2,ERMS,INVPCID,PQM,RDSEED,ADX,SMAP,CLFLUSHOPT,CLWB,SHA,UMIP,PKU,IBPB,IBRS,STIBP,SSBD,XSAVEOPT,XSAVEC,XGETBV1,XSAVES
> cpu1: 32KB 64b/line 8-way I-cache, 32KB 64b/line 8-way D-cache, 512KB 
> 64b/line 8-way L2 cache, 32MB 64b/line disabled L3 cache
> cpu1: ITLB 64 4KB entries fully associative, 64 4MB entries fully associative
> cpu1: DTLB 64 4KB entries fully associative, 64 4MB entries fully associative
> cpu1: smt 0, core 1, package 0
> cpu2 at mainbus0: apid 4 (application processor)
> cpu2: AMD Ryzen 7 5800X 8-Core Processor, 3792.89 MHz, 19-21-00
> cpu2: 
> 

nvme(4): fix prpl sync length

2021-05-31 Thread Patrick Wildt
Hi,

this call to sync the DMA mem wants to sync N - 1 number of prpl
entries, as the first segment is configured regularly, while the
addresses for the following segments (if more than 2), are in a
special DMA memory.

The code currently removes a single byte, instead of an entry.
This just means that it is syncing more than it should.

ok?

Patrick

diff --git a/sys/dev/ic/nvme.c b/sys/dev/ic/nvme.c
index 62b8e40c626..6db25260ef0 100644
--- a/sys/dev/ic/nvme.c
+++ b/sys/dev/ic/nvme.c
@@ -629,7 +629,7 @@ nvme_scsi_io(struct scsi_xfer *xs, int dir)
bus_dmamap_sync(sc->sc_dmat,
NVME_DMA_MAP(sc->sc_ccb_prpls),
ccb->ccb_prpl_off,
-   sizeof(*ccb->ccb_prpl) * dmap->dm_nsegs - 1,
+   sizeof(*ccb->ccb_prpl) * (dmap->dm_nsegs - 1),
BUS_DMASYNC_PREWRITE);
}
 
@@ -691,7 +691,7 @@ nvme_scsi_io_done(struct nvme_softc *sc, struct nvme_ccb 
*ccb,
bus_dmamap_sync(sc->sc_dmat,
NVME_DMA_MAP(sc->sc_ccb_prpls),
ccb->ccb_prpl_off,
-   sizeof(*ccb->ccb_prpl) * dmap->dm_nsegs - 1,
+   sizeof(*ccb->ccb_prpl) * (dmap->dm_nsegs - 1),
BUS_DMASYNC_POSTWRITE);
}
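The bug is pure operator precedence: `*` binds tighter than `-`, so the old
expression subtracted one byte rather than one entry. A quick check, with
an 8-byte prpl entry and a 4-segment transfer assumed for illustration:

```python
import struct

PRPL_ENTRY_SIZE = struct.calcsize("Q")  # a prpl entry is a 64-bit address
nsegs = 4                                # e.g. a 4-segment DMA transfer

buggy = PRPL_ENTRY_SIZE * nsegs - 1      # parses as (size * nsegs) - 1
fixed = PRPL_ENTRY_SIZE * (nsegs - 1)    # what the code means: n-1 entries
```

The buggy form syncs 31 bytes here instead of the intended 24, i.e. it
over-syncs by almost a full entry whenever nsegs > 1.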
 



mcx(4): sync only received length on RX

2021-05-31 Thread Patrick Wildt
Hi,

mcx(4) seems to sync the whole mapsize on processing a received packet.
As far as I know, we usually only sync the actual size that we have
received.  Noticed this when doing bounce buffer tests, seeing that
it copied a lot more data than is necessary.

That's because the RX buffer size is maximum supported MTU, which is
about 9500 bytes or so.  For small packets, or regular 1500 bytes,
this adds overhead.

This change should not change anything for ARM machines that have a
cache coherent PCIe bus or x86.

ok?

Patrick

diff --git a/sys/dev/pci/if_mcx.c b/sys/dev/pci/if_mcx.c
index 38437e54897..065855d46d3 100644
--- a/sys/dev/pci/if_mcx.c
+++ b/sys/dev/pci/if_mcx.c
@@ -6800,20 +6800,20 @@ mcx_process_rx(struct mcx_softc *sc, struct mcx_rx *rx,
 {
struct mcx_slot *ms;
struct mbuf *m;
-   uint32_t flags;
+   uint32_t flags, len;
int slot;
 
+   len = bemtoh32(&cqe->cq_byte_cnt);
slot = betoh16(cqe->cq_wqe_count) % (1 << MCX_LOG_RQ_SIZE);
 
ms = &rx->rx_slots[slot];
-   bus_dmamap_sync(sc->sc_dmat, ms->ms_map, 0, ms->ms_map->dm_mapsize,
-   BUS_DMASYNC_POSTREAD);
+   bus_dmamap_sync(sc->sc_dmat, ms->ms_map, 0, len, BUS_DMASYNC_POSTREAD);
bus_dmamap_unload(sc->sc_dmat, ms->ms_map);
 
m = ms->ms_m;
ms->ms_m = NULL;
 
-   m->m_pkthdr.len = m->m_len = bemtoh32(&cqe->cq_byte_cnt);
+   m->m_pkthdr.len = m->m_len = len;
 
if (cqe->cq_rx_hash_type) {
m->m_pkthdr.ph_flowid = betoh32(cqe->cq_rx_hash);
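The overhead is easy to quantify. For a typical 1500-byte packet with a
max-MTU-sized receive buffer (the roughly 9500 bytes mentioned above, a
round number assumed for illustration), the old code synced the whole map:

```python
MCX_RX_BUF = 9500     # approximate max-MTU receive buffer size
pkt_len = 1500        # a typical received frame

synced_before = MCX_RX_BUF   # old: sync the whole dm_mapsize
synced_after = pkt_len       # new: sync only cq_byte_cnt
savings = synced_before - synced_after   # bytes not touched per packet
```

On a bounce-buffering (non-coherent) setup that difference is copied per
packet, which is where the observed overhead comes from.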



add table_procexec in smtpd

2021-05-31 Thread Aisha Tammy
Hi all,
  I've attached a diff to add table_procexec as a table backend
in smtpd(8). This imports the table_procexec from opensmtpd-extras,
which is available upstream but is not present in the port.
I've successfully replaced the standard aliases table

table aliases file:/etc/mail/aliases

with a very simple proof of concept shell script which forwards all
mail to a single account (r...@bsd.ac)

table aliases proc-exec:/usr/local/bin/aliases_procexec

The shell script (/usr/local/bin/aliases_procexec) is

#!/bin/ksh

while read line
do
  reqid="$(echo $line | awk -F'|' '{ print $5; }')"
  reply="table-result|$reqid|found|r...@bsd.ac"
  echo $reply
done < /dev/stdin
exit 0
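The same responder can be sketched in Python. The only protocol assumption
carried over from the shell script above is that the request id is the
fifth '|'-separated field; the alias address here is a placeholder:

```python
def handle_request(line, alias="user@example.org"):
    """Answer one table lookup with a fixed alias (proof of concept).

    Assumes, like the shell script above, that the request id is the
    fifth '|'-separated field of the request line.
    """
    reqid = line.strip().split("|")[4]
    return "table-result|%s|found|%s" % (reqid, alias)
```

A real backend would loop over stdin, dispatch on the service type, and
answer not-found for unknown keys.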

The full /etc/mail/smtpd.conf is

#   $OpenBSD: smtpd.conf,v 1.14 2019/11/26 20:14:38 gilles Exp $

# This is the smtpd server system-wide configuration file.
# See smtpd.conf(5) for more information.

table aliases proc-exec:/usr/local/bin/aliases_procexec
listen on socket

# To accept external mail, replace with: listen on all
#
listen on lo0

action "local_mail" mbox alias <aliases>
action "outbound" relay
action "bsd.ac" relay host smtp://10.7.0.1

# Uncomment the following to accept external mail for domain "example.org"
#
# match from any for domain "example.org" action "local_mail"
match from local for local action "local_mail"
match from local for domain "bsd.ac" action "bsd.ac"
match from local for any action "outbound"

This diff is still a very early work, so I'm hoping for comments
to improve the work.

Some points that are still left to do
- document the line protocol in table(5)
- add a better reference implementation, maybe a replacement for aliases
- I've left in the table_procexec_check but commented it out.
  I am unsure if that is needed at all.
- Maybe my method of closing, in table_procexec_close is not the best.

(and maybe more are still left but those can be thought of later)

This is my first diff for anything in base, so I may have made some mistakes.
Comments and improvements would be really nice.

Cheers,
Aisha

PS: I've cc'ed eric@ as this was their original work and I'm just modifying it.


diff --git a/usr.sbin/smtpd/smtpctl/Makefile b/usr.sbin/smtpd/smtpctl/Makefile
index ef8148be8c9..2e8beff1ad1 100644
--- a/usr.sbin/smtpd/smtpctl/Makefile
+++ b/usr.sbin/smtpd/smtpctl/Makefile
@@ -48,6 +48,7 @@ SRCS+=table_static.c
 SRCS+= table_db.c
 SRCS+= table_getpwnam.c
 SRCS+= table_proc.c
+SRCS+= table_procexec.c
 SRCS+= unpack_dns.c
 SRCS+= spfwalk.c
 
diff --git a/usr.sbin/smtpd/smtpd.h b/usr.sbin/smtpd/smtpd.h
index be934112103..221f24fbdc4 100644
--- a/usr.sbin/smtpd/smtpd.h
+++ b/usr.sbin/smtpd/smtpd.h
@@ -1656,6 +1656,7 @@ int table_regex_match(const char *, const char *);
 void   table_open_all(struct smtpd *);
 void   table_dump_all(struct smtpd *);
 void   table_close_all(struct smtpd *);
+const char *table_service_name(enum table_service);
 
 
 /* to.c */
diff --git a/usr.sbin/smtpd/smtpd/Makefile b/usr.sbin/smtpd/smtpd/Makefile
index b31d4e42224..e2f6e82c6e8 100644
--- a/usr.sbin/smtpd/smtpd/Makefile
+++ b/usr.sbin/smtpd/smtpd/Makefile
@@ -64,6 +64,7 @@ SRCS+=table_db.c
 SRCS+= table_getpwnam.c
 SRCS+= table_proc.c
 SRCS+= table_static.c
+SRCS+= table_procexec.c
 
 SRCS+= queue_fs.c
 SRCS+= queue_null.c
diff --git a/usr.sbin/smtpd/table.c b/usr.sbin/smtpd/table.c
index 1d82d88b81a..0c67d205065 100644
--- a/usr.sbin/smtpd/table.c
+++ b/usr.sbin/smtpd/table.c
@@ -46,8 +46,8 @@ extern struct table_backend table_backend_static;
 extern struct table_backend table_backend_db;
 extern struct table_backend table_backend_getpwnam;
 extern struct table_backend table_backend_proc;
+extern struct table_backend table_backend_procexec;
 
-static const char * table_service_name(enum table_service);
 static int table_parse_lookup(enum table_service, const char *, const char *,
 union lookup *);
 static int parse_sockaddr(struct sockaddr *, int, const char *);
@@ -59,6 +59,7 @@ static struct table_backend *backends[] = {
 	&table_backend_db,
 	&table_backend_getpwnam,
 	&table_backend_proc,
+	&table_backend_procexec,
NULL
 };
 
@@ -77,7 +78,7 @@ table_backend_lookup(const char *backend)
return NULL;
 }
 
-static const char *
+const char *
 table_service_name(enum table_service s)
 {
switch (s) {
diff --git a/usr.sbin/smtpd/table_procexec.c b/usr.sbin/smtpd/table_procexec.c
new file mode 100644
index 000..269c55ecd9f
--- /dev/null
+++ b/usr.sbin/smtpd/table_procexec.c
@@ -0,0 +1,355 @@
+/*
+ * Copyright (c) 2013 Eric Faurot 
+ *
+ * Permission to use, copy, modify, and distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND 

Enable pool cache on knote_pool

2021-05-31 Thread Visa Hankala
This patch enables the pool cache feature on the knote pool to reduce
the overhead of knote management.

Profiling done by mpi@ and bluhm@ indicate that the potentially needless
allocation of knotes in kqueue_register() causes slowdown with
kqueue-based poll(2) and select(2).

One approach to fix this is to reverse the function's initial guess
about knote: Try without allocation first. Then allocate and retry if
the knote is missing from the kqueue and EV_ADD is given.

Another option is to cache free knotes so that the shared knote pool
would be accessed less frequently.

The following diff takes the second approach. The caching is implemented
simply by enabling the pool cache feature. This makes use of existing
code and does not complicate kqueue_register(). The feature also helps
if there is heavy knote churn.

I think the most substantial part of the diff is that it extends pool
cache usage beyond mbufs. Is this change acceptable?

Note the cache is not particularly useful without kqueue-based poll(2)
and select(2). The pool view of systat(1) shows that there are pools
that would benefit more than knote_pool from caching, at least in terms
of request frequencies. The relative frequencies are dependent on system
workload, though. Kqpoll would definitely make knote pool more heavily
used.

Index: kern/init_main.c
===
RCS file: src/sys/kern/init_main.c,v
retrieving revision 1.306
diff -u -p -r1.306 init_main.c
--- kern/init_main.c	8 Feb 2021 10:51:01 -0000	1.306
+++ kern/init_main.c	31 May 2021 16:50:17 -0000
@@ -71,6 +71,7 @@
 #include 
 #endif
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -148,7 +149,6 @@ void	crypto_init(void);
 void   db_ctf_init(void);
 void   prof_init(void);
 void   init_exec(void);
-void   kqueue_init(void);
 void   futex_init(void);
 void   taskq_init(void);
 void   timeout_proc_init(void);
@@ -432,7 +432,9 @@ main(void *framep)
prof_init();
 #endif
 
-   mbcpuinit();/* enable per cpu mbuf data */
+   /* Enable per-CPU data. */
+   mbcpuinit();
+   kqueue_init_percpu();
uvm_init_percpu();
 
/* init exec and emul */
Index: kern/kern_event.c
===
RCS file: src/sys/kern/kern_event.c,v
retrieving revision 1.163
diff -u -p -r1.163 kern_event.c
--- kern/kern_event.c	22 Apr 2021 15:30:12 -0000	1.163
+++ kern/kern_event.c	31 May 2021 16:50:17 -0000
@@ -231,6 +231,12 @@ kqueue_init(void)
PR_WAITOK, "knotepl", NULL);
 }
 
+void
+kqueue_init_percpu(void)
+{
+	pool_cache_init(&knote_pool);
+}
+
 int
 filt_fileattach(struct knote *kn)
 {
Index: sys/event.h
===
RCS file: src/sys/sys/event.h,v
retrieving revision 1.54
diff -u -p -r1.54 event.h
--- sys/event.h	24 Feb 2021 14:59:52 -0000	1.54
+++ sys/event.h	31 May 2021 16:50:18 -0000
@@ -292,6 +292,8 @@ extern void knote_fdclose(struct proc *p
 extern voidknote_processexit(struct proc *);
 extern voidknote_modify(const struct kevent *, struct knote *);
 extern voidknote_submit(struct knote *, struct kevent *);
+extern voidkqueue_init(void);
+extern voidkqueue_init_percpu(void);
 extern int kqueue_register(struct kqueue *kq,
struct kevent *kev, struct proc *p);
 extern int kqueue_scan(struct kqueue_scan_state *, int, struct kevent *,



Re: libagent: fix agentx_context_object_nfind

2021-05-31 Thread Alexander Bluhm
On Sat, May 01, 2021 at 06:59:59PM +0200, Martijn van Duren wrote:
> The parameters for ax_oid_cmp are swapped.
> This fixes the few failing regress tests I just committed.
> 
> OK?

OK bluhm@

> Index: agentx.c
> ===
> RCS file: /cvs/src/lib/libagentx/agentx.c,v
> retrieving revision 1.9
> diff -u -p -r1.9 agentx.c
> --- agentx.c	1 May 2021 16:44:17 -0000	1.9
> +++ agentx.c	1 May 2021 16:59:08 -0000
> @@ -675,7 +675,7 @@ agentx_context_object_nfind(struct agent
>  
>   axo = RB_NFIND(axc_objects, &(axc->axc_objects), &axo_search);
>   if (!inclusive && axo != NULL &&
> - ax_oid_cmp(&(axo_search.axo_oid), &(axo->axo_oid)) <= 0) {
> + ax_oid_cmp(&(axo->axo_oid), &(axo_search.axo_oid)) <= 0) {
>   axo = RB_NEXT(axc_objects, &(axc->axc_objects), axo);
>   }
>  
> 



Re: factor out ipv4 and ipv6 initial packet sanity checks for bridges

2021-05-31 Thread Alexander Bluhm
On Mon, May 31, 2021 at 02:33:00PM +1000, David Gwynne wrote:
> this is mostly shuffling the deck chairs, but ipv6 is moved around a bit
> more than ipv4, so some eyes and tests would be appreciated.

Deck shuffling looks correct.  OK bluhm@

> +int
> +ip6_input_if(struct mbuf **mp, int *offp, int nxt, int af, struct ifnet *ifp)
> +{
> + struct mbuf *m = *mp;

Minor nit.  You don't need = *mp here, it is done a few lines below
in m = *mp = ipv6_check(ifp, *mp).  That is how you did it in v4.



Re: Larger kernel fonts in RAMDISK_CD?

2021-05-31 Thread Theo de Raadt
Frederic Cambus  wrote:

> Does this look reasonable?
> 
> If it does and if we want to go this way, I can try to build a release
> and check if MINIROOTSIZE must be bumped on RAMDISK_CD. Then we could do
> the same for i386, armv7 and arm64.

We need to see these results first.



Re: Diff

2021-05-31 Thread Stefan Sperling
On Mon, May 31, 2021 at 02:33:03PM +0300, Reuven Plevinsky wrote:
> Hi,
> Here is my diff:
> https://github.com/reuvenP/src/commit/db909be68a3b03e68787de55d218388f33c4c4c6

Not everyone wants to read diffs in a browser.

Sending your diff inline will work better, like this:

diff --git a/sys/netinet/tcp_input.c b/sys/netinet/tcp_input.c
index cd0c12dcd3ba..74ce2621f762 100644
--- a/sys/netinet/tcp_input.c
+++ b/sys/netinet/tcp_input.c
@@ -3832,8 +3832,8 @@ syn_cache_add(struct sockaddr *src, struct sockaddr *dst, 
struct tcphdr *th,
 * leading to serious problems when traversing these
 * broken firewalls.
 *
-* With the default sbmax of 256K, a scale factor
-* of 3 will be chosen by this algorithm.  Those who
+* With the default sbmax of 2MB, a scale factor
+* of 6 will be chosen by this algorithm.  Those who
 * choose a larger sbmax should watch out
 * for the compatibility problems mentioned above.
 *



Diff

2021-05-31 Thread Reuven Plevinsky
Hi,
Here is my diff:
https://github.com/reuvenP/src/commit/db909be68a3b03e68787de55d218388f33c4c4c6
There is an old fix to SB_MAX: increasing it to 2MB (instead of 256k)
makes the TCP window scaling factor 6 (instead of 3).
But the comment next to the scaling code still has the old values.


Re: Pull Request

2021-05-31 Thread Stuart Henderson
OpenBSD does not use pull requests. Please send your diff, with 
explanation, in an email to this mailing list.


--
 Sent from a phone, apologies for poor formatting.
On 31 May 2021 11:56:27 Reuven Plevinsky  wrote:


https://github.com/reuvenP/src/commit/db909be68a3b03e68787de55d218388f33c4c4c6




Re: Larger kernel fonts in RAMDISK_CD?

2021-05-31 Thread Mark Kettenis
> Date: Mon, 31 May 2021 12:21:39 +0200
> From: Frederic Cambus 
> 
> Hi tech@,
> 
> The size of kernel fonts in RAMDISKs has long been a problem on systems
> with large screen resolutions booting via EFI, as currently only the 8x16
> font is built into RAMDISKs. As those systems are becoming more common, I
> would like to revisit the topic.
> 
> Currently, we decide which font is built into which kernel in wsfont(9),
> which will only add the 8x16 one when SMALL_KERNEL is defined, and larger
> fonts for selected architectures for !SMALL_KERNEL. There is no way to
> distinguish between RAMDISK and RAMDISK_CD kernels using #ifdef trickery,
> so with the current way we cannot add larger fonts only on RAMDISK_CD.
> As a reminder, we cannot add them to RAMDISK because there is no space
> left on the floppies, and there is no support for EFI systems on the
> floppies anyway.
> 
> However, unless I overlooked something, this could be solved by adding
> option directives directly in the RAMDISK_CD kernel configuration file.
> 
> This is how it would look for amd64:
> 
> Index: sys/arch/amd64/conf/RAMDISK_CD
> ===
> RCS file: /cvs/src/sys/arch/amd64/conf/RAMDISK_CD,v
> retrieving revision 1.190
> diff -u -p -r1.190 RAMDISK_CD
> --- sys/arch/amd64/conf/RAMDISK_CD	27 Dec 2020 23:05:37 -0000	1.190
> +++ sys/arch/amd64/conf/RAMDISK_CD	31 May 2021 09:39:24 -0000
> @@ -20,6 +20,11 @@ option MSDOSFS
>  option   INET6
>  option   CRYPTO
>  
> +option   FONT_SPLEEN8x16
> +option   FONT_SPLEEN12x24
> +option   FONT_SPLEEN16x32
> +option   FONT_SPLEEN32x64
> +
>  option   RAMDISK_HOOKS
>  option   MINIROOTSIZE=7360
>  
> Does this look reasonable?

I would skip some sizes.  8x16 is readable on any screen size where
12x24 would be picked.  And maybe 16x32 is good enough for 4K screens
as well?

> If it does and if we want to go this way, I can try to build a release
> and check if MINIROOTSIZE must be bumped on RAMDISK_CD. Then we could do
> the same for i386, armv7 and arm64.

I'm all for it, but last time this came up Theo didn't like it and
suggested adding code to scale up the fonts instead.  I really don't
think you want to upscale the 8x16 font to 32x64.  But if we add the
16x32 font, upscaling that to 32x64 for the really big screens might
be an option and a reasonable compromise?

But figuring out how much things grow by adding the 16x32 font would
be a good start.



Pull Request

2021-05-31 Thread Reuven Plevinsky
https://github.com/reuvenP/src/commit/db909be68a3b03e68787de55d218388f33c4c4c6



Larger kernel fonts in RAMDISK_CD?

2021-05-31 Thread Frederic Cambus
Hi tech@,

The size of kernel fonts in RAMDISKs has long been a problem on systems
with large screen resolutions booting via EFI, as currently only the 8x16
font is built into RAMDISKs. As those systems are becoming more common, I
would like to revisit the topic.

Currently, we decide which font is built into which kernel in wsfont(9),
which will only add the 8x16 one when SMALL_KERNEL is defined, and larger
fonts for selected architectures for !SMALL_KERNEL. There is no way to
distinguish between RAMDISK and RAMDISK_CD kernels using #ifdef trickery,
so with the current approach we cannot add larger fonts only on RAMDISK_CD.
As a reminder, we cannot add them to RAMDISK because there is no space
left on the floppies, and there is no support for EFI systems on the
floppies anyway.

However, unless I overlooked something, this could be solved by adding
option directives directly in the RAMDISK_CD kernel configuration file.

This is how it would look for amd64:

Index: sys/arch/amd64/conf/RAMDISK_CD
===
RCS file: /cvs/src/sys/arch/amd64/conf/RAMDISK_CD,v
retrieving revision 1.190
diff -u -p -r1.190 RAMDISK_CD
--- sys/arch/amd64/conf/RAMDISK_CD	27 Dec 2020 23:05:37 -0000	1.190
+++ sys/arch/amd64/conf/RAMDISK_CD	31 May 2021 09:39:24 -0000
@@ -20,6 +20,11 @@ option   MSDOSFS
 option INET6
 option CRYPTO
 
+option FONT_SPLEEN8x16
+option FONT_SPLEEN12x24
+option FONT_SPLEEN16x32
+option FONT_SPLEEN32x64
+
 option RAMDISK_HOOKS
 option MINIROOTSIZE=7360
 
Does this look reasonable?

If it does and if we want to go this way, I can try to build a release
and check if MINIROOTSIZE must be bumped on RAMDISK_CD. Then we could do
the same for i386, armv7 and arm64.