On Tue, 1 Sep 2020 20:57:09 +0200
Klemens Nanni wrote:
> The driver increases a static peer counter across all wg interfaces
> when creating peers; the peer number is only used in debug output,
> though.
>
> Feedback? OK?
At a high level, I understand the problem, and this makes debugging easier.
It
Like ospfd, ospf6d can use ROUTE_FLAGFILTER to opt out of receiving messages
relating to L2 and broadcast routes on its routing socket. We've been running
this for a week or so with no problems.
ok?
Index: kroute.c
===================================================================
RCS file: /cv
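For anyone unfamiliar with the knob: ROUTE_FLAGFILTER is set with
setsockopt(2) on the routing socket, the same way ospfd does it. A minimal
sketch of the opt-out (the flag set here mirrors what I would expect the
diff to use, so treat it as illustrative):

/*
 * Sketch: ask the kernel not to deliver routing messages for
 * RTF_LLINFO (L2) and RTF_BROADCAST routes on this socket.
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <net/route.h>
#include <err.h>

int
main(void)
{
	unsigned int rtfilter = RTF_LLINFO | RTF_BROADCAST;
	int fd;

	if ((fd = socket(AF_ROUTE, SOCK_RAW, AF_INET6)) == -1)
		err(1, "socket");
	if (setsockopt(fd, AF_ROUTE, ROUTE_FLAGFILTER, &rtfilter,
	    sizeof(rtfilter)) == -1)
		err(1, "setsockopt(ROUTE_FLAGFILTER)");
	/* read(2) routing messages as usual from here on */
	return 0;
}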
On Tue, Sep 01, 2020 at 11:44:03PM -0500, Jordan Hargrave wrote:
> This patch adds a common function for scanning the PCI Express capability list.
> The PCIe capability list starts at offset 0x100 in extended PCI configuration space.
This seems to only handle extended capabilities?
Something like pcie_get_ex
Oh, good catch, thanks. Weird, it does compile!
From: Daniel Dickman
Sent: Tuesday, September 1, 2020 11:23 PM
To: Jordan Hargrave
Cc: tech@openbsd.org
Subject: Re: [PATCH] Add IOMMU support for Intel VT-d and AMD-Vi
> [PATCH] Add IOMMU support for Intel VT-d and
This patch adds a common function for scanning the PCI Express capability list.
The PCIe capability list starts at offset 0x100 in extended PCI configuration space.
---
 sys/dev/pci/pci.c    | 28 ++++++++++++++++++++++++++++
 sys/dev/pci/pcivar.h |  2 ++
 2 files changed, 30 insertions(+)
diff --git a/sys/d
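For context, walking that list is a short loop: the dword at each offset
packs a 16-bit capability ID, a 4-bit version, and a 12-bit pointer to the
next entry. A generic sketch of the traversal, not the diff itself (the
function name and signature are made up; pci_conf_read() is the usual
pci(4) accessor):

#define PCI_PCIE_ECAP_BASE	0x100

/* Illustrative helper, not the one in the diff. */
int
pci_get_ext_capability(pci_chipset_tag_t pc, pcitag_t tag, int capid,
    int *offsetp)
{
	pcireg_t reg;
	int off = PCI_PCIE_ECAP_BASE;

	do {
		reg = pci_conf_read(pc, tag, off);
		if (reg == 0 || reg == 0xffffffff)
			break;			/* no extended caps */
		if ((reg & 0xffff) == capid) {	/* bits 15:0: cap ID */
			if (offsetp != NULL)
				*offsetp = off;
			return 1;
		}
		off = (reg >> 20) & 0xffc;	/* bits 31:20: next ptr */
	} while (off != 0);

	return 0;
}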
> [PATCH] Add IOMMU support for Intel VT-d and AMD-Vi
>
> This hooks each PCI device and overrides the bus_dmamap_xxx functions
> to remap DMA requests into virtual DMA space. It prevents devices
> from issuing I/O requests to system memory outside the requested
> DMA space.
Hi Jordan, th
Moving from bugs to tech.
cwen reported that base-clang crashed on macppc in graphics/babl and
emulators/mednafen [1]. I observed that clang crashed on powerpc64 in
mednafen. I now propose to backport a commit in llvm 11.x git [2] to
prevent these crashes. The change affects other arches as well.
[1]
[PATCH] Add IOMMU support for Intel VT-d and AMD-Vi
This hooks each PCI device and overrides the bus_dmamap_xxx functions
to remap DMA requests into virtual DMA space. It prevents devices
from issuing I/O requests to system memory outside the requested
DMA space.
---
sys/arch/amd64/conf/
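The override pattern itself is the interesting part: each device gets a
wrapped bus_dma tag whose map functions are swapped for IOMMU-aware ones
that translate segment addresses into the device's domain. A rough sketch
under made-up names (iommu_dma_tag, iommu_map_segments, and the domain
type are all illustrative, not necessarily what the patch does):

#include <sys/param.h>
#include <machine/bus.h>

struct iommu_domain;			/* per-device remapping context */

struct iommu_dma_tag {
	struct bus_dma_tag	 idt_tag;	/* overridden tag, first */
	bus_dma_tag_t		 idt_parent;	/* the original tag */
	struct iommu_domain	*idt_domain;
};

int
iommu_dmamap_load(bus_dma_tag_t t, bus_dmamap_t map, void *buf,
    bus_size_t buflen, struct proc *p, int flags)
{
	struct iommu_dma_tag *idt = (struct iommu_dma_tag *)t;
	int error;

	/* Let the parent tag build the physical segment list... */
	error = bus_dmamap_load(idt->idt_parent, map, buf, buflen, p, flags);
	if (error)
		return error;

	/*
	 * ...then rewrite each segment to a device-visible address
	 * mapped in the IOMMU domain (hypothetical helper).
	 */
	return iommu_map_segments(idt->idt_domain, map);
}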
On Mon, Aug 31, 2020 at 08:50:09AM +0200, Theo Buehler wrote:
> On Tue, Aug 25, 2020 at 03:28:03PM +0200, Claudio Jeker wrote:
> > On Tue, Aug 25, 2020 at 08:38:06PM +1000, Matt Dunwoodie wrote:
> > > On Tue, 25 Aug 2020 08:54:10 +0200
> > > Claudio Jeker wrote:
> > >
> > > > On Tue, Aug 25, 2020
The driver increases a static peer counter across all wg interfaces when
creating peers; the peer number is only used in debug output, though.
Console output from recreating an interface (peers 2 and 4 are the same):
wg1: Receiving handshake response from peer 2
wg1: Receiving k
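The obvious shape of a fix, given the number is only for debug output, is
to move the counter from a global static into the per-interface softc so
each interface numbers its peers independently. A sketch with made-up
field names:

/* before: one counter shared by every wg(4) interface */
static uint64_t wg_peer_counter;

/* after: one counter per interface (sc_peer_counter is made up) */
struct wg_softc {
	/* ... existing members ... */
	uint64_t	sc_peer_counter;
};

/* in peer creation, keyed to the interface instead of globally: */
peer->p_id = ++sc->sc_peer_counter;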
On Tue, Sep 01, 2020 at 06:14:05PM +0200, Mark Kettenis wrote:
> > Date: Tue, 1 Sep 2020 11:05:26 -0500
> > From: Scott Cheloha
> >
> > Hi,
> >
> > At boot, if we don't know the lapic frequency offhand we compute it by
> > waiting for a known clock (the i8254) with a known frequency to cycle
> >
> Date: Tue, 1 Sep 2020 11:05:26 -0500
> From: Scott Cheloha
>
> Hi,
>
> At boot, if we don't know the lapic frequency offhand we compute it by
> waiting for a known clock (the i8254) with a known frequency to cycle
> a few times.
>
> Currently we cycle hz times. This doesn't make sense. Ther
Hi,
At boot, if we don't know the lapic frequency offhand we compute it by
waiting for a known clock (the i8254) with a known frequency to cycle
a few times.
Currently we cycle hz times. This doesn't make sense. There is
little to no benefit to waiting additional cycles if your kernel is
compil
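The calibration itself is just a ratio: count how far the lapic timer
moves across a known number of i8254 ticks and scale up to one second. A
simplified sketch of the idea (i8254_delay_ticks() is a hypothetical
stand-in for the existing busy-wait; lapic_gettick() reads the lapic
countdown timer):

#define I8254_FREQ	1193182			/* i8254 input clock, Hz */
#define CAL_TICKS	(I8254_FREQ / 100)	/* sample for ~10ms */

uint32_t
lapic_calibrate(void)
{
	uint32_t start, end;

	start = lapic_gettick();
	i8254_delay_ticks(CAL_TICKS);		/* hypothetical helper */
	end = lapic_gettick();

	/* The lapic timer counts down; scale the sample to 1 second. */
	return (start - end) * (I8254_FREQ / CAL_TICKS);
}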
On Mon, Aug 17, 2020 at 05:55:34PM -0500, Scott Cheloha wrote:
>
> [...]
Two-week bump.
In summary:
- Merge the critical sections so that "timer swap" with setitimer(2)
is atomic.
- To do this, move error-free operations into a common kernel
subroutine, setitimer(). Now we have one critic
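As I read the summary, the shape of the change is: do the copyin/copyout
and validation outside the lock, so only error-free assignments remain
inside it, and one critical section can then cover both halves of the
swap. Schematically (itimer_mtx and ps_timer are meant to suggest the
kernel's names; treat them as illustrative):

void
itimer_swap(struct process *pr, int which,
    const struct itimerspec *newits, struct itimerspec *oldits)
{
	/* Copyin and validation already happened; nothing below fails. */
	mtx_enter(&itimer_mtx);
	if (oldits != NULL)
		*oldits = pr->ps_timer[which];	/* fetch old value */
	if (newits != NULL)
		pr->ps_timer[which] = *newits;	/* install new value */
	mtx_leave(&itimer_mtx);
}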
On 2020/08/31 08:39, Otto Moerbeek wrote:
> A question from Theo made me think about realloc and come up with a
> particularly bad case for performance. I do not know if it happens in
> practice, but it was easy to create a test program to hit the case.
Not very scientific testing (a single attempt
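I don't know which pattern the test program uses, but one classic way to
make realloc look bad is to grow an allocation in small steps, which can
force a copy on nearly every call. A hypothetical stand-in:

#include <err.h>
#include <stdlib.h>
#include <string.h>

#define STEP	(4096 + 16)	/* just over a page, to defeat in-place growth */
#define ROUNDS	20000

int
main(void)
{
	char *p = NULL;
	size_t sz = 0;
	int i;

	for (i = 0; i < ROUNDS; i++) {
		sz += STEP;
		if ((p = realloc(p, sz)) == NULL)
			err(1, "realloc");
		memset(p + sz - STEP, 0, STEP);	/* touch the new space */
	}
	free(p);
	return 0;
}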