On Mon, Sep 04, 2023 at 07:57:18PM +0200, Mischa wrote:
> On 2023-09-04 18:58, Mischa wrote:
> > On 2023-09-04 18:55, Mischa wrote:
> > > On 2023-09-04 17:57, Dave Voutila wrote:
> > > > Mischa <open...@mlst.nl> writes:
> > > > > On 2023-09-04 16:23, Mike Larkin wrote:
> > > > > > On Mon, Sep 04, 2023 at 02:30:23PM +0200, Mischa wrote:
> > > > > > > On 2023-09-03 21:18, Dave Voutila wrote:
> > > > > > > > Mischa <open...@mlst.nl> writes:
> > > > > > > >
> > > > > > > > > Nice!! Thanx Dave!
> > > > > > > > >
> > > > > > > > > Running go brrr as we speak.
> > > > > > > > > Testing with someone who is running Debian.
> > > > > > > >
> > > > > > > > Great. I'll plan on committing this tomorrow afternoon
> > > > > > > > (4 Sep) my time unless I hear of any issues.
> > > > > > > There are a couple of permanent VMs running on this host:
> > > > > > > a ToR node, an OpenBSD VM, and a Debian VM.
> > > > > > > While they were running I started my stress script.
> > > > > > > For the first round I started 40 VMs with just bsd.rd and
> > > > > > > 2G of memory; all good. Then I started 40 VMs with a base
> > > > > > > disk and 2G of memory.
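> > > > > > > The base-disk round was roughly this per VM (illustrative,
> > > > > > > not the exact script; base image name assumed):
> > > > > > >
> > > > > > > vmctl create -b /var/vmm/vm09.qcow2 /var/vmm/vm${i}.qcow2
> > > > > > > vmctl start -L -d /var/vmm/vm${i}.qcow2 -m 2G vm${i}
> > > > > > >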
> > > > > > > After 20 VMs started I got the following messages on the console:
> > > > > > > [vmd]16390/221323 sp=752d7ac9f090 inside 75c264948000-75c26147fff: not MAP_STACK
> > > > > > > [vmd]59360/355276 sp=783369$96750 inside 7256d538c000-725645b8bfff: not MAP_STACK
> > > > > > > [vmd]72263/319211 sp=70fb86794b60 inside 75247a4d2000-75247acd1fff: not MAP_STACK
> > > > > > > [vmd]42824/38950 sp=7db1ed2a64d0 inside 756c57d18000-756c58517fff: not MAP_STACK
> > > > > > > [vmd]9808/286658 sp=7db1ed2a64d0 inside 70f685f41000-70f6867d0fff: not MAP_STACK
> > > > > > > [vmd]93279/488634 sp=72652c3e3da0 inside 7845f168d000-7845f1e8cfff: not MAP_STACK
> > > > > > > [vmd]55924/286116 sp=7eac5a1ff060 inside 7b88bcb79000-7b88b4378fff: not MAP_STACK
> > > > > > > Not sure if this is related to starting the VMs or to
> > > > > > > something else; the ToR node was consuming 100%+ CPU at
> > > > > > > the time. :)
> > > > > > > Mischa
> > > > > > I have not seen this; can you try without the ToR node some
> > > > > > time and see if this still happens?
> > > > >
> > > > > Testing again without any other VMs running.
> > > > > Things go wrong when I run the following command and wait a little.
> > > > >
> > > > > for i in $(jot 10 10); do
> > > > >   vmctl create -b /var/vmm/vm09.qcow2 /var/vmm/vm${i}.qcow2 &&
> > > > >     vmctl start -L -d /var/vmm/vm${i}.qcow2 -m 2G vm${i}
> > > > > done
> > > >
> > > > Can you try adding a "sleep 2" or something in the loop? I
> > > > can't think of a reason my changes would cause this. Do you see
> > > > this on -current without the diff?
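> > > > Something like this, an untested sketch, just your same loop
> > > > with a delay between iterations:
> > > >
> > > > for i in $(jot 10 10); do
> > > >   vmctl create -b /var/vmm/vm09.qcow2 /var/vmm/vm${i}.qcow2 &&
> > > >     vmctl start -L -d /var/vmm/vm${i}.qcow2 -m 2G vm${i}
> > > >   sleep 2
> > > > done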
> > >
> > > Adding the sleep 2 does indeed help. I managed to get 20 VMs
> > > started this way; before, it would choke after 2-3.
> > >
> > > Do I only need the unpatched kernel or also the vmd/vmctl from snap?
> >
> > I do still get the same message on the console, but the machine isn't
> > freezing up.
> >
> > [vmd]73152/210775 sp=7a5f577a1780 inside 702698535000-702698d34fff: not MAP_STACK
>
> Starting 30 VMs this way caused the machine to become unresponsive again,
> but nothing on the console. :(
>
> Mischa

Were you seeing these uvm errors before this diff? If so, this isn't
causing the problem and something else is.

If the errors only occur with this diff applied, and everything is fine
without it, then we need to look into that.
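
A quick way to A/B that, as a sketch (assuming you save the unpatched
kernel before installing the patched one):

  cp /bsd /bsd.nopatch     # before installing the patched kernel
  boot /bsd.nopatch        # at the boot> prompt, to go back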


Also, I think a pid number in that printf might be useful; I'll see
what I can find. If it's not vmd causing this but rather some other
process, that would be good to know as well.
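
In the meantime, if you catch one of those messages live, you could
check what that process actually is; rough sketch, with the pid taken
from your first message (assuming the number right after the bracket
is the pid):

  ps -p 16390 -o pid,user,command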
