Re: [vm-bhyve] Windows 2012 and 2016 servers guests would not stop

2019-04-22 Thread Rodney W. Grimes
> Paul Vixie wrote:
> > 
> > Victor Sudakov wrote on 2019-04-22 19:43:
> > ...
> > >> And the implementation is pretty brutal:
> > >> # 'vm stopall'
> > >> # stop all bhyve instances
> > >> # note this will also stop instances not started by vm-bhyve
> > >> #
> > >> core::stopall(){
> > >>  local _pids=$(pgrep -f 'bhyve:')
> > >>
> > >>  echo "Shutting down all bhyve virtual machines"
> > >>  killall bhyve
> > >>  sleep 1
> > >>  killall bhyve
> > >>  wait_for_pids ${_pids}
> > >> }
> > 
> > yow.
> 
> To be sure, I was unable to find the above code (as is) in
> /usr/local/lib/vm-bhyve/vm-* (the vm-bhyve port 1.3.0). It may be that
> something more intelligent is happening in a more recent version, like a
> sequential shutdown. However, "kill $pid; sleep 1; kill $pid" seems to
> be still present.

I probably pulled that from old code, from:
vm-bhyve-1.2.3 Management system for bhyve virtual machines

> 
> > 
> > >>
> > >> I wonder what the effect of the second kill is,
> > >> that seems odd.
> > > 
> > > Indeed.
> > 
> > the first killall will cause each client OS to see a soft shutdown 
> > signal. the sleep 1 gives them some time to flush their buffers. the 
> > second killall says, time's up, just stop.
> > 
> > i think this is worse than brutal, it's wrong. consider freebsd's own 
> > work flow when trying to comply with the first soft shutdown it got:
> > 
> > https://github.com/freebsd/freebsd/blob/master/sbin/reboot/reboot.c#L220
> > 
> > this has bitten me more than once, because using "pageins" as a proxy 
> > for "my server processes are busy trying to synchronize their user mode 
> > state" is inaccurate. i think _any_ continuing I/O should be reason to 
> > wait the full 60 seconds.
> 
> Would it be beneficial to just hack /usr/local/lib/vm-bhyve/vm-* ?

One can always hack up experiments; my vm-bhyve has diverged fairly
far from the release bits, as I have changed it so that it
better meets my needs.  Most of that is not submittable as upstream
changes; some I should really sort out and try to push up.

Some is due to local changes to bhyve that are not mainstream
and thus are not (yet) applicable.

> > and so i think the "sleep 1" above should be a "sleep 65".
> > 
> > > What is needed in vm-bhyve is the feature that if ACPI does not stop the
> > > guest for a predefined period of time, the guest is powered off.
> > 
> > i agree with this.
> 
> Will you please support the bug report: 
> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=237479

More powerful would be if we could come up with some
patches against 1.3.0 that effect some of the changes
we desire.  And some more investigation as to just how
the guests are handling this ACPI shutdown event.  What
might be wrong for FreeBSD might be right for Windows?

Does the ACPI spec say anything about hitting the
power button twice within 1 second, for example?
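
A minimal sketch of the kind of patch being discussed: ask for an ACPI
shutdown once, then fall back to a hard poweroff after a timeout.  This
is untested and is not vm-bhyve code; the function name, the 60 second
default, and the exact pgrep match are all placeholders.

# Sketch only: graceful stop with a forced-poweroff fallback.
# Assumes the guest was started with a proctitle of "bhyve: <name>".
vm::stop_or_poweroff(){
    local _name="$1"
    local _timeout=${2:-60}
    local _pid=$(pgrep -fx "bhyve: ${_name}")

    [ -z "${_pid}" ] && return 0
    kill -TERM ${_pid}      # bhyve(8) turns SIGTERM into an ACPI power-button event

    while [ ${_timeout} -gt 0 ]; do
        kill -0 ${_pid} 2>/dev/null || return 0
        sleep 1
        _timeout=$((_timeout - 1))
    done

    # guest ignored the ACPI request; pull the virtual plug
    bhyvectl --vm="${_name}" --force-poweroff
}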

> Victor Sudakov,  VAS4-RIPE, VAS47-RIPN

-- 
Rod Grimes rgri...@freebsd.org


Re: [vm-bhyve] Windows 2012 and 2016 servers guests would not stop

2019-04-22 Thread Rodney W. Grimes
> Victor Sudakov wrote on 2019-04-22 19:43:
> ...
> >> And the implementation is pretty brutal:
> >> # 'vm stopall'
> >> # stop all bhyve instances
> >> # note this will also stop instances not started by vm-bhyve
> >> #
> >> core::stopall(){
> >>  local _pids=$(pgrep -f 'bhyve:')
> >>
> >>  echo "Shutting down all bhyve virtual machines"
> >>  killall bhyve
> >>  sleep 1
> >>  killall bhyve
> >>  wait_for_pids ${_pids}
> >> }
> 
> yow.
> 
> >>
> >> I wonder what the effect of the second kill is,
> >> that seems odd.
> > 
> > Indeed.
> 
> the first killall will cause each client OS to see a soft shutdown 
> signal. the sleep 1 gives them some time to flush their buffers. the 
> second killall says, time's up, just stop.
> 
> i think this is worse than brutal, it's wrong. consider freebsd's own 
> work flow when trying to comply with the first soft shutdown it got:
> 
> https://github.com/freebsd/freebsd/blob/master/sbin/reboot/reboot.c#L220

Interesting; more interesting would be to dig out the
path that the system takes when it gets the ACPI shutdown
event.  That is not going to end up in sbin/reboot, is it?

Doesn't that run from within init itself?

I find in the init man page this:
 The init utility will terminate all possible processes (again, it will
 not wait for deadlocked processes) and reboot the machine if sent the
 interrupt (INT) signal, i.e. ``kill -INT 1''.  This is useful for
 shutting the machine down cleanly from inside the kernel or from X when
 the machine appears to be hung.

So my guess is that sending an ACPI shutdown event to the VM ends up
sending a -INT to init.  Looking inside init, it seems to end up
execing a shell running /etc/rc.shutdown.

I do not know if the ACPI event is then blocked in the kernel so
that the second one is pointless, but I believe once we enter into
-INT processing that signal is ignored, so in fact this second
signal is actually OK.  But I am not 100% sure on this, yet.
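
One rough way to check this by hand ('myguest' below is a placeholder
VM name, and this is only an experiment sketch, not a recommendation):

# One SIGTERM asks bhyve(8) to press the guest's virtual power button:
kill -TERM "$(pgrep -f 'bhyve: myguest')"
# On the guest console, a FreeBSD guest should then show init(8)
# running /etc/rc.shutdown; if init really does ignore further -INTs
# while that runs, a second SIGTERM during that window is harmless.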


> this has bitten me more than once, because using "pageins" as a proxy 
> for "my server processes are busy trying to synchronize their user mode 
> state" is inaccurate. i think _any_ continuing I/O should be reason to 
> wait the full 60 seconds.
> 
> and so i think the "sleep 1" above should be a "sleep 65".

I think the sleep and the second signal are nearly pointless.  If
we have a race in ACPI processing, we need to fix that.

> > What is needed in vm-bhyve is the feature that if ACPI does not stop the
> > guest for a predefined period of time, the guest is powered off.
> 
> i agree with this.

There is a PR up:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=237479
Please add comments there if you feel so inclined.

> -- 
> P Vixie

-- 
Rod Grimes rgri...@freebsd.org


Re: [vm-bhyve] Windows 2012 and 2016 servers guests would not stop

2019-04-22 Thread Victor Sudakov
Paul Vixie wrote:
> 
> Victor Sudakov wrote on 2019-04-22 19:43:
> ...
> >> And the implementation is pretty brutal:
> >> # 'vm stopall'
> >> # stop all bhyve instances
> >> # note this will also stop instances not started by vm-bhyve
> >> #
> >> core::stopall(){
> >>  local _pids=$(pgrep -f 'bhyve:')
> >>
> >>  echo "Shutting down all bhyve virtual machines"
> >>  killall bhyve
> >>  sleep 1
> >>  killall bhyve
> >>  wait_for_pids ${_pids}
> >> }
> 
> yow.

To be sure, I was unable to find the above code (as is) in
/usr/local/lib/vm-bhyve/vm-* (the vm-bhyve port 1.3.0). It may be that
something more intelligent is happening in a more recent version, like a
sequential shutdown. However, "kill $pid; sleep 1; kill $pid" seems to
be still present.
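
(A quick way to confirm that, as a grep sketch against the install
path named above:)

# look for the stop logic in the installed vm-bhyve 1.3.0 scripts
grep -n 'sleep 1' /usr/local/lib/vm-bhyve/vm-*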

> 
> >>
> >> I wonder what the effect of the second kill is,
> >> that seems odd.
> > 
> > Indeed.
> 
> the first killall will cause each client OS to see a soft shutdown 
> signal. the sleep 1 gives them some time to flush their buffers. the 
> second killall says, time's up, just stop.
> 
> i think this is worse than brutal, it's wrong. consider freebsd's own 
> work flow when trying to comply with the first soft shutdown it got:
> 
> https://github.com/freebsd/freebsd/blob/master/sbin/reboot/reboot.c#L220
> 
> this has bitten me more than once, because using "pageins" as a proxy 
> for "my server processes are busy trying to synchronize their user mode 
> state" is inaccurate. i think _any_ continuing I/O should be reason to 
> wait the full 60 seconds.

Would it be beneficial to just hack /usr/local/lib/vm-bhyve/vm-* ?
> 
> and so i think the "sleep 1" above should be a "sleep 65".
> 
> > What is needed in vm-bhyve is the feature that if ACPI does not stop the
> > guest for a predefined period of time, the guest is powered off.
> 
> i agree with this.

Will you please support the bug report: 
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=237479

-- 
Victor Sudakov,  VAS4-RIPE, VAS47-RIPN
2:5005/49@fidonet http://vas.tomsk.ru/




Re: [vm-bhyve] Windows 2012 and 2016 servers guests would not stop

2019-04-22 Thread Paul Vixie
Victor Sudakov wrote on 2019-04-22 19:43:
...
>> And the implementation is pretty brutal:
>> # 'vm stopall'
>> # stop all bhyve instances
>> # note this will also stop instances not started by vm-bhyve
>> #
>> core::stopall(){
>>  local _pids=$(pgrep -f 'bhyve:')
>>
>>  echo "Shutting down all bhyve virtual machines"
>>  killall bhyve
>>  sleep 1
>>  killall bhyve
>>  wait_for_pids ${_pids}
>> }

yow.

>> I wonder what the effect of the second kill is,
>> that seems odd.
>
> Indeed.

the first killall will cause each client OS to see a soft shutdown
signal. the sleep 1 gives them some time to flush their buffers. the
second killall says, time's up, just stop.

i think this is worse than brutal, it's wrong. consider freebsd's own
work flow when trying to comply with the first soft shutdown it got:

https://github.com/freebsd/freebsd/blob/master/sbin/reboot/reboot.c#L220

this has bitten me more than once, because using "pageins" as a proxy
for "my server processes are busy trying to synchronize their user mode
state" is inaccurate. i think _any_ continuing I/O should be reason to
wait the full 60 seconds.

and so i think the "sleep 1" above should be a "sleep 65".

> What is needed in vm-bhyve is the feature that if ACPI does not stop the
> guest for a predefined period of time, the guest is powered off.

i agree with this.

--
P Vixie



Re: [vm-bhyve] Windows 2012 and 2016 servers guests would not stop

2019-04-22 Thread Victor Sudakov
Victor Sudakov wrote:

[dd]

> > > > It may be possible to adjust vm_delay to 0 and have that be better,
> > > > though I have not looked at the code.  You may also wish to discuss
> > > > the issue with the vm-bhyve maintainer and maybe a "lights out"
> > > > procedure needs to be added.
> 
> What is needed in vm-bhyve is the feature that if ACPI does not stop the
> guest for a predefined period of time, the guest is powered off.

I've created a feature request:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=237479
Please support it by adding yourself to the CC list, or by commenting.

-- 
Victor Sudakov,  VAS4-RIPE, VAS47-RIPN
2:5005/49@fidonet http://vas.tomsk.ru/




[Bug 237429] bhyve: Performance regression after 12 upgrade

2019-04-22 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=237429

--- Comment #15 from doc...@doctor.nl2k.ab.ca ---
(In reply to Rodney W. Grimes from comment #13)


Spinlock errors make sense.  What I will do now is to build VMs based on
512MB, and the next machine I obtain will be SAS- and ZFS-friendly.



[Bug 237429] bhyve: Performance regression after 12 upgrade

2019-04-22 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=237429

--- Comment #14 from doc...@doctor.nl2k.ab.ca ---
(In reply to amvandemore from comment #12)
You might have a point.  Now trying to install every VM with 512MB to see
if that helps.



Re: [vm-bhyve] Windows 2012 and 2016 servers guests would not stop

2019-04-22 Thread Victor Sudakov
Rodney W. Grimes wrote:

[dd]

> > > That is more likely your problem in that you're sending these ACPI
> > > shutdown requests one at a time, and they should be broadcast in
> > > the "power going out" case.
> > 
> > Whence is the idea that "vm stopall" does a sequential shutdown? What sense
> > would that make? 
> 
> Well I am not sure that vm-bhyve does this, but esxi certainly does,
> and it is a royal PITA to deal with sometimes.  It does make sense
> in some aspects; we do not want the database server going offline before
> all the clients go down, do we?  Kinda makes for a bunch of nonsense
> errors logged due to a missing server.  I kinda like my virtual routers
> and firewalls to keep going till the end too.
> 
> This is all YMMV situations though.

OK, you have convinced me, a predictable sequential shutdown may make
sense sometimes. Anyway, it's not there in vm-bhyve so it's not the
reason for the slow shutdown.

> 
> > A sequential startup does make sense but a sequential shutdown? Useless
> > I think. The man page says that 
> 
> For you perhaps useless, but that rarely rules out that there may be
> a totally valid and useful case.
> 
> > stopall
> >  Stop all running virtual machines.  This sends a stop command to
> >  all bhyve(8) instances, regardless of whether they were started
> >  using vm or not.
> 
> And the implementation is pretty brutal:
> # 'vm stopall'
> # stop all bhyve instances
> # note this will also stop instances not started by vm-bhyve
> # 
> core::stopall(){
> local _pids=$(pgrep -f 'bhyve:')
> 
> echo "Shutting down all bhyve virtual machines"
> killall bhyve
> sleep 1
> killall bhyve
> wait_for_pids ${_pids}
> }
> 
> I wonder what the effect of the second kill is,
> that seems odd.

Indeed.

> Almost like you might cause more
> issues than you solve, as now you already have a
> VM in what should be the ACPI shutdown process.
> 
> Also this killall operation probably puts high stress
> on disk i/o, as you just woke up and made all the vm's
> get busy all at once, and you're going to basically thrash
> every single resource they are using (yet another reason
> you may actually want to serialize these shutdown operations).

You are right.
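
(A sketch of what a serialized variant could look like; this assumes
'vm list' prints the guest name in the first column after a header
line, and a real patch would also need to wait for each guest to
exit before moving on:)

# serialize the ACPI shutdown requests instead of killall'ing them all
vm list | awk 'NR > 1 { print $1 }' | while read -r _name; do
    vm stop "${_name}"
    # a real version would poll here until the guest's bhyve exits
done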

> 
> IIRC windows, especially newer ones, do a boatload of work
> on a shutdown unless they are told to shut down quickly.  I.e.,
> they even try to apply pending updates and such; that could
> be part of why you see windows vm's lagging around.

Do you know how to configure Windows for an unconditional ACPI shutdown?
Last time I stopped my Windows guests, they stopped pretty quickly. But
sometimes it just takes them forever to stop. Maybe it was giving a
shutdown warning to the users or something. Or maybe it was this issue:
https://serverfault.com/questions/871792/acpi-shutdown-does-not-always-work-on-a-windows-server-virtual-machine


> You may also want to disable the "clear page file on shutdown"
> setting; that is a windows thing.  A huge page file can do a LOT
> of io, and if you have a few windows vm's on the same spindle and
> try to stop them all at once, you're going to thrash that disk for
> much longer than you need.

Last time I checked on the Windows 2012 and 2016 servers, the
ClearPageFileAtShutdown setting was 0x0. I think it is the default.

> > > It may be possible to adjust vm_delay to 0 and have that be better,
> > > though I have not looked at the code.  You may also wish to discuss
> > > the issue with the vm-bhyve maintainer and maybe a "lights out"
> > > procedure needs to be added.

What is needed in vm-bhyve is the feature that if ACPI does not stop the
guest for a predefined period of time, the guest is powered off.

-- 
Victor Sudakov,  VAS4-RIPE, VAS47-RIPN
2:5005/49@fidonet http://vas.tomsk.ru/




[Bug 237429] bhyve: Performance regression after 12 upgrade

2019-04-22 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=237429

--- Comment #13 from Rodney W. Grimes  ---
(In reply to doctor from comment #11)
I am concerned that you have loaded zfs when you have no pool whatsoever
present on this machine.  My assumption is that neither did 11.2, so can
you please remove zfs, just so we know that in no possible way is it having
an effect.  I am pretty sure that zfs grabs a chunk of memory at boot time
even if you just load it and never do a zpool import, but it should not be
a very big chunk.

If in fact you do have a pool, even if you're not using it, zfs is going to
import it and suck up a bunch of your memory.

I am not so interested in a large chunk of top data;
however, a single shot of "ps alwwx" would be of interest if you could get
one during the slowdown phase.

At this stage I am pretty convinced this is additional memory pressure
and/or system tuning changes that came with 12.0.  It does have a slightly
larger memory footprint, and there is a fairly extensive set of vm system
changes between 11.2 and 12.0.



[Bug 237429] bhyve: Performance regression after 12 upgrade

2019-04-22 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=237429

amvandem...@gmail.com changed:

           What    |Removed |Added
   ------------------------------------------
           CC      |        |amvandem...@gmail.com

--- Comment #12 from amvandem...@gmail.com ---
I don't think this is a reasonable bug report: greatly overcommitted
resources, and configuration choices like loading ZFS when the system is
already memory-starved.

I expect the output of something like 'top -aSwd 2' would provide much of
the clarity missing from this report.

If you do believe this to be an actual bug in 12, can you provide a minimal
test case for reproduction?

Also, can you expand on the CPU errors you are seeing in guest VM's?  I
suspect these are spinlock errors which occur because you are
overcommitted, not because you are running 12-STABLE.



[Bug 237429] bhyve: Performance regression after 12 upgrade

2019-04-22 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=237429

--- Comment #11 from doc...@doctor.nl2k.ab.ca ---
(In reply to Rodney W. Grimes from comment #10)

> (In reply to doctor from comment #9)
> > 1) In case I wish to expand to say 64G RAM I can do so without trying to
> > rebuild the server.
> Ok, valid, thanks for clarifying.

Well, small server.

> > 2) Running UFS, not ZFS.  Did I forget to mention this?
> Your system shows it has loaded ZFS:
> ZFS filesystem version: 5
> ZFS storage pool version: features support (5000)
> you can say you're not using it, but your dmesg says differently.

I know I have it enabled; I would like to experiment with a hybrid
UFS / ZFS system.

> > 3) In 11.2 there were no problems.  In 12.0 these problems suddenly
> > manifested.
>
> Are you certain that the only change was that from 11.2 to 12.0,
> or did some other change, perhaps one not considered, cause the
> issues?  I am simply trying to identify why performance
> would be low on 12.0 independent of any prior status.

I do have a 2M "ps axww | top -t" file that needs to be added.

> > 4) I was able to put up to 8 VMs in 11.2 and below without issue.
> 8 VM's with 4G each on 11.2 with 16G of memory, I do not accept this as
> true; that is 32G of memory commit + host memory needs.  That should
> have been thrashing itself unless those 8 vm's are pretty well idle.

They idle for the most part on 11.2.



[Bug 236989] AWS EC2 lockups "Missing interrupt"

2019-04-22 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=236989

Rodney W. Grimes changed:

           What    |Removed |Added
   ------------------------------------------
           CC      |        |ch...@freebsd.org,
                   |        |rgri...@freebsd.org

--- Comment #1 from Rodney W. Grimes  ---
Pulling in Chuck Tuffli as an NVMe expert.



[Bug 236989] AWS EC2 lockups "Missing interrupt"

2019-04-22 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=236989

Mark Linimon changed:

           What    |Removed         |Added
   --------------------------------------------------------
           Assignee|b...@freebsd.org|virtualizat...@freebsd.org



[Bug 237429] bhyve: Performance regression after 12 upgrade

2019-04-22 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=237429

--- Comment #10 from Rodney W. Grimes  ---
(In reply to doctor from comment #9)
> 1) In case I wish to expand to say 64G RAM I can do so without trying to 
> rebuild the server.
Ok, valid, thanks for clarifying.

> 2) Running UFS, not ZFS.  Did I forget to mention this?
Your system shows it has loaded ZFS:
ZFS filesystem version: 5
ZFS storage pool version: features support (5000)
you can say you're not using it, but your dmesg says differently.

> 3) In 11.2 there were no problems.  In 12.0 these problems suddenly
> manifested.

Are you certain that the only change was that from 11.2 to 12.0,
or did some other change, perhaps one not considered, cause the
issues?  I am simply trying to identify why performance
would be low on 12.0 independent of any prior status.

> 4) I was able to put up to 8 VMs in 11.2 and below without issue.
8 VM's with 4G each on 11.2 with 16G of memory, I do not accept this as
true; that is 32G of memory commit + host memory needs.  That should have
been thrashing itself unless those 8 vm's are pretty well idle.



[Bug 237429] bhyve: Performance regression after 12 upgrade

2019-04-22 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=237429

--- Comment #9 from doc...@doctor.nl2k.ab.ca ---
(In reply to Rodney W. Grimes from comment #8)
1) In case I wish to expand to say 64G RAM I can do so without trying to
rebuild the server.

2) Running UFS, not ZFS.  Did I forget to mention this?

3) In 11.2 there were no problems.  In 12.0 these problems suddenly
manifested.

4) I was able to put up to 8 VMs in 11.2 and below without issue.



[Bug 237429] bhyve: Performance regression after 12 upgrade

2019-04-22 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=237429

--- Comment #8 from Rodney W. Grimes  ---
Something that jumps out is:
warning: total configured swap (58982400 pages) exceeds maximum recommended
amount (1612 pages).

You have ~60G of swap space configured; the system only has 16G of memory.
Do you actually expect to overcommit nearly 4x on memory?

To go with this, the 3 vm's you show are 4G in memory size, so they would
need 12 of the 16G for guests, leaving your host running ZFS with only
4G of memory.  Have you tuned the arc cache to be less than about 2G?
With just these 3 VM's you're going to start having memory pressure.
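
(For example, capping the ARC from /boot/loader.conf; the 2G value here
is just the figure suggested above, not a recommendation:)

# /boot/loader.conf: cap the ZFS ARC so the host keeps memory for guests
vfs.zfs.arc_max="2G"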

I see 16 tap devices configured manually; this is normally handled by
vm-bhyve, but it leads me to the question: how many VM's are you trying to
run at once, and what is the total memory footprint?
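
(A quick way to answer that from the host, as a sketch; the /vm/*/*.conf
path is only a guess at the vm-bhyve datastore layout, so adjust it to
your $vm_dir:)

# list each guest's configured memory= setting so the total can be
# compared by hand against the 16G of physical RAM
sed -n 's/^memory=//p' /vm/*/*.conf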
