Re: 13.0 RC4 might be delayed

2021-03-28 Thread Alan Somers
On Sun, Mar 28, 2021 at 10:36 PM Gleb Popov  wrote:

> On Mon, Mar 29, 2021 at 4:37 AM David G Lawrence via freebsd-current <
> freebsd-current@freebsd.org> wrote:
>
> > > > On 27/03/21 06:04, David G Lawrence via freebsd-current wrote:
> > > >>> On Fri, Mar 26, 2021 at 1:01 PM Graham Perrin <
> > grahamper...@gmail.com>
> > > >>> wrote:
> > > >>>
> > >  On 26/03/2021 03:40, The Doctor via freebsd-current wrote:
> > > > … if people are having issues with ports like …
> > > 
> > >  If I'm not mistaken:
> > > 
> > >  * 13.0-RC3 seems to be troublesome, as a guest machine, with
> > >  emulators/virtualbox-ose 6.1.18 as the host
> > > 
> > >  * no such trouble with 12.0-RELEASE-p5 as a guest.
> > > 
> > >  I hope to refine the bug report this weekend.
> > > 
> > > >>>
> > > >>> Had nothing but frequent guest lockups on 6.1.18 with my Win7
> > > >>> system. That was right after 6.1.18 was put into ports. Fell back
> > > >>> to legacy (v5) and will try again shortly to see if it's any better.
> > > >>
> > > >> Kevin,
> > > >>
> > > >> Make sure you have these options in your /etc/sysctl.conf:
> > > >>
> > > >> vfs.aio.max_buf_aio=8192
> > > >> vfs.aio.max_aio_queue_per_proc=65536
> > > >> vfs.aio.max_aio_per_proc=8192
> > > >> vfs.aio.max_aio_queue=65536
> > > >>
> > > >> ...otherwise the guest I/O will randomly hang in VirtualBox. This
> > > >> issue was mitigated in a late 5.x VirtualBox by patching to not use
> > > >> AIO, but the issue came back in 6.x when that patch wasn't carried
> > > >> forward.
> > > >
> > > > Sorry I lost that patch. Can you point me to the patch? Maybe it
> > > > can be easily ported.
> > > >
> > >
> > > I found the relevant commit. Please give me some time for testing and
> > > I'll put this patch back in the tree.
> >
> >If you're going to put that patch back in, then AIO should probably be
> > made an option in the port config, as shutting AIO off by default will
> > have a significant performance impact. Without AIO, all guest I/O will
> > become synchronous.
> >
>
> Are you sure about that? Without AIO, VBox uses a generic POSIX backend,
> which is based on pthread, I think.
>

We should also consider changing the defaults.

vfs.aio.max_buf_aio: this is the maximum number of buffered AIO requests
per process.  Buffered AIO requests are only used when directing AIO to
device nodes, not files, and only for devices that don't support unmapped
I/O.  Most devices do support unmapped I/O, including all GEOM devices.
For devices that do support unmapped I/O, the number of AIO requests per
process is unlimited.  So this knob isn't very important.  However, it is
more important on powerpc and mips, where unmapped I/O isn't always
possible.  16 is probably pretty reasonable for mips.

vfs.aio.max_aio_queue_per_proc: this is the maximum queued aio requests per
process.  This applies to all AIO requests, whether to files or devices.
So it ought to be large.  If your program is too unsophisticated to handle
EAGAIN, then it must be very large.  Otherwise, a few multiples of
max(vfs.aio.max_aio_per_proc, your SSD's queue depth) is probably
sufficient.
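A program that treats EAGAIN as "retry later" rather than as a hard failure avoids exactly the hangs discussed in this thread. POSIX AIO has no Python binding, so the following sketch illustrates the same errno discipline with a non-blocking pipe instead of aio_write(2); it is an illustration of the retry pattern, not VirtualBox's actual backend code:

```python
import os
import select
import threading

def write_all(fd, data):
    """Write all of `data` to a non-blocking fd, retrying on EAGAIN.

    This is the recovery a consumer of aio_write(2)/lio_listio(2) needs
    when a limit such as vfs.aio.max_aio_queue_per_proc is hit: wait for
    capacity and resubmit, instead of treating EAGAIN as fatal.
    """
    view = memoryview(data)
    total = 0
    while view:
        try:
            n = os.write(fd, view)
        except BlockingIOError:          # errno == EAGAIN: queue/buffer full
            select.select([], [fd], [])  # block until writable, then retry
            continue
        view = view[n:]
        total += n
    return total

if __name__ == "__main__":
    r, w = os.pipe()
    os.set_blocking(w, False)
    received = bytearray()

    def drain():
        while True:
            chunk = os.read(r, 65536)
            if not chunk:
                break
            received.extend(chunk)

    t = threading.Thread(target=drain)
    t.start()
    # 4 MiB is far larger than the pipe buffer, so EAGAIN will occur.
    sent = write_all(w, b"x" * (4 << 20))
    os.close(w)
    t.join()
    assert sent == len(received) == 4 << 20
```

An unsophisticated program, in the sense above, is one that errors out (or silently drops the I/O) on the `BlockingIOError` branch instead of waiting and retrying.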

vfs.aio.max_aio_per_proc: this is the max number of active aio requests in
the slow path (for I/O to files, or other cases like misaligned I/O to
disks).  Setting this too low won't cause programs to fail, but it could
hurt performance.  Setting it higher than vfs.aio.max_aio_procs probably
won't have any benefit.

vfs.aio.max_aio_queue: like max_aio_per_proc, but global instead of
per-process.  Doesn't need to be more than a few multiples of
max_aio_per_proc.
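The sizing rules above can be collapsed into a small sysctl.conf sketch. The numbers are illustrative only, echoing the values recommended earlier in this thread, not universal recommendations:

```
# Illustrative /etc/sysctl.conf sketch following the sizing rules above;
# values echo those suggested earlier in the thread.
vfs.aio.max_aio_per_proc=8192         # slow-path requests per process
vfs.aio.max_aio_queue_per_proc=65536  # a few multiples of max_aio_per_proc
vfs.aio.max_aio_queue=65536           # global analogue of the above
vfs.aio.max_buf_aio=8192              # only matters without unmapped I/O
```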

Finally, I see that emulators/virtualbox-ose's pkg-message advises checking
for the AIO kernel module.  That advice is obsolete: AIO is nowadays
built into the kernel and always enabled, and there is no kernel module
any longer.

Actually, the defaults don't look unreasonable to me, for an amd64 system
with disk, file, or zvol-backed VMs.  Does virtualbox properly handle
EAGAIN as returned by aio_write, aio_read, and lio_listio?  If not, raising
these limits is a poor substitute for fixing virtualbox.  If so, then I'm
really curious.  If anybody could tell me which limit actually solves the
problem, I would like to know.

-Alan
___
freebsd-current@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to "freebsd-current-unsubscr...@freebsd.org"


Re: 13.0 RC4 might be delayed

2021-03-28 Thread David G Lawrence via freebsd-current
> On Mon, Mar 29, 2021 at 4:37 AM David G Lawrence via freebsd-current <
> freebsd-current@freebsd.org> wrote:
> 
> > > > On 27/03/21 06:04, David G Lawrence via freebsd-current wrote:
> > > >>> On Fri, Mar 26, 2021 at 1:01 PM Graham Perrin <
> > grahamper...@gmail.com>
> > > >>> wrote:
> > > >>>
> > >  On 26/03/2021 03:40, The Doctor via freebsd-current wrote:
> > > > … if people are having issues with ports like …
> > > 
> > >  If I'm not mistaken:
> > > 
> > >  * 13.0-RC3 seems to be troublesome, as a guest machine, with
> > >  emulators/virtualbox-ose 6.1.18 as the host
> > > 
> > >  * no such trouble with 12.0-RELEASE-p5 as a guest.
> > > 
> > >  I hope to refine the bug report this weekend.
> > > 
> > > >>>
> > > >>> Had nothing but frequent guest lockups on 6.1.18 with my Win7
> > > >>> system. That was right after 6.1.18 was put into ports. Fell back
> > > >>> to legacy (v5) and will try again shortly to see if it's any better.
> > > >>
> > > >> Kevin,
> > > >>
> > > >> Make sure you have these options in your /etc/sysctl.conf:
> > > >>
> > > >> vfs.aio.max_buf_aio=8192
> > > >> vfs.aio.max_aio_queue_per_proc=65536
> > > >> vfs.aio.max_aio_per_proc=8192
> > > >> vfs.aio.max_aio_queue=65536
> > > >>
> > > >> ...otherwise the guest I/O will randomly hang in VirtualBox. This
> > > >> issue was mitigated in a late 5.x VirtualBox by patching to not use
> > > >> AIO, but the issue came back in 6.x when that patch wasn't carried
> > > >> forward.
> > > >
> > > > Sorry I lost that patch. Can you point me to the patch? Maybe it
> > > > can be easily ported.
> > > >
> > >
> > > I found the relevant commit. Please give me some time for testing and
> > > I'll put this patch back in the tree.
> >
> >If you're going to put that patch back in, then AIO should probably be
> > made an option in the port config, as shutting AIO off by default will
> > have a significant performance impact. Without AIO, all guest I/O will
> > become synchronous.
> >
> 
> Are you sure about that? Without AIO, VBox uses a generic POSIX backend,
> which is based on pthread, I think.

   No, I'm not sure - I haven't looked at the code. My comment is based
on comments that a VirtualBox developer made in a forum 3-4 years ago,
where he said that the "generic POSIX" I/O was very simple and never
intended to be used in production - only as a placeholder when developing
for a new host platform.

   Are you sure that it does multi-threaded I/O?

-DG


Re: 13.0 RC4 might be delayed

2021-03-28 Thread Gleb Popov
On Mon, Mar 29, 2021 at 4:37 AM David G Lawrence via freebsd-current <
freebsd-current@freebsd.org> wrote:

> > > On 27/03/21 06:04, David G Lawrence via freebsd-current wrote:
> > >>> On Fri, Mar 26, 2021 at 1:01 PM Graham Perrin <
> grahamper...@gmail.com>
> > >>> wrote:
> > >>>
> >  On 26/03/2021 03:40, The Doctor via freebsd-current wrote:
> > > … if people are having issues with ports like …
> > 
> >  If I'm not mistaken:
> > 
> >  * 13.0-RC3 seems to be troublesome, as a guest machine, with
> >  emulators/virtualbox-ose 6.1.18 as the host
> > 
> >  * no such trouble with 12.0-RELEASE-p5 as a guest.
> > 
> >  I hope to refine the bug report this weekend.
> > 
> > >>>
> > >>> Had nothing but frequent guest lockups on 6.1.18 with my Win7
> > >>> system. That was right after 6.1.18 was put into ports. Fell back
> > >>> to legacy (v5) and will try again shortly to see if it's any better.
> > >>
> > >> Kevin,
> > >>
> > >> Make sure you have these options in your /etc/sysctl.conf:
> > >>
> > >> vfs.aio.max_buf_aio=8192
> > >> vfs.aio.max_aio_queue_per_proc=65536
> > >> vfs.aio.max_aio_per_proc=8192
> > >> vfs.aio.max_aio_queue=65536
> > >>
> > >> ...otherwise the guest I/O will randomly hang in VirtualBox. This
> > >> issue was mitigated in a late 5.x VirtualBox by patching to not use
> > >> AIO, but the issue came back in 6.x when that patch wasn't carried
> > >> forward.
> > >
> > > Sorry I lost that patch. Can you point me to the patch? Maybe it can
> > > be easily ported.
> > >
> >
> > I found the relevant commit. Please give me some time for testing and
> > I'll put this patch back in the tree.
>
>If you're going to put that patch back in, then AIO should probably be
> made an option in the port config, as shutting AIO off by default will
> have a significant performance impact. Without AIO, all guest I/O will
> become synchronous.
>

Are you sure about that? Without AIO, VBox uses a generic POSIX backend,
which is based on pthread, I think.


Re: 13.0 RC4 might be delayed

2021-03-28 Thread Greg Rivers via freebsd-current
On Sunday, 28 March 2021 20:37:13 CDT David G Lawrence via freebsd-current 
wrote:
> > > On 27/03/21 06:04, David G Lawrence via freebsd-current wrote:
> > >>> On Fri, Mar 26, 2021 at 1:01 PM Graham Perrin 
> > >>> wrote:
> > >>>
> >  On 26/03/2021 03:40, The Doctor via freebsd-current wrote:
> > > … if people are having issues with ports like …
> > 
> >  If I'm not mistaken:
> > 
> >  * 13.0-RC3 seems to be troublesome, as a guest machine, with
> >  emulators/virtualbox-ose 6.1.18 as the host
> > 
> >  * no such trouble with 12.0-RELEASE-p5 as a guest.
> > 
> >  I hope to refine the bug report this weekend.
> > 
> > >>>
> > >>> Had nothing but frequent guest lockups on 6.1.18 with my Win7 system. 
> > >>> That
> > >>> was right after 6.1.18 was put into ports. Fell back to legacy (v5) and
> > >>> will try again shortly to see if it's any better.
> > >>
> > >> Kevin,
> > >>
> > >> Make sure you have these options in your /etc/sysctl.conf:
> > >>
> > >> vfs.aio.max_buf_aio=8192
> > >> vfs.aio.max_aio_queue_per_proc=65536
> > >> vfs.aio.max_aio_per_proc=8192
> > >> vfs.aio.max_aio_queue=65536
> > >>
> > >> ...otherwise the guest I/O will randomly hang in VirtualBox. This
> > >> issue was
> > >> mitigated in a late 5.x VirtualBox by patching to not use AIO, but the 
> > >> issue
> > >> came back in 6.x when that patch wasn't carried forward.
> > > 
> > > Sorry I lost that patch. Can you point me to the patch? Maybe it can be 
> > > easily ported.
> > > 
> > 
> > I found the relevant commit. Please give me some time for testing and 
> > I'll put this patch back in the tree.
> 
>If you're going to put that patch back in, then AIO should probably be
> made an option in the port config, as shutting AIO off by default will
> have a significant performance impact. Without AIO, all guest I/O will
> become synchronous.
>Ideally, someone would fix the AIO case by increasing the defaults
> in FreeBSD to something reasonable and/or properly handling the case
> when an AIO limit is reached.
> 
Agreed, it would be a shame to have AIO disabled by default. A one-time
update to sysctl.conf (per the existing pkg message!) is a small price to
pay for much better performance.

-- 
Greg




Re: 13.0 RC4 might be delayed

2021-03-28 Thread David G Lawrence via freebsd-current
> > On 27/03/21 06:04, David G Lawrence via freebsd-current wrote:
> >>> On Fri, Mar 26, 2021 at 1:01 PM Graham Perrin 
> >>> wrote:
> >>>
>  On 26/03/2021 03:40, The Doctor via freebsd-current wrote:
> > … if people are having issues with ports like …
> 
>  If I'm not mistaken:
> 
>  * 13.0-RC3 seems to be troublesome, as a guest machine, with
>  emulators/virtualbox-ose 6.1.18 as the host
> 
>  * no such trouble with 12.0-RELEASE-p5 as a guest.
> 
>  I hope to refine the bug report this weekend.
> 
> >>>
> >>> Had nothing but frequent guest lockups on 6.1.18 with my Win7 system. 
> >>> That
> >>> was right after 6.1.18 was put into ports. Fell back to legacy (v5) and
> >>> will try again shortly to see if it's any better.
> >>
> >> Kevin,
> >>
> >> Make sure you have these options in your /etc/sysctl.conf:
> >>
> >> vfs.aio.max_buf_aio=8192
> >> vfs.aio.max_aio_queue_per_proc=65536
> >> vfs.aio.max_aio_per_proc=8192
> >> vfs.aio.max_aio_queue=65536
> >>
> >> ...otherwise the guest I/O will randomly hang in VirtualBox. This
> >> issue was
> >> mitigated in a late 5.x VirtualBox by patching to not use AIO, but the 
> >> issue
> >> came back in 6.x when that patch wasn't carried forward.
> > 
> > Sorry I lost that patch. Can you point me to the patch? Maybe it can be 
> > easily ported.
> > 
> 
> I found the relevant commit. Please give me some time for testing and 
> I'll put this patch back in the tree.

   If you're going to put that patch back in, then AIO should probably be
made an option in the port config, as shutting AIO off by default will
have a significant performance impact. Without AIO, all guest I/O will
become synchronous.
   Ideally, someone would fix the AIO case by increasing the defaults
in FreeBSD to something reasonable and/or properly handling the case
when an AIO limit is reached.

-DG


Re: Strange behavior after running under high load

2021-03-28 Thread Mateusz Guzik
This may be the problem fixed in
e9272225e6bed840b00eef1c817b188c172338ee ("vfs: fix vnlru marker
handling for filtered/unfiltered cases").

However, there is a long-standing performance bug where, if the vnode
limit is hit and there is nothing to reclaim, the code just sleeps for
one second.

On 3/28/21, Stefan Esser  wrote:
> Am 28.03.21 um 17:44 schrieb Andriy Gapon:
>> On 28/03/2021 17:39, Stefan Esser wrote:
>>> After a period of high load, my now idle system needs 4 to 10 seconds to
>>> run any trivial command - even after 20 minutes of no load ...
>>>
>>>
>>> I have run some Monte-Carlo simulations for a few hours, with initially
>>> 35
>>> processes running in parallel for some 10 seconds each.
>>
>> I saw somewhat similar symptoms with 13-CURRENT some time ago.
>> To me it looked like even small kernel memory allocations took a very long
>> time.
>> But it was hard to properly diagnose that as my favorite tool, dtrace, was
>> also
>> affected by the same problem.
>
> That could have been the case - but I had to reboot to recover the system.
>
> I had let it sit idle for a few hours and the last "time uptime" before
> the reboot took 15 seconds of real time to complete.
>
> Response from within the shell (e.g. "echo *") was instantaneous, though.
>
> I tried to trace the program execution of "uptime" with truss and found
> that the loading of shared libraries proceeded at about one or two per
> second until all were attached and then the program quickly printed the
> expected results.
>
> I could probably recreate the issue by running the same set of programs
> that triggered it a few hours ago, but this is a production system and
> I need it to be operational through the week ...
>
> Regards, STefan
>
>


-- 
Mateusz Guzik 


review of NFSv4.1/4.2 client side patch D29475

2021-03-28 Thread Rick Macklem
Hi,

If anyone would like to review D29475, which adds required support
for BindConnectionToSession, please do so.
--> Needed to make callbacks continue working after a TCP
   reconnect occurs due to a network partition.

Until this patch is in a client, it is recommended to not run the
nfscbd(8) daemon, since the callback path may break after a
TCP reconnect.
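On FreeBSD the callback daemon is only started when explicitly enabled, so keeping it off until the patch lands is a one-line rc.conf setting (a sketch of the recommendation above):

```
# /etc/rc.conf fragment: keep the NFSv4 callback daemon disabled until
# the BindConnectionToSession support (D29475) is in the client.
nfscbd_enable="NO"
```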

rick


Re: 13.0 RC4 might be delayed

2021-03-28 Thread Guido Falsi via freebsd-current

On 28/03/21 22:34, Guido Falsi via freebsd-current wrote:

On 27/03/21 06:04, David G Lawrence via freebsd-current wrote:

On Fri, Mar 26, 2021 at 1:01 PM Graham Perrin 
wrote:


On 26/03/2021 03:40, The Doctor via freebsd-current wrote:

… if people are having issues with ports like …


If I'm not mistaken:

* 13.0-RC3 seems to be troublesome, as a guest machine, with
emulators/virtualbox-ose 6.1.18 as the host

* no such trouble with 12.0-RELEASE-p5 as a guest.

I hope to refine the bug report this weekend.



Had nothing but frequent guest lockups on 6.1.18 with my Win7 system. That
was right after 6.1.18 was put into ports. Fell back to legacy (v5) and
will try again shortly to see if it's any better.


Kevin,

    Make sure you have these options in your /etc/sysctl.conf :

vfs.aio.max_buf_aio=8192
vfs.aio.max_aio_queue_per_proc=65536
vfs.aio.max_aio_per_proc=8192
vfs.aio.max_aio_queue=65536

    ...otherwise the guest I/O will randomly hang in VirtualBox. This
issue was mitigated in a late 5.x VirtualBox by patching to not use AIO,
but the issue came back in 6.x when that patch wasn't carried forward.


Sorry I lost that patch. Can you point me to the patch? Maybe it can be 
easily ported.




I found the relevant commit. Please give me some time for testing and 
I'll put this patch back in the tree.


--
Guido Falsi 


Re: 13.0 RC4 might be delayed

2021-03-28 Thread Guido Falsi via freebsd-current

On 27/03/21 06:04, David G Lawrence via freebsd-current wrote:

On Fri, Mar 26, 2021 at 1:01 PM Graham Perrin 
wrote:


On 26/03/2021 03:40, The Doctor via freebsd-current wrote:

… if people are having issues with ports like …


If I'm not mistaken:

* 13.0-RC3 seems to be troublesome, as a guest machine, with
emulators/virtualbox-ose 6.1.18 as the host

* no such trouble with 12.0-RELEASE-p5 as a guest.

I hope to refine the bug report this weekend.



Had nothing but frequent guest lockups on 6.1.18 with my Win7 system. That
was right after 6.1.18 was put into ports. Fell back to legacy (v5) and
will try again shortly to see if it's any better.


Kevin,

Make sure you have these options in your /etc/sysctl.conf :

vfs.aio.max_buf_aio=8192
vfs.aio.max_aio_queue_per_proc=65536
vfs.aio.max_aio_per_proc=8192
vfs.aio.max_aio_queue=65536

...otherwise the guest I/O will randomly hang in VirtualBox. This issue was
mitigated in a late 5.x VirtualBox by patching to not use AIO, but the issue
came back in 6.x when that patch wasn't carried forward.


Sorry I lost that patch. Can you point me to the patch? Maybe it can be 
easily ported.


--
Guido Falsi 


Re: Strange behavior after running under high load

2021-03-28 Thread Stefan Esser

Am 28.03.21 um 17:44 schrieb Andriy Gapon:

On 28/03/2021 17:39, Stefan Esser wrote:

After a period of high load, my now idle system needs 4 to 10 seconds to
run any trivial command - even after 20 minutes of no load ...


I have run some Monte-Carlo simulations for a few hours, with initially 35
processes running in parallel for some 10 seconds each.


I saw somewhat similar symptoms with 13-CURRENT some time ago.
To me it looked like even small kernel memory allocations took a very long time.
But it was hard to properly diagnose that as my favorite tool, dtrace, was also
affected by the same problem.


That could have been the case - but I had to reboot to recover the system.

I had let it sit idle for a few hours and the last "time uptime" before
the reboot took 15 seconds of real time to complete.

Response from within the shell (e.g. "echo *") was instantaneous, though.

I tried to trace the program execution of "uptime" with truss and found
that the loading of shared libraries proceeded at about one or two per
second until all were attached and then the program quickly printed the
expected results.

I could probably recreate the issue by running the same set of programs
that triggered it a few hours ago, but this is a production system and
I need it to be operational through the week ...

Regards, STefan





Re: 13.0 RC4 might be delayed

2021-03-28 Thread Kevin Oberman
On Sat, Mar 27, 2021 at 5:07 AM dmilith .  wrote:

> It may not only be Virtualbox, but also happens under Vmware VMs.
>
> I have been using Vmware Fusion 7 pro as my software build-host on top
> of my Mac Pro for years now, but I can't build much with 13.0 because
> regular build processes (like sed, awk, grep, zsh) turn into zombies
> randomly.
> Example shot from my private CI from yesterday:
> http://s.verknowsys.com/12f14b0350ee3baeb8f153cd48764bc8.png
>
> The issue doesn't happen on 12.2, 12.1, 12.0 or older releases.
>
> I reported this issue (I'm testing it since 13-alpha) here:
> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=253718
>
> In RC3 it feels like it got even worse and happens even more often… It's
> a critical release blocker if you ask me…
>
> kind regards
> Daniel
>
> On 27/03/2021, David G Lawrence via freebsd-current
>  wrote:
> >> On Fri, Mar 26, 2021 at 1:01 PM Graham Perrin 
> >> wrote:
> >>
> >> > On 26/03/2021 03:40, The Doctor via freebsd-current wrote:
> >> > > … if people are having issues with ports like …
> >> >
> >> > If I'm not mistaken:
> >> >
> >> > * 13.0-RC3 seems to be troublesome, as a guest machine, with
> >> > emulators/virtualbox-ose 6.1.18 as the host
> >> >
> >> > * no such trouble with 12.0-RELEASE-p5 as a guest.
> >> >
> >> > I hope to refine the bug report this weekend.
> >> >
> >>
> >> Had nothing but frequent guest lockups on 6.1.18 with my Win7 system.
> >> That
> >> was right after 6.1.18 was put into ports. Fell back to legacy (v5) and
> >> will try again shortly to see if it's any better.
> >
> > Kevin,
> >
> >Make sure you have these options in your /etc/sysctl.conf :
> >
> > vfs.aio.max_buf_aio=8192
> > vfs.aio.max_aio_queue_per_proc=65536
> > vfs.aio.max_aio_per_proc=8192
> > vfs.aio.max_aio_queue=65536
> >
> >...otherwise the guest I/O will randomly hang in VirtualBox. This issue
> > was mitigated in a late 5.x VirtualBox by patching to not use AIO, but
> > the issue came back in 6.x when that patch wasn't carried forward.
> >
> > -DG
>
>
> --
> --
> Daniel Dettlaff
> Versatile Knowledge Systems
> verknowsys.com
>

My problem was resolved when I spent the time to read the pkg-message in
the port. It is running just fine now. Somehow I missed David's message as
well.  Not good. It's unfortunate that the patches to turn off AIO did not
make it into v6.
--
Kevin Oberman, Part time kid herder and retired Network Engineer
E-mail: rkober...@gmail.com
PGP Fingerprint: D03FB98AFA78E3B78C1694B318AB39EF1B055683


Re: Strange behavior after running under high load

2021-03-28 Thread Andriy Gapon
On 28/03/2021 17:39, Stefan Esser wrote:
> After a period of high load, my now idle system needs 4 to 10 seconds to
> run any trivial command - even after 20 minutes of no load ...
> 
> 
> I have run some Monte-Carlo simulations for a few hours, with initially 35
> processes running in parallel for some 10 seconds each.

I saw somewhat similar symptoms with 13-CURRENT some time ago.
To me it looked like even small kernel memory allocations took a very long time.
But it was hard to properly diagnose that as my favorite tool, dtrace, was also
affected by the same problem.

> The load decreased over time since some parameter sets were faster to process.
> All in all 63000 processes ran within some 3 hours.
> 
> When the system became idle, interactive performance was very bad. Running
> any trivial command (e.g. uptime) takes some 5 to 10 seconds. Since I have
> to have this system working, I plan to reboot it later today, but will keep
> it in this state for some more time to see whether this state persists or
> whether the system recovers from it.
> 
> Any ideas what might cause such a system state???
> 
> 
> The system has a Ryzen 5 3600 CPU (6 core/12 threads) and 32 GB of RAM.
> 
> The following are a few commands that I have tried on this now practically
> idle system:
> 
> $ time vmstat -n 1
>   procs    memory    page  disks faults   cpu
>   r  b  w  avm  fre  flt  re  pi  po   fr   sr nv0   in   sy   cs us sy id
>   2  0  0  26G 922M 1.2K   1   4   0 1.4K  239   0  482 7.2K  934 11  1 88
> 
> real    0m9,357s
> user    0m0,001s
> sys    0m0,018
> 
>  wait 1 minute 
> 
> $ time vmstat -n 1
>   procs    memory    page  disks faults   cpu
>   r  b  w  avm  fre  flt  re  pi  po   fr   sr nv0   in   sy   cs us sy id
>   1  0  0  26G 925M 1.2K   1   4   0 1.4K  239   0  482 7.2K  933 11  1 88
> 
> real    0m9,821s
> user    0m0,003s
> sys    0m0,389s
> 
> $ systat -vm
> 
>  4 users    Load  0.10  0.72  3.57  Mar 28 16:15
>     Mem usage:  97%Phy 55%Kmem   VN PAGER   SWAP PAGER
> Mem:  REAL   VIRTUAL in   out in  out
>     Tot   Share Tot    Share Free   count
> Act  2387M    460K  26481M 460K 923M   pages
> All  2605M    218M  27105M 572M    ioflt  Interrupts
> Proc:  cow 132 total
>    r   p   d    s   w   Csw  Trp  Sys  Int  Sof  Flt    52 zfod 96 
> hpet0:t0
>   316   356   39  225  132   21   53   ozfod nvme0:admi
>   %ozfod nvme0:io0
>   0.1%Sys   0.0%Intr  0.0%User  0.0%Nice 99.9%Idle daefr nvme0:io1
> |    |    |    |    |    |    |    |    |    |    |    prcfr nvme0:io2
>    totfr nvme0:io3
>     dtbuf  react nvme0:io4
> Namei  Name-cache   Dir-cache    620370 maxvn  pdwak nvme0:io5
>     Calls    hits   %    hits   %    627486 numvn  168 pdpgs    27 xhci0 
> 66
>    18  14  78    65 frevn  intrn ahci0 67
>     17539M wire xhci1 68
> Disks  nvd0  ada0  ada1  ada2  ada3  ada4   cd0   430M act   9 re0 69
> KB/t   0.00  0.00  0.00  0.00  0.00  0.00  0.00 12696M inact hdac0 76
> tps   0 0 0 0 0 0 0 54276K laund vgapci0 78
> MB/s   0.00  0.00  0.00  0.00  0.00  0.00  0.00   923M free
> %busy 0 0 0 0 0 0 0  0 buf
> 
>  5 minutes later 
> 
> $ time vmstat -n 1
>  procs    memory    page  disks faults   cpu
>  r  b  w  avm  fre  flt  re  pi  po   fr   sr nv0   in   sy   cs us sy id
>  1  0  0  26G 922M 1.2K   1   4   0 1.4K  239   0  481 7.2K  931 11  1 88
> 
> real    0m4,270s
> user    0m0,000s
> sys    0m0,019s
> 
> $ time uptime
> 16:20  up 23:23, 4 users, load averages: 0,17 0,39 2,68
> 
> real    0m10,840s
> user    0m0,001s
> sys    0m0,374s
> 
> $ time uptime
> 16:37  up 23:40, 4 users, load averages: 0,29 0,27 0,96
> 
> real    0m9,273s
> user    0m0,000s
> sys    0m0,020s
> 


-- 
Andriy Gapon


Re: system freeze on 14.0-CURRENT

2021-03-28 Thread Masachika ISHIZUKA
>>I have trouble with recent 14.0-CURRENT 146 (e.g. main-6a762cfae,
>> main-3ead60236, main-25bfa4486).
>>It works well on recent 14.0-CURRENT until starting firefox.
>>If I start firefox (v87.0), the system freezes but leaves no core dump.
>>If it booted the old kernel 145 (e.g. main-b5449c92b), firefox v87.0
>> is working well.
> 
> With 25bfa4486 (2021-03-22) as the oldest of your suspects:
> 
> 
> 
> Your other suspects:
> 
> 3ead60236 (2021-03-23)
> 
> 6a762cfae (2021-03-28)
> 
> I use Firefox 87 with 66f138563be (2021-03-24) without freezes.
> 
> Please, can you share hardware and other details?

  I use a Dell notebook XPS12 (9Q33) and a Dell desktop Vostro 3267.
Both machines have frozen.

[XPS12]
   cpu: Core i7-4500U
memory: 8GB
  graphics: intel HD4400(i915kms)
  
[vostro 3267]
   cpu: Core i5-7500
memory: 12GB
  graphics: intel HD630(i915kms)

  And I also use a Dell Studio 1558 laptop. This machine is working
fine, not frozen.

[studio 1558]
   cpu: core i7-Q840
memory: 8GB
  graphics: ATI mobile radeon HD5000 series(radeonkms)
-- 
Masachika ISHIZUKA


Strange behavior after running under high load

2021-03-28 Thread Stefan Esser

After a period of high load, my now idle system needs 4 to 10 seconds to
run any trivial command - even after 20 minutes of no load ...


I have run some Monte-Carlo simulations for a few hours, with initially 35 
processes running in parallel for some 10 seconds each.


The load decreased over time since some parameter sets were faster to process.
All in all 63000 processes ran within some 3 hours.

When the system became idle, interactive performance was very bad. Running
any trivial command (e.g. uptime) takes some 5 to 10 seconds. Since I have
to have this system working, I plan to reboot it later today, but will keep
it in this state for some more time to see whether this state persists or
whether the system recovers from it.

Any ideas what might cause such a system state???


The system has a Ryzen 5 3600 CPU (6 core/12 threads) and 32 GB of RAM.

The following are a few commands that I have tried on this now practically
idle system:

$ time vmstat -n 1
  procsmemorypage  disks faults   cpu
  r  b  w  avm  fre  flt  re  pi  po   fr   sr nv0   in   sy   cs us sy id
  2  0  0  26G 922M 1.2K   1   4   0 1.4K  239   0  482 7.2K  934 11  1 88

real0m9,357s
user0m0,001s
sys 0m0,018s

 wait 1 minute 

$ time vmstat -n 1
  procsmemorypage  disks faults   cpu
  r  b  w  avm  fre  flt  re  pi  po   fr   sr nv0   in   sy   cs us sy id
  1  0  0  26G 925M 1.2K   1   4   0 1.4K  239   0  482 7.2K  933 11  1 88

real0m9,821s
user0m0,003s
sys 0m0,389s

$ systat -vm

 4 users    Load  0.10  0.72  3.57  Mar 28 16:15
    Mem usage:  97%Phy 55%Kmem   VN PAGER   SWAP PAGER
Mem:  REAL   VIRTUAL in   out in  out

Tot   Share TotShare Free   count
Act  2387M460K  26481M 460K 923M   pages
All  2605M218M  27105M 572Mioflt  Interrupts
Proc:  cow 132 total
   r   p   ds   w   Csw  Trp  Sys  Int  Sof  Flt52 zfod 96 hpet0:t0
  316   356   39  225  132   21   53   ozfod nvme0:admi
  %ozfod nvme0:io0
  0.1%Sys   0.0%Intr  0.0%User  0.0%Nice 99.9%Idle daefr nvme0:io1
|||||||||||prcfr nvme0:io2
   totfr nvme0:io3
dtbuf  react nvme0:io4
Namei  Name-cache   Dir-cache620370 maxvn  pdwak nvme0:io5
Callshits   %hits   %627486 numvn  168 pdpgs27 xhci0 66
   18  14  7865 frevn  intrn ahci0 67
17539M wire xhci1 68
Disks  nvd0  ada0  ada1  ada2  ada3  ada4   cd0   430M act   9 re0 69
KB/t   0.00  0.00  0.00  0.00  0.00  0.00  0.00 12696M inact hdac0 76
tps   0 0 0 0 0 0 0 54276K laund vgapci0 78
MB/s   0.00  0.00  0.00  0.00  0.00  0.00  0.00   923M free
%busy 0 0 0 0 0 0 0  0 buf

 5 minutes later 

$ time vmstat -n 1
 procsmemorypage  disks faults   cpu
 r  b  w  avm  fre  flt  re  pi  po   fr   sr nv0   in   sy   cs us sy id
 1  0  0  26G 922M 1.2K   1   4   0 1.4K  239   0  481 7.2K  931 11  1 88

real0m4,270s
user0m0,000s
sys 0m0,019s

$ time uptime
16:20  up 23:23, 4 users, load averages: 0,17 0,39 2,68

real0m10,840s
user0m0,001s
sys 0m0,374s

$ time uptime
16:37  up 23:40, 4 users, load averages: 0,29 0,27 0,96

real0m9,273s
user0m0,000s
sys 0m0,020s





Re: system freeze on 14.0-CURRENT

2021-03-28 Thread Graham Perrin

On 28/03/2021 06:03, Masachika ISHIZUKA wrote:

   I have trouble with recent 14.0-CURRENT 146 (e.g. main-6a762cfae,
main-3ead60236, main-25bfa4486).
   It works well on recent 14.0-CURRENT until starting firefox.
   If I start firefox (v87.0), the system freezes but leaves no core dump.
   If it booted the old kernel 145 (e.g. main-b5449c92b), firefox v87.0
is working well.

# I want to update to the newest 14.0-CURRENT because of ssl security
   problems.


With 25bfa4486 (2021-03-22) as the oldest of your suspects:



Your other suspects:

3ead60236 (2021-03-23)

6a762cfae (2021-03-28)

I use Firefox 87 with 66f138563be (2021-03-24) without freezes.

Please, can you share hardware and other details?

hw-probe -upload -all

The result of my most recent probe:




Re: system freeze on 14.0-CURRENT

2021-03-28 Thread Masachika ISHIZUKA
>>I have trouble with recent 14.0-CURRENT 146 (e.g. main-6a762cfae,
>> main-3ead60236, main-25bfa4486).
>>It works well on recent 14.0-CURRENT until starting firefox.
>>If I start firefox (v87.0), the system freezes but leaves no core dump.
>>If it booted the old kernel 145 (e.g. main-b5449c92b), firefox v87.0
>> is working well.
>> # I want to update to the newest 14.0-CURRENT because of ssl security
>>problems.
>> 
> 
> If you can ssh to the machine and do "procstat -akk" that would be
> helpful.

  Unfortunately, the machine was frozen and all ssh connections were frozen too.
-- 
Masachika ISHIZUKA


Re: freebsd 13 ryzen micro stutter

2021-03-28 Thread Hans Petter Selasky

On 3/27/21 11:54 AM, Santiago Martinez wrote:
Hi, I have the same output as @Nils B. If I run with steal=2 and dtrace,
the micro stutter doesn't happen, but as soon as I stop the dtrace script
the stutters come back again.




Here is a patch which you can try. Not sure if it helps.
https://reviews.freebsd.org/D29467

--HPS



Re: system freeze on 14.0-CURRENT

2021-03-28 Thread Hans Petter Selasky

On 3/28/21 7:03 AM, Masachika ISHIZUKA wrote:

   I have trouble with recent 14.0-CURRENT 146 (e.g. main-6a762cfae,
main-3ead60236, main-25bfa4486).
   It works well on recent 14.0-CURRENT until starting firefox.
   If I start firefox (v87.0), the system freezes but leaves no core dump.
   If it booted the old kernel 145 (e.g. main-b5449c92b), firefox v87.0
is working well.

# I want to update to the newest 14.0-CURRENT because of ssl security
   problems.



If you can ssh to the machine and do "procstat -akk" that would be helpful.

--HPS