[Bug 277389] Reproduceable low memory freeze on 14.0-RELEASE-p5

2024-03-17 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389

--- Comment #29 from Mark Millard  ---
On an aarch64 system with 32 GiBytes of RAM (no swap enabled)
I got a panic:

panic: pmap_growkernel: no memory to grow kernel

So, I'd say: Reproduced.

I'll note that I rebooted between the iozone write
step and the iozone read step, avoiding Inact from
prior activity and such.

-- 
You are receiving this mail because:
You are the assignee for the bug.


[Bug 277771] ice driver reports I2C error messages for E822 copper LAN ports

2024-03-17 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277771

Bug ID: 277771
   Summary: ice driver reports I2C error messages for E822 copper
LAN ports
   Product: Base System
   Version: 14.0-RELEASE
  Hardware: amd64
OS: Any
Status: New
  Severity: Affects Some People
  Priority: ---
 Component: kern
  Assignee: b...@freebsd.org
  Reporter: amy.s...@advantech.com.tw

Created attachment 249253
  --> https://bugs.freebsd.org/bugzilla/attachment.cgi?id=249253&action=edit
FreeBSD 14 dmesg with built-in ice 1.37.11-k

Hi Sir:

In FreeBSD 14, the E822 LAN ports (SFP & copper) are detected successfully.
However, when executing the command “ifconfig -v ice4” to get more verbose
status for an E822 copper interface, the copper ports report the error message
(ice_read_sff_eeprom: Error reading I2C data: err ICE_ERR_AQ_ERROR aq_err
AQ_RC_EINVAL).

Upon tracing the ice driver (version 1.37.11-k) in FreeBSD 14 kernel source
code, the error messages indicate that the attempt to read data from the SFF
EEPROM has failed (ice_lib.c function ice_read_sff_eeprom).

Considering that there are no SFF EEPROMs for copper ports, should these error
messages still be reported?

Please help check this. Thanks!

Best Regards,
Amy Shih
Advantech ICVG x86 Software
02-7732-3399 Ext. 1249



[Bug 277389] Reproduceable low memory freeze on 14.0-RELEASE-p5

2024-03-17 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389

--- Comment #28 from pascal.guitier...@gmail.com ---
(In reply to Mark Millard from comment #27)

I've tried using a -stable kernel (FreeBSD 14.0-STABLE #0
stable/14-n266971-91c1c36102a6: Thu Mar 14 04:49:15 UTC 2024); same result.

command line used: iozone -w -i 1 -l 512 -r 4k -s 1g

The system deadlocks from OOM after several seconds:

last pid: 96643;  load averages:  5.06,  1.15,  0.41   up 0+00:02:17  14:24:50
542 processes: 1 running, 541 sleeping
CPU:  1.3% user,  0.0% nice,  1.3% system,  0.0% interrupt, 97.4% idle
Mem: 4228K Active, 4096B Inact, 8236K Laundry, 31G Wired, 104M Free
ARC: 29G Total, 224M MFU, 29G MRU, 52M Header, 5449K Other
 28G Compressed, 28G Uncompressed, 1.01:1 Ratio


I did not set those nullfs values for this test.

I can reproduce this reliably on 16 GB and 32 GB machines.

Can anyone else reproduce this?



[Bug 277389] Reproduceable low memory freeze on 14.0-RELEASE-p5

2024-03-17 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389

--- Comment #27 from Mark Millard  ---
(In reply to pascal.guitierrez from comment #26)

I'll note that I never used mount_nullfs explicitly at
any point, but I still got the huge difference in VNODE
results in "vmstat -z" when doing your test iozone
runs, based solely on whether or not the following was
set:

nullfs_load="YES"
vfs.nullfs.cache_vnodes=0

There may be implicit/automatic
use involved? ( zfs_load="YES" both ways. )

Because of your hangup status, it would be good to run
"vmstat -z" and save the output before doing the read
iozone test, both with and without the 3 lines, for
comparison/contrast: the vmstat -z output differences
should indicate whether any difference is being made
internally.

I made the suggestion because I've no way to tell if
what I've been reporting contributes to your context
or not: I've never seen the OOM activity or hangups.
But the huge Wired figures that even survive the ARC
shrinking suggest potential contribution to an OOM
status.



[Bug 277389] Reproduceable low memory freeze on 14.0-RELEASE-p5

2024-03-17 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389

--- Comment #26 from pascal.guitier...@gmail.com ---
(In reply to Mark Millard from comment #24)

Hi Mark,

I can try with those settings and on -stable, however I'm not using nullfs in
my tests so those loader settings won't have any effect?



Problem reports for b...@freebsd.org that need special attention

2024-03-17 Thread bugzilla-noreply
To view an individual PR, use:
  https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=(Bug Id).

The following is a listing of current problems submitted by FreeBSD users,
which need special attention. These represent problem reports covering
all versions including experimental development code and obsolete releases.

Status  |Bug Id | Description
+---+---
New |252123 | fetch(3): Fix wrong usage of proxy when request i 
New |262764 | After DVD1 13.0-R install with ports tree, portsn 
New |262989 | sys/conf/files, sys/conf/options, sys/conf/NOTES: 
New |269994 | build options have different kernel and userland  
Open| 46441 | sh(1): Does not support PS1, PS2, PS4 parameter e 
Open|165059 | vtnet(4): Networking breaks with a router using v 
Open|177821 | sysctl: Some security.jail nodes are funky, dupli 
Open|220246 | syslogd does not send RFC3164-conformant messages 
Open|232914 | kern/kern_resource: Integer overflow in function  
Open|250309 | devmatch: panic: general protection fault: sysctl 
Open|255130 | Issue with rtsx driver
Open|256952 | kqueue(2): Improve epoll Linux compatibility (com 
Open|257149 | CFLAGS not passed to whole build  
Open|257646 | opensm: rc service is installed by default, but o 
Open|258665 | lib/libfetch: Add Happy Eyeballs (RFC8305) suppor 
Open|259292 | vmware/pvscsi: UNMAP fails on VMWare 6.7 thinly p 
Open|259636 | multiple components: Change "Take Affect" to "Tak 
Open|259655 | periodic: security/security.functions does not re 
Open|259703 | In sys/dev/pci/pci.c, error in do_power_nodriver  
Open|259808 | etc/periodic/daily/100.clean-disks: Fix error (Di 
Open|260214 | acpi_battery: Should provide current/max battery  
Open|260245 | swap/vm: Apparent memory leak: 100% swap usage
Open|261640 | sysctl: Add -F option to display sysctl format st 
Open|261641 | drm-kmod: Launch message is written into (possibl 
Open|261771 | nvme(4): Reports errors every 5 minutes: PRP OFFS 
Open|261971 | kernel crash launching bhyve guest on ZFS: #15 bu 
Open|262157 | su+j: Crashes during mmc(4) fsck after timeout: E 
Open|262192 | Crashes at boot with kern.random.initial_seeding. 
Open|264028 | loader: Incorrect (32gb) memory reported by BTX l 
Open|264075 | freebsd-update in 13.1-RELEASE detects an install 
Open|264188 | kinit(1): Ignores KRB5CCNAME environment variable 
Open|264226 | setting kern.vty=sc causes hang during UEFI boot  
Open|264757 | fetch: Show correct port number in -vv output 
Open|264833 | 12.3-STABLE panic on sync and reboot: panic: slee 
Open|266419 | mrsas: Corrupts memory (crashes) when reading dat 

35 problems total for which you should take action.


[Bug 277740] makefs -t msdos silently ignores hard links

2024-03-17 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277740

Mark Linimon  changed:

   What|Removed |Added

   Assignee|b...@freebsd.org|f...@freebsd.org



[Bug 277764] daemon(8): high CPU usage after stopping and continuing child process

2024-03-17 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277764

Bug ID: 277764
   Summary: daemon(8): high CPU usage after stopping and
continuing child process
   Product: Base System
   Version: CURRENT
  Hardware: Any
OS: Any
Status: New
  Severity: Affects Some People
  Priority: ---
 Component: bin
  Assignee: b...@freebsd.org
  Reporter: 5mxnbm...@mozmail.com

Using the daemon utility to execute a program (the child), then sending a stop
signal (e.g. SIGSTOP) to the child process and afterwards sending the continue
signal (SIGCONT), can make the daemon process get stuck at 100% CPU usage.

Full setup:
Create a script test.sh containing the following:
```
#!/bin/sh

while true; do echo hello; sleep 1; done
```
Execute the daemon utility: `daemon -f -p test.pid ./test.sh`
Send the stop signal: `pkill -STOP -F test.pid`
Send the continue signal: `pkill -CONT -F test.pid`
Wait until test.sh executes `echo hello`.
The daemon process rises to 100% CPU usage (single core).


As far as I can tell, stopping and continuing the child process sends a SIGCHLD
signal, and the daemon utility assumes that the child is closing. It then keeps
calling waitpid (which returns 0 immediately) inside an infinite loop in the
daemon_is_child_dead function, and this causes the high CPU usage. Also, before
entering the daemon_is_child_dead function, the daemon process is waiting /
listening for child output in the listen_child function. That's why the above
script contains an echo: so the daemon will exit that function and go on to
process the SIGCHLD event.

Maybe the fix could be to include the WCONTINUED option and set the status
output parameter (currently NULL) when calling waitpid inside
daemon_is_child_dead, then check whether WIFCONTINUED(status) is true and, in
that case, return false?



[Bug 277389] Reproduceable low memory freeze on 14.0-RELEASE-p5

2024-03-17 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389

--- Comment #25 from Mark Millard  ---
(In reply to Mark Millard from comment #23)

22941Mi Wired looks to be where it finally stabilized while
the system was left idle. The later decrements that I happened
to watch were in smaller sized chunks. (Mostly I was sleeping
after comment #24.)

FYI:
The UTC timestamps on my prior notes give an idea of the
timescale over which those reported amounts of Wired
memory were freed.



[Bug 231574] [IF_BRIDGE(4)] improvements

2024-03-17 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=231574

--- Comment #2 from Chris Davidson  ---
After a little further pondering and some feedback, this may be something that
can be defined more clearly and concisely.

The examples section of a manual page is supposed to help guide the reader:
https://docs.freebsd.org/en/books/fdp-primer/manual-pages/

Snippet from the above URL:
-
Manual pages, commonly shortened to man pages, were conceived as
readily-available reminders for command syntax, device driver details, or
configuration file formats. They have become an extremely valuable
quick-reference from the command line for users, system administrators, and
programmers.

Although intended as reference material rather than tutorials, the EXAMPLES
sections of manual pages often provide detailed use cases.

Manual pages are generally shown interactively by the man(1) command. When the
user types man ls, a search is performed for a manual page matching ls. The
first matching result is displayed.
---

Maybe we can take this approach to refining the example section of this manual
page?

(In the examples area, for clarity)

Bridge changes may be conducted with the
.Dq Bridge Interface Parameters
of
.Xr ifconfig 8 .

Thoughts?



[Bug 277506] freebsd-update doesn't remove stale files/directories?

2024-03-17 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277506

Ed Maste  changed:

   What|Removed |Added

 CC||cperc...@freebsd.org,
   ||ema...@freebsd.org

--- Comment #2 from Ed Maste  ---
FreeBSD-EN-23:12.freebsd-update addressed the issue, except for the warning.
An additional change to freebsd-update addresses the warning but did not make
it into the EN.  It is safe to ignore the warnings.



[Bug 277655] random() *not* removed from 14.0-RELEASE

2024-03-17 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277655

Ed Maste  changed:

   What|Removed |Added

 CC||ema...@freebsd.org

--- Comment #1 from Ed Maste  ---
Where did you see that it would be removed?



[Bug 277740] makefs -t msdos silently ignores hard links

2024-03-17 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277740

--- Comment #1 from Ed Maste  ---
> It should emit a warning or make multiple copies of the linked file.

IMO it should do both - create copies of the file for correctness, and emit a
warning for the undesired size increase.



[Bug 277760] sh: cd prints path when prefixed with ./

2024-03-17 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277760

Bug ID: 277760
   Summary: sh: cd prints path when prefixed with ./
   Product: Base System
   Version: Unspecified
  Hardware: Any
OS: Any
Status: New
  Severity: Affects Some People
  Priority: ---
 Component: bin
  Assignee: b...@freebsd.org
  Reporter: na...@freebsd.org

When changing into a directory that is prefixed with "./", interactive sh(1)
prints the path:

$ cd /
$ (cd /usr)
$ (cd usr)
$ (cd ./usr)
/usr
$ 

This appears to be a bug in all versions of FreeBSD.

Printing the path is normal and standards-compliant for "cd -" or when CDPATH
is used.



[Bug 271551] UPDATING files in releng/13.x state stable/13 users instead of releng/13.x

2024-03-17 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=271551

Anton Saietskii  changed:

   What|Removed |Added

 Status|Open|Closed
 Resolution|--- |Not Accepted

--- Comment #5 from Anton Saietskii  ---
Ah, got it -- something changed after 12, and now both 13 and 14 have a
"stable/" line in UPDATING.



[Bug 277389] Reproduceable low memory freeze on 14.0-RELEASE-p5

2024-03-17 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389

--- Comment #24 from Mark Millard  ---
(In reply to pascal.guitierrez from comment #11)

If you can test main , stable/14 , or stable/13 , then
testing with:

nullfs_load="YES"
vfs.nullfs.cache_vnodes=0
zfs_load="YES"

in /boot/loader.conf could prove interesting. vfs.nullfs.cache_vnodes
has been MFC'd but no releng/* has it yet.

nullfs.ko is what adds vfs.nullfs.cache_vnodes , thus the explicit
load as early as possible, just before the assignment. In my context I
got:

# kldstat
Id Refs AddressSize Name
 1   84 0x8020  1d497b0 kernel
 21 0x81f4a000 9810 nullfs.ko
 31 0x81f55000   5e8fe0 zfs.ko
 41 0x8253e000 7718 cryptodev.ko
 . .



[Bug 277389] Reproduceable low memory freeze on 14.0-RELEASE-p5

2024-03-17 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389

--- Comment #23 from Mark Millard  ---
Looks to me like what changed so far was mostly the decreases:

UMA Slabs 0: 80,  0,43974304,  40,44081542,   0,   0,   0
to:
UMA Slabs 0: 80,  0,19620210,24354134,44082204,   0,   0,   0

and:

vm pgcache:4096,  0,45110709,5482,45353298,2948,   0,   0
to:
vm pgcache:4096,  0,20780467,   38846,45358441,2948,   0,   0


(At least without vnode caching holding things in place?)



[Bug 277389] Reproduceable low memory freeze on 14.0-RELEASE-p5

2024-03-17 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277389

--- Comment #22 from Mark Millard  ---
(In reply to Mark Millard from comment #21)

It has progressed to 85719Mi and looks to still be decreasing.
For reference:

# vmstat -z
ITEM   SIZE   LIMIT USED FREE  REQ FAIL SLEEP XDOM
kstack_cache: 16384,  0,1866,   4,1934,   0,   0,   0
buffer arena-40:   4096,  0,   0,   0,   0,   0,   0,   0
buffer arena-81:   8192,  0,   0,   0,   0,   0,   0,   0
buffer arena-12:  12288,  0,   0,   0,   0,   0,   0,   0
buffer arena-16:  16384,  0,  10,   0,  10,   0,   0,   0
buffer arena-20:  20480,  0,   0,   0,   0,   0,   0,   0
buffer arena-24:  24576,  0,   0,   0,   0,   0,   0,   0
buffer arena-28:  28672,  0,   0,   0,   0,   0,   0,   0
buffer arena-32:  32768,  0,   1,   0,   1,   0,   0,   0
buffer arena-36:  36864,  0,   0,   0,   0,   0,   0,   0
buffer arena-40:  40960,  0,   0,   0,   0,   0,   0,   0
buffer arena-45:  45056,  0,   0,   0,   0,   0,   0,   0
buffer arena-49:  49152,  0,   0,   0,   0,   0,   0,   0
buffer arena-53:  53248,  0,   0,   0,   0,   0,   0,   0
buffer arena-57:  57344,  0,   0,   0,   0,   0,   0,   0
buffer arena-61:  61440,  0,   0,   0,   0,   0,   0,   0
buffer arena-65:  65536,  0,   0,   0,   0,   0,   0,   0
buf free cache: 432,  0,  11, 199,  11,   0,   0,   0
vm pgcache:4096,  0,  160881,   21781,  320186, 632,   0,   0
vm pgcache:4096,  0,20780467,   38846,45358441,2948,   0,   0
UMA Kegs:   384,  0, 152,   1, 152,   0,   0,   0
UMA Zones: 4608,  0, 178,   0, 178,   0,   0,   0
UMA Slabs 0: 80,  0,19620210,24354134,44082204,   0,   0,   0
UMA Slabs 1:112,  0,  19,  16,  19,   0,   0,   0
UMA Hash:   256,  0,   0,   0,   0,   0,   0,   0
2 Bucket:32,  0, 512,9694,   65254,   0,   0,   0
4 Bucket:48,  0, 259,7385,2368,   0,   0,   0
8 Bucket:80,  0, 597,4503,   31165,  13,   0,   0
16 Bucket:  144,  0,5259,2021, 1567056,   0,   0,   0
32 Bucket:  256,  0, 703,3647, 1367531, 145,   0,   0
64 Bucket:  512,  0,   11369,8335,  645920,  21,   0,   0
128 Bucket:1024,  0,7106,8197,  394876, 109,   0,   0
256 Bucket:2048,  0,   82224,   68230, 2089346,7129,   0,   0
SMR SHARED:  24,  0,   7, 760,   7,   0,   0,   0
SMR CPU: 32,  0,   7, 760,   7,   0,   0,   0
vmem:  1856,  0,   1,   7,   1,   0,   0,   0
vmem btag:   56,  0,  142286,   30566,  170135,1200,   0,   0
VM OBJECT:  264,  0,1353,6477,   70891,   0,   0,   0
RADIX NODE: 144,  0,   97945,   92003,  343518,   0,   0,   0
KMAP ENTRY:  96,  0,  60,  21,  61,   0,   0,   0
MAP ENTRY:   96,  0,1434,   22296,  242238,   0,   0,   0
VMSPACE:616,  0,  30,1044,4829,   0,   0,   0
fakepg: 104,  0,   0,   0,   0,   0,   0,   0
pcpu-4:   4,  0,   0,   0,   0,   0,   0,   0
pcpu-8:   8,  0,4182,6058,4184,   0,   0,   0
pcpu-16: 16,  0,  96,5792,  96,   0,   0,   0
pcpu-32: 32,  0,   0,   0,   0,   0,   0,   0
pcpu-64: 64,  0, 471,1321, 471,   0,   0,   0
malloc-16:   16,  0,   19551,6405,265170334,   0,   0,   0
malloc-32:   32,  0,   14400,   78966,25538083,   0,   0,   0
malloc-64:   64,  0,   20567,   44323,274087256,   0,   0,   0
malloc-128: 128,  0,   48536,  541704,317128818,   0,   0,   0
malloc-256: 256,  0,4301,  267184,28522723,   0,   0,   0
malloc-384: 384,  0,2414, 1365926,12871301,   0,   0,   0
malloc-512: 512,  0, 146,   24526, 4568938,   0,   0,   0
malloc-1024:   1024,  0,2037, 975,   31524,   0,   0,   0
malloc-2048:   2048,  0, 967,1173,17379237,   0,   0,   0
malloc-4096:   4096,  0,1246, 453,   27332,   0,   0,   0
malloc-8192:   8192,  0, 144,  15,1210,   0,   0,   0
malloc-16384: 16384,  0,  23,  94,  230034,   0,   0,   0