Hi,

Hmm, I have never seen such a panic. If it is reproducible,
it would be worth sending a PR.

Thanks,
rin

On 2021/01/07 5:51, John Klos wrote:
Thank you for all your hard work!

The images work just fine out of the box. After a few hours of compiling, there 
was a panic. This is a 3B+ (the 1400 MHz one):

[ 18045.402869] ifree: dev = 0xa801, ino = 35088, fs = /
[ 18045.402869] panic: ffs_freefile_common: freeing free inode
[ 18045.414509] cpu0: Begin traceback...
[ 18045.414509] trace fp ffffc0004376f810
[ 18045.414509] fp ffffc0004376f840 vpanic() at ffffc000004e5d5c 
netbsd:vpanic+0x14c
[ 18045.424917] fp ffffc0004376f8a0 panic() at ffffc000004e5e54 
netbsd:panic+0x44
[ 18045.424917] fp ffffc0004376f930 ffs_freefile_common.isra.0() at 
ffffc00000422f44 netbsd:ffs_freefile_common.isra.0+0x2d4
[ 18045.436775] fp ffffc0004376f9a0 ffs_freefile() at ffffc00000427bb4 
netbsd:ffs_freefile+0xf4
[ 18045.445200] fp ffffc0004376fa00 ffs_reclaim() at ffffc00000434968 
netbsd:ffs_reclaim+0x110
[ 18045.445200] fp ffffc0004376fa40 VOP_RECLAIM() at ffffc00000556cfc 
netbsd:VOP_RECLAIM+0x34
[ 18045.455684] fp ffffc0004376fa70 vcache_reclaim() at ffffc00000548a6c 
netbsd:vcache_reclaim+0x14c
[ 18045.465203] fp ffffc0004376fb40 vrelel() at ffffc00000549520 
netbsd:vrelel+0x2a0
[ 18045.465203] fp ffffc0004376fba0 vn_close() at ffffc0000054dcfc 
netbsd:vn_close+0x44
[ 18045.477720] fp ffffc0004376fbd0 closef() at ffffc0000047c448 
netbsd:closef+0x60
[ 18045.477720] fp ffffc0004376fc10 fd_free() at ffffc0000047f3c0 
netbsd:fd_free+0xf8
[ 18045.488230] fp ffffc0004376fc90 exit1() at ffffc0000048a5cc 
netbsd:exit1+0xfc
[ 18045.488230] fp ffffc0004376fd80 sys_exit() at ffffc0000048af08 
netbsd:sys_exit+0x38
[ 18045.498527] fp ffffc0004376fdb0 syscall() at ffffc0000008ef1c 
netbsd:syscall+0x18c
[ 18045.508234] fp ffffc0004376fe60 trap_el0_sync() at ffffc000000903f0 
netbsd:trap_el0_sync+0x380
[ 18045.508234] tf ffffc0004376fed0 el0_trap() at ffffc000000927f0 
netbsd:el0_trap
[ 18045.518235] ---- trapframe 0xffffc0004376fed0 (304 bytes) ----
[ 18045.518235]     pc=0000fd5f181b0c14,   spsr=0000000080000000
[ 18045.518235]    esr=0000000056000001,    far=0000f53aaf49a000
[ 18045.529932]     x0=0000000000000000,     x1=0000000000000000
[ 18045.529932]     x2=000000020013e000,     x3=0000ffffff837b80
[ 18045.529932]     x4=0000000000000000,     x5=0000fd5f1844a3c0
[ 18045.541002]     x6=00000000ffffffff,     x7=4545524348363400
[ 18045.541002]     x8=0000fd5f00000000,     x9=0000000000000003
[ 18045.541002]    x10=0000fd5f18221000,    x11=0000000000000030
[ 18045.552073]    x12=0000fd5f18239a00,    x13=0000fd5f17c008c0
[ 18045.552073]    x14=0000000000000014,    x15=0000000000004008
[ 18045.552073]    x16=000000020013d1f8,    x17=0000fd5f181b0c10
[ 18045.563146]    x18=0000000000000041,    x19=0000000000000000
[ 18045.563146]    x20=000000020013d000,    x21=0000ffffff838fe0
[ 18045.563146]    x22=000000020013df80,    x23=0000000000000000
[ 18045.574215]    x24=0000ffffff838fe0,    x25=0000fffff33f0000
[ 18045.574215]    x26=0000000000000000,    x27=0000000000000000
[ 18045.574215]    x28=0000000000000000, fp=x29=0000ffffff837b80
[ 18045.585287] lr=x30=000000020011c890,     sp=0000ffffff837b80
[ 18045.585287] ------------------------------------------------
[ 18045.585287] cpu0: End traceback...
[ 18045.594926] rebooting...



This was a clean big endian filesystem on a USB attached SSD on its first boot. 
Trying to run a full fsck didn't work (I don't have a record - it was on the local 
console), and trying to fsck on a little endian Pi gave this:

http://mail-index.netbsd.org/port-arm/2020/12/26/msg007132.html

I decided to make a clean little endian filesystem and restart. However, 
enabling WAPBL in fstab causes the boot to fail with messages saying that the 
filesystem is read-only, then:

mount /dev/sd0a /
mount_ffs: /dev/sd0a on /: specified device does not match mounted device
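
(For context, WAPBL is normally enabled via the "log" mount option in
/etc/fstab; the entry below is a sketch of the sort of line involved,
using the sd0a device from above — the exact fields on the affected
system may differ:)

```shell
# /etc/fstab — enabling WAPBL journaling on the root filesystem
# is done by adding "log" to the mount options, e.g.:
/dev/sd0a  /  ffs  rw,log  1  1
```
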

Perhaps it should be noted somewhere that WAPBL can't be used on other-endian 
systems, and a more meaningful error presented when it is attempted?

John Klos
