Re: 7 LOR dumps related to ufs, tmpfs, igmp

2016-02-16 Thread Mahmoud Al-Qudsi
Apologies. The mail server is truncating the text.

Attached as a TXT.


7 LOR dumps related to ufs, tmpfs, igmp

2016-02-16 Thread Mahmoud Al-Qudsi
Hello all,

I'm including below a list of LORs that I experienced running an x86
build of 10.2-RELEASE-p2.

Some known LORs are also included for posterity's sake, as all are from
a single session and appear in chronological order, should this
information be of any additional use. I believe from a bit of hunting around
that not all are documented/known-ok. Also, one is repeated below.

I thought of submitting a separate email for each, but did not want to
clutter the mailing list with needless "spam."

Pardon the long lines.

Thanks,

Mahmoud


lock order reversal:
1st 0xc6aaa46c ufs (ufs) @ sys/kern/vfs_mount.c:1227
2nd 0xc6aaa5d4 devfs (devfs) @ sys/kern/vfs_subr.c:2285
KDB: stack backtrace:
db_trace_self_wrapper(c120a33c,632e7262,3832323a,ca35,db7df750,...)
at db_trace_self_wrapper+0x2d/frame 0xdb7df708
kdb_backtrace(c120e109,c6aaa5d4,c11ffba6,c65ae7f0,c1217ba8,...) at
kdb_backtrace+0x30/frame 0xdb7df76c
witness_checkorder(c6aaa5d4,9,c1217ba8,8ed,c6aaa640,...) at
witness_checkorder+0xd4f/frame 0xdb7df7b8
__lockmgr_args(c6aaa5d4,80100,c6aaa640,0,0,0,c1217ba8,8ed) at
__lockmgr_args+0x8d4/frame 0xdb7df894
vop_stdlock(db7df908,c1486da8,c16d26e8,8,c1759564,...) at
vop_stdlock+0x53/frame 0xdb7df8c4
VOP_LOCK1_APV(c14756ac,db7df908,c0b39b31,c1759554,c14c28c0,...) at
VOP_LOCK1_APV+0x10a/frame 0xdb7df8f0
_vn_lock(c6aaa5a0,80100,c1217ba8,8ed,db7df974,...) at
_vn_lock+0xca/frame 0xdb7df930
vputx(c6aaa5a0,0,c11faeb6,20e,0,...) at vputx+0x37a/frame 0xdb7df974
cd9660_unmount(c6ed5000,808,db7df9e0,510,c1486da8,...) at
cd9660_unmount+0x1dc/frame 0xdb7df9a8
dounmount(c6ed5000,808,c6eab310,48f,db7dfa58,...) at
dounmount+0x5fa/frame 0xdb7dfa08
sys_unmount(c6eab310,db7dfb68,c6e9c304,d6,c108d221,...) at
sys_unmount+0x3bb/frame 0xdb7dfad8
syscall(db7dfba8) at syscall+0x336/frame 0xdb7dfb9c
Xint0x80_syscall() at Xint0x80_syscall+0x21/frame 0xdb7dfb9c
--- syscall (22, FreeBSD ELF32, sys_unmount), eip = 0x853d873, esp =
0xbfbfe6d4, ebp = 0xbfbfe7a0 ---


lock order reversal:
1st 0xc7d9da0c tmpfs (tmpfs) @ sys/kern/vfs_mount.c:848
2nd 0xc7e9473c ufs (ufs) @ sys/kern/vfs_subr.c:2174
KDB: stack backtrace:
db_trace_self_wrapper(c120a33c,3731323a,34000a34,a38,0,...) at
db_trace_self_wrapper+0x2d/frame 0xdb7e3578
kdb_backtrace(c120e109,c7e9473c,c11f31cd,c65ae928,c1217ba8,...) at
kdb_backtrace+0x30/frame 0xdb7e35dc
witness_checkorder(c7e9473c,9,c1217ba8,87e,c7e947a8,...) at
witness_checkorder+0xd4f/frame 0xdb7e3628
__lockmgr_args(c7e9473c,80100,c7e947a8,0,0,...) at
__lockmgr_args+0x8d4/frame 0xdb7e3708
ffs_lock(db7e3788,c65a6248,c65acb80,c65a6248,c16d10e8,...) at
ffs_lock+0x97/frame 0xdb7e3744
VOP_LOCK1_APV(c14ae018,db7e3788,c65a8468,,c14c28c0,...) at
VOP_LOCK1_APV+0x10a/frame 0xdb7e3770
_vn_lock(c7e94708,80100,c1217ba8,87e,57,...) at _vn_lock+0xca/frame 0xdb7e37b0
vget(c7e94708,80100,c6daec40,57,0,...) at vget+0x77/frame 0xdb7e37e4
vfs_hash_get(c70025d8,2,8,c6daec40,db7e38a4,...) at
vfs_hash_get+0xff/frame 0xdb7e3810
ffs_vgetf(c70025d8,2,8,db7e38a4,0) at ffs_vgetf+0x44/frame 0xdb7e386c
ffs_vget(c70025d8,2,8,db7e38a4,c16fe418,...) at
ffs_vget+0x2f/frame 0xdb7e388c
ufs_root(c70025d8,8,db7e3a90,359,c6f12390,...) at
ufs_root+0x49/frame 0xdb7e38b0
vfs_donmount(c6daec40,0,0,c6ea8100,c6ea8100,...) at
vfs_donmount+0x13a7/frame 0xdb7e3ab0
sys_nmount(c6daec40,db7e3b68,c6cd2000,d6,e2,...) at
sys_nmount+0x78/frame 0xdb7e3ad8
syscall(db7e3ba8) at syscall+0x336/frame 0xdb7e3b9c
Xint0x80_syscall() at Xint0x80_syscall+0x21/frame 0xdb7e3b9c
--- syscall (378, FreeBSD ELF32, sys_nmount), eip = 0x280dc9eb, esp =
0xbfbfdda0, ebp = 0xbfbfe2f8 ---



lock order reversal:
1st 0xc69bba3c if_addr_lock (if_addr_lock) @ sys/netinet/igmp.c:1710
2nd 0xc175d718 ifnet_rw (ifnet_rw) @ sys/net/if.c:244
KDB: stack backtrace:
db_trace_self_wrapper(c120a33c,3a632e66,a343432,692f7400,2e706d67,...)
at db_trace_self_wrapper+0x2d/frame 0xdab56720
kdb_backtrace(c120e109,c175d718,c121cefe,c65aa960,c121c998,...) at
kdb_backtrace+0x30/frame 0xdab56784
witness_checkorder(c175d718,1,c121c998,f4,0,...) at
witness_checkorder+0xd4f/frame 0xdab567d0
__rw_rlock(c175d728,c121c998,f4,c771f500,dab568dc,...) at
__rw_rlock+0x92/frame 0xdab56858
ifnet_byindex(1,c12454ce,da,c65ab4c0,c26be200,...) at
ifnet_byindex+0x23/frame 0xdab56870
igmp_intr(c771f500,c6d95c00,dab56930,c0e06833,c26c3900,...) at
igmp_intr+0x1e/frame 0xdab568dc
netisr_dispatch_src(2,0,c771f500) at netisr_dispatch_src+0xb6/frame 0xdab5691c
netisr_dispatch(2,c771f500,0,89a,dab56978,...) at
netisr_dispatch+0x20/frame 0xdab56930
igmp_v1v2_queue_report(c175d8dc,4,c122a604,6e4,c175a310,...) at
igmp_v1v2_queue_report+0x1a9/frame 0xdab56978
igmp_fasttimo(dab56a50,c0b3a7cc,c175a300,dab56a50,c0b6421d,...) at
igmp_fasttimo+0x417/frame 0xdab56a1c
pffasttimo(0,0,c12073f5,285,c0b813be,...) at pffasttimo+0x30/frame 0xdab56a50
softclock_call_cc(0,0,c12073f5,32b,0,...) at
softclock_call_cc+0x1ac/frame 0xdab56aec
softclock(c175a300,c11fe255,566,78e5b803,c66e5f48,...) at
softclock+0x40/frame 0xdab56b0c
intr_event_execute_handlers

Re: Status of NFS4.1 FS_RECLAIM in FreeBSD 10.1?

2015-05-23 Thread Mahmoud Al-Qudsi

> On May 21, 2015, at 8:19 AM, Rick Macklem  wrote:
> 
> Well, if you are just doing an NFSv4.1 mount, you could capture
> packets during the failed mount attempt with tcpdump and then
> email me the raw packet capture, I can take a look at it.
> (tcpdump doesn't handle nfs packets well, but wireshark will accept
> a raw packet capture) Something like:
> # tcpdump -s 0 -w .pcap host 
> should work.
> 
> When I read RFC-5661 around page #567, it seems clear that the
> client should use RECLAIM_COMPLETE with the fs arg false after
> acquiring a new clientid, which is what a fresh mount would normally be.
> (If the packet capture shows an EXCHANGEID followed by a RECLAIM_COMPLETE
> with the fs arg true, I think ESXi is broken, but I can send you a patch
> that will just ignore the "true", so it works.)
> I think the "true" case is only used when a file system has been "moved"
> by a server cluster, indicated to the client via an NFS4ERR_MOVED error
> when it is accessed at the old server, but the wording in RFC-5661 isn't
> very clear.
> 
> rick


Thank you kindly.
I am travelling at the moment, but as soon as I can, I will get that to you.

Much appreciated,

Mahmoud

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: Status of NFS4.1 FS_RECLAIM in FreeBSD 10.1?

2015-05-20 Thread Mahmoud Al-Qudsi
On May 20, 2015, at 8:57 PM, Rick Macklem  wrote:
> Only the global RECLAIM_COMPLETE is implemented. I'll be honest that
> I don't even really understand what the "single fs reclaim_complete"
> semantics are and, as such, it isn't implemented.

Thanks for verifying that.

> I think it is meant to be used when a file system is migrated from
> one server to another (transferring the locks to the new server) or
> something like that.
> Migration/replication isn't supported. Maybe someday if I figure out
> what the RFC expects the server to do for this case.

I wasn’t clear on whether this was lock reclaiming or block reclaiming. Thanks.

>> I can mount and use NFSv3 shares just fine with ESXi from this same
>> server, and
>> can mount the same shares as NFSv4 from other clients (e.g. OS X) as
>> well.
>> 
> This is NFSv4.1 specific, so NFSv4.0 should work, I think. Or just use NFSv3.
> 
> rick

For some reason, ESXi doesn’t do NFS v4.0, only v3 or v4.1.

I am using NFS v3 for now, but unless I’m mistaken, since FreeBSD supports
neither “nohide” nor “crossmnt”, there is no way for a single export(/import)
to cross ZFS filesystem boundaries.

I am using ZFS snapshots to manage virtual machine images; each machine
has its own ZFS filesystem so I can snapshot and roll back individually. But
this means that under NFSv3 (so far as I can tell), each “folder” (ZFS fs)
must be mounted separately on the ESXi host. I can get around exporting
them each individually with the -alldirs parameter, but client-side, there does
not seem to be a way of traversing ZFS filesystem mounts without explicitly
mounting each and every one - a maintenance nightmare if there ever was one.
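To illustrate (a minimal sketch, with hypothetical dataset paths and network,
not my actual config): each child ZFS dataset is its own filesystem, so each
needs its own exports(5) line, and -alldirs only allows mounting subdirectories
within a single filesystem - it does not cross into child datasets:

```
# /etc/exports -- hypothetical per-dataset layout
# Each ZFS dataset is a separate filesystem and needs its own line.
# -alldirs permits mounting any subdirectory *within* that filesystem,
# but does not traverse into child datasets mounted beneath it.
/tank/vms        -alldirs -network 10.0.0.0 -mask 255.255.255.0
/tank/vms/vm01   -alldirs -network 10.0.0.0 -mask 255.255.255.0
/tank/vms/vm02   -alldirs -network 10.0.0.0 -mask 255.255.255.0
```

Client-side, each of those then has to be mounted individually, which is the
maintenance problem described above.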

The only thing I can think of would be unions for the top-level directory, but
I’m very, very leery of the nullfs/unionfs modules, as they’ve been a source of
system instability for us in the past (deadlocks, undetected lock inversions,
etc.). That, and I’d really rather have a maintenance nightmare than a hack.

Would you have any other suggestions?

Thanks,

Mahmoud


Status of NFS4.1 FS_RECLAIM in FreeBSD 10.1?

2015-05-20 Thread Mahmoud Al-Qudsi
Hello,

I have not delved too deeply into either the NFS spec or the FreeBSD nfsd
code, but from my admittedly-limited understanding, it seems that reclaim is
both a mandatory feature and one that is present in the current FreeBSD NFS
v4.1 implementation. Is my understanding of this correct?

My reason for asking is that when attempting to migrate an ESXi server to a
FreeBSD NFSv4.1 datastore, ESXi throws the following error:

> WARNING: NFS41: NFS41FSCompleteMount:3601: RECLAIM_COMPLETE FS failed: Not
> supported; forcing read-only operation

VMware ESXi 6.0 is able to mount NFSv4.1 shares exported from other 
operating systems, so I figured I would ask here on the list before digging
out a copy of tcpdump and going down that rabbit hole.

I can mount and use NFSv3 shares just fine with ESXi from this same server, and 
can mount the same shares as NFSv4 from other clients (e.g. OS X) as well.

Thanks,

Mahmoud Al-Qudsi
NeoSmart Technologies
