Peter Zijlstra wrote:
> On Sat, 2007-09-01 at 01:05 +1000, David Chinner wrote:
>
>>> Trouble is, we'd like to have a sane upper bound on the number of held
>>> locks at any one time, obviously this is just wanting, because a lot of
>>> lock chains also depend on the number of online cpus...
>>
>> Sure - this is an obvious case where it is valid to take >30 locks at
>> once in a single thread. In fact, worst case here we are taking twice this
>> number of locks - we actually take 2 per inode (ilock and flock) so a
>> full 32 inode cluster free would take >60 locks in the middle of this
>> function and we should be busting this depth counter limit all the
>> time.
>
> I think this started because jeffpc couldn't boot without XFS busting
> lockdep :-)
>
>> Do semaphores (the flush locks) contribute to the lock depth
>> counters?
>
> No, alas, we cannot handle semaphores in lockdep.
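
For illustration, here is a minimal standalone C sketch of the depth
arithmetic above. The 2-locks-per-inode (ilock + flock) and
32-inodes-per-cluster figures come from the thread; the MAX_LOCK_DEPTH
value below is an assumed stand-in for the kernel's real constant, not
the actual number. Since the flush lock was a semaphore, lockdep only
ever saw the ilocks.

    /*
     * Userspace sketch (not kernel code) of lockdep's held-lock
     * accounting for a full XFS inode cluster free.  Each inode
     * takes two locks (ilock + flock), but the flock is a
     * semaphore, which lockdep cannot track, so only the ilocks
     * count toward the depth limit.
     */
    #include <stdio.h>

    #define MAX_LOCK_DEPTH     30  /* assumed stand-in, not the kernel's value */
    #define INODES_PER_CLUSTER 32

    int main(void)
    {
            int held = 0;    /* locks actually held by the task */
            int tracked = 0; /* locks lockdep knows about */
            int i;

            for (i = 0; i < INODES_PER_CLUSTER; i++) {
                    held += 2;    /* ilock + flock per inode */
                    tracked += 1; /* flock is a semaphore: invisible to lockdep */
            }

            printf("peak held locks:    %d\n", held);    /* 64 */
            printf("peak tracked locks: %d (limit %d)%s\n",
                   tracked, MAX_LOCK_DEPTH,
                   tracked > MAX_LOCK_DEPTH ? " -> lockdep would warn" : "");
            return 0;
    }

So the task really holds 64 locks at the peak, but lockdep only counts
the 32 ilocks, which is why a limit of 40 sufficed, as Eric notes below.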
That explains why 40 was enough for me, I guess :)

-Eric