On 10/06/2014 12:25 AM, Al Viro wrote:
> On Sun, Oct 05, 2014 at 11:42:40PM -0400, Sasha Levin wrote:
>> On 10/05/2014 11:13 PM, Al Viro wrote:
>>> On Sun, Oct 05, 2014 at 08:27:47PM -0400, Sasha Levin wrote:
>>>
>>>> [  434.580818] BUG: unable to handle kernel NULL pointer dereference at 0000000000000090
>>>> [  434.582208] IP: do_raw_spin_trylock (./arch/x86/include/asm/spinlock.h:108 kernel/locking/spinlock_debug.c:143)
>>> [snip]
>>> spin_lock((void *)0x90)
>>>> [  434.590025] ? _raw_spin_trylock (include/linux/spinlock_api_smp.h:89 kernel/locking/spinlock.c:135)
>>>> [  434.590025] ? lockref_put_or_lock (lib/lockref.c:131)
>>>> [  434.590025] dput (fs/dcache.c:513 fs/dcache.c:616)
>>>
>>> ummm...  lockref_put_or_lock(&dentry->d_lockref) ending up with 0x90 passed
>>> to lockref_put_or_lock()...  What offset does d_lockref have on your build?
>>
>> 0x90
> 
> Huh???  It means that we got to that lockref_put_or_lock with dentry == NULL.
> But that makes no sense at all - we have
> void dput(struct dentry *dentry)
> {
>         if (unlikely(!dentry))
>                 return;
> 
> repeat:
>         if (lockref_put_or_lock(&dentry->d_lockref))
>                 return;
>       ...
> and the only branch to repeat: is
>         if (dentry)
>                 goto repeat;
> 
> If we get to that lockref_put_or_lock() with dentry == NULL, something's
> very wrong with compiler.  And the only other lockref_put_or_lock() in
> there is
>                 while (dentry && !lockref_put_or_lock(&dentry->d_lockref)) {
> which would also make NULL dentry a miscompile.
> 
> Could you put fs/dcache.s for your build on some anonftp?  That really
> smells like compiler breakage; had the address it tried to access been
> close but not equal to that offsetof(), we would be dealing with bogus
> ->f_path.dentry (close to, but not quite NULL).  As it is, it looks like
> dput() somehow getting to that line with NULL dentry, which should've
> been prevented by the checks there...

I think you've misread the stack trace. The lockref_put_or_lock entry isn't
reliable (note the '?' prefix; it's probably just a leftover from the previous,
successful call to it), and the reliable symbols in the trace take us elsewhere.

The stack trace shows:

[  434.590025] dput (fs/dcache.c:513 fs/dcache.c:616)
[  434.590025] __fput (fs/file_table.c:235)
[  434.590025] ____fput (fs/file_table.c:253)

Looking at fs/dcache.c:616 we see:

        kill_it:
                dentry = dentry_kill(dentry); <=== This
                if (dentry)
                        goto repeat;

And within dentry_kill() (fs/dcache.c:513):

        if (!IS_ROOT(dentry)) {
                parent = dentry->d_parent;
                if (unlikely(!spin_trylock(&parent->d_lock))) { <=== here
                        if (inode)
                                spin_unlock(&inode->i_lock);
                        goto failed;

We're trying to take parent->d_lock with 'parent' == NULL. Since d_lock is just
d_lockref.lock, and lock is the first member of struct lockref, that is exactly
the 0x90 fault address above.
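
A minimal user-space sketch of that arithmetic (the struct layout below is made
up, just padding d_lockref out to 0x90, and evaluating &NULL->member this way is
only the usual non-portable illustration):

	#include <stdio.h>
	#include <stddef.h>

	struct lockref {		/* simplified stand-in */
		int lock;		/* spinlock_t in the kernel, first member */
		int count;
	};

	struct dentry {			/* hypothetical: pad d_lockref out to 0x90 */
		char pad[0x90];
		struct lockref d_lockref;
	};

	#define d_lock d_lockref.lock	/* same alias as include/linux/dcache.h */

	int main(void)
	{
		struct dentry *parent = NULL;

		/* pure pointer arithmetic, no memory access yet; the kernel
		 * only faults once spin_trylock() touches the address */
		printf("&parent->d_lock     = %p\n", (void *)&parent->d_lock);
		printf("offsetof(d_lockref) = %#zx\n",
		       offsetof(struct dentry, d_lockref));
		return 0;
	}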

Looking at the git history, this check was changed to drop the 'parent != NULL'
test in commit e55fd01154 ("split dentry_kill()"):

        -       }
                if (!IS_ROOT(dentry))
                        parent = dentry->d_parent;
        -       if (parent && !spin_trylock(&parent->d_lock)) {
        -               if (inode)
        -                       spin_unlock(&inode->i_lock);
        [...]
        +       if (!IS_ROOT(dentry)) {
        +               parent = dentry->d_parent;
        +               if (unlikely(!spin_trylock(&parent->d_lock))) {
        +                       if (inode)
        +                               spin_unlock(&inode->i_lock);
        +                       goto failed;
        +               }
        +       }

So this is a case where the dentry is not root, but has d_parent set to NULL.

Although I don't see how that can happen.
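
For reference, here's the invariant that makes a NULL d_parent surprising, as a
user-space toy (the toy_* helpers are made up; the real code is __d_alloc(),
d_alloc() and the IS_ROOT() macro in include/linux/dcache.h):

	#include <assert.h>
	#include <stddef.h>

	struct dentry {
		struct dentry *d_parent;
	};

	#define IS_ROOT(x) ((x) == (x)->d_parent)	/* as in dcache.h */

	static void toy_d_alloc(struct dentry *d)
	{
		d->d_parent = d;	/* __d_alloc(): fresh dentries are self-parented */
	}

	static void toy_d_alloc_child(struct dentry *d, struct dentry *parent)
	{
		toy_d_alloc(d);
		d->d_parent = parent;	/* d_alloc(): re-parented, never set to NULL */
	}

	int main(void)
	{
		struct dentry root, child;

		toy_d_alloc(&root);
		toy_d_alloc_child(&child, &root);

		assert(IS_ROOT(&root));
		assert(!IS_ROOT(&child));
		assert(child.d_parent != NULL);	/* the invariant the oops violates */
		return 0;
	}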


Thanks,
Sasha

