On 4/4/13 1:46 PM, Zach Brown wrote:
>> It's because btrfs_open_devices() may open some devices, and still
>> return failure.  So the error unwinding needs to close any open
>> devices in fs_devices before returning.
> 
> Yeah, looks like.
> 
>> Note, __btrfs_open_devices is weird; it seems to return success or
>> failure based on the result of the last call to
>> btrfs_get_bdev_and_sb().  But that's a different bug...
> 
> I disagree that this is a different bug, I think it's the root cause of
> this bug.
> 
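Fair enough.  For the archives, the shape of the problem is roughly this
(simplified, compilable userspace sketch, not the actual volumes.c code;
open_one_device() stands in for btrfs_get_bdev_and_sb()):

/* Simplified sketch of the loop shape, not the real kernel code. */
#include <stdio.h>

static int open_one_device(int i)
{
	return (i == 2) ? 0 : -1;	/* only the last "device" opens */
}

int main(void)
{
	int i, ret = 0;

	for (i = 0; i < 3; i++) {
		ret = open_one_device(i);
		if (ret)
			continue;	/* earlier failures... */
		/* ...are forgotten; ret is overwritten next pass */
	}

	printf("ret = %d\n", ret);	/* prints 0 despite two failures */
	return 0;
}

Whatever the last device returned is what the caller sees, regardless of
what happened to the devices before it.
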
>> @@ -1125,7 +1125,7 @@ static struct dentry *btrfs_mount(struct file_system_type *fs_type, int flags,
>>  
>>      error = btrfs_open_devices(fs_devices, mode, fs_type);
>>      if (error)
>> -            goto error_fs_info;
>> +            goto error_close_devices;
> 
> Wouldn't open_seed_devices() also need a change like this?
> 
> I'd just rework __btrfs_open_devices to clean up after itself when it
> returns an error.
> 
>>  error_close_devices:
>> -    btrfs_close_devices(fs_devices);
>> +    if (fs_devices->open_devices)
>> +            btrfs_close_devices(fs_devices);
> 
> I guess that ->open_devices is supposed to be protected by the
> uuid_mutex so it shouldn't be tested out here.  In any case, it wouldn't
> be needed if btrfs_open_devices() cleaned up as it failed.

I guess I had a feeling that in something like a degraded mount scenario
you might live with failures.  But that scenario is initiated on the mount
command line, i.e. "mount this subset; it's degraded," not "mount these
devices, and if some fail that's cool."

Right?
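
If we go the rework route, maybe something along these lines, i.e.
__btrfs_open_devices() closes whatever it managed to open before handing
an error back.  Untested userspace sketch in the same simplified model as
above, not a patch; some_later_setup() is a stand-in for whatever follows
the open loop in the real function:

/* Untested sketch of the "clean up on error" idea, not a real patch. */
#include <stdio.h>

#define NR_DEVICES 3

static int opened[NR_DEVICES];

static int open_one_device(int i)
{
	if (i == 1)
		return -1;		/* pretend device 1 is bad */
	opened[i] = 1;
	return 0;
}

static void close_one_device(int i)
{
	opened[i] = 0;
}

static int some_later_setup(void)
{
	return -1;			/* force the error path */
}

static int open_devices(void)
{
	int i, nr_open = 0;

	for (i = 0; i < NR_DEVICES; i++)
		if (open_one_device(i) == 0)
			nr_open++;

	/* nothing opened: nothing to unwind, just fail */
	if (nr_open == 0)
		return -1;

	/* any failure past this point closes what we opened, so the
	 * caller never sees an error *and* open devices */
	if (some_later_setup()) {
		for (i = 0; i < NR_DEVICES; i++)
			if (opened[i])
				close_one_device(i);
		return -1;
	}

	return 0;
}

int main(void)
{
	int i, ret = open_devices();

	printf("open_devices() = %d\n", ret);
	for (i = 0; i < NR_DEVICES; i++)
		printf("device %d open: %d\n", i, opened[i]);
	return 0;
}

That would let btrfs_mount() keep the simple goto error_fs_info, and I
think it would cover the open_seed_devices() caller for free, too.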

Thanks,
-Eric

> - z
> 
