On 06/24/13 19:38, James Bottomley wrote:
> On Wed, 2013-06-12 at 14:52 +0200, Bart Van Assche wrote:
>> SCSI devices are added to the shost->__devices list from inside
>> scsi_alloc_sdev(). If something goes wrong during LUN scanning,
>> e.g. a transport layer failure occurs, then __scsi_remove_device()
>> can get invoked by the LUN scanning code for a SCSI device in
>> state SDEV_CREATED_BLOCK or SDEV_BLOCK. If this happens then
>> the SCSI device has not yet been added to sysfs (is_visible == 0).
>> Make sure that if this happens these devices are transitioned
>> into state SDEV_DEL. This prevents __scsi_remove_device() from
>> being invoked a second time by scsi_forget_host().
> 
> The current principle is that scsi_remove_device can fail, so the
> condition you're avoiding is expected.  If you want to make it always
> succeed, we have to worry about any device state racing with an
> asynchronous remove, which looks like a whole nasty can of worms.
> 
> The change log makes it sound like what you actually want to enable is
> the ability to remove devices which fail probing but which are in the
> blocked state, so why not just respin with only that, which is just
> adding the blocked states to the ->SDEV_DEL state transitions?
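
For context on the changelog above: scsi_forget_host() skips devices that
are already in SDEV_DEL while walking shost->__devices, so moving a failed
device into SDEV_DEL is enough to prevent the second __scsi_remove_device()
call. Roughly (paraphrased from scsi_scan.c, not quoted verbatim):

	/* paraphrase of scsi_forget_host(), not verbatim kernel code */
 restart:
	spin_lock_irqsave(shost->host_lock, flags);
	list_for_each_entry(sdev, &shost->__devices, siblings) {
		if (sdev->sdev_state == SDEV_DEL)
			continue;	/* already torn down by the failed scan */
		spin_unlock_irqrestore(shost->host_lock, flags);
		__scsi_remove_device(sdev);
		goto restart;
	}
	spin_unlock_irqrestore(shost->host_lock, flags);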

If what you had in mind is the patch below, I think we agree:

diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index e3d6276..eaea242 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -2185,6 +2185,8 @@ scsi_device_set_state(struct scsi_device *sdev, enum scsi_device_state state)
                case SDEV_OFFLINE:
                case SDEV_TRANSPORT_OFFLINE:
                case SDEV_CANCEL:
+               case SDEV_BLOCK:
+               case SDEV_CREATED_BLOCK:
                        break;
                default:
                        goto illegal;
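
The hunk sits in the SDEV_DEL arm of scsi_device_set_state(), so with the
two new cases that arm ends up looking roughly like this (earlier source
states elided, paraphrased rather than copied from scsi_lib.c):

	case SDEV_DEL:
		switch (oldstate) {
		/* ... other permitted source states ... */
		case SDEV_OFFLINE:
		case SDEV_TRANSPORT_OFFLINE:
		case SDEV_CANCEL:
		/* new: devices stuck in a blocked state after a failed
		 * LUN scan may now go straight to SDEV_DEL */
		case SDEV_BLOCK:
		case SDEV_CREATED_BLOCK:
			break;
		default:
			goto illegal;
		}
		break;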

