Hi James, 

> On Nov 7, 2017, at 9:05 AM, James Smart <james.sm...@broadcom.com> wrote:
> 
> On 11/6/2017 11:55 AM, Himanshu Madhani wrote:
>> From: Anil Gurumurthy <anil.gurumur...@cavium.com>
>> 
>> +static struct nvmet_fc_target_template qla_nvmet_fc_transport = {
>> +    .targetport_delete      = qla_nvmet_targetport_delete,
>> +    .xmt_ls_rsp             = qla_nvmet_ls_rsp,
>> +    .fcp_op                 = qla_nvmet_fcp_op,
>> +    .fcp_abort              = qla_nvmet_fcp_abort,
>> +    .fcp_req_release        = qla_nvmet_fcp_req_release,
>> +    .max_hw_queues          = 8,
>> +    .max_sgl_segments       = 128,
>> +    .max_dif_sgl_segments   = 64,
>> +    .dma_boundary           = 0xFFFFFFFF,
>> +    .target_features        = NVMET_FCTGTFEAT_READDATA_RSP |
>> +                                    NVMET_FCTGTFEAT_CMD_IN_ISR |
>> +                                    NVMET_FCTGTFEAT_OPDONE_IN_ISR,
>> +    .target_priv_sz = sizeof(struct nvme_private),
>> +};
>> +#endif
>> 
> 
> Do you really need the xxx_IN_ISR features ?  e.g. are you calling the 
> nvmet_fc cmd receive and op done calls in ISR context or something that 
> can't/shouldn't continue into the nvmet layers ?
> 
> I was looking to remove those flags and the work_q items behind it as I 
> believed the qla2xxx driver called everything in a deferred callback.
> 

Agreed, we invoke the nvmet_fc callbacks from deferred context. However, without the
xxx_IN_ISR flags set during NVMe target template registration, we were hitting a crash
caused by a recursive spin_lock acquisition in our driver's CTIO response handling.


> -- james
> 
> 

Thanks,
- Himanshu
