> -----Original Message-----
> From: Christoph Hellwig [mailto:h...@lst.de]
> Sent: Thursday, July 20, 2017 1:18 PM
> To: Shivasharan Srikanteshwara
> Cc: Christoph Hellwig; linux-scsi@vger.kernel.org; martin.peter...@oracle.com;
> the...@redhat.com; j...@linux.vnet.ibm.com; Sumit Saxena; h...@suse.com;
> Kashyap Desai
> Subject: Re: [PATCH v2 11/15] megaraid_sas: Set device queue_depth same as
> HBA can_queue value in scsi-mq mode
>
> I still don't understand why you don't want to do the same for the
> non-mq path.

Hi Christoph,

Sorry for the delay in responding.

MQ case -
If any block layer requeue happens, we see a performance drop, so we avoid
requeues by raising the device QD to the HBA QD. The drop caused by block
layer requeues is worse on HDDs, because sequential IO gets converted into
random IO.
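
For context, the intended policy amounts to something like the sketch
below. This is illustrative only, not the actual megaraid_sas patch;
example_slave_configure is a made-up name, while scsi_change_queue_depth()
and shost_use_blk_mq() are the SCSI midlayer helpers of this era:

#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>

/*
 * Illustrative only: pick the per-device queue depth depending on
 * whether scsi-mq is active. The depth of 32 stands in for the legacy
 * SATA default discussed below.
 */
static int example_slave_configure(struct scsi_device *sdev)
{
        struct Scsi_Host *shost = sdev->host;

        if (shost_use_blk_mq(shost)) {
                /* scsi-mq: expose the full HBA depth so the block layer
                 * never has to requeue against a busy device. */
                scsi_change_queue_depth(sdev, shost->can_queue);
        } else {
                /* Legacy path: keep the smaller per-device depth so the
                 * block layer throttles and soft-merges above QD = 32. */
                scsi_change_queue_depth(sdev, 32);
        }

        return 0;
}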

Non-MQ case -
If we raise the device QD to the HBA QD in the non-mq case, we see a
performance drop for certain profiles.
For example, on a SATA SSD the previous driver set the device QD to 32 in
non-mq mode. With more than 32 IOs outstanding per device, the block layer
attempts soft merges, so the end user ends up seeing higher performance.
The same is not true in the MQ case, where the IO scheduler adds overhead
whenever there is any throttling or staging due to the device QD.

Below is an example with a single SATA SSD, sequential read, BS=4K,
IO depth = 256:

MQ enable, Device QD = 32 achieves 137K IOPS
MQ enable, Device QD = 916 (HBA QD) achieves 145K IOPS

MQ disable, Device QD = 32 achieves 237K IOPS
MQ disable, Device QD = 916 (HBA QD) achieves 145K IOPS
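
For reference, numbers like the above can be gathered with an fio job
along these lines (the device path and runtime here are placeholders,
not the exact setup we used):

fio --name=seqread --filename=/dev/sdX --direct=1 --rw=read --bs=4k \
    --iodepth=256 --ioengine=libaio --numjobs=1 \
    --runtime=60 --time_based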

Ideally we would keep the same QD settings in non-MQ mode as well, but we
are avoiding that for now, since end users may hit regressions as
explained above.

Thanks,
Shivasharan
