Hi,

On Thu, 10 Sep 2020 at 09:25, Bjorn Helgaas <helg...@kernel.org> wrote:
>
> On Mon, Aug 24, 2020 at 01:20:25PM +0800, Jiang Biao wrote:
> > From: Jiang Biao <benbji...@tencent.com>
> >
> > pci_read_config() could block several ms in kernel space, mainly
> > caused by the while loop to call pci_user_read_config_dword().
> > A single pci_user_read_config_dword() call in the loop can take 130us+:
> >               |    pci_user_read_config_dword() {
> >               |      _raw_spin_lock_irq() {
> > ! 136.698 us  |        native_queued_spin_lock_slowpath();
> > ! 137.582 us  |      }
> >               |      pci_read() {
> >               |        raw_pci_read() {
> >               |          pci_conf1_read() {
> >   0.230 us    |            _raw_spin_lock_irqsave();
> >   0.035 us    |            _raw_spin_unlock_irqrestore();
> >   8.476 us    |          }
> >   8.790 us    |        }
> >   9.091 us    |      }
> > ! 147.263 us  |    }
> > and dozens of loop iterations can add up to more than a millisecond.
> >
> > If several lspci commands are executed concurrently, scheduling
> > latency above a millisecond can be observed.
> >
> > Add a scheduling point in the loop to improve the latency.
>
> Thanks for the patch, this makes a lot of sense.
>
> Shouldn't we do the same in pci_write_config()?
Yes, IMHO, that could be helpful too.
I'll send v2 to include that. :)
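
For reference, the change being discussed amounts to dropping a
cond_resched() into the sysfs config-read loop. The sketch below is
illustrative only: the loop shape of pci_read_config() in
drivers/pci/pci-sysfs.c is assumed from the trace above, not quoted
verbatim from the kernel source.

```c
/* Sketch, not verbatim kernel code: per the trace, each
 * pci_user_read_config_dword() can spend 100us+ spinning on pci_lock,
 * so yield the CPU between iterations of the dword-copy loop. */
while (size > 3) {
        u32 val;

        pci_user_read_config_dword(dev, off, &val);
        data[off - init_off] = val;
        data[off - init_off + 1] = (val >> 8) & 0xff;
        data[off - init_off + 2] = (val >> 16) & 0xff;
        data[off - init_off + 3] = (val >> 24) & 0xff;
        off += 4;
        size -= 4;
        /* Scheduling point: let other runnable tasks in between
         * slow config-space accesses. */
        cond_resched();
}
```

Bjorn's suggestion would apply the same cond_resched() to the
corresponding loop in pci_write_config().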
Thanks a lot for your comment.

Regards,
Jiang
