Re: [PATCH] virtiofs: Enable multiple request queues

2021-05-11 Thread Connor Kuehl
On 5/11/21 5:23 AM, Stefan Hajnoczi wrote:
> On Fri, May 07, 2021 at 05:15:27PM -0500, Connor Kuehl wrote:
>> @@ -1245,7 +1262,8 @@ __releases(fiq->lock)
>>   req->in.h.nodeid, req->in.h.len,
>>   fuse_len_args(req->args->out_numargs, req->args->out_args));
>>  
>> -fsvq = &fs->vqs[queue_id];
>> +fsvq = this_cpu_read(this_cpu_fsvq);
> 
> Please check how CPU hotplug affects this patch. If the current CPU
> doesn't have a vq because it was hotplugged, then it may be necessary to
> pick another vq.

I'll fix this in the next revision.

Thanks,

Connor



Re: [PATCH] virtiofs: Enable multiple request queues

2021-05-11 Thread Stefan Hajnoczi
On Fri, May 07, 2021 at 05:15:27PM -0500, Connor Kuehl wrote:
> @@ -1245,7 +1262,8 @@ __releases(fiq->lock)
>req->in.h.nodeid, req->in.h.len,
>fuse_len_args(req->args->out_numargs, req->args->out_args));
>  
> - fsvq = &fs->vqs[queue_id];
> + fsvq = this_cpu_read(this_cpu_fsvq);

Please check how CPU hotplug affects this patch. If the current CPU
doesn't have a vq because it was hotplugged, then it may be necessary to
pick another vq.
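
A minimal sketch of one possible fallback, assuming the per-CPU pointer
stays NULL for CPUs that came online after the affinity map was built
(the helper below is hypothetical, not part of this patch):

    /* Hypothetical: pick a vq even if this CPU has none mapped. */
    static struct virtio_fs_vq *virtio_fs_get_fsvq(struct virtio_fs *fs)
    {
            struct virtio_fs_vq *fsvq = this_cpu_read(this_cpu_fsvq);

            /*
             * Hotplugged CPU with no assigned vq: fall back to the
             * first request queue.
             */
            if (!fsvq)
                    fsvq = &fs->vqs[VQ_REQUEST];

            return fsvq;
    }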

Stefan



Re: [PATCH] virtiofs: Enable multiple request queues

2021-05-10 Thread Vivek Goyal
On Mon, May 10, 2021 at 11:15:21AM -0500, Connor Kuehl wrote:
> On 5/10/21 10:25 AM, Vivek Goyal wrote:
> > On Fri, May 07, 2021 at 03:15:27PM -0700, Connor Kuehl wrote:
> >> Distribute requests across the multiqueue complex automatically based
> >> on the IRQ affinity.
> > 
> > Hi Connor,
> > 
> > Thanks for the patch. I will look into it and also test it.
> > 
> > How did you test it? Did you modify virtiofsd to support multiqueue? Did
> > you also run some performance numbers? Does it provide better or worse
> > performance compared to a single queue?
> 
> Thanks, Vivek! I need to NACK this version of the patch for inclusion,
> though: the way I did the per-CPU state will not work with multiple
> virtio-fs mounts, because each new mount overwrites it. For testing
> purposes, it should be OK with just one mount.

Hi Connor,

OK. I will wait for the next version, which fixes the multiple-mount issue.

> 
> I need to do more benchmarking on this.

That would be nice.

> 
> I had to hack multiqueue support into virtiofsd, which goes against the
> warning in the virtiofsd source code that instructs people *not* to
> enable multiqueue due to thread-safety concerns. I didn't audit
> virtiofsd for correctness, so I also worry that races could skew the
> benchmark results.

The filesystem code can already handle multiple threads, because even
with a single queue we can have a thread pool processing requests in
parallel. I am not aware of any issues with supporting multiple queues.
That said, fuse_virtio.c might require a little closer inspection to
make sure nothing depends on there being a single queue.

> 
> For testing, QEMU needs to be invoked with `num-request-queues` like
> this:
> 
>   -device vhost-user-fs-pci,chardev=char0,tag=myfs,num-request-queues=2 
> 
> And obviously you can choose any value >= 1 for num-request-queues.
> 
> I also made a quick-and-dirty hack to let me pass the total number of
> queues to virtiofsd on the command line:

OK. Maybe there is some inspiration to take from virtio-blk: how do they
pick the number of queues, and what is the default? I thought Stefan
mentioned that by default there is one queue per vCPU.
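
For reference, virtio-blk exposes this as the num-queues property on the
QEMU device; an illustrative invocation (not taken from this thread)
would be:

  -device virtio-blk-pci,drive=drive0,num-queues=4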

Vivek

> 
> diff --git a/tools/virtiofsd/fuse_lowlevel.c b/tools/virtiofsd/fuse_lowlevel.c
> index 58e32fc963..cf8f132efd 100644
> --- a/tools/virtiofsd/fuse_lowlevel.c
> +++ b/tools/virtiofsd/fuse_lowlevel.c
> @@ -2565,9 +2565,9 @@ out1:
>  return NULL;
>  }
>  
> -int fuse_session_mount(struct fuse_session *se)
> +int fuse_session_mount(struct fuse_session *se, unsigned int num_queues)
>  {
> -return virtio_session_mount(se);
> +return virtio_session_mount(se, num_queues);
>  }
>  
>  int fuse_session_fd(struct fuse_session *se)
> diff --git a/tools/virtiofsd/fuse_lowlevel.h b/tools/virtiofsd/fuse_lowlevel.h
> index 3bf786b034..50bf86113d 100644
> --- a/tools/virtiofsd/fuse_lowlevel.h
> +++ b/tools/virtiofsd/fuse_lowlevel.h
> @@ -1842,7 +1842,7 @@ struct fuse_session *fuse_session_new(struct fuse_args *args,
>   *
>   * @return 0 on success, -1 on failure.
>   **/
> -int fuse_session_mount(struct fuse_session *se);
> +int fuse_session_mount(struct fuse_session *se, unsigned int num_queues);
>  
>  /**
>   * Enter a single threaded, blocking event loop.
> diff --git a/tools/virtiofsd/fuse_virtio.c b/tools/virtiofsd/fuse_virtio.c
> index 3e13997406..8622c3dce6 100644
> --- a/tools/virtiofsd/fuse_virtio.c
> +++ b/tools/virtiofsd/fuse_virtio.c
> @@ -747,20 +747,6 @@ static void fv_queue_set_started(VuDev *dev, int qidx, bool started)
>   started);
>  assert(qidx >= 0);
>  
> -/*
> - * Ignore additional request queues for now.  passthrough_ll.c must be
> - * audited for thread-safety issues first.  It was written with a
> - * well-behaved client in mind and may not protect against all types of
> - * races yet.
> - */
> -if (qidx > 1) {
> -fuse_log(FUSE_LOG_ERR,
> - "%s: multiple request queues not yet implemented, please only "
> - "configure 1 request queue\n",
> - __func__);
> -exit(EXIT_FAILURE);
> -}
> -
>  if (started) {
>  /* Fire up a thread to watch this queue */
>  if (qidx >= vud->nqueues) {
> @@ -997,7 +983,7 @@ static int fv_create_listen_socket(struct fuse_session *se)
>  return 0;
>  }
>  
> -int virtio_session_mount(struct fuse_session *se)
> +int virtio_session_mount(struct fuse_session *se, unsigned int num_queues)
>  {
>  int ret;
>  
> @@ -1048,8 +1034,8 @@ int virtio_session_mount(struct fuse_session *se)
>  se->vu_socketfd = data_sock;
>  se->virtio_dev->se = se;
>  pthread_rwlock_init(&se->virtio_dev->vu_dispatch_rwlock, NULL);
> -if (!vu_init(&se->virtio_dev->dev, 2, se->vu_socketfd, fv_panic, NULL,
> - fv_set_watch, fv_remove_watch, &fv_iface)) {
> +if (!vu_init(&se->virtio_dev->dev, num_queues, se->vu_socketfd,
> +fv_panic, NULL, fv_set_watch, fv_remove_watch, &fv_iface)) {

Re: [PATCH] virtiofs: Enable multiple request queues

2021-05-10 Thread Connor Kuehl
On 5/10/21 10:25 AM, Vivek Goyal wrote:
> On Fri, May 07, 2021 at 03:15:27PM -0700, Connor Kuehl wrote:
>> Distribute requests across the multiqueue complex automatically based
>> on the IRQ affinity.
> 
> Hi Connor,
> 
> Thanks for the patch. I will look into it and also test it.
> 
> How did you test it? Did you modify virtiofsd to support multiqueue? Did
> you also run some performance numbers? Does it provide better or worse
> performance compared to a single queue?

Thanks, Vivek! I need to NACK this version of the patch for inclusion,
though: the way I did the per-CPU state will not work with multiple
virtio-fs mounts, because each new mount overwrites it. For testing
purposes, it should be OK with just one mount.
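
A minimal sketch of one way around that, assuming the map moves into
struct virtio_fs so each device/mount carries its own copy (the percpu
member and helper below are hypothetical, not in this patch):

    struct virtio_fs {
            /* ... existing members ... */
            unsigned int __percpu *cpu_vq_idx;  /* per-device CPU->vq map */
    };

    /* Setup: allocate and fill from each request vq's affinity mask. */
    /*     fs->cpu_vq_idx = alloc_percpu(unsigned int);               */
    /*     for_each_cpu(cpu, mask)                                    */
    /*             *per_cpu_ptr(fs->cpu_vq_idx, cpu) = i;             */

    /* Submission: resolve the queue through the device, not a global. */
    static struct virtio_fs_vq *virtio_fs_pick_vq(struct virtio_fs *fs)
    {
            return &fs->vqs[*this_cpu_ptr(fs->cpu_vq_idx)];
    }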

I need to do more benchmarking on this.

I had to hack multiqueue support into virtiofsd, which goes against the
warning in the virtiofsd source code that instructs people *not* to
enable multiqueue due to thread-safety concerns. I didn't audit
virtiofsd for correctness, so I also worry that races could skew the
benchmark results.

For testing, QEMU needs to be invoked with `num-request-queues` like
this:

-device vhost-user-fs-pci,chardev=char0,tag=myfs,num-request-queues=2 

And obviously you can choose any value >= 1 for num-request-queues.

I also made a quick-and-dirty hack to let me pass the total number of
queues to virtiofsd on the command line:

diff --git a/tools/virtiofsd/fuse_lowlevel.c b/tools/virtiofsd/fuse_lowlevel.c
index 58e32fc963..cf8f132efd 100644
--- a/tools/virtiofsd/fuse_lowlevel.c
+++ b/tools/virtiofsd/fuse_lowlevel.c
@@ -2565,9 +2565,9 @@ out1:
 return NULL;
 }
 
-int fuse_session_mount(struct fuse_session *se)
+int fuse_session_mount(struct fuse_session *se, unsigned int num_queues)
 {
-return virtio_session_mount(se);
+return virtio_session_mount(se, num_queues);
 }
 
 int fuse_session_fd(struct fuse_session *se)
diff --git a/tools/virtiofsd/fuse_lowlevel.h b/tools/virtiofsd/fuse_lowlevel.h
index 3bf786b034..50bf86113d 100644
--- a/tools/virtiofsd/fuse_lowlevel.h
+++ b/tools/virtiofsd/fuse_lowlevel.h
@@ -1842,7 +1842,7 @@ struct fuse_session *fuse_session_new(struct fuse_args *args,
  *
  * @return 0 on success, -1 on failure.
  **/
-int fuse_session_mount(struct fuse_session *se);
+int fuse_session_mount(struct fuse_session *se, unsigned int num_queues);
 
 /**
  * Enter a single threaded, blocking event loop.
diff --git a/tools/virtiofsd/fuse_virtio.c b/tools/virtiofsd/fuse_virtio.c
index 3e13997406..8622c3dce6 100644
--- a/tools/virtiofsd/fuse_virtio.c
+++ b/tools/virtiofsd/fuse_virtio.c
@@ -747,20 +747,6 @@ static void fv_queue_set_started(VuDev *dev, int qidx, bool started)
  started);
 assert(qidx >= 0);
 
-/*
- * Ignore additional request queues for now.  passthrough_ll.c must be
- * audited for thread-safety issues first.  It was written with a
- * well-behaved client in mind and may not protect against all types of
- * races yet.
- */
-if (qidx > 1) {
-fuse_log(FUSE_LOG_ERR,
- "%s: multiple request queues not yet implemented, please only "
- "configure 1 request queue\n",
- __func__);
-exit(EXIT_FAILURE);
-}
-
 if (started) {
 /* Fire up a thread to watch this queue */
 if (qidx >= vud->nqueues) {
@@ -997,7 +983,7 @@ static int fv_create_listen_socket(struct fuse_session *se)
 return 0;
 }
 
-int virtio_session_mount(struct fuse_session *se)
+int virtio_session_mount(struct fuse_session *se, unsigned int num_queues)
 {
 int ret;
 
@@ -1048,8 +1034,8 @@ int virtio_session_mount(struct fuse_session *se)
 se->vu_socketfd = data_sock;
 se->virtio_dev->se = se;
 pthread_rwlock_init(&se->virtio_dev->vu_dispatch_rwlock, NULL);
-if (!vu_init(&se->virtio_dev->dev, 2, se->vu_socketfd, fv_panic, NULL,
- fv_set_watch, fv_remove_watch, &fv_iface)) {
+if (!vu_init(&se->virtio_dev->dev, num_queues, se->vu_socketfd,
+fv_panic, NULL, fv_set_watch, fv_remove_watch, &fv_iface)) {
 fuse_log(FUSE_LOG_ERR, "%s: vu_init failed\n", __func__);
 return -1;
 }
diff --git a/tools/virtiofsd/fuse_virtio.h b/tools/virtiofsd/fuse_virtio.h
index 111684032c..a0e78b9b84 100644
--- a/tools/virtiofsd/fuse_virtio.h
+++ b/tools/virtiofsd/fuse_virtio.h
@@ -18,7 +18,7 @@
 
 struct fuse_session;
 
-int virtio_session_mount(struct fuse_session *se);
+int virtio_session_mount(struct fuse_session *se, unsigned int num_queues);
 void virtio_session_close(struct fuse_session *se);
 int virtio_loop(struct fuse_session *se);
 
diff --git a/tools/virtiofsd/passthrough_ll.c b/tools/virtiofsd/passthrough_ll.c
index 1553d2ef45..9fd4e34980 100644
--- a/tools/virtiofsd/passthrough_ll.c
+++ b/tools/virtiofsd/passthrough_ll.c
@@ -161,6 +161,7 @@ struct lo_data {
 int allow_direct_io;
 int announce_submounts;
 bool use_statx;
+int 

Re: [PATCH] virtiofs: Enable multiple request queues

2021-05-10 Thread Vivek Goyal
On Fri, May 07, 2021 at 03:15:27PM -0700, Connor Kuehl wrote:
> Distribute requests across the multiqueue complex automatically based
> on the IRQ affinity.

Hi Connor,

Thanks for the patch. I will look into it and also test it.

How did you test it? Did you modify virtiofsd to support multiqueue? Did
you also run some performance numbers? Does it provide better or worse
performance compared to a single queue?
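
For a single-queue versus multiqueue comparison, something along these
lines inside the guest would do (an illustrative fio job; the mount
point is an assumption):

    fio --name=virtiofs-test --directory=/mnt/myfs --rw=randread \
        --bs=4k --size=1G --numjobs=4 --ioengine=psync --group_reporting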

Thanks
Vivek

> 
> Suggested-by: Stefan Hajnoczi 
> Signed-off-by: Connor Kuehl 
> ---
>  fs/fuse/virtio_fs.c | 30 ++++++++++++++++++++++------
>  1 file changed, 24 insertions(+), 6 deletions(-)
> 
> diff --git a/fs/fuse/virtio_fs.c b/fs/fuse/virtio_fs.c
> index bcb8a02e2d8b..dcdc8b7b1ad5 100644
> --- a/fs/fuse/virtio_fs.c
> +++ b/fs/fuse/virtio_fs.c
> @@ -30,6 +30,10 @@
>  static DEFINE_MUTEX(virtio_fs_mutex);
>  static LIST_HEAD(virtio_fs_instances);
>  
> +struct virtio_fs_vq;
> +
> +DEFINE_PER_CPU(struct virtio_fs_vq *, this_cpu_fsvq);
> +
>  enum {
>   VQ_HIPRIO,
>   VQ_REQUEST
> @@ -673,6 +677,7 @@ static int virtio_fs_setup_vqs(struct virtio_device *vdev,
>   struct virtqueue **vqs;
>   vq_callback_t **callbacks;
>   const char **names;
> + struct irq_affinity desc = { .pre_vectors = 1, .nr_sets = 1, };
>   unsigned int i;
>   int ret = 0;
>  
> @@ -681,6 +686,9 @@ static int virtio_fs_setup_vqs(struct virtio_device *vdev,
>   if (fs->num_request_queues == 0)
>   return -EINVAL;
>  
> + fs->num_request_queues = min_t(unsigned int, nr_cpu_ids,
> +fs->num_request_queues);
> +
>   fs->nvqs = VQ_REQUEST + fs->num_request_queues;
>   fs->vqs = kcalloc(fs->nvqs, sizeof(fs->vqs[VQ_HIPRIO]), GFP_KERNEL);
>   if (!fs->vqs)
> @@ -710,12 +718,24 @@ static int virtio_fs_setup_vqs(struct virtio_device *vdev,
>   names[i] = fs->vqs[i].name;
>   }
>  
> - ret = virtio_find_vqs(vdev, fs->nvqs, vqs, callbacks, names, NULL);
> + ret = virtio_find_vqs(vdev, fs->nvqs, vqs, callbacks, names, &desc);
>   if (ret < 0)
>   goto out;
>  
> - for (i = 0; i < fs->nvqs; i++)
> + for (i = 0; i < fs->nvqs; i++) {
> + const struct cpumask *mask;
> + unsigned int cpu;
> +
>   fs->vqs[i].vq = vqs[i];
> + if (i == VQ_HIPRIO)
> + continue;
> +
> + mask = vdev->config->get_vq_affinity(vdev, i);
> + for_each_cpu(cpu, mask) {
> + struct virtio_fs_vq **cpu_vq = per_cpu_ptr(&this_cpu_fsvq, cpu);
> + *cpu_vq = &fs->vqs[i];
> + }
> + }
>  
>   virtio_fs_start_all_queues(fs);
>  out:
> @@ -877,8 +897,6 @@ static int virtio_fs_probe(struct virtio_device *vdev)
>   if (ret < 0)
>   goto out;
>  
> - /* TODO vq affinity */
> -
>   ret = virtio_fs_setup_dax(vdev, fs);
>   if (ret < 0)
>   goto out_vqs;
> @@ -1225,7 +1243,6 @@ static int virtio_fs_enqueue_req(struct virtio_fs_vq *fsvq,
>  static void virtio_fs_wake_pending_and_unlock(struct fuse_iqueue *fiq)
>  __releases(fiq->lock)
>  {
> - unsigned int queue_id = VQ_REQUEST; /* TODO multiqueue */
>   struct virtio_fs *fs;
>   struct fuse_req *req;
>   struct virtio_fs_vq *fsvq;
> @@ -1245,7 +1262,8 @@ __releases(fiq->lock)
>req->in.h.nodeid, req->in.h.len,
>fuse_len_args(req->args->out_numargs, req->args->out_args));
>  
> - fsvq = &fs->vqs[queue_id];
> + fsvq = this_cpu_read(this_cpu_fsvq);
> +
>   ret = virtio_fs_enqueue_req(fsvq, req, false);
>   if (ret < 0) {
>   if (ret == -ENOMEM || ret == -ENOSPC) {
> -- 
> 2.30.2
> 



Re: [PATCH] virtiofs: Enable multiple request queues

2021-05-08 Thread Connor Kuehl
On 5/7/21 5:15 PM, Connor Kuehl wrote:
> Distribute requests across the multiqueue complex automatically based
> on the IRQ affinity.
> 
> Suggested-by: Stefan Hajnoczi 
> Signed-off-by: Connor Kuehl 
> ---
>  fs/fuse/virtio_fs.c | 30 ++++++++++++++++++++++------
>  1 file changed, 24 insertions(+), 6 deletions(-)
> 
> diff --git a/fs/fuse/virtio_fs.c b/fs/fuse/virtio_fs.c
> index bcb8a02e2d8b..dcdc8b7b1ad5 100644
> --- a/fs/fuse/virtio_fs.c
> +++ b/fs/fuse/virtio_fs.c
> @@ -30,6 +30,10 @@
>  static DEFINE_MUTEX(virtio_fs_mutex);
>  static LIST_HEAD(virtio_fs_instances);
>  
> +struct virtio_fs_vq;
> +
> +DEFINE_PER_CPU(struct virtio_fs_vq *, this_cpu_fsvq);
> [..]
> +
> + for_each_cpu(cpu, mask) {
> + struct virtio_fs_vq **cpu_vq = per_cpu_ptr(&this_cpu_fsvq, cpu);
> + *cpu_vq = &fs->vqs[i];
> + }
> + }

Hmm, actually, it's just occurred to me that the per-CPU state could be
problematic with multiple virtio-fs mounts.

I'll workshop this some more.

Connor



Re: [PATCH] virtiofs: Enable multiple request queues

2021-05-07 Thread kernel test robot
Hi Connor,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on fuse/for-next]
[also build test WARNING on linux/master linus/master v5.12 next-20210507]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Connor-Kuehl/virtiofs-Enable-multiple-request-queues/20210508-061611
base:   https://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/fuse.git for-next
config: microblaze-randconfig-r001-20210507 (attached as .config)
compiler: microblaze-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/0day-ci/linux/commit/6ffd4543401bc990353404b556d91cce34b017fc
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Connor-Kuehl/virtiofs-Enable-multiple-request-queues/20210508-061611
        git checkout 6ffd4543401bc990353404b556d91cce34b017fc
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross W=1 ARCH=microblaze

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot 

All warnings (new ones prefixed by >>):

   fs/fuse/virtio_fs.c: In function 'virtio_fs_wake_pending_and_unlock':
>> fs/fuse/virtio_fs.c:1246:20: warning: variable 'fs' set but not used [-Wunused-but-set-variable]
    1246 |  struct virtio_fs *fs;
         |                    ^~


vim +/fs +1246 fs/fuse/virtio_fs.c

a62a8ef9d97da2 Stefan Hajnoczi 2018-06-12  1242  
a62a8ef9d97da2 Stefan Hajnoczi 2018-06-12  1243  static void virtio_fs_wake_pending_and_unlock(struct fuse_iqueue *fiq)
a62a8ef9d97da2 Stefan Hajnoczi 2018-06-12  1244  __releases(fiq->lock)
a62a8ef9d97da2 Stefan Hajnoczi 2018-06-12  1245  {
a62a8ef9d97da2 Stefan Hajnoczi 2018-06-12 @1246         struct virtio_fs *fs;
a62a8ef9d97da2 Stefan Hajnoczi 2018-06-12  1247         struct fuse_req *req;
51fecdd2555b3e Vivek Goyal     2019-10-15  1248         struct virtio_fs_vq *fsvq;
a62a8ef9d97da2 Stefan Hajnoczi 2018-06-12  1249         int ret;
a62a8ef9d97da2 Stefan Hajnoczi 2018-06-12  1250  
a62a8ef9d97da2 Stefan Hajnoczi 2018-06-12  1251         WARN_ON(list_empty(&fiq->pending));
a62a8ef9d97da2 Stefan Hajnoczi 2018-06-12  1252         req = list_last_entry(&fiq->pending, struct fuse_req, list);
a62a8ef9d97da2 Stefan Hajnoczi 2018-06-12  1253         clear_bit(FR_PENDING, &req->flags);
a62a8ef9d97da2 Stefan Hajnoczi 2018-06-12  1254         list_del_init(&req->list);
a62a8ef9d97da2 Stefan Hajnoczi 2018-06-12  1255         WARN_ON(!list_empty(&fiq->pending));
a62a8ef9d97da2 Stefan Hajnoczi 2018-06-12  1256         spin_unlock(&fiq->lock);
a62a8ef9d97da2 Stefan Hajnoczi 2018-06-12  1257  
a62a8ef9d97da2 Stefan Hajnoczi 2018-06-12  1258         fs = fiq->priv;
a62a8ef9d97da2 Stefan Hajnoczi 2018-06-12  1259  
a62a8ef9d97da2 Stefan Hajnoczi 2018-06-12  1260         pr_debug("%s: opcode %u unique %#llx nodeid %#llx in.len %u out.len %u\n",
a62a8ef9d97da2 Stefan Hajnoczi 2018-06-12  1261                  __func__, req->in.h.opcode, req->in.h.unique,
a62a8ef9d97da2 Stefan Hajnoczi 2018-06-12  1262                  req->in.h.nodeid, req->in.h.len,
a62a8ef9d97da2 Stefan Hajnoczi 2018-06-12  1263                  fuse_len_args(req->args->out_numargs, req->args->out_args));
a62a8ef9d97da2 Stefan Hajnoczi 2018-06-12  1264  
6ffd4543401bc9 Connor Kuehl    2021-05-07  1265         fsvq = this_cpu_read(this_cpu_fsvq);
6ffd4543401bc9 Connor Kuehl    2021-05-07  1266  
a9bfd9dd341756 Vivek Goyal     2019-10-15  1267         ret = virtio_fs_enqueue_req(fsvq, req, false);
a62a8ef9d97da2 Stefan Hajnoczi 2018-06-12  1268         if (ret < 0) {
a62a8ef9d97da2 Stefan Hajnoczi 2018-06-12  1269                 if (ret == -ENOMEM || ret == -ENOSPC) {
a9bfd9dd341756 Vivek Goyal     2019-10-15  1270                         /*
a9bfd9dd341756 Vivek Goyal     2019-10-15  1271                          * Virtqueue full. Retry submission from worker
a9bfd9dd341756 Vivek Goyal     2019-10-15  1272                          * context as we might be holding fc->bg_lock.
a9bfd9dd341756 Vivek Goyal     2019-10-15  1273                          */
a9bfd9dd341756 Vivek Goyal     2019-10-15  1274                         spin_lock(&fsvq->lock);
a9bfd9dd341756 Vivek Goyal     2019-10-15  1275                         list_add_tail(&req->list, &fsvq->queued_reqs);
a9bfd9dd341756 Vivek Goyal     2019-10-15  1276                         inc_in_flight_req(fsvq);
a9bfd9dd341756 Vivek Goyal     2019-10-15  1277                         schedule_delayed_work(&fsvq->dispatch_work,
a9bfd9dd341756 Vivek Goyal     2019-10-15  1278                                               msecs_to_jiffies(1));
a9bfd9dd341756