On Mon, 8 Nov 2021 08:49:27 -0500, Michael S. Tsirkin <m...@redhat.com> wrote:
>
> Hmm a bunch of comments got ignored. See e.g.
> https://lore.kernel.org/r/20211027043851-mutt-send-email-mst%40kernel.org
> if they aren't relevant add code comments or commit log text explaining the
> design choice please.

I did respond to the related questions; I am wondering whether some of my
emails were lost.

I have collected the following six questions; if I have missed any, please
let me know.

1. use list_head
  In the earliest version I used raw pointers directly. You suggested
  llist_head, but llist_head relies on atomic operations, and there is no
  concurrency here, so I used list_head instead.

  In fact, list_head does not add any allocated space; the same memory is
  reused for both purposes (sketched below):

  used as desc array: | vring_desc | vring_desc | vring_desc | vring_desc |
  used as queue item: | list_head ........................................|
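
  A minimal sketch of the reuse (VIRT_QUEUE_CACHE_DESC_NUM is from the
  snippet quoted in question 2 below; desc_cache_add() and the
  vq->desc_cache_list field are illustrative names, not necessarily the
  ones in the patch):

    /* Park a free desc block on the cache list by overlaying a
     * list_head on its first bytes.  The block is
     * VIRT_QUEUE_CACHE_DESC_NUM * sizeof(struct vring_desc) bytes,
     * which comfortably holds a struct list_head, so queuing it
     * costs no extra memory.
     */
    static void desc_cache_add(struct vring_virtqueue *vq,
                               struct vring_desc *desc)
    {
        BUILD_BUG_ON(sizeof(struct list_head) >
                     VIRT_QUEUE_CACHE_DESC_NUM * sizeof(struct vring_desc));
        list_add((struct list_head *)desc, &vq->desc_cache_list);
    }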

2.
> > +   if (vq->use_desc_cache && total_sg <= VIRT_QUEUE_CACHE_DESC_NUM) {
> > +           if (vq->desc_cache_chain) {
> > +                   desc = vq->desc_cache_chain;
> > +                   vq->desc_cache_chain = (void *)desc->addr;
> > +                   goto got;
> > +           }
> > +           n = VIRT_QUEUE_CACHE_DESC_NUM;
>
> Hmm. This will allocate more entries than actually used. Why do it?


That is because the size of each cache item is fixed: a cached block always
holds VIRT_QUEUE_CACHE_DESC_NUM descs, so it can be reused for any request
under the threshold even though a smaller request does not fill it. The
logic has been reworked in the latest code, and I think this problem no
longer exists.
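
In the latest code the get path always hands out, or allocates, one
fixed-size block, roughly like this sketch (desc_cache_get() and the
vq->desc_cache_list field are illustrative names):

    static struct vring_desc *desc_cache_get(struct vring_virtqueue *vq,
                                             unsigned int total_sg,
                                             gfp_t gfp)
    {
        struct list_head *node;

        /* Above the threshold: plain kmalloc, exactly sized. */
        if (!vq->use_desc_cache || total_sg > VIRT_QUEUE_CACHE_DESC_NUM)
            return kmalloc_array(total_sg, sizeof(struct vring_desc), gfp);

        /* At or below the threshold: take a cached fixed-size block,
         * or allocate one if the cache is empty.  The block always
         * holds VIRT_QUEUE_CACHE_DESC_NUM descs, so it can be reused
         * for any later request under the threshold.
         */
        if (list_empty(&vq->desc_cache_list))
            return kmalloc_array(VIRT_QUEUE_CACHE_DESC_NUM,
                                 sizeof(struct vring_desc), gfp);

        node = vq->desc_cache_list.next;
        list_del(node);
        return (struct vring_desc *)node;
    }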


3.
> What bothers me here is what happens if cache gets
> filled on one numa node, then used on another?

I have been wondering about a different question: how does the cross-NUMA
case arise here in the first place? The virtio desc queue has the same
cross-NUMA exposure, so do we really need to handle the cross-NUMA scenario
specially?

The indirect descs are used together with the virtio descs, so it is enough
for them to sit on the same NUMA node as the virtio desc queue. We could
therefore allocate the indirect desc cache at the same time as the virtio
desc queue, as sketched below.
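
A sketch of that idea (DESC_CACHE_PREALLOC and desc_cache_init() are
hypothetical names; the node would come from something like dev_to_node()
on the device):

    static void desc_cache_init(struct vring_virtqueue *vq, int node)
    {
        void *block;
        int i;

        INIT_LIST_HEAD(&vq->desc_cache_list);

        /* Preallocate the cache blocks on the same NUMA node as the
         * virtio desc queue, so a cache hit never crosses nodes.
         */
        for (i = 0; i < DESC_CACHE_PREALLOC; i++) {
            block = kmalloc_node(VIRT_QUEUE_CACHE_DESC_NUM *
                                 sizeof(struct vring_desc),
                                 GFP_KERNEL, node);
            if (!block)
                break;
            list_add((struct list_head *)block, &vq->desc_cache_list);
        }
    }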

4.
> So e.g. for rx, we are wasting memory since indirect isn't used.

In the current version, the desc cache is set up per-queue.

So if a queue does not use the desc cache, we simply do not set one up for
it.

In virtio-net, for example, only the tx queues, plus the rx queues when
big-packet mode is on, enable the desc cache (see the sketch below).
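
From the driver side it would look something like this
(virtqueue_set_desc_cache() is a hypothetical name for whatever per-queue
switch the patch exposes):

    /* virtio-net probe: enable the cache only where indirect descs
     * are actually used -- every tx queue, and the rx queues only
     * in big-packet mode.
     */
    for (i = 0; i < vi->max_queue_pairs; i++) {
        virtqueue_set_desc_cache(vi->sq[i].vq, true);
        virtqueue_set_desc_cache(vi->rq[i].vq, vi->big_packets);
    }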

5.
> Would a better API be a cache size in bytes? This controls how much
> memory is spent after all.

My design is to set a threshold: when total_sg is greater than the
threshold, we fall back to kmalloc()/kfree(); when total_sg is less than or
equal to it, we use the preallocated cache. The threshold therefore also
determines the size of each cached block.
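
The matching free path under the same threshold would look roughly like
this (again with illustrative names):

    static void desc_cache_put(struct vring_virtqueue *vq,
                               struct vring_desc *desc, unsigned int n)
    {
        /* Blocks at or below the threshold go back on the cache list,
         * reusing their memory as the list node; anything larger was
         * a plain kmalloc() and is simply freed.
         */
        if (vq->use_desc_cache && n <= VIRT_QUEUE_CACHE_DESC_NUM)
            list_add((struct list_head *)desc, &vq->desc_cache_list);
        else
            kfree(desc);
    }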


6. kmem_cache_*

I have tested kmem_cache_alloc()/kmem_cache_free(); the performance was not
as good as the method used in this patch.


Thanks.