On 2018/01/18 1:44, Michael S. Tsirkin wrote:
>> +static void add_one_sg(struct virtqueue *vq, unsigned long pfn, uint32_t 
>> len)
>> +{
>> +    struct scatterlist sg;
>> +    unsigned int unused;
>> +    int err;
>> +
>> +    sg_init_table(&sg, 1);
>> +    sg_set_page(&sg, pfn_to_page(pfn), len, 0);
>> +
>> +    /* Detach all the used buffers from the vq */
>> +    while (virtqueue_get_buf(vq, &unused))
>> +            ;
>> +
>> +    /*
>> +     * Since this is an optimization feature, losing a couple of free
>> +     * pages to report isn't important.
>> We simply resturn
> 
> return
> 
>> without adding
>> +     * the page if the vq is full. We are adding one entry each time,
>> +     * which essentially results in no memory allocation, so the
>> +     * GFP_KERNEL flag below can be ignored.
>> +     */
>> +    if (vq->num_free) {
>> +            err = virtqueue_add_inbuf(vq, &sg, 1, vq, GFP_KERNEL);
> 
> Should we kick here? At least when ring is close to
> being full. Kick at half way full?
> Otherwise it's unlikely ring will
> ever be cleaned until we finish the scan.

Since add_one_sg() is called between spin_lock_irqsave(&zone->lock, flags)
and spin_unlock_irqrestore(&zone->lock, flags), it is not allowed to sleep.
And walk_free_mem_block() is not prepared to suspend and resume the walk.

By the way, specifying GFP_KERNEL here is confusing, even though the flag is
never actually used.
walk_free_mem_block() says:

  * The callback itself must not sleep or perform any operations which would
  * require any memory allocations directly (not even GFP_NOWAIT/GFP_ATOMIC)
  * or via any lock dependency. 

> 
>> +            /*
>> +             * This is expected to never fail, because there is always an
>> +             * entry available on the vq.
>> +             */
>> +            BUG_ON(err);
>> +    }
>> +}
