Dor Laor wrote:
> Anthony Liguori wrote:
>> This patch implements a very naive virtio network device backend in
>> QEMU.  Even with this simple implementation, it gets about 3x the
>> throughput of rtl8139.  Of course, there's a whole lot of room for
>> optimization to eliminate the unnecessary copying and to support more
>> advanced features.
>>
>> To use a virtio network device, simply specify '-net nic,model=virtio'.
>>
>> Signed-off-by: Anthony Liguori <[EMAIL PROTECTED]>
>>   
> [snip]
>
>> +/* TX */
>> +static void virtio_net_handle_tx(VirtIODevice *vdev, VirtQueue *vq)
>> +{
>> +    VirtIONet *n = to_virtio_net(vdev);
>> +    VirtQueueElement elem;
>> +
>> +    while (virtqueue_pop(vq, &elem)) {
>> +        int i;
>> +        size_t len = 0;
>> +
>> +        /* ignore the header for now */
>> +        for (i = 1; i < elem.out_num; i++) {
>> +            qemu_send_packet(n->vc, elem.out_sg[i].iov_base,
>> +                             elem.out_sg[i].iov_len);
>> +            len += elem.out_sg[i].iov_len;
>> +        }
>> +
>> +        virtqueue_push(vq, &elem, sizeof(struct virtio_net_hdr) + len);
>> +        virtio_notify(&n->vdev, vq);
>> +    }
>> +}
>>   
> Please get the notify out of the while loop (see the sketch below).
>
> Regardless of the above, at the moment you copy the received packets
> into the virtio sg's.
> I did an incomplete job of adding vectored reads to qemu.
> What's your opinion on it?  Maybe use qemu's pending dma functions?
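
A minimal sketch of the change being asked for here, reusing only the calls
from the patch quoted above: complete each element inside the loop, but
raise the guest interrupt once after the queue has been drained, so a burst
of packets costs a single notification.

/* TX, with the notify hoisted out of the loop (sketch of the suggested
 * change -- types and calls as in the patch above) */
static void virtio_net_handle_tx(VirtIODevice *vdev, VirtQueue *vq)
{
    VirtIONet *n = to_virtio_net(vdev);
    VirtQueueElement elem;
    int did_work = 0;

    while (virtqueue_pop(vq, &elem)) {
        int i;
        size_t len = 0;

        /* ignore the header for now */
        for (i = 1; i < elem.out_num; i++) {
            qemu_send_packet(n->vc, elem.out_sg[i].iov_base,
                             elem.out_sg[i].iov_len);
            len += elem.out_sg[i].iov_len;
        }

        virtqueue_push(vq, &elem, sizeof(struct virtio_net_hdr) + len);
        did_work = 1;
    }

    /* one interrupt for the whole batch instead of one per packet */
    if (did_work)
        virtio_notify(&n->vdev, vq);
}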

I did a quick hack where I had direct access to the tun's fd so that I
could readv/writev directly.  It didn't seem to help all that much, but
I'm not too surprised by that: it was still synchronous.
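
Roughly what that hack looks like (a sketch, not the patch as posted):
since the out_sg entries already carry iov_base/iov_len, the element can be
handed to writev() on the tap fd as-is.  tap_fd here is hypothetical; the
stock vlan-client API doesn't expose the descriptor, which is why it took a
hack, and the call is still synchronous.

#include <sys/uio.h>

/* Sketch: send one TX element straight to the tun/tap fd, skipping the
 * copy done by qemu_send_packet().  Assumes out_sg[] is an array of
 * struct iovec, as the field names in the handler above suggest; tap_fd
 * is a placeholder for the descriptor the hack reached in to get. */
static void virtio_net_tx_writev(int tap_fd, VirtQueueElement *elem)
{
    /* out_sg[0] is the virtio_net_hdr, skipped as in the handler above */
    writev(tap_fd, &elem->out_sg[1], elem->out_num - 1);
}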

The dma API is definitely a step in the right direction.  The way I'm
approaching optimization is to first hack things up so that my
particular case works, and then figure out how to fit it into the rest
of QEMU.

For instance, I'm playing around right now with having the block driver
open a file directly and using the most recent linux-aio routines.
linux-aio apparently now supports both fd notification (through eventfd)
and asynchronous fdsync (for barriers).  I think we should be able to
get very good performance with this.  Once I get promising performance
results, I'll figure out how to work it into the existing block API.
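
For reference, a sketch of what that linux-aio path looks like with libaio,
assuming a kernel and libaio new enough to have io_set_eventfd() and
io_prep_fdsync(); disk_fd, buf, len, and off are placeholders, not names
from the qemu patch.  Completions bump an eventfd that can sit in qemu's
select() loop, and the fdsync is submitted asynchronously like any other
request.

#include <libaio.h>

/* Sketch: one read plus an asynchronous fdsync (the barrier), with
 * completion notification routed to an eventfd. */
static int submit_read_then_barrier(io_context_t ctx, int efd,
                                    int disk_fd, void *buf,
                                    size_t len, long long off)
{
    struct iocb rd, barrier;
    struct iocb *iocbs[2] = { &rd, &barrier };

    io_prep_pread(&rd, disk_fd, buf, len, off);
    io_set_eventfd(&rd, efd);          /* completion bumps the eventfd */

    io_prep_fdsync(&barrier, disk_fd); /* asynchronous fdsync */
    io_set_eventfd(&barrier, efd);

    return io_submit(ctx, 2, iocbs);
}

/* The main loop then select()s on efd, read()s the counter, and reaps
 * that many completions with io_getevents(). */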

Regards,

Anthony Liguori

> Regards & cheers,
> Dor.
>> +
>> +VirtIODevice *virtio_net_init(PCIBus *bus, uint16_t vendor, uint16_t device,
>> +                              NICInfo *nd, int devfn)
>> +{
>> +    VirtIONet *n;
>> +
>> +    n = (VirtIONet *)virtio_init_pci(bus, "virtio-net", vendor, device,
>> +                                     vendor, VIRTIO_ID_NET,
>> +                                     6, sizeof(VirtIONet));
>> +
>> +    n->vdev.update_config = virtio_net_update_config;
>> +    n->vdev.get_features = virtio_net_get_features;
>> +    n->rx_vq = virtio_add_queue(&n->vdev, virtio_net_handle_rx);
>> +    n->tx_vq = virtio_add_queue(&n->vdev, virtio_net_handle_tx);
>> +    n->can_receive = 0;
>> +    memcpy(n->mac, nd->macaddr, 6);
>> +    n->vc = qemu_new_vlan_client(nd->vlan, virtio_net_receive,
>> +                                 virtio_net_can_receive, n);
>> +
>> +    return &n->vdev;
>> +}
>> diff --git a/qemu/vl.h b/qemu/vl.h
>> index 7b5cc8d..4f26fbb 100644
>> --- a/qemu/vl.h
>> +++ b/qemu/vl.h
>> @@ -1401,6 +1401,9 @@ VirtIODevice *virtio_blk_init(PCIBus *bus, uint16_t vendor, uint16_t device,
>>  
>>  VirtIODevice *virtio_9p_init(PCIBus *bus, uint16_t vendor, uint16_t device);
>>  
>> +VirtIODevice *virtio_net_init(PCIBus *bus, uint16_t vendor, uint16_t device,
>> +                              NICInfo *nd, int devfn);
>> +
>>  /* buf = NULL means polling */
>>  typedef int ADBDeviceRequest(ADBDevice *d, uint8_t *buf_out,
>>                                const uint8_t *buf, int len);
>>
>>   
>

