Re: [PATCH 0/6] Kill off the virtio_net tx mitigation timer

2008-10-30 Thread Anthony Liguori

Mark McLoughlin wrote:

> Hey,
>
> The main patch in this series is 5/6 - it just kills off the
> virtio_net tx mitigation timer and does all the tx I/O in the
> I/O thread.
>
> Below are the results I got from benchmarking guest->host and
> host->guest on my machine.
>
> There's enough numbers there to make anyone blind, but basically
> there are results for current kvm-userspace.git, with the
> no-tx-timer patch applied and with the drop-the-mutex patch
> applied.
>
> Also, I've included results that show what difference some tuning
> makes with all the patches applied. The tuning basically just
> involves pinning the I/O thread and the netperf/netserver processes
> in both the host and guest to two physical CPUs which share an L2
> cache.
>
> (Yes, the 1k buffer size results are weird - we think there's a bug
> in recent kernels that causes us not to coalesce these small buffers
> into a large GSO packet before sending)
>
> Anyway, the results in all their glory:

>                                 |   guest->host tput         |   host->guest tput
>   netperf, 10x20s runs (Gb/s)   |   min/  mean/   max/stddev |   min/  mean/   max/stddev
>   ------------------------------+----------------------------+----------------------------
>   kvm-userspace.git, 1k         | 0.600/ 0.645/ 0.670/ 0.025 | 5.170/ 5.285/ 5.470/ 0.087
>   kvm-userspace.git, 16k        | 3.070/ 3.350/ 3.710/ 0.248 | 5.950/ 6.374/ 6.760/ 0.261
>   kvm-userspace.git, 65k        | 4.950/ 6.041/ 7.170/ 0.639 | 5.480/ 5.642/ 5.810/ 0.092
>
>   no tx timer, 1k               | 0.720/ 0.790/ 0.850/ 0.040 | 4.950/ 5.172/ 5.370/ 0.128
>   no tx timer, 16k              | 4.120/ 4.512/ 4.740/ 0.190 | 4.900/ 5.480/ 6.230/ 0.416
>   no tx timer, 65k              | 5.510/ 7.702/ 9.600/ 1.153 | 4.490/ 5.208/ 5.690/ 0.408


Okay, I don't see 3/6 yet, but does no tx timer mean just no tx timer or 
no tx timer + handling IO in the IO thread?


Regards,

Anthony Liguori



Re: [PATCH 0/6] Kill off the virtio_net tx mitigation timer

2008-10-31 Thread Mark McLoughlin
On Thu, 2008-10-30 at 14:20 -0500, Anthony Liguori wrote:
> Mark McLoughlin wrote:

> >                                 |   guest->host tput         |   host->guest tput
> >   netperf, 10x20s runs (Gb/s)   |   min/  mean/   max/stddev |   min/  mean/   max/stddev
> >   ------------------------------+----------------------------+----------------------------
> >   kvm-userspace.git, 1k         | 0.600/ 0.645/ 0.670/ 0.025 | 5.170/ 5.285/ 5.470/ 0.087
> >   kvm-userspace.git, 16k        | 3.070/ 3.350/ 3.710/ 0.248 | 5.950/ 6.374/ 6.760/ 0.261
> >   kvm-userspace.git, 65k        | 4.950/ 6.041/ 7.170/ 0.639 | 5.480/ 5.642/ 5.810/ 0.092
> >
> >   no tx timer, 1k               | 0.720/ 0.790/ 0.850/ 0.040 | 4.950/ 5.172/ 5.370/ 0.128
> >   no tx timer, 16k              | 4.120/ 4.512/ 4.740/ 0.190 | 4.900/ 5.480/ 6.230/ 0.416
> >   no tx timer, 65k              | 5.510/ 7.702/ 9.600/ 1.153 | 4.490/ 5.208/ 5.690/ 0.408
> 
> Okay, I don't see 3/6 yet, but does no tx timer mean just no tx timer or 
> no tx timer + handling IO in the IO thread?

The latter.

Removing the tx timer and doing all the tx I/O in the vcpu thread gives
poor results because you basically just exit the guest and wait for the
send to complete for each individual packet.

Cheers,
Mark.



Re: [PATCH 0/6] Kill off the virtio_net tx mitigation timer

2008-11-02 Thread Avi Kivity

Mark McLoughlin wrote:

> Hey,
>
> The main patch in this series is 5/6 - it just kills off the
> virtio_net tx mitigation timer and does all the tx I/O in the
> I/O thread.


What will it do to small packet, multi-flow loads (simulated by ping -f 
-l 30 $external)?


Where does the benefit come from?  Is the overhead of managing the timer 
too high, or does it fire too late and so we sleep?  If the latter, can 
we tune it dynamically?


For example, if the guest sees it is making a lot of progress without 
the host catching up (waiting on the tx timer), it can 
kick_I_really_mean_this_now(), to get the host to notice.



--
error compiling committee.c: too many arguments to function



Re: [PATCH 0/6] Kill off the virtio_net tx mitigation timer

2008-11-02 Thread Avi Kivity

Mark McLoughlin wrote:

> Hey,
>
> The main patch in this series is 5/6 - it just kills off the
> virtio_net tx mitigation timer and does all the tx I/O in the
> I/O thread.
>
> Below are the results I got from benchmarking guest->host and
> host->guest on my machine.
>
> There's enough numbers there to make anyone blind, but basically
> there are results for current kvm-userspace.git, with the
> no-tx-timer patch applied and with the drop-the-mutex patch
> applied.
>
> Also, I've included results that show what difference some tuning
> makes with all the patches applied. The tuning basically just
> involves pinning the I/O thread and the netperf/netserver processes
> in both the host and guest to two physical CPUs which share an L2
> cache.
>
> (Yes, the 1k buffer size results are weird - we think there's a bug
> in recent kernels that causes us not to coalesce these small buffers
> into a large GSO packet before sending)


Applied 1-2 while we debate the rest.

--
error compiling committee.c: too many arguments to function



Re: [PATCH 0/6] Kill off the virtio_net tx mitigation timer

2008-11-03 Thread Mark McLoughlin
On Sun, 2008-11-02 at 11:48 +0200, Avi Kivity wrote:
> Mark McLoughlin wrote:
> > Hey,
> >
> > The main patch in this series is 5/6 - it just kills off the
> > virtio_net tx mitigation timer and does all the tx I/O in the
> > I/O thread.
> >
> >   
> 
> What will it do to small packet, multi-flow loads (simulated by ping -f 
> -l 30 $external)?

It should improve the latency - the packets will be flushed more quickly
than the 150us timeout without blocking the guest.

I've a crappy external network setup locally atm, so the improvement for
guest->external gets lost in the noise there, but it does show up with
that workload and guest->host.

> Where does the benefit come from?

There are two things going on here, I think.

First is that the timer affects latency, removing the timeout helps
that.

Second is that currently when we fill up the ring we block the guest
vcpu and flush. Thus, while we're copying an entire ring full of packets
the guest isn't making progress. Doing the copying in the I/O thread
helps there.

Note - the only net I/O we currently do in the vcpu thread is when the
guest is saturating the link. Any other time, all the I/O is done in
the I/O thread by virtue of the timer.

> Is the overhead of managing the timer too high, or does it fire too
> late and so we sleep?  If the latter, can we tune it dynamically?
> 
> For example, if the guest sees it is making a lot of progress without 
> the host catching up (waiting on the tx timer), it can 
> kick_I_really_mean_this_now(), to get the host to notice.

It does that already - if the ring fills up the guest forces a kick
which causes the host to flush the ring in the vcpu thread.
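
To make that concrete, the tx path being described is roughly the
following (a sketch only - the type and helper names are illustrative
stand-ins, not the real qemu/kvm-userspace symbols):

  /* Illustrative sketch, not actual qemu code. */
  struct vnet;                                 /* virtio-net device state */
  int  tx_ring_full(struct vnet *n);           /* no free slots left in the tx ring */
  void flush_tx_ring(struct vnet *n);          /* copy out and send everything queued */
  void arm_tx_timer(struct vnet *n, long ns);  /* (re)arm the mitigation timer */
  void notify_io_thread(struct vnet *n);       /* wake the I/O thread, e.g. via its fd */

  #define TX_TIMER_NS 150000                   /* the 150us mitigation timer */

  /* Current scheme: a guest kick normally just arms the timer, so the
   * actual I/O happens later in the I/O thread; only a full ring forces
   * a synchronous flush in the vcpu thread, blocking the guest. */
  static void tx_kick_with_timer(struct vnet *n)
  {
      if (tx_ring_full(n))
          flush_tx_ring(n);
      else
          arm_tx_timer(n, TX_TIMER_NS);
  }

  /* With patch 5/6: no timer - every kick hands the work to the I/O
   * thread, so the vcpu never blocks on the packet copies. */
  static void tx_kick_no_timer(struct vnet *n)
  {
      notify_io_thread(n);
  }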

Cheers,
Mark.



Re: [PATCH 0/6] Kill off the virtio_net tx mitigation timer

2008-11-03 Thread Mark McLoughlin
On Mon, 2008-11-03 at 14:40 +0200, Avi Kivity wrote:
> Mark McLoughlin wrote:
> > On Sun, 2008-11-02 at 11:48 +0200, Avi Kivity wrote:
> >   
> >> Mark McLoughlin wrote:
> >>> The main patch in this series is 5/6 - it just kills off the
> >>> virtio_net tx mitigation timer and does all the tx I/O in the
> >>> I/O thread.
> >>>
> >>>   
> >>>   
> >> What will it do to small packet, multi-flow loads (simulated by ping -f 
> >> -l 30 $external)?
> >> 
> >
> > It should improve the latency - the packets will be flushed more quickly
> > than the 150us timeout without blocking the guest.
> >
> >   
> 
> But it will increase overhead, since suddenly we aren't queueing 
> anymore.  One vmexit per small packet.

Yes in theory, but the packet copies are acting to mitigate exits since
we don't re-enable notifications again until we're sure the ring is
empty.

With copyless, though, we'd have an unacceptable vmexit rate.
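
(The mitigation referred to here is the usual disable/drain/re-enable
pattern on the host side - sketched below with illustrative names, not
the actual code:)

  struct vq;                              /* a virtqueue */
  int  vq_pop_and_send(struct vq *vq);    /* send one packet, 0 if ring empty */
  void vq_disable_notify(struct vq *vq);  /* guest kicks stop causing exits */
  int  vq_enable_notify(struct vq *vq);   /* re-enable; 1 if more work arrived */

  static void drain_tx(struct vq *vq)
  {
      vq_disable_notify(vq);
      do {
          /* Copy out everything queued so far; anything the guest adds
           * while we're copying costs no extra vmexit. */
          while (vq_pop_and_send(vq))
              ;
          /* Only re-enable notifications once the ring looks empty; if
           * the guest raced us and queued more, loop again instead of
           * taking another exit. */
      } while (vq_enable_notify(vq));
  }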

> >> Where does the benefit come from?
> >> 
> >
> > There are two things going on here, I think.
> >
> > First is that the timer affects latency, removing the timeout helps
> > that.
> >   
> 
> If the timer affects latency, then something is very wrong.  We're 
> lacking an adjustable window.
> 
> The way I see it, the notification window should be adjusted according 
> to the current workload.  If the link is idle, the window should be one 
> packet -- notify as soon as something is queued.  As the workload 
> increases, the window increases to (safety_factor * allowable_latency / 
> packet_rate).  The timer is set to allowable_latency to catch changes in 
> workload.
> 
> For example:
> 
> - allowable_latency 1ms (implies 1K vmexits/sec desired)
> - current packet_rate 20K packets/sec
> - safety_factor 0.8
> 
> So we request notifications every 0.8 * 20K * 1ms = 16 packets, and set
> the timer to 1ms.  Usually we get a notification every 16 packets, just 
> before timer expiration.  If the workload increases, we get 
> notifications sooner, so we increase the window.  If the workload drops, 
> the timer fires and we decrease the window.
> 
> The timer should never fire on an all-out benchmark, or in a ping test.

Yeah, I do like the sound of this.

However, since it requires a new guest feature and I don't expect it'll
improve the situation over the proposed patch until we have copyless
transmit, I think we should do this as part of the copyless effort.

One thing I'd worry about with this scheme is all-out receive - e.g. any
delay in returning a TCP ACK to the sending side might cause us to hit
the TCP window size.

> > Second is that currently when we fill up the ring we block the guest
> > vcpu and flush. Thus, while we're copying an entire ring full of packets
> > the guest isn't making progress. Doing the copying in the I/O thread
> > helps there.
> >   
> 
> We're hurting our cache, and this won't work well with many nics.  At 
> the very least this should be done in a dedicated thread.

A thread per nic is doable, but it'd be especially tricky on the receive
side without more "short-cut the one producer, one consumer case" work.

Cheers,
Mark.



Re: [PATCH 0/6] Kill off the virtio_net tx mitigation timer

2008-11-03 Thread Avi Kivity

Mark McLoughlin wrote:
>> But it will increase overhead, since suddenly we aren't queueing
>> anymore.  One vmexit per small packet.
>
> Yes in theory, but the packet copies are acting to mitigate exits since
> we don't re-enable notifications again until we're sure the ring is
> empty.

You mean, the guest and the copy proceed in parallel, and while they do,
exits are disabled?

> With copyless, though, we'd have an unacceptable vmexit rate.

Right.

>> If the timer affects latency, then something is very wrong.  We're
>> lacking an adjustable window.
>>
>> The way I see it, the notification window should be adjusted according
>> to the current workload.  If the link is idle, the window should be one
>> packet -- notify as soon as something is queued.  As the workload
>> increases, the window increases to (safety_factor * allowable_latency *
>> packet_rate).  The timer is set to allowable_latency to catch changes in
>> workload.
>>
>> For example:
>>
>> - allowable_latency 1ms (implies 1K vmexits/sec desired)
>> - current packet_rate 20K packets/sec
>> - safety_factor 0.8
>>
>> So we request notifications every 0.8 * 20K * 1ms = 16 packets, and set
>> the timer to 1ms.  Usually we get a notification every 16 packets, just
>> before timer expiration.  If the workload increases, we get
>> notifications sooner, so we increase the window.  If the workload drops,
>> the timer fires and we decrease the window.
>>
>> The timer should never fire on an all-out benchmark, or in a ping test.
>
> Yeah, I do like the sound of this.
>
> However, since it requires a new guest feature and I don't expect it'll
> improve the situation over the proposed patch until we have copyless
> transmit, I think we should do this as part of the copyless effort.

Hopefully copyless and this can be done in parallel.  I think they have
value independently.

> One thing I'd worry about with this scheme is all-out receive - e.g. any
> delay in returning a TCP ACK to the sending side might cause us to hit
> the TCP window size.

Consider a real NIC, which also has latency for ACKs that is determined
by the queue length.  The proposal doesn't change that, except
momentarily when transitioning from high throughput to low throughput.

In any case, latency is never more than allowable_latency (not including
time spent in the guest network stack queues, but we aren't responsible
for that).

(one day we can add a queue for acks and other high priority stuff, but
we have enough on our hands now)

>> We're hurting our cache, and this won't work well with many nics.  At
>> the very least this should be done in a dedicated thread.
>
> A thread per nic is doable, but it'd be especially tricky on the receive
> side without more "short-cut the one producer, one consumer case" work.

We can start with transmit.  I'm somewhat worried about further
divergence from qemu mainline (just completed a merge...).


--
error compiling committee.c: too many arguments to function



Re: [PATCH 0/6] Kill off the virtio_net tx mitigation timer

2008-11-03 Thread Avi Kivity

Mark McLoughlin wrote:

> On Sun, 2008-11-02 at 11:48 +0200, Avi Kivity wrote:
>> Mark McLoughlin wrote:
>>> Hey,
>>>
>>> The main patch in this series is 5/6 - it just kills off the
>>> virtio_net tx mitigation timer and does all the tx I/O in the
>>> I/O thread.
>>
>> What will it do to small packet, multi-flow loads (simulated by ping -f
>> -l 30 $external)?
>
> It should improve the latency - the packets will be flushed more quickly
> than the 150us timeout without blocking the guest.

But it will increase overhead, since suddenly we aren't queueing
anymore.  One vmexit per small packet.

>> Where does the benefit come from?
>
> There are two things going on here, I think.
>
> First is that the timer affects latency, removing the timeout helps
> that.


If the timer affects latency, then something is very wrong.  We're 
lacking an adjustable window.


The way I see it, the notification window should be adjusted according 
to the current workload.  If the link is idle, the window should be one 
packet -- notify as soon as something is queued.  As the workload 
increases, the window increases to (safety_factor * allowable_latency *
packet_rate).  The timer is set to allowable_latency to catch changes in
workload.


For example:

- allowable_latency 1ms (implies 1K vmexits/sec desired)
- current packet_rate 20K packets/sec
- safety_factor 0.8

So we request notifications every 0.8 * 20K * 1ms = 16 packets, and set
the timer to 1ms.  Usually we get a notification every 16 packets, just 
before timer expiration.  If the workload increases, we get 
notifications sooner, so we increase the window.  If the workload drops, 
the timer fires and we decrease the window.


The timer should never fire on an all-out benchmark, or in a ping test.
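
In code, the window calculation would look something like this (an
illustrative sketch only - the struct, the names and the smoothing are
not from the posted patches):

  struct tx_window {
      double safety_factor;      /* e.g. 0.8 */
      double allowable_latency;  /* e.g. 0.001 (1ms), in seconds */
      double packet_rate;        /* smoothed estimate, packets/sec */
  };

  /* How many packets the guest may queue before it must kick;
   * 0.8 * 1ms * 20K/sec gives the 16 packets in the example. */
  static unsigned int notify_window(const struct tx_window *w)
  {
      double pkts = w->safety_factor * w->allowable_latency * w->packet_rate;
      if (pkts < 1.0)
          return 1;
      return (unsigned int)pkts;
  }

  /* On each flush, update the rate estimate from the number of packets
   * flushed and the time since the last flush; notify_window() then
   * grows while kicks keep arriving before the allowable_latency timer
   * and shrinks after the timer has had to fire. */
  static void update_packet_rate(struct tx_window *w, unsigned int n, double dt)
  {
      w->packet_rate = 0.75 * w->packet_rate + 0.25 * (n / dt);
  }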


> Second is that currently when we fill up the ring we block the guest
> vcpu and flush. Thus, while we're copying an entire ring full of packets
> the guest isn't making progress. Doing the copying in the I/O thread
> helps there.

We're hurting our cache, and this won't work well with many nics.  At
the very least this should be done in a dedicated thread.  It's also
going to damage latency.

The only real fix is to avoid the copy altogether.

> Note - the only net I/O we currently do in the vcpu thread is when the
> guest is saturating the link. Any other time, all the I/O is done in
> the I/O thread by virtue of the timer.

This is fundamental brokenness, as mentioned above, in my
non-networking-expert opinion.

>> Is the overhead of managing the timer too high, or does it fire too
>> late and so we sleep?  If the latter, can we tune it dynamically?
>>
>> For example, if the guest sees it is making a lot of progress without
>> the host catching up (waiting on the tx timer), it can
>> kick_I_really_mean_this_now(), to get the host to notice.
>
> It does that already - if the ring fills up the guest forces a kick
> which causes the host to flush the ring in the vcpu thread.

Should happen some time before the ring fills up.  Especially if we make
the flushing async by offloading to some other thread.


--
error compiling committee.c: too many arguments to function



Re: [PATCH 0/6] Kill off the virtio_net tx mitigation timer

2008-11-06 Thread Mark McLoughlin
Hey,
So, I went off and spent some time gathering more data on this stuff
and putting it together in a more consumable fashion.

Here are some graphs showing the effect some of these changes have on
throughput, cpu utilization and vmexit rate:

  http://markmc.fedorapeople.org/virtio-netperf/2008-11-06/

The results are a little surprising, and I'm not sure I've fully
digested them yet, but some conclusions:

  1) Disabling notifications from the guest for longer helps; you see
     an increase in cpu utilization and vmexit rate, but that can be
     accounted for by the extra data we're transferring

  2) Flushing (when the ring is full) in the I/O thread doesn't seem to
     help anything; strangely, it has a detrimental effect on
     host->guest traffic where I wouldn't expect us to hit this case at
     all.

     I suspect we may not actually be hitting the full ring condition
     in these tests at all.

  3) The catch-more-io thing helps a little, especially host->guest,
     without any real detrimental impact.

  4) Removing the tx timer doesn't have a huge effect on guest->host,
     except for 32 byte buffers where we see a huge increase in vmexits
     and a drop in throughput. Bizarrely, we don't see this effect with
     64 byte buffers.

     However, it does have a pretty significant impact on host->guest,
     which makes sense since in that case we'll just have a steady
     stream of TCP ACK packets, so if small guest->host packets are
     affected badly, so are the ACK packets.

  5) The drop-mutex patch is a nice win overall, except for a huge
     increase in vmexits for sub-4k guest->host packets. Strange.

Cheers,
Mark.



Re: [PATCH 0/6] Kill off the virtio_net tx mitigation timer

2008-11-06 Thread Avi Kivity

Mark McLoughlin wrote:

> Hey,
> So, I went off and spent some time gathering more data on this stuff
> and putting it together in a more consumable fashion.
>
> Here are some graphs showing the effect some of these changes have on
> throughput, cpu utilization and vmexit rate:
>
>   http://markmc.fedorapeople.org/virtio-netperf/2008-11-06/

This is very helpful.

> The results are a little surprising, and I'm not sure I've fully
> digested them yet, but some conclusions:
>
>   1) Disabling notifications from the guest for longer helps; you see
>      an increase in cpu utilization and vmexit rate, but that can be
>      accounted for by the extra data we're transferring

Graphing cpu/bandwidth (cycles/bit) will show that nicely.

>   2) Flushing (when the ring is full) in the I/O thread doesn't seem to
>      help anything; strangely, it has a detrimental effect on
>      host->guest traffic where I wouldn't expect us to hit this case at
>      all.
>
>      I suspect we may not actually be hitting the full ring condition
>      in these tests at all.

That's good; ring full == stall, especially with smp guests.

>   4) Removing the tx timer doesn't have a huge effect on guest->host,
>      except for 32 byte buffers where we see a huge increase in vmexits
>      and a drop in throughput. Bizarrely, we don't see this effect with
>      64 byte buffers.

Weird.  Cacheline size effects?  The host must copy twice the number of
cachelines for the same throughput, when moving between 32 and 64.

>      However, it does have a pretty significant impact on host->guest,
>      which makes sense since in that case we'll just have a steady
>      stream of TCP ACK packets, so if small guest->host packets are
>      affected badly, so are the ACK packets.

no-tx-timer is good for two workloads: streaming gso packets, where the
packet is so large the vmexit count is low anyway, and small, latency
sensitive packets, where you need the vmexits.  I'm worried about the
workloads in between, which is why I'm pushing for the dynamic window.

>   5) The drop-mutex patch is a nice win overall, except for a huge
>      increase in vmexits for sub-4k guest->host packets. Strange.

What types of vmexits are these?  Virtio pio or mmu?  And what's the
test length (interested in vmexits/sec and vmexits/bit)?


Maybe the allocator changes its behavior and we're faulting in pages.

--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.



Re: [PATCH 0/6] Kill off the virtio_net tx mitigation timer

2008-11-06 Thread Mark McLoughlin
Hi Avi,

Just thinking about your variable window suggestion ...

On Mon, 2008-11-03 at 14:40 +0200, Avi Kivity wrote:
> Mark McLoughlin wrote:
> > On Sun, 2008-11-02 at 11:48 +0200, Avi Kivity wrote:

> >> Where does the benefit come from?
> >> 
> >
> > There are two things going on here, I think.
> >
> > First is that the timer affects latency, removing the timeout helps
> > that.
> >   
> 
> If the timer affects latency, then something is very wrong.  We're 
> lacking an adjustable window.
> 
> The way I see it, the notification window should be adjusted according 
> to the current workload.  If the link is idle, the window should be one 
> packet -- notify as soon as something is queued.  As the workload 
> increases, the window increases to (safety_factor * allowable_latency *
> packet_rate).  The timer is set to allowable_latency to catch changes in
> workload.
> 
> For example:
> 
> - allowable_latency 1ms (implies 1K vmexits/sec desired)
> - current packet_rate 20K packets/sec
> - safety_factor 0.8
> 
> So we request notifications every 0.8 * 20K * 1ms = 16 packets, and set
> the timer to 1ms.  Usually we get a notification every 16 packets, just 
> before timer expiration.  If the workload increases, we get 
> notifications sooner, so we increase the window.  If the workload drops, 
> the timer fires and we decrease the window.
> 
> The timer should never fire on an all-out benchmark, or in a ping test.

The way I see this (continuing with your example figures) playing out
is:

  - If we have a packet rate of <2.5K packets/sec, we essentially have
    zero added latency - each packet causes a vmexit and the packet is
    dispatched immediately

  - As soon as we go above 2.5k we add, on average, an additional
    ~400us delay to each packet

  - This is almost identical to our current scheme with an 800us timer,
    except that flushes are typically triggered by a vmexit instead of
    the timer expiring

I don't think this is the effect you're looking for? Am I missing
something?
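
(For reference, the arithmetic behind the 2.5K and ~400us figures above,
using the example parameters - this little program is purely
illustrative:)

  #include <stdio.h>

  int main(void)
  {
      const double safety_factor = 0.8;
      const double allowable_latency = 0.001;   /* 1ms */

      for (double rate = 1000; rate <= 32000; rate *= 2) {
          int window = (int)(safety_factor * allowable_latency * rate);
          if (window < 1)
              window = 1;
          /* Packets 1..window-1 sit in the ring until the window-th
           * packet triggers the kick; on average that is about half a
           * window of packet intervals. */
          double added = (window - 1) / 2.0 / rate;
          printf("%6.0f pkt/s: window %2d pkts, avg added delay %5.1f us\n",
                 rate, window, added * 1e6);
      }
      /* The window first reaches 2 packets at 2 / (0.8 * 1ms) = 2.5K
       * pkt/s, and for large windows the added delay tends towards
       * 0.8 * 1ms / 2 = 400us - which is where the figures above come
       * from. */
      return 0;
  }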

Cheers,
Mark.



Re: [PATCH 0/6] Kill off the virtio_net tx mitigation timer

2008-11-09 Thread Avi Kivity

Mark McLoughlin wrote:

> The way I see this (continuing with your example figures) playing out
> is:
>
>   - If we have a packet rate of <2.5K packets/sec, we essentially have
>     zero added latency - each packet causes a vmexit and the packet is
>     dispatched immediately
>
>   - As soon as we go above 2.5k we add, on average, an additional
>     ~400us delay to each packet
>
>   - This is almost identical to our current scheme with an 800us timer,
>     except that flushes are typically triggered by a vmexit instead of
>     the timer expiring
>
> I don't think this is the effect you're looking for? Am I missing
> something?


No.  While it's what my description implies, it's not what I want.

Let's step back for a bit.  What do we want?

Let's assume the virtio queue is connected to a real queue.  The 
guest->host scenario is easier, and less important.


So:

1. we never want to have a situation where the host queue is empty, but 
the guest queue has unkicked entries.  That will drop us below line rate 
and add latencies.
2. we want to avoid situations where the host queue is non-empty, and we 
kick the guest queue.  That won't improve latency, and will increase cpu
utilization
 - if the host queue is close to depletion, then we _do_ want the kick, 
to avoid violating the first requirement (which is more important)


Do these seem sane as high-level goals?  If so we can think about how
to implement it.
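
As a strawman only - it assumes queue-depth feedback from the host that
doesn't exist today - the two goals might translate into a guest-side
kick policy along these lines:

  struct txq_state {
      unsigned int guest_unkicked;   /* descriptors queued but not yet kicked */
      unsigned int host_queue_len;   /* packets the host still has in flight */
      unsigned int host_low_water;   /* "close to depletion" threshold */
  };

  static int should_kick(const struct txq_state *s)
  {
      if (s->guest_unkicked == 0)
          return 0;                  /* nothing to tell the host about */
      /* Goal 1: never leave the host queue empty (or nearly so) while
       * we are sitting on unkicked entries. */
      if (s->host_queue_len <= s->host_low_water)
          return 1;
      /* Goal 2: while the host queue is comfortably non-empty, a kick
       * would only add a vmexit without improving latency. */
      return 0;
  }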


--
error compiling committee.c: too many arguments to function
