Jason Wang wrote on 11/25/2011 08:51:57 AM:
>
> My description is not clear again :(
> I mean the same vhost thread:
>
> vhost thread #0 transmits packets of flow A on processor M
> ...
> vhost thread #0 moves to another processor N and starts to transmit packets
> of flow A
Thanks for clarifying. Yes
"Michael S. Tsirkin" wrote on 11/24/2011 09:44:31 PM:
> > As far as I can see, ixgbe binds queues to physical CPUs, so let's consider:
> >
> > vhost thread transmits packets of flow A on processor M
> > during packet transmission, the ixgbe driver programs the card to
> > deliver the packets of flow A to
jasowang wrote on 11/24/2011 06:30:52 PM:
>
> >> On Thu, Nov 24, 2011 at 01:47:14PM +0530, Krishna Kumar wrote:
> >>> It was reported that the macvtap device selects a
> >>> different vhost (when used with multiqueue feature)
> >>> for incoming packets of a single connection. Use
> >>> packet has
"Michael S. Tsirkin" wrote on 11/24/2011 03:29:03 PM:
> Subject: Re: [PATCH] macvtap: Fix macvtap_get_queue to use rxhash first
>
> On Thu, Nov 24, 2011 at 01:47:14PM +0530, Krishna Kumar wrote:
> > It was reported that the macvtap device selects a
> > different vhost (when used with multiqueue feature)
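As background for the fix named in the subject line: the idea is to derive the queue (and therefore the vhost thread) from the packet's flow hash before any per-CPU heuristic. A rough sketch of that ordering follows; it is not the posted patch, and the helper and field names (skb_get_rxhash(), vlan->taps[], vlan->numvtaps) are assumptions based on the macvtap code of that period.

/* Sketch only: flow hash first, so every packet of one connection keeps
 * hitting the same queue no matter which CPU the sender runs on. */
static struct macvtap_queue *sketch_get_queue(struct net_device *dev,
                                              struct sk_buff *skb)
{
        struct macvlan_dev *vlan = netdev_priv(dev);
        struct macvtap_queue *tap = NULL;
        unsigned int numvtaps = vlan->numvtaps;
        __u32 rxq;

        if (!numvtaps)
                return NULL;

        /* Stable per connection: hash of the flow, reduced to a queue. */
        rxq = skb_get_rxhash(skb);
        if (rxq)
                tap = rcu_dereference(vlan->taps[rxq % numvtaps]);

        /* Only without a usable hash fall back to another heuristic
         * (the real code also tries the recorded rx queue, etc.). */
        if (!tap)
                tap = rcu_dereference(vlan->taps[0]);

        return tap;
}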
jason wang wrote on 11/16/2011 11:40:45 AM:
Hi Jason,
> Have any thoughts in mind on how to solve the issue of flow handling?
So far nothing concrete.
> Maybe some performance numbers first would be better; they would let us know
> where we are. During the testing of my patchset, I found a big regression in
> small
Sasha Levin wrote on 11/14/2011 03:45:40 PM:
> > Why are both the bandwidth and latency performance dropping so
> > dramatically with multiple VQs?
>
> It looks like there's no hash sync between the host and guest, which makes
> the RX VQ change for every packet. This is my guess.
Yes, I confirmed this
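To make the "hash sync" point concrete: a connection only sticks to one VQ if host and guest reduce the same per-flow hash to a queue index in the same way. A minimal illustration, not taken from any patch and with a hypothetical helper name:

/* If both sides compute hash % nqueues over the same per-flow hash, the
 * chosen VQ is stable for a connection.  If either side hashes
 * differently, or rehashes per packet, the RX VQ hops around exactly as
 * described above. */
static inline unsigned int queue_for_flow(u32 flow_hash, unsigned int nqueues)
{
        return nqueues ? flow_hash % nqueues : 0;
}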
"Michael S. Tsirkin" wrote on 06/13/2011 07:05:13 PM:
> > I ran the latest patches with 1K I/O (guest->local host) and
> > the results are (60 sec run for each test case):
>
> Hi!
> Did you apply this one:
> [PATCHv2 RFC 4/4] Revert "virtio: make add_buf return capacity remaining"
> ?
>
> It turn
> Krishna Kumar2/India/IBM@IBMIN wrote on 06/13/2011 07:02:27 PM:
...
> With 16K, there was an improvement in SD, but
> higher sessions seem to slightly degrade BW/SD:
I meant to say "With 16K, there was an improvement in BW"
above. Again, the numbers are not very reproducible
"Michael S. Tsirkin" wrote on 06/07/2011 09:38:30 PM:
> > This is on top of the patches applied by Rusty.
> >
> > Warning: untested. Posting now to give people chance to
> > comment on the API.
>
> OK, this seems to have survived some testing so far,
> after I dropped patch 4 and fixed build for
"Michael S. Tsirkin" wrote on 06/02/2011 09:04:23 PM:
> > > Is this where the bug was?
> >
> > Return value in free_old_xmit() was wrong. I will re-do against the
> > mainline kernel.
> >
> > Thanks,
> >
> > - KK
>
> Just noting that I'm working on that patch as well, it might
> be more efficient
"Michael S. Tsirkin" wrote on 06/02/2011 08:13:46 PM:
> > Please review this patch to see if it looks reasonable:
>
> Hmm, since you decided to work on top of my patch,
> I'd appreciate split-up fixes.
OK (that also explains your next comment).
> > 1. Picked comments/code from MST's code and Ru
> OK, I have something very similar, but I still dislike the "screw the
> latency" part: this path is exactly what the IBM guys seem to hit. So I
> created two functions: one tries to free a constant number and another
> one up to capacity. I'll post that now.
Please review this patch to see if it
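For context on the "two functions" idea above, here is a rough sketch against the virtqueue API of that time (virtqueue_get_buf()); the capacity bookkeeping and the per-skb descriptor estimate are assumptions for illustration, not the patch that was actually posted.

/* Sketch only.  Variant 1: free at most @budget completed skbs, so the
 * xmit path does a bounded amount of work and latency stays predictable. */
static int free_old_xmit_bounded(struct virtqueue *svq, int budget)
{
        struct sk_buff *skb;
        unsigned int len;
        int freed = 0;

        while (freed < budget &&
               (skb = virtqueue_get_buf(svq, &len)) != NULL) {
                dev_kfree_skb_any(skb);
                freed++;
        }
        return freed;
}

/* Variant 2: keep reclaiming until the caller-tracked count of free
 * descriptors reaches @needed, or nothing more has completed.  How that
 * count is maintained is exactly what the add_buf return-value
 * discussion elsewhere in this thread is about. */
static bool free_old_xmit_to_capacity(struct virtqueue *svq,
                                      unsigned int *free_descs,
                                      unsigned int needed)
{
        struct sk_buff *skb;
        unsigned int len;

        while (*free_descs < needed) {
                skb = virtqueue_get_buf(svq, &len);
                if (!skb)
                        return false;   /* nothing left to reclaim */
                /* frags + linear data + virtio-net header (assumed) */
                *free_descs += skb_shinfo(skb)->nr_frags + 2;
                dev_kfree_skb_any(skb);
        }
        return true;
}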
Krishna Kumar2/India/IBM wrote on 05/26/2011 09:51:32 PM:
> > Could you please try TCP_RRs as well?
>
> Right. Here's the result for TCP_RR:
The actual transaction rate/second numbers are:
#       RR1     RR2
Shirley Ma wrote on 05/26/2011 09:12:22 PM:
> Could you please try TCP_RRs as well?
Right. Here's the result for TCP_RR:
#     RR%     SD%     CPU%
1     4.5    -31.4   -27.9
2     5.1     -9.7    -5.4
4     6
"Michael S. Tsirkin" wrote on 05/20/2011 04:40:07 AM:
> OK, here is the large patchset that implements the virtio spec update
> that I sent earlier (the spec itself needs a minor update, will send
> that out too next week, but I think we are on the same page here
> already). It supersedes the PUBLISH_USED_IDX patches
"Michael S. Tsirkin" wrote on 05/24/2011 04:59:39 PM:
> > > > Maybe Rusty means it is a simpler model to free the amount
> > > > of space that this xmit needs. We will still fail anyway
> > > > at some time but it is unlikely, since earlier iteration
> > > > freed up at least the space that it was
"Michael S. Tsirkin" wrote on 05/24/2011 02:42:55 PM:
> > > > To do this properly, we should really be using the actual number of sg
> > > > elements needed, but we'd have to do most of xmit_skb beforehand so we
> > > > know how many.
> > > >
> > > > Cheers,
> > > > Rusty.
> > >
> > > Maybe I'm confused
"Michael S. Tsirkin" wrote on 05/23/2011 04:49:00 PM:
> > To do this properly, we should really be using the actual number of sg
> > elements needed, but we'd have to do most of xmit_skb beforehand so we
> > know how many.
> >
> > Cheers,
> > Rusty.
>
> Maybe I'm confused here. The problem isn't
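On the "actual number of sg elements" point: the requirement could be estimated before building the sg list and then drive the reclaim/stop logic, for example feeding something like the free_old_xmit_to_capacity() sketch further up instead of a worst-case 2 + MAX_SKB_FRAGS. The +2 (linear data plus virtio-net header) below is an assumption, not taken from the thread.

/* Sketch: how many descriptors this skb will consume, computed before
 * the expensive part of xmit_skb. */
static unsigned int xmit_descs_needed(struct sk_buff *skb)
{
        /* one per page fragment, one for the linear data,
         * one for the virtio-net header */
        return skb_shinfo(skb)->nr_frags + 2;
}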
"Michael S. Tsirkin" wrote on 05/05/2011 02:20:18 AM:
> [PATCH 00/18] virtio and vhost-net performance enhancements
>
> OK, here's a large patchset that implements the virtio spec update that I
> sent earlier. It supersedes the PUBLISH_USED_IDX patches
> I sent out earlier.
>
> I know it's a lot
...eing an extra skb.
>
> On Mon, Jun 22, 2009 at 11:16:03AM +0530, Krishna Kumar2 wrote:
> >
> > I was curious about the "queueing it in the driver" part: why is this bad?
> > Do you anticipate any performance problems, or does it break QoS, or
> > something else
Hi Herbert,
> Herbert Xu wrote on 06/19/2009 10:06:13 AM:
>
> > We either remove the API, or fix it. I think fixing it is better, because
> > my driver will be simpler and it's obvious no one wants to rewrite 50
> > drivers and break several of them.
>
> My preference is obviously in the long term