From: jamal <[EMAIL PROTECTED]>
Date: Fri, 06 Jul 2007 10:39:15 -0400
> If the issue is usability of listing 1024 netdevices, i can think of
> many ways to resolve it.
I would agree with this if there were a reason for it; it's a totally
unnecessary complication as far as I can see.
These virtual
On Fri, 2007-07-06 at 10:39 -0400, jamal wrote:
> The first thing that crossed my mind was "if you want to select a
> destination port based on a destination MAC you are talking about a
> switch/bridge". You bring up the issue of "a huge number of virtual NICs
> if you wanted arbitrary guests" which
jamal wrote:
If the issue is usability of listing 1024 netdevices, i can think of
many ways to resolve it.
One way to resolve the listing is a simple tag in the netdev
struct; then I could say "list netdevices for guest 0-10", etc.
This would be a useful feature, not only for virtualization
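jamal's tagging idea can be sketched as a toy model: give each device a per-guest tag and filter the listing on it, instead of dumping all 1024 devices. This is illustrative Python, not kernel code; the names (`NetDev`, `list_for_guests`) are hypothetical.

```python
# Toy model of jamal's suggestion: tag each netdev with a guest id so the
# listing can be filtered ("list netdevices for guest 0-10") instead of
# dumping all 1024 devices at once. Names are illustrative, not kernel API.

from dataclasses import dataclass

@dataclass
class NetDev:
    name: str
    guest: int  # hypothetical per-guest tag on the netdev struct

# 1024 virtual devices, four per guest.
devices = [NetDev(f"veth{i}", guest=i // 4) for i in range(1024)]

def list_for_guests(devs, lo, hi):
    """Return only the devices tagged with a guest id in [lo, hi]."""
    return [d.name for d in devs if lo <= d.guest <= hi]

print(len(list_for_guests(devices, 0, 10)))  # 44 devices instead of 1024
```

For what it's worth, mainline later grew a similar knob: netdevs carry a numeric group attribute, settable and filterable with `ip link set dev X group N` / `ip link show group N`.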
On Fri, 2007-06-07 at 17:32 +1000, Rusty Russell wrote:
[..some good stuff deleted here ..]
> Hope that adds something,
It does - thanks.
I think I was letting my experience pollute my thinking earlier when
Dave posted. The copy-avoidance requirement is clear to me [1].
I had another issue wh
On Tue, 2007-07-03 at 22:20 -0400, jamal wrote:
> On Tue, 2007-03-07 at 14:24 -0700, David Miller wrote:
> [.. some useful stuff here deleted ..]
>
> > That's why you have to copy into a purpose-built set of memory
> > that is composed of pages that _ONLY_ contain TX packet buffers
> > and nothing else.
On Tue, 2007-03-07 at 14:24 -0700, David Miller wrote:
[.. some useful stuff here deleted ..]
> That's why you have to copy into a purpose-built set of memory
> that is composed of pages that _ONLY_ contain TX packet buffers
> and nothing else.
>
> The cost of going through the switch is too high
From: jamal <[EMAIL PROTECTED]>
Date: Tue, 03 Jul 2007 08:42:33 -0400
> (likely not in the case of hypervisor based virtualization like Xen)
> just have their skbs cloned when crossing domains, is that not the
> case?[1]
> Assuming they copy, the balance that needs to be struck now is
> between:
On Sat, 2007-30-06 at 13:33 -0700, David Miller wrote:
> It's like twice as fast, since the switch doesn't have to copy
> the packet in, switch it, then the destination guest copies it
> into its address space.
>
> There is approximately one copy for each hop you go over through these
> virtual
From: jamal <[EMAIL PROTECTED]>
Date: Sat, 30 Jun 2007 10:52:44 -0400
> On Fri, 2007-29-06 at 21:35 -0700, David Miller wrote:
>
> > Awesome, but let's concentrate on the client since I can actually
> > implement and test anything we come up with :-)
>
> Ok, you need to clear one premise for me
On Fri, 2007-29-06 at 21:35 -0700, David Miller wrote:
> Awesome, but let's concentrate on the client since I can actually
> implement and test anything we come up with :-)
Ok, you need to clear one premise for me then ;->
You said the model is for the guest/client to have a port to the
host
> "DM" == David Miller <[EMAIL PROTECTED]> writes:
DM> And some people still use hubs, believe it or not.
Hubs are 100Mbps at most. You could of course make a flooding Gbps
switch, but it would be rather silly. If you care about multicast
performance, you get a switch with IGMP snooping.
/B
From: jamal <[EMAIL PROTECTED]>
Date: Fri, 29 Jun 2007 21:30:53 -0400
> On Fri, 2007-29-06 at 14:31 -0700, David Miller wrote:
> > Maybe for the control node switch, yes, but not for the guest network
> > devices.
>
> And that is precisely what I was talking about - and I am sure that's how
> the
On Fri, 2007-29-06 at 14:31 -0700, David Miller wrote:
> This conversation begins to go into a pointless direction already, as
> I feared it would.
>
> Nobody is going to configure bridges, classification, tc, and all of
> this other crap just for a simple virtualized guest networking device.
>
>
From: Ben Greear <[EMAIL PROTECTED]>
Date: Fri, 29 Jun 2007 08:33:06 -0700
> Patrick McHardy wrote:
> > Right, but the current bridging code always uses promiscuous mode
> > and it's nice to avoid that if possible. Looking at the code, it
> > should be easy to avoid though by disabling learning (and
This conversation begins to go into a pointless direction already, as
I feared it would.
Nobody is going to configure bridges, classification, tc, and all of
this other crap just for a simple virtualized guest networking device.
It's a confined and well defined case that doesn't need any of that
Patrick McHardy wrote:
Ben Greear wrote:
Could someone give a quick example of when I am wrong and promisc mode
would allow
a NIC to receive a significant number of packets not really destined for
it?
In a switched environment it won't have a big effect, I agree.
It might help avoid r
Ben Greear wrote:
> Patrick McHardy wrote:
>
>> Right, but the current bridging code always uses promiscuous mode
>> and it's nice to avoid that if possible. Looking at the code, it
>> should be easy to avoid though by disabling learning (and thus
>> promiscuous mode) and adding unicast filters for all static fdb entries.
Patrick McHardy wrote:
Right, but the current bridging code always uses promiscuous mode
and it's nice to avoid that if possible. Looking at the code, it
should be easy to avoid though by disabling learning (and thus
promiscuous mode) and adding unicast filters for all static fdb entries.
I am cur
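Patrick's point - that with learning disabled and every guest MAC installed as a static fdb entry, the uplink NIC can filter on a unicast list instead of going promiscuous - can be modeled as a simple delivery check. A toy model only, not the bridge code; the MACs and `nic_accepts` helper are made up.

```python
# Toy model of Patrick's point: if learning is off and every guest MAC is a
# static fdb entry, the physical NIC can filter on that unicast list instead
# of running in promiscuous mode. Illustrative only; not the bridge code.

static_fdb = {"52:54:00:00:00:01", "52:54:00:00:00:02"}  # known guest MACs

def nic_accepts(dst_mac, promisc):
    """Does the uplink NIC accept a frame addressed to dst_mac?"""
    return promisc or dst_mac in static_fdb

# Promiscuous bridge port: everything comes up the stack.
assert nic_accepts("aa:bb:cc:dd:ee:ff", promisc=True)

# Unicast-filtered port: only frames for known guests are accepted.
assert nic_accepts("52:54:00:00:00:01", promisc=False)
assert not nic_accepts("aa:bb:cc:dd:ee:ff", promisc=False)
```

Modern iproute2 exposes roughly these knobs per bridge port: `bridge link set dev eth0 learning off` and `bridge fdb add <mac> dev eth0 master static`.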
On Fri, 2007-29-06 at 15:08 +0200, Patrick McHardy wrote:
> jamal wrote:
> > On Fri, 2007-29-06 at 13:59 +0200, Patrick McHardy wrote:
> Right, but the current bridging code always uses promiscuous mode
> and it's nice to avoid that if possible.
> Looking at the code, it
> should be easy to avoid though
jamal wrote:
> On Fri, 2007-29-06 at 13:59 +0200, Patrick McHardy wrote:
>
>
>>The difference to a real bridge is that
>>all the addresses are completely known in advance, so it doesn't need
>>promiscuous mode for learning.
>
>
> You mean the per-virtual MAC addresses are known in advance, right
On Fri, 2007-29-06 at 13:59 +0200, Patrick McHardy wrote:
> I'm guessing that that wouldn't allow unicast filtering for
> the guests on the real device without hacking the bridge code for
> this special case.
For ingress (I guess you could say for egress as well): we can do it as
well today
jamal wrote:
> On Thu, 2007-28-06 at 21:20 -0700, David Miller wrote:
>
>>Each guest gets a unique MAC address. There is a queue per-port
>>that can fill up.
>>
>>What all the drivers like this do right now is stop the queue if
>>any of the per-port queues fills up, and that's why my sunvnet
>>driver
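The behaviour Dave describes - stop the whole device queue as soon as any one per-port queue fills - causes head-of-line blocking: one slow guest stalls traffic to every other guest. A sketch of that decision, with hypothetical names (`QUEUE_DEPTH`, `netif_stopped`):

```python
# Sketch of the behaviour Dave describes: the driver stops the single device
# queue as soon as ANY per-port queue fills, so one slow guest blocks traffic
# to every other guest (head-of-line blocking). Names are hypothetical.

QUEUE_DEPTH = 4
ports = {mac: [] for mac in ("guest-a", "guest-b")}

def netif_stopped(ports):
    """Single-queue model: the device is stopped when any port queue is full."""
    return any(len(q) >= QUEUE_DEPTH for q in ports.values())

# Fill only guest-a's queue...
for _ in range(QUEUE_DEPTH):
    ports["guest-a"].append("pkt")

# ...and the whole device stops, even though guest-b's queue is empty.
assert netif_stopped(ports)
assert len(ports["guest-b"]) == 0
```

With per-subqueue stop/wake (what the kernel's multiqueue support later provided via netif_stop_subqueue and friends), only guest-a's queue would stop and guest-b's traffic would keep flowing.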
I've changed the topic for you, friend - otherwise most people won't follow
(as you've said a few times yourself ;->).
On Thu, 2007-28-06 at 21:20 -0700, David Miller wrote:
> Now I get to pose a problem for everyone, prove to me how useful
> this new code is by showing me how it can be used to solve