Hi, Wes
I wrote something up to see if it can make the "DPDK-like" software discussion
clearer. For the sake of simplicity, I will call those platforms "DP", and refer
to the design I talked about as SRT (semi-RT).
When I said "installing a vswitch inside a VM", I meant something like this: the
hypervisor is a Microsoft system, which runs a VM running Linux. Each level runs
other applications, or even other vswitches, on the same CPU cores. We then
start a process inside this Linux and make it everything a hardware box would
be, for example in per-packet latency. I am sorry if I haven't made myself
clear.
So how shall we achieve this latency goal (10 us per packet), which is far
shorter than a normal scheduler tick? Inherited scheduling was not mature enough
to satisfy this requirement last time I checked, and it is unlikely to become
so. And we are not supposed to do CPU shielding, which makes no sense in the
VM context.
Every RT thread looks somewhat aggressive from the scheduler's point of view,
which makes them rare and expensive. But problems like this one can be solved
very easily if we introduce a semi-RT middleware.
As VMs grow bigger, RT applications share the same problem as an SMP OS itself.
In an SMP OS, sharing data requires spinlocks, which cause trouble when used in
user space: if the lock holder is preempted, every waiter spins uselessly until
it runs again. The current solutions are not good and will face more problems in
the future. So now these two (DP and the SMP OS) face the same problem. What
then, make every VM process believe it is a dataplane? Of course not. No
well-designed spinlock is supposed to be held for 10 us, so it is unfair to
treat the whole process as an RT application. The right way, I believe, is to
make short-lived RT contexts available to all user-space applications, no
matter at which level of VM nesting they run.
Speaking of v-wires: of course they can be implemented with encapsulation. But
trust me, no one is going to consider using them if we do so. A simple
cross-process data transfer already costs far more than a packet-forwarding
step is supposed to, and here we are talking about two processes across VMs. A
v-wire should have negligible cost, no matter where it runs from and to, just
as a physical wire does, so that people will use it freely. Building it on a
semi-RT middleware is the most reasonable way, if it is not the only way.
Assuming we have done the things above, and every running environment a vswitch
can ask for is available in every common process, we can do RT packet
forwarding everywhere. Only a vswitch architect can tell what flexibility that
would bring to virtual networking.
All DP platforms, including the ones you mentioned, and more boldly those that
fit themselves into the host kernel, seem to treat the "dataplane" as something
special. That is the difference between them and SRT, and I think that special
treatment will not be necessary in the future. So they build ships for
dataplanes, and some of them are very good ships. But I think the vswitch
should evolve into a mermaid and abandon ship. DP and SRT start out in
different directions, so they do different things.
On Friday, June 13, 2014 3:00 AM, Wes Felter <[email protected]> wrote:
On 6/12/14, 3:12 AM, Shen Li wrote:
> I worked on OS software, and know little about networking
> virtualization. I have a question. Assuming the following technologies
> come true, then we can let users install their own choice of
> vswitches(or such kind of "devices") in their own VM, without risk to
> influence CSP's hypervisor software; and we can allow users to build
> their own network construction overlaying upon the existing net, give
> them more flexibilities and control.
Most of this has already been done in the NFV and GENI worlds. Both
Snabb and DPDK are implementing low-overhead paths into VMs. Virtual
wires can be implemented with encapsulation.
--
Wes Felter
IBM Research - Austin
_______________________________________________
discuss mailing list
[email protected]
http://openvswitch.org/mailman/listinfo/discuss