Re: [Pv-drivers] RFC: Network Plugin Architecture (NPA) for vmxnet3
On Wed, Jul 14, 2010 at 01:42:59PM -0700, Shreyas Bhatewara wrote:

> +/* vmkernel and device backend shared definitions */
> +
> +#define VMXNET3_PLUGIN_NAME_LEN 256
> +#define VMXNET3_PLUGIN_REPOSITORY "/usr/lib/vmware/npa_plugins"

Why would the kernel care about this file path?  And since when do we
hard-code file paths in the kernel in the first place (yeah, in some
places we do, but not like this...)

> +#define NPA_MEMIO_REGIONS_u64X6
> +
> +typedef u32 VF_ID;
> +
> +struct Vmxnet3_VFInfo {
> +	char pluginName[VMXNET3_PLUGIN_NAME_LEN];

This is never used.

> +	u32 deviceInfo[VMXNET3_PLUGIN_INFO_LEN];	/* opaque data returned
> +							 * by PF driver */

This is happily copied around and zeroed out, but never actually used
by anything.

> +	u64 memioAddr;
> +	u32 memioLen;

This field is never used.

Why have fields in a structure that are never used?

> +};

<...>

> +/*
> + * Easy shell API calling macros.
> + */
> +#define Shell_AllocSmallBuffer(_state, _handle, _ringOffset) \
> +	((_state)->shellApi.allocSmallBuffer((_handle), (_ringOffset)))
> +#define Shell_AllocLargeBuffer(_state, _handle, _ringOffset) \
> +	((_state)->shellApi.allocLargeBuffer((_handle), (_ringOffset)))
> +#define Shell_FreeBuffer(_state, _handle, _ringOffset) \
> +	((_state)->shellApi.freeBuffer((_handle), (_ringOffset)))
> +#define Shell_CompleteSend(_state, _handle, _numPkt) \
> +	((_state)->shellApi.completeSend((_handle), (_numPkt)))
> +#define Shell_IndicateRecv(_state, _handle, _frame) \
> +	((_state)->shellApi.indicateRecv((_handle), (_frame)))
> +#define Shell_Log(_state, _loglevel, _n, _fmt, ...) \
> +	do { \
> +		if (logEnabled && (_loglevel) <= (u32)logLevel) { \
> +			(_state)->shellApi.log((_n) + 1, \
> +					       "%s: " _fmt, \
> +					       __func__, \
> +					       ##__VA_ARGS__); \
> +		} \
> +	} while (0)

This hiding of functions kind of implies that something odd is going on
here, right?  At the least, make them inline functions so you get the
proper typechecking warnings/errors in a format that you can
understand.
> +/*
> + * Some standard definitions
> + */
> +#ifndef NULL
> +#define NULL (void *)0
> +#endif

What's wrong with the kernel-provided version of this?

> +/*
> + * Utility macro to write a register's value (BAR0)
> + */
> +#define VMXNET3_WRITE_REG(_state, _offset, _value) \
> +	(*(u32 *)((u8 *)(_state)->memioAddr + (_offset)) = \
> +	 (_value))

This will never work, sorry.  Please use the proper functions for doing
this type of access.  I'm amazed that anyone even thought this would
succeed...

> +/*
> + * Utility macro to align a virtual address
> + */
> +#define ALIGN_VA(_ptr, _align) ((void *)(((uintptr_t)(_ptr) + ((_align) - 1)) & \
> +					 ~((_align) - 1)))

What's wrong with the kernel-provided function for this?

Anyway, just randomly poking at the code like this turns up these types
of trivial issues.  Has this code ever been run?

weird,

greg k-h

___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linux-foundation.org/mailman/listinfo/virtualization
Re: [Pv-drivers] RFC: Network Plugin Architecture (NPA) for vmxnet3
On Wed, Jul 14, 2010 at 10:18:22AM -0700, Pankaj Thakkar wrote:
> The plugin is guest agnostic and hence we did not want to rely on any
> kernel provided functions. The plugin uses only the interface provided
> by the shell.

Really?  vmxnet3_plugin.c is not supposed to use any kernel-provided
functions at all?  Then why have it in the kernel at all?

Seriously, why?

> The assumption is that since the plugin is really simple and straight
> forward (all the control/init complexity lies in the PF driver in the
> hypervisor) we should be able to get by for most of the things and for
> things like memcpy/memset the plugin can write simple functions like
> this.

If it's so simple, then why does it need to be separate?  Why not just
put it in your driver as-is to handle the ring-buffer logic (as that's
all it looks to be doing), and then you don't need any plugin code at
all?

It looks like you are linking this file into your "main" driver module,
so I fail to see any type of separation at all happening with this
patch.  Or am I totally missing something here?

thanks,

greg k-h
Re: [Pv-drivers] RFC: Network Plugin Architecture (NPA) for vmxnet3
On 07/14/2010 10:54 AM, David Miller wrote:
> And doing what you're doing is foolish on so many levels.  One more
> duplication of code, one more place for unnecessary bugs to live, one
> more place that might need optimizations and thus require duplication
> of even more work people have done over the years.

Not to mention calling a function "MoveMemory" when it doesn't do a
memmove is just cruel.

	J
Re: [Pv-drivers] RFC: Network Plugin Architecture (NPA) for vmxnet3
From: Pankaj Thakkar
Date: Wed, 14 Jul 2010 10:18:22 -0700

> The plugin is guest agnostic and hence we did not want to rely on
> any kernel provided functions.

While I disagree entirely with this kind of approach, even that doesn't
justify what you're doing here.

memcpy() and memset() are on a much more fundamental ground than
"kernel provided functions".  They had better be available no matter
where you build this thing.

And doing what you're doing is foolish on so many levels.  One more
duplication of code, one more place for unnecessary bugs to live, one
more place that might need optimizations and thus require duplication
of even more work people have done over the years.
RE: [Pv-drivers] RFC: Network Plugin Architecture (NPA) for vmxnet3
The plugin is guest agnostic and hence we did not want to rely on any
kernel provided functions.  The plugin uses only the interface provided
by the shell.  The assumption is that since the plugin is really simple
and straightforward (all the control/init complexity lies in the PF
driver in the hypervisor) we should be able to get by for most of the
things, and for things like memcpy/memset the plugin can write simple
functions like this.

-p

From: Greg KH [g...@kroah.com]
Sent: Wednesday, July 14, 2010 2:49 AM
To: Shreyas Bhatewara
Cc: Christoph Hellwig; Stephen Hemminger; Pankaj Thakkar;
pv-driv...@vmware.com; net...@vger.kernel.org;
linux-ker...@vger.kernel.org; virtualization@lists.linux-foundation.org
Subject: Re: [Pv-drivers] RFC: Network Plugin Architecture (NPA) for vmxnet3

Is there some reason that our in-kernel functions that do this type of
logic are not working for you to require you to reimplement this?

thanks,

greg k-h
Re: [Pv-drivers] RFC: Network Plugin Architecture (NPA) for vmxnet3
On Wed, 14 Jul 2010, Greg KH wrote:

> On Mon, Jul 12, 2010 at 08:06:28PM -0700, Shreyas Bhatewara wrote:
> > drivers/net/vmxnet3/vmxnet3_drv.c | 1845 +++--
>
> Your patch is line-wrapped and can not be applied :(
>
> Care to fix your email client?
>
> One thing just jumped out at me when glancing at this:
>
> > +static INLINE void
> > +MoveMemory(void *dst,
> > +           void *src,
> > +           size_t length)
> > +{
> > +	size_t i;
> > +	for (i = 0; i < length; ++i)
> > +		((u8 *)dst)[i] = ((u8 *)src)[i];
> > +}
> > +
> > +static INLINE void
> > +ZeroMemory(void *memory,
> > +           size_t length)
> > +{
> > +	size_t i;
> > +	for (i = 0; i < length; ++i)
> > +		((u8 *)memory)[i] = 0;
> > +}
>
> Is there some reason that our in-kernel functions that do this type of
> logic are not working for you to require you to reimplement this?
>
> thanks,
>
> greg k-h

Greg,

Thanks for pointing these out.  I will fix both these issues and repost
the patch.

->Shreyas
Re: [Pv-drivers] RFC: Network Plugin Architecture (NPA) for vmxnet3
On Mon, Jul 12, 2010 at 08:06:28PM -0700, Shreyas Bhatewara wrote:
> drivers/net/vmxnet3/vmxnet3_drv.c | 1845 +++--

Your patch is line-wrapped and can not be applied :(

Care to fix your email client?

One thing just jumped out at me when glancing at this:

> +static INLINE void
> +MoveMemory(void *dst,
> +           void *src,
> +           size_t length)
> +{
> +	size_t i;
> +	for (i = 0; i < length; ++i)
> +		((u8 *)dst)[i] = ((u8 *)src)[i];
> +}
> +
> +static INLINE void
> +ZeroMemory(void *memory,
> +           size_t length)
> +{
> +	size_t i;
> +	for (i = 0; i < length; ++i)
> +		((u8 *)memory)[i] = 0;
> +}

Is there some reason that our in-kernel functions that do this type of
logic are not working for you to require you to reimplement this?

thanks,

greg k-h
Re: [Pv-drivers] RFC: Network Plugin Architecture (NPA) for vmxnet3
On Mon, 12 Jul 2010 20:06:28 -0700
Shreyas Bhatewara wrote:

> On Thu, 2010-05-06 at 13:21 -0700, Christoph Hellwig wrote:
> > On Wed, May 05, 2010 at 10:52:53AM -0700, Stephen Hemminger wrote:
> > > Let me put it bluntly. Any design that allows external code to
> > > run in the kernel is not going to be accepted. Out of tree kernel
> > > modules are enough of a pain already, why do you expect the
> > > developers to add another interface.
> >
> > Exactly. Until our friends at VMware get this basic fact it's
> > useless to continue arguing.
> >
> > Pankaj and Dmitry: you're fine to waste your time on this, but it's
> > not going to go anywhere until you address that fundamental
> > problem. The first thing you need to fix in your architecture is to
> > integrate the VF function code into the kernel tree, and we can
> > work from there.
> >
> > Please post patches doing this if you want to resume the
> > discussion.
>
> As discussed, following is the patch to give you an idea about
> implementation of NPA for vmxnet3 driver. Although the patch is big,
> I have verified it with checkpatch.pl. It gave 0 errors / warnings.
>
> Signed-off-by: Matthieu Bucchaineri
> Signed-off-by: Shreyas Bhatewara
> ---

I am surprised; the code seems to use lots of mixed case in places,
which doesn't really follow current kernel practice.
Re: [Pv-drivers] RFC: Network Plugin Architecture (NPA) for vmxnet3
On Mon, 12 Jul 2010 20:06:28 -0700
Shreyas Bhatewara wrote:

> On Thu, 2010-05-06 at 13:21 -0700, Christoph Hellwig wrote:
> > On Wed, May 05, 2010 at 10:52:53AM -0700, Stephen Hemminger wrote:
> > > Let me put it bluntly. Any design that allows external code to
> > > run in the kernel is not going to be accepted. Out of tree kernel
> > > modules are enough of a pain already, why do you expect the
> > > developers to add another interface.
> >
> > Exactly. Until our friends at VMware get this basic fact it's
> > useless to continue arguing.
> >
> > Pankaj and Dmitry: you're fine to waste your time on this, but it's
> > not going to go anywhere until you address that fundamental
> > problem. The first thing you need to fix in your architecture is to
> > integrate the VF function code into the kernel tree, and we can
> > work from there.
> >
> > Please post patches doing this if you want to resume the
> > discussion.
>
> As discussed, following is the patch to give you an idea about
> implementation of NPA for vmxnet3 driver. Although the patch is big,
> I have verified it with checkpatch.pl. It gave 0 errors / warnings.
>
> Signed-off-by: Matthieu Bucchaineri
> Signed-off-by: Shreyas Bhatewara

I think the concept won't fly.  But you should really at least try
running checkpatch to make sure the style conforms.

--
Re: [Pv-drivers] RFC: Network Plugin Architecture (NPA) for vmxnet3
On Wed, May 05, 2010 at 10:52:53AM -0700, Stephen Hemminger wrote:
> Let me put it bluntly. Any design that allows external code to run in
> the kernel is not going to be accepted. Out of tree kernel modules
> are enough of a pain already, why do you expect the developers to add
> another interface.

Exactly.  Until our friends at VMware get this basic fact it's useless
to continue arguing.

Pankaj and Dmitry: you're fine to waste your time on this, but it's not
going to go anywhere until you address that fundamental problem.  The
first thing you need to fix in your architecture is to integrate the VF
function code into the kernel tree, and we can work from there.

Please post patches doing this if you want to resume the discussion.
Re: [Pv-drivers] RFC: Network Plugin Architecture (NPA) for vmxnet3
On Thu, May 06, 2010 at 11:04:11AM -0700, Pankaj Thakkar wrote:
> Plugin is x86 or x64 machine code. You write the plugin in C and
> compile it using gcc/ld to get the object file, we map the relevant
> sections only to the OS space.

Which is simply not supportable for a cross-platform operating system
like Linux.
Re: [Pv-drivers] RFC: Network Plugin Architecture (NPA) for vmxnet3
On Wed, May 05, 2010 at 10:47:10AM -0700, Pankaj Thakkar wrote:
> > Forget about the licensing.  Loading binary blobs written to a shim
> > layer is a complete pain in the ass and totally unsupportable, and
> > also uninteresting because of the overhead.
>
> [PT] Why do you think it is unsupportable? How different is it from
> any module written against a well maintained interface? What overhead
> are you talking about?

We only support in-kernel drivers; everything else is subject to
changes in the kernel API and ABI.  What you do is basically introduce
another wrapper layer not allowing full access to the normal Linux API.
People have tried this before and we're not willing to add it.  Do a
little research on Project UDI if you're curious.

> > (1) move the limited VF drivers directly into the kernel tree,
> >     talk to them through a normal ops vector
>
> [PT] This assumes that all the VF drivers would always be available.

Yes, absolutely.  Just as we assume that for every other driver.

> Also we have to support windows and our current design supports it
> nicely in an OS agnostic manner.

And that's not something we care about at all.  The Linux kernel has
traditionally taken a very hostile position against cross-platform
drivers, for reasons well explained before on many occasions.

> > (2) get rid of the whole shim crap and instead integrate the limited
> >     VF driver with the full VF driver we already have, instead of
> >     duplicating the code
>
> [PT] Having a full VF driver adds a lot of dependency on the guest VM
> and this is what NPA tries to avoid.

Yes, of course it does.  It's a normal driver at that point, which is
what it should have been from day one.

> > (3) don't make the PV to VF integration VMware-specific but also
> >     provide an open reference implementation like virtio.  We're
> >     not going to add a massive amount of infrastructure that is not
> >     actually usable in a free software stack.
>
> [PT] Today this is tied to the vmxnet3 device and is intended to work
> on the ESX hypervisor only (vmxnet3 works on the VMware hypervisor
> only). All the loading support is inside the ESX hypervisor. I am
> going to post the interface between the shell and the plugin soon and
> you can see that there is not a whole lot of dependency or
> infrastructure requirements from the Linux kernel. Please keep in
> mind that we don't use Linux as a hypervisor but as a guest VM.

But we use Linux as the hypervisor, too.  So if you want to target a
major infrastructure you might better make it available for that case.
Re: [Pv-drivers] RFC: Network Plugin Architecture (NPA) for vmxnet3
On Thu, May 06, 2010 at 01:19:33AM -0700, Gleb Natapov wrote:
> Overhead of interpreting the bytecode the plugin is written in. Or
> are you saying the plugin is x86 assembly (32-bit or 64-bit, btw?)
> and other arches will have to have an in-kernel x86 emulator to use
> the plugin (like some of them had for vgabios)?

The plugin is x86 or x64 machine code.  You write the plugin in C and
compile it using gcc/ld to get the object file; we map the relevant
sections only to the OS space.

NPA is a way of enabling passthrough of SR-IOV NICs with live migration
support on the ESX hypervisor, which runs only on x86/x64 hardware.  It
only supports x86/x64 guest OSes, so we don't have to worry about other
architectures.  If the NPA approach needs to be extended and adopted by
other hypervisors then we will have to take care of that.  Today we
have two plugin images per VF (one for 32-bit, one for 64-bit).
Re: [Pv-drivers] RFC: Network Plugin Architecture (NPA) for vmxnet3
On 5/5/10 10:29 AM, "Dmitry Torokhov" wrote:
> It would not be a binary blob but software properly released under
> GPL. The current plan is for the shell to enforce the GPL requirement
> on the plugin code, similar to what the module loader does for
> regular kernel modules.

On 5/5/10 3:05 PM, "Shreyas Bhatewara" wrote:
> The plugin image is not linked against the Linux kernel. It is OS
> agnostic in fact (e.g. the same plugin works for Linux and Windows
> VMs).

Are there any issues with injecting the GPL-licensed plug-in into the
Windows vmxnet3 NDIS driver?

-scott
Re: [Pv-drivers] RFC: Network Plugin Architecture (NPA) for vmxnet3
On Wed, May 05, 2010 at 10:47:10AM -0700, Pankaj Thakkar wrote:
>
> > -Original Message-
> > From: Christoph Hellwig [mailto:h...@infradead.org]
> > Sent: Wednesday, May 05, 2010 10:40 AM
> > To: Dmitry Torokhov
> > Cc: Christoph Hellwig; pv-driv...@vmware.com; Pankaj Thakkar;
> > net...@vger.kernel.org; linux-ker...@vger.kernel.org;
> > virtualization@lists.linux-foundation.org
> > Subject: Re: [Pv-drivers] RFC: Network Plugin Architecture (NPA)
> > for vmxnet3
> >
> > On Wed, May 05, 2010 at 10:35:28AM -0700, Dmitry Torokhov wrote:
> > > Yes, with the exception that the only body of code that will be
> > > accepted by the shell should be GPL-licensed and thus open and
> > > available for examining. This is not different from having a
> > > standard kernel module that is loaded normally and plugs into a
> > > certain subsystem. The difference is that the binary resides not
> > > on the guest filesystem but elsewhere.
> >
> > Forget about the licensing. Loading binary blobs written to a shim
> > layer is a complete pain in the ass and totally unsupportable, and
> > also uninteresting because of the overhead.
>
> [PT] Why do you think it is unsupportable? How different is it from
> any module written against a well maintained interface? What overhead
> are you talking about?

The overhead of interpreting the bytecode the plugin is written in.  Or
are you saying the plugin is x86 assembly (32-bit or 64-bit, btw?) and
other arches will have to have an in-kernel x86 emulator to use the
plugin (like some of them had for vgabios)?

--
	Gleb.
RE: [Pv-drivers] RFC: Network Plugin Architecture (NPA) for vmxnet3
> -Original Message-
> From: Scott Feldman [mailto:scofe...@cisco.com]
> Sent: Wednesday, May 05, 2010 7:04 PM
> To: Shreyas Bhatewara; Arnd Bergmann; Dmitry Torokhov
> Cc: Christoph Hellwig; pv-driv...@vmware.com; net...@vger.kernel.org;
> linux-ker...@vger.kernel.org;
> virtualizat...@lists.linux-foundation.org; Pankaj Thakkar
> Subject: Re: [Pv-drivers] RFC: Network Plugin Architecture (NPA) for
> vmxnet3
>
> On 5/5/10 10:29 AM, "Dmitry Torokhov" wrote:
> > It would not be a binary blob but software properly released under
> > GPL. The current plan is for the shell to enforce the GPL
> > requirement on the plugin code, similar to what the module loader
> > does for regular kernel modules.
>
> On 5/5/10 3:05 PM, "Shreyas Bhatewara" wrote:
> > The plugin image is not linked against the Linux kernel. It is OS
> > agnostic in fact (e.g. the same plugin works for Linux and Windows
> > VMs).
>
> Are there any issues with injecting the GPL-licensed plug-in into the
> Windows vmxnet3 NDIS driver?
>
> -scott

Scott,

Thanks for pointing this out.  The issue can be resolved by adding an
exception to the plugin license which allows it to link to a non-free
program (http://www.gnu.org/licenses/gpl-faq.html#GPLPluginsInNF).

->Shreyas
Re: [Pv-drivers] RFC: Network Plugin Architecture (NPA) for vmxnet3
On Wednesday 05 May 2010 01:09:48 pm Arnd Bergmann wrote:
> > > If you have any interest in developing this further, do:
> > >
> > > (1) move the limited VF drivers directly into the kernel tree,
> > >     talk to them through a normal ops vector
> >
> > [PT] This assumes that all the VF drivers would always be
> > available. Also we have to support windows and our current design
> > supports it nicely in an OS agnostic manner.
>
> Your approach assumes that the plugin is always available, which has
> exactly the same implications.

Since the plugin[s] are carried by the host they are indeed always
available.

--
Dmitry
Re: [Pv-drivers] RFC: Network Plugin Architecture (NPA) for vmxnet3
On Wednesday 05 May 2010 10:31:20 am Christoph Hellwig wrote:
> On Wed, May 05, 2010 at 10:29:40AM -0700, Dmitry Torokhov wrote:
> > > We're not going to add any kind of loader for binary blobs into
> > > kernel space, sorry. Don't even bother wasting your time on this.
> >
> > It would not be a binary blob but software properly released under
> > GPL. The current plan is for the shell to enforce the GPL
> > requirement on the plugin code, similar to what the module loader
> > does for regular kernel modules.
>
> The mechanism described in the document is loading a binary blob
> coded to an abstract API.

Yes, with the exception that the only body of code that will be
accepted by the shell should be GPL-licensed and thus open and
available for examining.  This is not different from having a standard
kernel module that is loaded normally and plugs into a certain
subsystem.  The difference is that the binary resides not on the guest
filesystem but elsewhere.

> That's something entirely different from having normal modules for
> the Virtual Functions, which we already have for various pieces of
> hardware anyway.

--
Dmitry
Re: [Pv-drivers] RFC: Network Plugin Architecture (NPA) for vmxnet3
On Wednesday 05 May 2010 10:23:16 am Christoph Hellwig wrote:
> On Tue, May 04, 2010 at 04:02:25PM -0700, Pankaj Thakkar wrote:
> > The plugin image is provided by the IHVs along with the PF driver
> > and is packaged in the hypervisor. The plugin image is OS agnostic
> > and can be loaded either into a Linux VM or a Windows VM. The
> > plugin is written against the Shell API interface which the shell
> > is responsible for implementing. The API
>
> We're not going to add any kind of loader for binary blobs into
> kernel space, sorry. Don't even bother wasting your time on this.

It would not be a binary blob but software properly released under GPL.
The current plan is for the shell to enforce the GPL requirement on the
plugin code, similar to what the module loader does for regular kernel
modules.

--
Dmitry
RE: [Pv-drivers] RFC: Network Plugin Architecture (NPA) for vmxnet3
> -Original Message-
> From: pv-drivers-boun...@vmware.com
> [mailto:pv-drivers-boun...@vmware.com] On Behalf Of Arnd Bergmann
> Sent: Wednesday, May 05, 2010 2:53 PM
> To: Dmitry Torokhov
> Cc: Christoph Hellwig; pv-driv...@vmware.com; net...@vger.kernel.org;
> linux-ker...@vger.kernel.org;
> virtualizat...@lists.linux-foundation.org; Pankaj Thakkar
> Subject: Re: [Pv-drivers] RFC: Network Plugin Architecture (NPA) for
> vmxnet3
>
> On Wednesday 05 May 2010 22:36:31 Dmitry Torokhov wrote:
> > On Wednesday 05 May 2010 01:09:48 pm Arnd Bergmann wrote:
> > > > > If you have any interest in developing this further, do:
> > > > >
> > > > > (1) move the limited VF drivers directly into the kernel
> > > > >     tree, talk to them through a normal ops vector
> > > >
> > > > [PT] This assumes that all the VF drivers would always be
> > > > available. Also we have to support windows and our current
> > > > design supports it nicely in an OS agnostic manner.
> > >
> > > Your approach assumes that the plugin is always available, which
> > > has exactly the same implications.
> >
> > Since the plugin[s] are carried by the host they are indeed always
> > available.
>
> But what makes you think that you can build code that can be linked
> into arbitrary future kernel versions? The kernel does not define any
> calling conventions that are stable across multiple versions or
> configurations. For example, you'd have to provide different binaries
> for each combination of

The plugin image is not linked against the Linux kernel.  It is OS
agnostic, in fact (e.g. the same plugin works for Linux and Windows
VMs).  The plugin is built against the shell API interface.  It is
loaded by the hypervisor into a set of pages provided by the shell.
Guest OS specific tasks (like allocation of pages for the plugin to
load into) are handled by the shell, and the shell is the part which
will be upstreamed into the Linux kernel.  Maintenance of the shell is
the same as for any other driver currently existing in the Linux
kernel.

->Shreyas

> - 32/64 bit code
> - gcc -mregparm=?
> - lockdep
> - tracepoints
> - stackcheck
> - NOMMU
> - highmem
> - whatever new gets merged
>
> If you build the plugins only for specific versions of "enterprise"
> Linux kernels, the code becomes really hard to debug and maintain.
> If you wrap everything in your own version of the existing
> interfaces, your code gets bloated to the point of being
> unmaintainable.
>
> So I have to correct myself: this is very different from assuming the
> driver is available in the guest, it's actually much worse.
>
> 	Arnd
Re: [Pv-drivers] RFC: Network Plugin Architecture (NPA) for vmxnet3
On Wednesday 05 May 2010 22:36:31 Dmitry Torokhov wrote:
> On Wednesday 05 May 2010 01:09:48 pm Arnd Bergmann wrote:
> > > > If you have any interest in developing this further, do:
> > > >
> > > > (1) move the limited VF drivers directly into the kernel tree,
> > > >     talk to them through a normal ops vector
> > >
> > > [PT] This assumes that all the VF drivers would always be
> > > available. Also we have to support windows and our current design
> > > supports it nicely in an OS agnostic manner.
> >
> > Your approach assumes that the plugin is always available, which
> > has exactly the same implications.
>
> Since the plugin[s] are carried by the host they are indeed always
> available.

But what makes you think that you can build code that can be linked
into arbitrary future kernel versions?  The kernel does not define any
calling conventions that are stable across multiple versions or
configurations.  For example, you'd have to provide different binaries
for each combination of

- 32/64 bit code
- gcc -mregparm=?
- lockdep
- tracepoints
- stackcheck
- NOMMU
- highmem
- whatever new gets merged

If you build the plugins only for specific versions of "enterprise"
Linux kernels, the code becomes really hard to debug and maintain.  If
you wrap everything in your own version of the existing interfaces,
your code gets bloated to the point of being unmaintainable.

So I have to correct myself: this is very different from assuming the
driver is available in the guest, it's actually much worse.

	Arnd
Re: [Pv-drivers] RFC: Network Plugin Architecture (NPA) for vmxnet3
On Wednesday 05 May 2010 19:47:10 Pankaj Thakkar wrote:
> > Forget about the licensing. Loading binary blobs written to a shim
> > layer is a complete pain in the ass and totally unsupportable, and
> > also uninteresting because of the overhead.
>
> [PT] Why do you think it is unsupportable? How different is it from
> any module written against a well maintained interface? What overhead
> are you talking about?

We have the right number of module loaders in the kernel: one.  If you
add another one, you're doubling the amount of code that anyone working
on that code needs to know about.

> > If you have any interest in developing this further, do:
> >
> > (1) move the limited VF drivers directly into the kernel tree,
> >     talk to them through a normal ops vector
>
> [PT] This assumes that all the VF drivers would always be available.
> Also we have to support windows and our current design supports it
> nicely in an OS agnostic manner.

Your approach assumes that the plugin is always available, which has
exactly the same implications.

> > (2) get rid of the whole shim crap and instead integrate the
> >     limited VF driver with the full VF driver we already have,
> >     instead of duplicating the code
>
> [PT] Having a full VF driver adds a lot of dependency on the guest VM
> and this is what NPA tries to avoid.

If you have the limited driver for some hardware that does not have the
real thing, we could still ship just that.  I would however guess that
most vendors are interested in not just running in vmware but also
other hypervisors that still require the full driver, so that case
would be rare, especially in the long run.

	Arnd
Re: [Pv-drivers] RFC: Network Plugin Architecture (NPA) for vmxnet3
On Wed, 5 May 2010 13:39:51 -0400 Christoph Hellwig wrote:

> On Wed, May 05, 2010 at 10:35:28AM -0700, Dmitry Torokhov wrote:
> > Yes, with the exception that the only body of code that will be
> > accepted by the shell should be GPL-licensed and thus open and
> > available for examining. This is not different from having a
> > standard kernel module that is loaded normally and plugs into a
> > certain subsystem. The difference is that the binary resides not on
> > the guest filesystem but elsewhere.
>
> Forget about the licensing. Loading binary blobs written to a shim
> layer is a complete pain in the ass and totally unsupportable, and
> also uninteresting because of the overhead.
>
> If you have any interest in developing this further, do:
>
> (1) move the limited VF drivers directly into the kernel tree,
>     talk to them through a normal ops vector
> (2) get rid of the whole shim crap and instead integrate the limited
>     VF driver with the full VF driver we already have, instead of
>     duplicating the code
> (3) don't make the PV to VF integration VMware-specific but also
>     provide an open reference implementation like virtio. We're not
>     going to add a massive amount of infrastructure that is not
>     actually useable in a free software stack.

Let me put it bluntly: any design that allows external code to run in
the kernel is not going to be accepted. Out-of-tree kernel modules are
enough of a pain already; why do you expect the developers to add
another interface?
RE: [Pv-drivers] RFC: Network Plugin Architecture (NPA) for vmxnet3
> -----Original Message-----
> From: Christoph Hellwig [mailto:h...@infradead.org]
> Sent: Wednesday, May 05, 2010 10:40 AM
> To: Dmitry Torokhov
> Cc: Christoph Hellwig; pv-driv...@vmware.com; Pankaj Thakkar;
> net...@vger.kernel.org; linux-ker...@vger.kernel.org;
> virtualization@lists.linux-foundation.org
> Subject: Re: [Pv-drivers] RFC: Network Plugin Architecture (NPA) for
> vmxnet3
>
> On Wed, May 05, 2010 at 10:35:28AM -0700, Dmitry Torokhov wrote:
> > Yes, with the exception that the only body of code that will be
> > accepted by the shell should be GPL-licensed and thus open and
> > available for examining. This is not different from having a
> > standard kernel module that is loaded normally and plugs into a
> > certain subsystem. The difference is that the binary resides not on
> > the guest filesystem but elsewhere.
>
> Forget about the licensing. Loading binary blobs written to a shim
> layer is a complete pain in the ass and totally unsupportable, and
> also uninteresting because of the overhead.

[PT] Why do you think it is unsupportable? How different is it from any
module written against a well maintained interface? What overhead are
you talking about?

> If you have any interest in developing this further, do:
>
> (1) move the limited VF drivers directly into the kernel tree,
>     talk to them through a normal ops vector

[PT] This assumes that all the VF drivers would always be available.
Also we have to support windows and our current design supports it
nicely in an OS agnostic manner.

> (2) get rid of the whole shim crap and instead integrate the limited
>     VF driver with the full VF driver we already have, instead of
>     duplicating the code

[PT] Having a full VF driver adds a lot of dependency on the guest VM
and this is what NPA tries to avoid.

> (3) don't make the PV to VF integration VMware-specific but also
>     provide an open reference implementation like virtio. We're not
>     going to add a massive amount of infrastructure that is not
>     actually useable in a free software stack.

[PT] Today this is tied to the vmxnet3 device and is intended to work
on the ESX hypervisor only (vmxnet3 works on the VMware hypervisor
only). All the loading support is inside the ESX hypervisor. I am going
to post the interface between the shell and the plugin soon and you can
see that there is not a whole lot of dependency or infrastructure
required from the Linux kernel. Please keep in mind that we don't use
Linux as a hypervisor but as a guest VM.
Re: [Pv-drivers] RFC: Network Plugin Architecture (NPA) for vmxnet3
On Wed, May 05, 2010 at 10:35:28AM -0700, Dmitry Torokhov wrote:
> Yes, with the exception that the only body of code that will be
> accepted by the shell should be GPL-licensed and thus open and
> available for examining. This is not different from having a standard
> kernel module that is loaded normally and plugs into a certain
> subsystem. The difference is that the binary resides not on the guest
> filesystem but elsewhere.

Forget about the licensing. Loading binary blobs written to a shim
layer is a complete pain in the ass and totally unsupportable, and also
uninteresting because of the overhead.

If you have any interest in developing this further, do:

 (1) move the limited VF drivers directly into the kernel tree,
     talk to them through a normal ops vector
 (2) get rid of the whole shim crap and instead integrate the limited
     VF driver with the full VF driver we already have, instead of
     duplicating the code
 (3) don't make the PV to VF integration VMware-specific but also
     provide an open reference implementation like virtio. We're not
     going to add a massive amount of infrastructure that is not
     actually useable in a free software stack.
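Suggestion (1) above, in-tree limited VF drivers behind a normal ops vector, can be sketched in plain C. All names here (`struct vf_ops`, `vmxnet3_register_vf`, the dummy driver) are hypothetical and only illustrate the shape of such a registration interface; they are not an existing kernel API, and the sketch omits locking and error codes a real implementation would need.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Hypothetical ops vector: each in-tree limited VF driver fills in a
 * table of function pointers at compile time and registers it with the
 * paravirtual driver, instead of being loaded as an opaque blob.
 */
struct vf_ops {
	const char *name;                /* matched against the VF device */
	int  (*init_ring)(void *priv);
	int  (*enable)(void *priv);
	void (*disable)(void *priv);
};

#define MAX_VF_DRIVERS 8
static const struct vf_ops *vf_drivers[MAX_VF_DRIVERS];
static int vf_driver_count;

/* Called by each limited VF driver; type-checked at compile time. */
int vmxnet3_register_vf(const struct vf_ops *ops)
{
	if (vf_driver_count >= MAX_VF_DRIVERS || !ops->init_ring)
		return -1;
	vf_drivers[vf_driver_count++] = ops;
	return 0;
}

/* Called by the PV driver when a matching VF is passed through. */
const struct vf_ops *vmxnet3_find_vf(const char *name)
{
	int i;

	for (i = 0; i < vf_driver_count; i++)
		if (strcmp(vf_drivers[i]->name, name) == 0)
			return vf_drivers[i];
	return NULL;
}

/* A trivial in-tree "limited VF driver" providing the vector. */
static int dummy_init_ring(void *priv) { (void)priv; return 0; }
static int dummy_enable(void *priv)    { (void)priv; return 0; }
static void dummy_disable(void *priv)  { (void)priv; }

const struct vf_ops dummy_vf_ops = {
	.name      = "dummy-vf",
	.init_ring = dummy_init_ring,
	.enable    = dummy_enable,
	.disable   = dummy_disable,
};
```

The key difference from the plugin scheme is that the compiler sees both sides of the interface, so any signature change breaks the build rather than corrupting a running guest.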
Re: [Pv-drivers] RFC: Network Plugin Architecture (NPA) for vmxnet3
On Wed, May 05, 2010 at 10:29:40AM -0700, Dmitry Torokhov wrote:
> > We're not going to add any kind of loader for binary blobs into
> > kernel space, sorry. Don't even bother wasting your time on this.
>
> It would not be a binary blob but software properly released under
> GPL. The current plan is for the shell to enforce the GPL requirement
> on the plugin code, similar to what the module loader does for
> regular kernel modules.

The mechanism described in the document is loading a binary blob coded
to an abstract API. That's something entirely different from having
normal modules for the Virtual Functions, which we already have for
various pieces of hardware anyway.
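For reference, the GPL enforcement Dmitry compares against is what the in-kernel module loader already does: it inspects the module's declared license string and taints the kernel when the string is not GPL-compatible. The following is a simplified userspace model of that check, not the actual kernel code; the accepted strings mirror a subset of those the real loader recognizes.

```c
#include <assert.h>
#include <string.h>

/* Simplified model of the module loader's license handling. */

static int kernel_tainted_proprietary;   /* models TAINT_PROPRIETARY_MODULE */

static int license_is_gpl_compatible(const char *license)
{
	/* Subset of the license strings the real loader accepts. */
	static const char *const ok[] = {
		"GPL", "GPL v2", "GPL and additional rights",
		"Dual BSD/GPL", "Dual MIT/GPL", "Dual MPL/GPL",
	};
	size_t i;

	for (i = 0; i < sizeof(ok) / sizeof(ok[0]); i++)
		if (strcmp(license, ok[i]) == 0)
			return 1;
	return 0;
}

/* The load still succeeds, but a non-GPL license taints the kernel. */
int load_module(const char *license)
{
	if (!license_is_gpl_compatible(license))
		kernel_tainted_proprietary = 1;
	return 0;
}

int tainted(void)
{
	return kernel_tainted_proprietary;
}
```

The point of contention in the thread is not whether such a check can be replicated in a "shell", but that the existing loader also gives symbol versioning and one well-understood code path, which a second blob loader would duplicate.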