Quoting riya khanna ([email protected]):
> Hi,
>
> I'm a newbie trying to come up with a fuse/cuse-based solution to
> device namespace virtualization.
> I know there have been talks about this before, so before I go too far
> with the implementation, I wanted to run the design by you all and get
> your feedback.
>
> According to the current design, a fuse-based pseudo filesystem (let's
> call it "vdevtmps" - virtual dev tmpfs) will be mounted on "/dev" of the
> container to create virtual device nodes. Which virtual device nodes to
> create can be determined by lxc.conf (through the devices cgroup).

This devices cgroup should regulate which real physical devices the
cuse fs can hook up to, but I don't think it needs to have any effect
on the virtual hardware being offered, right?

> This also requires an appropriate device view from /sys inside the
> container (effectively sysfs namespacing).
>
> Another option is to virtualize/namespacify sysfs, so that something
> like "mdev -s" (or udev) can scan files and auto-create device nodes
> for a container.
>
> Every device node belonging to this filesystem is accessed through
> fuse/cuse operations and is multiplexed onto the actual device. A
> desired container could also get passthrough access to the real
> devices (e.g. a single active/desired container directly accessing
> the frame buffer /dev/fb0).
>
> This is similar to what was proposed at LPC last year, but I'm trying

One of the features which was presented there was the ability to have
multiple namespaces attach to a device, but have only one ns at a time
be using the real device. The other namespaces would continue to
update a virtual device until they were made the active ns again. Is
this a part of your design too? It should be simple enough to write
the driver for the non-active device, but how would you actually
switch from one ns being active to another? (I.e., moving the graphics
device from one container to another.)

> to do this in user space. Like I said, I'm a newbie at this, so please
> let me know if this does not make sense and kindly suggest an
> alternative. Thanks!

As we've got nothing right now, I'd recommend pursuing it. Worst case
you find out why it won't work, and come out of it with a better
design.

-serge
_______________________________________________
lxc-devel mailing list
[email protected]
http://lists.linuxcontainers.org/listinfo/lxc-devel
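The devices-cgroup gating discussed above can be expressed with the classic `lxc.cgroup.devices` keys in lxc.conf. A minimal sketch, with illustrative major/minor numbers, might look like:

```
# Deny all device access by default...
lxc.cgroup.devices.deny = a
# ...then allow only the real devices the cuse fs may multiplex onto.
lxc.cgroup.devices.allow = c 1:3 rwm    # /dev/null
lxc.cgroup.devices.allow = c 29:0 rwm   # /dev/fb0 (frame buffer)
```

This matches Serge's point: the cgroup lines restrict which real devices are reachable, while the set of virtual nodes vdevtmps presents inside the container is a separate decision.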
