On Sad, 2004-03-13 at 16:35, Jon Smirl wrote:
> Yes. Big chunks of Benh's driver and the mode parts of fb can easily be
> moved to user space. It only took a couple of days. Mode setting code is
> seldom used compared to an interrupt handler, so this gives the space
> back for more important things and eliminates the need to security audit
> the code. It's also much easier to debug.
It does not remove the need to security audit the code. It makes the mess
slightly smaller if you get it wrong. You probably also want a minimal set
of mode setting code in the kernel so you can get back to a known text
mode configuration when a GUI server goes kaboom, or when the user hits
SAK (and SAK is mandatory for many secure configurations).

> software. For example SGI hardware does not have access to the
> framebuffer; PC hardware with a similar architecture is coming soon. It
> also makes it easy for ATI/Nvidia to provide monolithic driver stacks.

PC hardware with this property in SVGA graphics modes goes back as far as
the IBM PS/2 - the 8514 was like this, for example.

> wants to the hardware. What if I am running X/DRI in one VT and another
> app that does direct 3D drawing via the hardware on another VT. Does
> framebuffer preserve the complete state of the hardware at VT switch?

It needs to, yes. Although it doesn't necessarily need to do it all in
kernel mode.

> Although I haven't written the code yet, I intend to add an output-only
> console driver entry point to DRM. Since mode setting is now tracked
> through the DRM driver it will be possible to display a kernel oops that
> occurs while running the X server. This is something framebuffer can't
> currently do.

That makes sense, although because of the way the console layer works, if
you have output you also have input - good news for kernel debugging
tools too.

> The state of graphics on Linux needs to move forward. Microsoft Longhorn
> is going to ensure that every PC built in the future has capable
> graphics coprocessor support. Don't think of this as 3D vs 2D, think of
> it as a switch from dumb to intelligent hardware. There is a lot of
> complicated code to be

Think of it as a repeat of the dumb 2D -> smart 2D -> oversmart 2D ->
fairly dumb 2D/dumb 3D ... continuing life cycle. The dumb->intelligent
thing is a cycle.
The day the CPU is fast enough or parallel enough to do fast 3D it'll eat
the low end 3D video card market.

> designed and written and we've got a year to do it if we're going to
> beat Microsoft. Let's all work together to achieve this transition in
> Linux.

For high end hardware, maybe. For the low end I actually expect we'll see
something different, probably from Intel first (since Intel is
continually trying to leverage die size for new features), which is
on-CPU operations to do the texture walk and basic effects. In other
words, using the CPU to do the grunt work in the tight inner loops of the
3D library. This makes a lot of sense in a low price environment where
you already have the framebuffer in CPU memory, not over a PCI bus.

Also don't lose sight of the fact that large numbers of Linux systems,
and probably a growing percentage over time, will be phones, DVD players,
PDAs, post-PC systems and the like. I don't think anything in your design
is problematic here at all. It works fine for single mode, dumb 2D
devices.

Alan

_______________________________________________
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel