Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
Mike Mestnik wrote:
> --- Jon Smirl [EMAIL PROTECTED] wrote:
>> There are two types of VTs - text and graphical. It is only practical to have a single graphical VT because of the complexity of state swapping a graphical VT at VT swap.
>
> Right now we can already run X on multiple VTs and with DRI-reinit can run GL apps on all of them. It may not be the most elegant thing, but it works. We need the OS to keep state, even graphical state - GART and all. I don't see how a 128M GART is different from 2GB of system memory. Should we have GART swap space on the HD, a GART partition? I'm for making the OS VT-swap multiheaded DR?I? setups at whatever cost.

An elegant implementation would not swap the entire GART at VT switches, but only present the new VT framebuffer as a new display on the screen while maintaining the AGP state. Check out e.g. MacOS X's animated multiple-login screen: at every new session start the current session rotates smoothly into the background and a new one is brought up. In this model you can retain the entire graphics state at a VT switch; only another (currently invisible) framebuffer/screen/display/VT is made visible. This allows straightforward multihead implementations: any framebuffer/screen/display/VT can get attached to any head; they are just pieces of framebuffer memory which are either located in graphics memory or system memory and can get relocated on request, even to other graphics cards.

>> How about this for a new way to look at the problem?
>
> System based xterm, that's a new one. I don't see how it's better than what we have now.

Probably not -- the MacOS X-like approach looks more promising. At SAK a new display framebuffer would get launched and brought to the front; the currently running application only needs to get killed if it leaves the graphics engine in a locked state. Unfortunately that means that parts of the window stack implementation need to run in kernel space (or in a tightly connected trusted agent in userspace).
Don't know whether this is a problem; the DirectFB core showed that it's possible to implement this cleanly in a few thousand lines of code.

Holger

---
This SF.Net email is sponsored by: Oracle 10g. Get certified on the hottest thing ever to hit the market... Oracle 10g. Take an Oracle 10g class now, and we'll give you the exam FREE. http://ads.osdn.com/?ad_id=3149alloc_id=8166op=click
--
___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
These seem to be good requirements for any conclusion that is reached.

1. On-the-fly context switching.
1a. Even if the GART is 100% full for the new/old context.
1b. Even if the video RAM is 100% full for the new/old context.
1c. Even if the card(s) are locked for exclusive use.
1d. Even if (add your favorite gripe here).
1e. Even if hell has frozen over and a neutron bomb has damaged your video card.

--- Holger Waechtler [EMAIL PROTECTED] wrote:
> Mike Mestnik wrote:
>> --- Jon Smirl [EMAIL PROTECTED] wrote:
>>> There are two types of VTs - text and graphical. It is only practical to have a single graphical VT because of the complexity of state swapping a graphical VT at VT swap.
>>
>> Right now we can already run X on multiple VTs and with DRI-reinit can run GL apps on all of them. It may not be the most elegant thing, but it works. We need the OS to keep state, even graphical state - GART and all. I don't see how a 128M GART is different from 2GB of system memory. Should we have GART swap space on the HD, a GART partition? I'm for making the OS VT-swap multiheaded DR?I? setups at whatever cost.
>
> An elegant implementation would not swap the entire GART at VT switches, but only present the new VT framebuffer as a new display on the screen while maintaining the AGP state. Check out e.g. MacOS X's animated multiple-login screen: at every new session start the current session rotates smoothly into the background and a new one is brought up. In this model you can retain the entire graphics state at a VT switch; only another (currently invisible) framebuffer/screen/display/VT is made visible. This allows straightforward multihead implementations: any framebuffer/screen/display/VT can get attached to any head; they are just pieces of framebuffer memory which are either located in graphics memory or system memory and can get relocated on request, even to other graphics cards.
>
>>> How about this for a new way to look at the problem?
>>
>> System based xterm, that's a new one. I don't see how it's better than what we have now.
>
> Probably not -- the MacOS X-like approach looks more promising. At SAK a new display framebuffer would get launched and brought to the front; the currently running application only needs to get killed if it leaves the graphics engine in a locked state. Unfortunately that means that parts of the window stack implementation need to run in kernel space (or in a tightly connected trusted agent in userspace). Don't know whether this is a problem; the DirectFB core showed that it's possible to implement this cleanly in a few thousand lines of code.
>
> Holger

__
Do you Yahoo!? Friends. Fun. Try the all-new Yahoo! Messenger. http://messenger.yahoo.com/
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
On Gwe, 2004-05-21 at 17:48, Jon Smirl wrote:
> There are two types of VTs - text and graphical. It is only practical to have a single graphical VT because of the complexity of state swapping a graphical VT at VT swap.

Could have fooled me. I can switch between multiple DRI-using X servers and text consoles, and it works currently. So clearly it is *not* too complex. If you have mode setting there is little else required, since you can simply declare it to be the job of the client being switched to to get its data back in order.

Alan
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
Alan Cox wrote:
> On Gwe, 2004-05-21 at 17:48, Jon Smirl wrote:
>> There are two types of VTs - text and graphical. It is only practical to have a single graphical VT because of the complexity of state swapping a graphical VT at VT swap.
>
> Could have fooled me. I can switch between multiple DRI-using X servers and text consoles, and it works currently. So clearly it is *not* too complex. If you have mode setting there is little else required, since you can simply declare it to be the job of the client being switched to to get its data back in order.

The trouble with that is that when a new client comes along and starts twiddling some new state that nobody else touched, suddenly you have to update all the old clients to get them to restore that state.

Keith
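Keith's "new state that nobody else touched" problem can be reduced to bookkeeping over coarse state groups. The following is a hypothetical sketch, not DRI code; all names (STATE_BLEND, hw_ever_touched, and so on) are invented. The idea: the kernel remembers the union of every state group any client has ever programmed, and a client switching in must restore exactly that set, re-emitting hardware defaults for groups it never used itself.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch: hardware state is divided into coarse
 * "groups", one bit each. */
#define STATE_BLEND    (1u << 0)
#define STATE_TEXTURE  (1u << 1)
#define STATE_VIEWPORT (1u << 2)

static uint32_t hw_ever_touched;   /* union across all clients */

struct client_ctx {
    uint32_t touched;              /* groups this client programs */
};

/* Called whenever a client programs some state group(s). */
void client_emit_state(struct client_ctx *c, uint32_t groups)
{
    c->touched |= groups;
    hw_ever_touched |= groups;
}

/* On switch-in: the incoming client must restore every group
 * anyone ever touched.  Groups not in c->touched get restored
 * to hardware defaults rather than client values. */
uint32_t groups_to_restore(const struct client_ctx *c)
{
    (void)c;
    return hw_ever_touched;
}
```

The point of the sketch is exactly Keith's complaint: once a second client touches STATE_TEXTURE, the restore mask of every older client silently grows.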
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
--- Keith Whitwell [EMAIL PROTECTED] wrote:
> Alan Cox wrote:
>> On Gwe, 2004-05-21 at 17:48, Jon Smirl wrote:
>>> There are two types of VTs - text and graphical. It is only practical to have a single graphical VT because of the complexity of state swapping a graphical VT at VT swap.
>>
>> Could have fooled me. I can switch between multiple DRI-using X servers and text consoles, and it works currently. So clearly it is *not* too complex. If you have mode setting there is little else required, since you can simply declare it to be the job of the client being switched to to get its data back in order.
>
> The trouble with that is that when a new client comes along and starts twiddling some new state that nobody else touched, suddenly you have to update all the old clients to get them to restore that state.

A perfect example of this is the fglrx driver :( However, these cards are fairly static, so it should be possible to clear out all the state that could ever be set - at least to find out what state fglrx uses and clear that; gatos has some state too.

> Keith

__
Do you Yahoo!? Yahoo! Domains - Claim yours for only $14.70/year http://smallbusiness.promotions.yahoo.com/offer
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
Michel Dänzer wrote:
> On Sat, 2004-05-22 at 01:45, Mike Mestnik wrote:
>> --- Jon Smirl [EMAIL PROTECTED] wrote:
>>> There are two types of VTs - text and graphical. It is only practical to have a single graphical VT because of the complexity of state swapping a graphical VT at VT swap.
>>
>> Right now we can already run X on multiple VTs and with DRI-reinit can run GL apps on all of them. It may not be the most elegant thing, but it works. We need the OS to keep state, even graphical state - GART and all. I don't see how a 128M GART is different from 2GB of system memory. Should we have GART swap space on the HD, a GART partition?
>
> I don't think so. The current scheme simply keeps clients from touching the hardware while switched away by blocking the hardware lock, and invalidates all their hardware state when switching back. Maybe this could be extended with per-VT hardware locks. If something needs to be preserved while switched away, (a copy of) it should be kept in good old normal virtual memory.

Or better still, those pages could be swapped out of the GART aperture and the pages of the incoming VT swapped in.

Keith
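Keith's swap-the-aperture idea, reduced to a memcpy-based sketch. Everything here is an invented stand-in (the flat `aperture` array, the `vt_buffers` descriptor): a real implementation would rebind GART page-table entries rather than copy bytes, but the bookkeeping is the same - the outgoing VT's contents are preserved in system memory and the incoming VT's contents are brought back in.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define APERTURE_SIZE 4096

/* Stand-in for the GART aperture contents. */
static unsigned char aperture[APERTURE_SIZE];

struct vt_buffers {
    unsigned char *backing;   /* system-memory copy while switched away */
    size_t size;              /* bytes this VT uses in the aperture */
};

/* Switch from VT 'out' to VT 'in': evict, then restore. */
void vt_switch(struct vt_buffers *out, struct vt_buffers *in)
{
    /* Preserve the outgoing VT's aperture contents. */
    memcpy(out->backing, aperture, out->size);
    /* Bring the incoming VT's contents back. */
    memcpy(aperture, in->backing, in->size);
}
```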
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
Can the GART aperture be moved physically? I don't think a logical move would be much help.

--- Keith Whitwell [EMAIL PROTECTED] wrote:
> Michel Dänzer wrote:
>> On Sat, 2004-05-22 at 01:45, Mike Mestnik wrote:
>>> --- Jon Smirl [EMAIL PROTECTED] wrote:
>>>> There are two types of VTs - text and graphical. It is only practical to have a single graphical VT because of the complexity of state swapping a graphical VT at VT swap.
>>>
>>> Right now we can already run X on multiple VTs and with DRI-reinit can run GL apps on all of them. It may not be the most elegant thing, but it works. We need the OS to keep state, even graphical state - GART and all. I don't see how a 128M GART is different from 2GB of system memory. Should we have GART swap space on the HD, a GART partition?
>>
>> I don't think so. The current scheme simply keeps clients from touching the hardware while switched away by blocking the hardware lock, and invalidates all their hardware state when switching back. Maybe this could be extended with per-VT hardware locks. If something needs to be preserved while switched away, (a copy of) it should be kept in good old normal virtual memory.
>
> Or better still, those pages could be swapped out of the GART aperture and the pages of the incoming VT swapped in.
>
> Keith
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
Yes, the GART swap, if it needs to be done with memcpys, should be postponed until the user has SOME type of interface. That's the important thing: allowing the user to interact comes before hardware-based rendering. I never liked the way GL apps froze when they were not on the current VT. I think the answer to these problems is runaway rendering, where the OpenGL calls simply return as though they didn't do anything.

--- Holger Waechtler [EMAIL PROTECTED] wrote:
> Mike Mestnik wrote:
>> --- Jon Smirl [EMAIL PROTECTED] wrote:
>>> There are two types of VTs - text and graphical. It is only practical to have a single graphical VT because of the complexity of state swapping a graphical VT at VT swap.
>>
>> Right now we can already run X on multiple VTs and with DRI-reinit can run GL apps on all of them. It may not be the most elegant thing, but it works. We need the OS to keep state, even graphical state - GART and all. I don't see how a 128M GART is different from 2GB of system memory. Should we have GART swap space on the HD, a GART partition? I'm for making the OS VT-swap multiheaded DR?I? setups at whatever cost.
>
> An elegant implementation would not swap the entire GART at VT switches, but only present the new VT framebuffer as a new display on the screen while maintaining the AGP state. Check out e.g. MacOS X's animated multiple-login screen: at every new session start the current session rotates smoothly into the background and a new one is brought up. In this model you can retain the entire graphics state at a VT switch; only another (currently invisible) framebuffer/screen/display/VT is made visible. This allows straightforward multihead implementations: any framebuffer/screen/display/VT can get attached to any head; they are just pieces of framebuffer memory which are either located in graphics memory or system memory and can get relocated on request, even to other graphics cards.
>
>>> How about this for a new way to look at the problem?
>>
>> System based xterm, that's a new one. I don't see how it's better than what we have now.
>
> Probably not -- the MacOS X-like approach looks more promising. At SAK a new display framebuffer would get launched and brought to the front; the currently running application only needs to get killed if it leaves the graphics engine in a locked state. Unfortunately that means that parts of the window stack implementation need to run in kernel space (or in a tightly connected trusted agent in userspace). Don't know whether this is a problem; the DirectFB core showed that it's possible to implement this cleanly in a few thousand lines of code.
>
> Holger
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
>> Well, X11 protocol was designed rather well.
>
> kind of overkill for this purpose, not? A command set that allows using the opcodes to jump directly into the verification function table and, if the request is allowed, into the function table that contains the i/o programming routines might get coded pretty compactly.

AFAIK the X11 protocol is very efficient at what it does. The overhead comes from other parts - like fonts, etc. - which we will not care about anyway. If one looks at the actual packets being sent back and forth, it should be pretty much what we are talking about here - modulo the fine technical points we have not started discussing yet :)

>> We can simplify the matters quite a bit by requiring that writes to the fd always send N whole packets (and don't break on per-packet boundaries). On the other hand we would probably want to modify the protocol at least in the following way:
>>
>> 1) take into account modern hardware.. no short width anymore, more pixel formats.
>
> The pixel format is a source or destination surface format flag; there should be no need to encode it in rendering commands, only in surface creation/allocation commands.

The reason I am talking about pixel format is that I was advocating having drawing and mode setting commands be separate and completely orthogonal to memory management. This way the memory management code can be completely generic, and the card should only tell it about different zones (i.e. here is where you can have framebuffer, here is where you can have cursor, here is AGP space, etc.)

> 1) implies that we are not going to be binary compatible with the usual X11 protocol, so we are implementing a new protocol nevertheless, which makes this whole point rather academic: if one designs a new protocol there is no reason not to take into account the design of X11. (A full-fledged X11 implementation in the kernel might have some problems getting accepted by the lkml codingstyle policemen ;)

It does not have to be all in the kernel.

best

Vladimir Dergachev

> Holger
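Vladimir's rule that writes to the fd must always carry whole packets could be enforced with a framing check like the sketch below. The 1-byte opcode / 1-byte total-length header is an invented format chosen purely for illustration; the thread never fixes a wire format.

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of the "whole packets per write" rule.  Assume a packet
 * is: 1-byte opcode, 1-byte total length (including the 2-byte
 * header), then payload.  A write is valid only if consecutive
 * packets exactly fill the buffer - no trailing partial packet. */
int write_is_whole_packets(const uint8_t *buf, size_t n)
{
    size_t pos = 0;
    while (pos < n) {
        if (n - pos < 2)
            return 0;                 /* truncated header */
        uint8_t len = buf[pos + 1];
        if (len < 2 || pos + len > n)
            return 0;                 /* bogus or truncated packet */
        pos += len;
    }
    return 1;                         /* pos == n: whole packets only */
}
```

The kernel side can then reject a malformed write outright instead of buffering partial packets across write() calls, which keeps the parser stateless.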
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
Keith Whitwell wrote:
> Holger Waechtler wrote:
>> Keith Whitwell wrote:
>>> I don't think this needs to be that complex. We only need a few working functions in the kernel:
>>>
>>> * identification (in particular a unique identifier to pass via X to apps so they can find the head again)
>>> * event reporting (i.e. IRQs and anything else that is relevant)
>>> * mode setting
>>> * memory management
>>> * bitblt
>>>
>>> Everything else is best done as device-specific, with the true API belonging in user-space. Comments?
>
> Just to say that bitblt covers a lot of ground; i.e. there are lots of varieties of blits with quite a few parameters. Are you talking about just a simple copy within a single framebuffer, or can source and destination have different pitches? What about different pixel formats? What about fill-blits? etc. And secondly, an ioctl per blit probably isn't a useful interface if you're trying to do a lot of small blits, like (I guess) drawing text. So if you wanted this to be maximally useful, some way of saying 'do these n blits' would make sense. And what about cards with no 2d engine?
>
>> A command buffer interface (either mmap()'d buffers or buffers copied using standardized ioctls) with a common command set might be a general approach working on all architectures -- not all card drivers would need to implement all command opcodes; a capability ioctl can return a bitfield of supported opcodes.
>
> Maybe we could use the X11 protocol *g* :)

no -- I was thinking about something more lightweight: for everything data-intensive like blits, texture uploads or array rendering calls you just pass the array pointers, which then get validated before getting converted to physical addresses and written into the i/o registers of the DMA engine. For 2D operations you should be able to encode almost all operations into a few bytes; you need only a few opcodes, and most take 1...4 arguments, usually bytes or shorts.

I don't know whether it makes sense to create command opcodes for all possible commands -- probably it suffices to design a command set that contains only the commonly used 2D operations. Maybe it also makes sense to require command packets to be aligned to, say, 16- or 32-byte boundaries; this simplifies parsing and might improve cache performance.

The following 2D rendering commands come to my mind as being useful/required by a console or simple 2D UI:

/* fill a rectangle using the specified FG color */
FillRect(dst:short[4], color:byte[4])

/* set up a pointer in host memory as source for the next blits */
/* the passed array addresses only need to get verified once */
SetBlitHostSrc(array:ptr*, width:int, height:int, pixfmt:enum)

/* use the specified framebuffer surface as source */
SetBlitFBSrc(surface_id:int)

/* actually execute the blits */
Blit/StretchBlit(dst:short[4], src:short[4])

/* render lines using the specified color */
DrawLines(array:ptr*, len:int) /* really required? */

/* only for double-buffered framebuffer surfaces */
SwapBuffers()

/* sync */
Flush()/Wait()

This command set should be sufficient for common tasks like implementing window stacks, text rendering (even antialiased, since the alpha component can get properly handled by blit commands) and UI rendering. :) Or is this idea too heretical?

Holger
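Holger's FillRect, packed into one 16-byte-aligned packet as he suggests, might look like the following in C. The struct layout, enum values and function names are all hypothetical; the point is only that `short[4]` + `byte[4]` plus a padded opcode header lands exactly on his proposed 16-byte boundary.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical command opcodes (values invented). */
enum cmd_opcode { CMD_FILL_RECT = 1, CMD_BLIT = 2 /* ... */ };

/* One FillRect packet: opcode + padding, dst:short[4], color:byte[4].
 * 1 + 3 + 8 + 4 = 16 bytes, so packets tile a 16-byte-aligned ring
 * buffer with no extra framing. */
struct cmd_fill_rect {
    uint8_t  opcode;       /* CMD_FILL_RECT */
    uint8_t  pad[3];       /* keep the 16-byte size/alignment */
    int16_t  dst[4];       /* x, y, width, height */
    uint8_t  color[4];     /* R, G, B, A */
};

/* Encode one FillRect into a caller-provided 16-byte slot. */
void encode_fill_rect(void *slot, const int16_t dst[4],
                      const uint8_t color[4])
{
    struct cmd_fill_rect c = { .opcode = CMD_FILL_RECT };
    memcpy(c.dst, dst, sizeof c.dst);
    memcpy(c.color, color, sizeof c.color);
    memcpy(slot, &c, sizeof c);
}
```

A parser walking such a buffer can advance in fixed 16-byte strides for small commands, which is exactly the simplification Holger is after.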
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
Vladimir Dergachev wrote:
>> A command buffer interface (either mmap()'d buffers or buffers copied using standardized ioctls) with a common command set might be a general approach working on all architectures -- not all card drivers would need to implement all command opcodes; a capability ioctl can return a bitfield of supported opcodes. Maybe we could use the X11 protocol
>
> Well, X11 protocol was designed rather well.

kind of overkill for this purpose, not? A command set that allows using the opcodes to jump directly into the verification function table and, if the request is allowed, into the function table that contains the i/o programming routines might get coded pretty compactly.

> We can simplify the matters quite a bit by requiring that writes to the fd always send N whole packets (and don't break on per-packet boundaries). On the other hand we would probably want to modify the protocol at least in the following way:
>
> 1) take into account modern hardware.. no short width anymore, more pixel formats.

The pixel format is a source or destination surface format flag; there should be no need to encode it in rendering commands, only in surface creation/allocation commands.

> 2) only require parts essential for console implementation - everything else could be passed back to a user-space daemon. (Note that if the user-space daemon is not present, this would mean that things like line-drawing packets would fail..)

yes. 1) implies that we are not going to be binary compatible with the usual X11 protocol, so we are implementing a new protocol nevertheless, which makes this whole point rather academic: if one designs a new protocol there is no reason not to take into account the design of X11. (A full-fledged X11 implementation in the kernel might have some problems getting accepted by the lkml codingstyle policemen ;)

Holger
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
On Mer, 2004-05-19 at 20:49, Jon Smirl wrote:
> A rep from the SELinux group was at the Xdev conference. They are starting a project to verify the X server.

Verifying the X server isn't practical. It's large, and it's reliant on hardware behaviour for hardware where nobody has documentation and where the documentation is plain wrong. You may be able to put X over another thin layer that does the security work and keeps virtual X servers apart.
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
On Iau, 2004-05-20 at 01:55, Jon Smirl wrote:
> It's not going to allow multiple login prompts on different VTs on the same head.

In which case it's completely useless. You might want to get away from a kernel virtualisation of video services, but you just can't do it. You can pull a *lot* of the fancier stuff out of the kernel as you've suggested, but the basic VT and memory management just won't fit your model.
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
There are two types of VTs - text and graphical. It is only practical to have a single graphical VT because of the complexity of state swapping a graphical VT at VT swap. How about this for a new way to look at the problem?

Current text VTs call into the kernel and ask it to draw on the video hardware. This could easily be replaced with a system where text VTs draw to a piece of RAM instead of the hardware. This is a small piece of RAM, since these are text VTs. You would physically log in on the graphical VT. This login could run an X server or a simple terminal emulator.

Now build a system for compositing the text VT buffers onto the real physical screen. To emulate the current system, Alt-x would select a different VT. Then on each vertical retrace, extract the VT's buffer and paint it on the screen, and forward keystrokes into its input queue. A text VT without a task attached to it would draw a login display. I can use this same scheme from the X server: each text VT would appear as a window.

When you log out of the graphical console, the tasks associated with a text VT don't get killed. SAK does not kill them either. This scheme is very close to what we have now. The only thing that is changed is that there is no way for a text VT to write to the graphics hardware without the help of a process running on the graphical console. The advantage of this scheme is that there is only ever a single login on the graphics hardware.

In a multiuser scheme there needs to be a more complex interface. The text VTs have to track who created them. You wouldn't want another user attaching to an active text VT that isn't theirs.

--- Alan Cox [EMAIL PROTECTED] wrote:
> On Iau, 2004-05-20 at 01:55, Jon Smirl wrote:
>> It's not going to allow multiple login prompts on different VTs on the same head.
>
> In which case it's completely useless. You might want to get away from a kernel virtualisation of video services, but you just can't do it. You can pull a *lot* of the fancier stuff out of the kernel as you've suggested, but the basic VT and memory management just won't fit your model.

=
Jon Smirl
[EMAIL PROTECTED]
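Jon's model - text VTs drawing into private RAM buffers and a compositing process copying the selected buffer to the real screen on each retrace - can be sketched as follows. Buffer sizes and all names are illustrative only; the point is that a VT never touches the hardware, only its own RAM.

```c
#include <string.h>

#define VT_COUNT 4
#define VT_BYTES (80 * 25 * 2)       /* 80x25 cells: char + attribute */

static char vt_buf[VT_COUNT][VT_BYTES];   /* per-VT backing RAM */
static char screen[VT_BYTES];             /* stand-in for the framebuffer */
static int  current_vt;

/* A text VT "draws" into its RAM buffer, never the hardware. */
void vt_putc(int vt, int cell, char ch)
{
    vt_buf[vt][cell * 2] = ch;
}

/* The Alt-x equivalent: select which VT the compositor shows. */
void vt_select(int vt)
{
    current_vt = vt;
}

/* Called once per vertical retrace by the compositing process
 * running on the graphical console. */
void compose(void)
{
    memcpy(screen, vt_buf[current_vt], VT_BYTES);
}
```

Switching VTs then costs one small memcpy per refresh, and a VT whose owning task has exited can simply have a login prompt rendered into its buffer.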
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
--- Jon Smirl [EMAIL PROTECTED] wrote:
There are two types of VTs - text and graphical. It is only practical to have a single graphical VT because of the complexity of state swapping a graphical VT at VT swap.

Right now we can already run X on multiple VTs, and with DRI-reinit can run GL apps on all of them. It may not be the most elegant thing but it works. We need the OS to keep state, even graphical, GART and all. I don't see how a 128M GART is different from 2GB of system memory. Should we have GART swap space on the HD, a GART partition? I'm for making the OS VT-swap multiheaded DR?I? setups at whatever cost.

How about this for a new way to look at the problem?

System based xterm, that's a new one. I don't see how it's better than what we have now.

This scheme is very close to what we have now. The only thing that has changed is that there is no way for a text VT to write to the graphics hardware without the help of a process running on the graphical console. The advantage of this scheme is that there is only ever a single login on the graphics hardware.

I don't see the advantage; look at MS's switch-user functionality. I don't see WHY you would want to 'bg' then 'su' to a new user, like MS does. I like the simplicity of a hot-key that gets you QUICKLY to another virtual terminal. If you have a plan for that then why are you saying to get rid of VT swaps?

In a multiuser scheme there needs to be a more complex interface. The text VTs have to track who created them. You wouldn't want another user attaching to an active text VT that isn't theirs.

--- Alan Cox [EMAIL PROTECTED] wrote:
On Iau, 2004-05-20 at 01:55, Jon Smirl wrote:
It's not going to allow multiple login prompts on different VTs on the same head.

Will it allow ONE login prompt on a different VT? I guess if done that way it'd be better than what we have now.

In which case it's completely useless. You might want to get away from a kernel virtualisation of video services but you just can't do it.
You can pull a *lot* of the fancier stuff out of the kernel as you've suggested, but the basic VT and memory management just won't fit your model.

= Jon Smirl [EMAIL PROTECTED]
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
On Sat, 2004-05-22 at 01:45, Mike Mestnik wrote:
--- Jon Smirl [EMAIL PROTECTED] wrote:
There are two types of VTs - text and graphical. It is only practical to have a single graphical VT because of the complexity of state swapping a graphical VT at VT swap.

Right now we can already run X on multiple VTs, and with DRI-reinit can run GL apps on all of them. It may not be the most elegant thing but it works. We need the OS to keep state, even graphical, GART and all. I don't see how a 128M GART is different from 2GB of system memory. Should we have GART swap space on the HD, a GART partition?

I don't think so. The current scheme simply keeps clients from touching the hardware while switched away by blocking the hardware lock, and invalidates all their hardware state when switching back. Maybe this could be extended with per-VT hardware locks. If something needs to be preserved while switched away, (a copy of) it should be kept in good old normal virtual memory.

-- Earthling Michel Dänzer | Debian (powerpc), X and DRI developer
Libre software enthusiast | http://svcs.affero.net/rm.php?r=daenzer
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
1. Add a device independent version. Device independent code could be written that would test the version number the same way device dependent code does today. The drawback is that in order to advertise version X.N functionality, you also have to advertise version [1.1, 1.N-1] functionality. Some hardware / drivers may not want to / be able to do that.

2. Add an extension query ioctl. Give each piece of functionality (i.e., all the related vblank functions) a unique number. Drivers would make a query like, "Is extension 5 supported?" If that ioctl returns true, then the driver could use that functionality. The disadvantage of this method is that it increases the number of ioctl calls that need to be made. Since the set of supported extensions can be tracked in the DRM with a bit string, the additional code size should be trivial. Thoughts?

I don't think this needs to be that complex. We only need a few working functions in the kernel:

* identification (in particular a unique identifier to pass via X to apps so they can find the head again)
* event reporting (i.e. IRQs and anything else that is relevant)
* mode setting
* memory management
* bitblt

Everything else is best done as device-specific with the true API belonging in user-space. Comments?

best
Vladimir Dergachev
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
I don't think this needs to be that complex. We only need a few working functions in the kernel:

* identification (in particular a unique identifier to pass via X to apps so they can find the head again)
* event reporting (i.e. IRQs and anything else that is relevant)
* mode setting
* memory management
* bitblt

Everything else is best done as device-specific with the true API belonging in user-space. Comments?

Just to say that bitblt covers a lot of ground; i.e. there are lots of varieties of blits with quite a few parameters - are you talking about just a simple copy within a single framebuffer, or can source and destination have different pitches? What about different pixel formats? What about fill-blits? Etc.

And secondly, an ioctl per blit probably isn't a useful interface if you're trying to do a lot of small blits, like I guess drawing text. So if you wanted this to be maximally useful, some way of saying 'do these n blits' would make sense.

And what about cards with no 2d engine?

Keith
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
Keith Whitwell wrote:
I don't think this needs to be that complex. We only need a few working functions in the kernel: identification, event reporting, mode setting, memory management, bitblt. Everything else is best done as device-specific with the true API belonging in user-space. Comments?

Just to say that bitblt covers a lot of ground; i.e. there are lots of varieties of blits with quite a few parameters - are you talking about just a simple copy within a single framebuffer, or can source and destination have different pitches? What about different pixel formats? What about fill-blits? Etc. And secondly, an ioctl per blit probably isn't a useful interface if you're trying to do a lot of small blits, like I guess drawing text. So if you wanted this to be maximally useful, some way of saying 'do these n blits' would make sense. And what about cards with no 2d engine?

A command buffer interface (either mmap()'d buffers or buffers copied using standardized ioctls) with a common command set might be a general approach working on all architectures -- not all card drivers would need to implement all command opcodes; a capability ioctl can return a bitfield of supported opcodes.

The command buffers might then get verified and executed by the DRM drivers - on the userspace side you would only need to implement a few pretty generic drivers and the fallback code. The verification code can probably be shared for most or all cards.

Holger
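Holger's capability-bitfield idea can be sketched in a few lines. This is an illustration only, assuming invented opcode numbers and names - the thread does not define a concrete command set:

```c
#include <stdint.h>

/* Hypothetical command opcodes; a real driver would define many more. */
enum {
    CMD_FILL  = 0,
    CMD_COPY  = 1,
    CMD_BLEND = 2,
};

/* What a capability ioctl would return: one bit per supported opcode. */
static inline int cap_supported(uint64_t caps, unsigned opcode)
{
    return (caps >> opcode) & 1;
}

/* A driver implementing only fill and copy would report: */
static inline uint64_t example_driver_caps(void)
{
    return (1ULL << CMD_FILL) | (1ULL << CMD_COPY);
}
```

Userspace tests the bit before emitting an opcode and takes the software fallback path otherwise, which is exactly the "not all card drivers need all opcodes" property Holger describes.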
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
Holger Waechtler wrote:
Keith Whitwell wrote:
I don't think this needs to be that complex. We only need a few working functions in the kernel: identification, event reporting, mode setting, memory management, bitblt. Everything else is best done as device-specific with the true API belonging in user-space. Comments?

Just to say that bitblt covers a lot of ground; i.e. there are lots of varieties of blits with quite a few parameters - are you talking about just a simple copy within a single framebuffer, or can source and destination have different pitches? What about different pixel formats? What about fill-blits? Etc. And secondly, an ioctl per blit probably isn't a useful interface if you're trying to do a lot of small blits, like I guess drawing text. So if you wanted this to be maximally useful, some way of saying 'do these n blits' would make sense. And what about cards with no 2d engine?

A command buffer interface (either mmap()'d buffers or buffers copied using standardized ioctls) with a common command set might be a general approach working on all architectures -- not all card drivers would need to implement all command opcodes; a capability ioctl can return a bitfield of supported opcodes.

Maybe we could use the X11 protocol (runs away & hides)

Keith
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
Everything else is best done as device-specific with the true API belonging in user-space. Comments?

Just to say that bitblt covers a lot of ground; i.e. there are lots of varieties of blits with quite a few parameters - are you talking about just a simple copy within a single framebuffer, or can source and destination have different pitches? What about different pixel formats? What about fill-blits? Etc. And secondly, an ioctl per blit probably isn't a useful interface if you're trying to do a lot of small blits, like I guess drawing text. So if you wanted this to be maximally useful, some way of saying 'do these n blits' would make sense.

You are quite right :) I was thinking that we need as much bitblt as one would need to implement a scrolling console: i.e. move areas within the framebuffer and use bitmapped fonts. I think that this can be accomplished with an easy API. For example, what about separating it into two parts:

1) BITBLT ioctl:

    /* easy to check against actual physical boundaries to prevent
       lockups, not virtual partition via memory manager */
    typedef struct {
        long format;   /* format key */
        long offset;   /* location within framebuffer */
        long width;
        long height;
        long pitch;
    } SURFACE;

    typedef struct {
        SURFACE dest;
        SURFACE source;
    } BLIT;

    typedef struct {
        long n_items;
        BLIT item[1];  /* expanded as needed to allocate n items */
    } BITBLT_ARG;

    framebuffer->bitblt(FRAMEBUFFER *, BITBLT_ARG *);  /* in-kernel interface */
    ioctl(fd, BITBLT, (BITBLT_ARG *));                 /* user-space interface */

2) Memory transfer ioctl (for exchanging data with the framebuffer):

    typedef struct {
        int direction;      /* upload or download */
        void *mem_area;
        SURFACE framebuffer_surface;
        SURFACE memory_surface;
    } TRANSFER;

    typedef struct {
        long n_items;
        TRANSFER item[1];  /* expand as needed for n items */
    } TRANSFER_ARG;

    framebuffer->transfer(FRAMEBUFFER *, TRANSFER_ARG *);  /* in-kernel interface */
    ioctl(fd, TRANSFER, (TRANSFER_ARG *));                 /* user-space interface */

And what about cards with no 2d engine?
VESA framebuffer (and similar) can use plain memcpy. 3d accelerators without direct access to the framebuffer would have some way of transferring memory to the framebuffer and back, as well as blits. Truly weird hardware would require ingenuity - as appropriate. Comments?

best
Vladimir Dergachev
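The `BLIT item[1]` member in Vladimir's BITBLT_ARG is the classic C89 trailing-array idiom, where the caller over-allocates for n items. A minimal self-contained sketch of how userspace might build such an argument (the `bitblt_arg_alloc` helper is invented for illustration):

```c
#include <stdlib.h>

/* Trimmed copies of the structs from the proposal above. */
typedef struct { long format, offset, width, height, pitch; } SURFACE;
typedef struct { SURFACE dest; SURFACE source; } BLIT;
typedef struct { long n_items; BLIT item[1]; } BITBLT_ARG;

/* Allocate space for n blits: the struct already contains item[0],
 * so we add room for (n - 1) more BLIT entries after it. */
BITBLT_ARG *bitblt_arg_alloc(long n)
{
    BITBLT_ARG *arg = malloc(sizeof(*arg) + (n - 1) * sizeof(BLIT));
    if (arg)
        arg->n_items = n;
    return arg;
}
```

The caller fills `item[0..n-1]` and passes the whole buffer to the ioctl in one call, which answers Keith's "do these n blits" batching concern.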
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
A command buffer interface (either mmap()'d buffers or buffers copied using standardized ioctls) with a common command set might be a general approach working on all architectures -- not all card drivers would need to implement all command opcodes; a capability ioctl can return a bitfield of supported opcodes.

Maybe we could use the X11 protocol

Well, the X11 protocol was designed rather well. We can simplify matters quite a bit by requiring that writes to the fd always send N whole packets (and don't break on per-packet boundaries). On the other hand we would probably want to modify the protocol at least in the following ways:

1) take into account modern hardware - no short width anymore, more pixel formats.
2) only require the parts essential for a console implementation - everything else could be passed back to a user-space daemon. (Note that if the user-space daemon is not present, this would mean that things like line-drawing packets would fail.)
3) the framebuffer would only emit IRQ and completion events.

1) implies that we are not going to be binary compatible with the usual X11 protocol, so we are implementing a new protocol nevertheless, which makes this whole point rather academic: if one designs a new protocol there is no reason not to take into account the design of X11.

best
Vladimir Dergachev

(runs away & hides)

Keith
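The "N whole packets per write" rule Vladimir proposes works because X11-style requests are self-describing: each starts with a fixed header carrying an opcode and a total length, so the kernel can walk a buffer without understanding every opcode. A sketch under assumed field sizes (the real X11 wire format counts length in 4-byte units; this illustration uses bytes for simplicity):

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical request header, X11-style: opcode plus total length. */
struct pkt_hdr {
    uint16_t opcode;
    uint16_t length; /* total packet size in bytes, header included */
};

/* Walk a buffer and count packets; returns -1 on a malformed stream,
 * enforcing the "whole packets only" write rule. */
int count_packets(const uint8_t *buf, size_t len)
{
    int n = 0;
    while (len > 0) {
        struct pkt_hdr h;
        if (len < sizeof h)
            return -1;               /* truncated header */
        __builtin_memcpy(&h, buf, sizeof h);
        if (h.length < sizeof h || h.length > len)
            return -1;               /* bogus or truncated packet */
        buf += h.length;
        len -= h.length;
        n++;
    }
    return n;
}
```

A write that fails this check can be rejected as a whole, which is what lets the kernel-side verifier stay simple.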
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
Jon Smirl wrote:
--- Alan Cox [EMAIL PROTECTED] wrote:
On Maw, 2004-05-18 at 21:14, Jon Smirl wrote:
So you don't have any problem with pulling VT support out of the kernel?

You need the code to handle video context switches. You also need VTs because you have multiple security contexts on the PC console, and good reason to keep that when using SELinux. VT switch is easy however. DRI+X already handles that, and we never have two people using the VT at once. It's one device, multiple handles, only one currently active - like many other drivers.

Why does VT switch have to be in the kernel? I can have multiple xterms logged in as different users without kernel support. Why can't VT switching be implemented as if I was switching between multiple fullscreen xterms? I guess I don't see why there is a difference between multiple xterms and VTs. I can use su to set the xterm to any account.

what if you want to create a 'fresh & clean' console using the SAK key, or if your primary terminal is not responsive anymore? You need to be able to create a new terminal without any help from the currently active console, and you need to be able to restart the graphics helper library/server/whatever at any time without killing your running applications on other 'VTs'.

Holger
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
--- Holger Waechtler [EMAIL PROTECTED] wrote:
Why does VT switch have to be in the kernel? I can have multiple xterms logged in as different users without kernel support. Why can't VT switching be implemented as if I was switching between multiple fullscreen xterms? I guess I don't see why there is a difference between multiple xterms and VTs. I can use su to set the xterm to any account.

what if you want to create a 'fresh & clean' console using the SAK key, or if your primary terminal is not responsive anymore? You need to be able to create a new terminal without any help from the currently active console, and you need to be able to restart the graphics helper library/server/whatever at any time without killing your running applications on other 'VTs'.

In this model SAK is implemented in the kernel by the system console. Hitting SAK will always work even if the terminal is not responsive (unless the kernel is dead). SAK will whack all apps on the user console, reload the graphics libs, and get you a fresh shell. In this model it is not possible to restart the graphics lib on one console without restarting all of the virtual consoles, since they are all sharing the same process.

This seems secure to me. Whacking everything with SAK and reloading the graphics libraries completely stops any attack that may try to load code into the graphics card. I wouldn't be surprised if you could write a pixel shader program that emulates a login screen.

It may be possible to build this to suspend the apps instead of whacking them. Then you could reattach them if needed. If not reattached in a few minutes, they would die.

= Jon Smirl [EMAIL PROTECTED]
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
--- Holger Waechtler [EMAIL PROTECTED] wrote:
hmmm, it's not clear to me how this concept would allow real multiple user logins at the same time like it is common -- you can run multiple X11 instances on multiple VTs and every new user is able to hit SAK without killing other users' applications.

Each user's device driver knows how to put up the SAK login screen on its head. The SAK is only going to kill things associated with that head. This works because each card's driver is required to support the system console at the kernel level. SAK will always be available from any head. We will need a kernel option to control whether the rest of the system console (printk, kdbg, etc.) is available from any head or just the boot device.

Ctrl-Alt-Del has to be the SAK key. It will reset the graphics and start a new login process. SysRq will get you the system console. The system console will show printk, kdbg, OOPSes, etc. Both of these keys will be active even if X is running. You can toggle between the system console and the user console. But the system console is not a shell; no one is logged in on it. It can only be used to run dedicated things like kdbg.

= Jon Smirl [EMAIL PROTECTED]
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
Jon Smirl wrote:
--- Alan Cox [EMAIL PROTECTED] wrote:
s/OpenGL/Some drawing library/ - providing it's using the kernel interfaces we don't care what. (eg the bogl console driver is very small, the opengl one would probably be rather larger and nicer)

I wasn't thinking that the kernel interface was standardized. For example DRM has some common IOCTLs and then hundreds of per-chipset ones. There is no standard bitblt or draw char IOCTL.

IMO, this is a long standing problem with the DRM. The main issue is that there's only one version number associated with each DRM module. What's needed is a device independent version and a device dependent version. In a sense, it needs something like an extension mechanism. Right now, some drivers support a vblank wait ioctl and some don't. The ones that do support it, support it the same way.

If we could extend the DRM API in a device independent manner, we could solve some of this. When I thought about this in the past, I came up with two ways to do it.

1. Add a device independent version. Device independent code could be written that would test the version number the same way device dependent code does today. The drawback is that in order to advertise version X.N functionality, you also have to advertise version [1.1, 1.N-1] functionality. Some hardware / drivers may not want to / be able to do that.

2. Add an extension query ioctl. Give each piece of functionality (i.e., all the related vblank functions) a unique number. Drivers would make a query like, "Is extension 5 supported?" If that ioctl returns true, then the driver could use that functionality. The disadvantage of this method is that it increases the number of ioctl calls that need to be made. Since the set of supported extensions can be tracked in the DRM with a bit string, the additional code size should be trivial.

Thoughts?
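Ian's option 2 - tracking supported extensions in the DRM with a bit string - is cheap to implement. A sketch of the kernel-side bookkeeping, with invented names and an arbitrary table size (the thread doesn't fix either):

```c
#include <stdint.h>

/* Hypothetical per-driver extension table: one bit per extension number. */
#define DRM_MAX_EXT 128

struct drm_ext_table {
    uint32_t bits[DRM_MAX_EXT / 32];
};

/* Driver init marks the extensions it implements. */
static inline void drm_ext_set(struct drm_ext_table *t, unsigned n)
{
    t->bits[n / 32] |= 1u << (n % 32);
}

/* What the query ioctl ("is extension N supported?") computes. */
static inline int drm_ext_query(const struct drm_ext_table *t, unsigned n)
{
    return (t->bits[n / 32] >> (n % 32)) & 1;
}
```

With a table like this, a userspace driver pays one ioctl per feature probe at startup, and the kernel-side cost really is as trivial as Ian suggests.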
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
--- Ian Romanick [EMAIL PROTECTED] wrote:
IMO, this is a long standing problem with the DRM. The main issue is that there's only one version number associated with each DRM module. What's needed is a device independent version and a device dependent version. In a sense, it needs something like an extension mechanism. Right now, some drivers support a vblank wait ioctl and some don't. The ones that do support it, support it the same way.

There are two OpenGL implementations to consider: the DRI/Mesa one and other vendors'. My thought was that DRM is an internal interface for Mesa's use. This means that ATI/Nvidia's stacks don't have to provide a DRM interface if they don't want to. DRM would not be a published API for general use. The only standard way to access the video hardware will be via an OpenGL library, and only one library at a time can be used on a single piece of hardware.

If you accept that DRM is not a published API for general use, then we can do anything we want with the DRM interface. I'm all for anything that will make the code simpler to write. Nobody but a Mesa developer is ever going to use the interface.

= Jon Smirl [EMAIL PROTECTED]
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
On Mer, 2004-05-19 at 01:35, Jon Smirl wrote:
Why does VT switch have to be in the kernel? I can have multiple xterms logged in as different users without kernel support. Why can't VT switching be implemented as if I was switching between multiple fullscreen xterms? I guess I don't see why there is a difference between multiple xterms and VTs. I can use su to set the xterm to any account.

You trust the X server. That's already problematic with SELinux and compartmentalisation. For some things you need multiple X servers for this reason.

User space console makes it easier to go multi-user. Stick four cards in, run a user space console on each of them, and you have a four user system. No need for kernel support.

Assuming you fix the input layer, yes. I too would advocate that part being user space. We don't need multiple framebuffers, let alone multiple keyboard/mouse mapping stuff, in the kernel - just what we have now or thereabouts.
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
On Maw, 2004-05-18 at 23:27, Keith Packard wrote:
No thoughts to supporting multiple sets of VTs, one per physical device then?

That would be nice, but how much of that needs to be kernel side? Not a lot, I suspect.
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
--- Alan Cox [EMAIL PROTECTED] wrote:
On Mer, 2004-05-19 at 01:35, Jon Smirl wrote:
Why does VT switch have to be in the kernel? I can have multiple xterms logged in as different users without kernel support. Why can't VT switching be implemented as if I was switching between multiple fullscreen xterms? I guess I don't see why there is a difference between multiple xterms and VTs. I can use su to set the xterm to any account.

You trust the X server. That's already problematic with SELinux and compartmentalisation. For some things you need multiple X servers for this reason.

If we are going to build a new user space console, let's work with the SELinux people from the beginning to make it trustworthy. User space console could look just like the current VT system and run each session full screen. That stops the screen-scraping attack.

xserver draws each app into its own pbuffer. The individual apps don't have access to the main framebuffer. A properly designed xserver should be free from the screen-scraping attack too. The DRM module will have to make sure you can't read buffers that don't belong to you.

I don't want to go back to the model of running multiple X servers on the same hardware. That path causes all of the problems with multitasking the device drivers onto the same piece of hardware.

= Jon Smirl [EMAIL PROTECTED]
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
On Mer, 2004-05-19 at 20:30, Jon Smirl wrote:
xserver draws each app into its own pbuffer. The individual apps don't have access to the main framebuffer. A properly designed xserver should be free from the screen-scraping attack too. The DRM module will have to make sure you can't read buffers that don't belong to you.

X isn't trustable. You can't prove the needed trust.
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
A rep from the SELinux group was at the Xdev conference. They are starting a project to verify the X server.

--- Alan Cox [EMAIL PROTECTED] wrote:
On Mer, 2004-05-19 at 20:30, Jon Smirl wrote:
xserver draws each app into its own pbuffer. The individual apps don't have access to the main framebuffer. A properly designed xserver should be free from the screen-scraping attack too. The DRM module will have to make sure you can't read buffers that don't belong to you.

X isn't trustable. You can't prove the needed trust.

= Jon Smirl [EMAIL PROTECTED]
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
On Wed, 2004-05-19 at 11:25, Ian Romanick wrote:
Jon Smirl wrote:
--- Alan Cox [EMAIL PROTECTED] wrote:
s/OpenGL/Some drawing library/ - providing it's using the kernel interfaces we don't care what. (eg the bogl console driver is very small, the opengl one would probably be rather larger and nicer)

I wasn't thinking that the kernel interface was standardized. For example DRM has some common IOCTLs and then hundreds of per-chipset ones. There is no standard bitblt or draw char IOCTL.

IMO, this is a long standing problem with the DRM. The main issue is that there's only one version number associated with each DRM module. What's needed is a device independent version and a device dependent version. In a sense, it needs something like an extension mechanism. Right now, some drivers support a vblank wait ioctl and some don't. The ones that do support it, support it the same way. If we could extend the DRM API in a device independent manner, we could solve some of this. When I thought about this in the past, I came up with two ways to do it.

1. Add a device independent version. Device independent code could be written that would test the version number the same way device dependent code does today. The drawback is that in order to advertise version X.N functionality, you also have to advertise version [1.1, 1.N-1] functionality. Some hardware / drivers may not want to / be able to do that.

2. Add an extension query ioctl. Give each piece of functionality (i.e., all the related vblank functions) a unique number. Drivers would make a query like, "Is extension 5 supported?" If that ioctl returns true, then the driver could use that functionality. The disadvantage of this method is that it increases the number of ioctl calls that need to be made. Since the set of supported extensions can be tracked in the DRM with a bit string, the additional code size should be trivial. Thoughts?

We already have a device independent version.
What I put in was that the X server can request a DRM device-independent and device-dependent interface version. The DRM changes its behavior accordingly (busid handling was the first change). Returns EINVAL if unsuccessful, and returns the actual version numbers of the DRM DI/DD interface either way. Request -1.-1 version to just get the version number of an interface without changing anything. This was part of working to allow removal of old DRM code/features, which I've been completely distracted from. -- Eric Anholt [EMAIL PROTECTED] http://people.freebsd.org/~anholt/ [EMAIL PROTECTED] --- This SF.Net email is sponsored by: SourceForge.net Broadband Sign-up now for SourceForge Broadband and get the fastest 6.0/768 connection for only $19.95/mo for the first 3 months! http://ads.osdn.com/?ad_id=2562alloc_id=6184op=click -- ___ Dri-devel mailing list [EMAIL PROTECTED] https://lists.sourceforge.net/lists/listinfo/dri-devel
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
Jon Smirl wrote: --- Holger Waechtler [EMAIL PROTECTED] wrote: Why does VT switch have to be in the kernel? I can have multiple xterms logged in as different users without kernel support. Why can't VT switching be implemented as if I was switching between multiple fullscreen xterms? I guess I don't see why there is a difference between multiple xterms and VTs. I can use su to set the xterm to any account. what if you want to create a fresh, clean console using the SAK key or if your primary terminal is not responsive anymore? You need to be able to create a new terminal without any help from the currently active console and you need to be able to restart the graphics helper library/server/whatever at any time without killing your running applications on other 'VTs'. In this model SAK is implemented in the kernel by the system console. Hitting SAK will always work even if the terminal is not responsive (unless the kernel is dead). SAK will whack all apps on the user console, reload the graphics libs, and get you a fresh shell. In this model it is not possible to restart the graphics lib on one console without restarting all of the virtual consoles since they are all sharing the same process. This seems secure to me. Whacking everything with SAK and reloading the graphics libraries completely stops any attack that may try to load code into the graphics card. I wouldn't be surprised if you could write a pixel shader program that emulates a login screen. It may be possible to build this to suspend the apps instead of whacking them. Then you could reattach them if needed. If not reattached in a few minutes they would die. hmmm, it's not clear to me how this concept would allow real multiple user logins at the same time as is common -- you can run multiple X11 instances on multiple VTs and every new user is able to hit SAK without killing other users' applications.
Holger
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
--- Holger Waechtler [EMAIL PROTECTED] wrote: hmmm, it's not clear to me how this concept would allow real multiple user logins at the same time as is common -- you can run multiple X11 instances on multiple VTs and every new user is able to hit SAK without killing other users' applications. It's not going to allow multiple login prompts on different VTs on the same head. If you have a two-headed card you will be able to use SAK on each one independently. SAK will kill the processes associated with that head, not all of the heads. If you have two cards each with four heads you will be able to run four people each independently running xserver and SAK. What you won't be able to do is VT switch to another login screen. These screens will work like xterms; you switch to one then use su to change identity. When you switch you will already be logged on. This is because the SAK screen assigned ownership of the graphics device to the user id that is logged on. That lets the graphics code run without the need to be a root process. = Jon Smirl [EMAIL PROTECTED]
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
--- Alan Cox [EMAIL PROTECTED] wrote: On Maw, 2004-05-18 at 01:13, Jon Smirl wrote: 1) Boot console. This is implemented via BIOS support. It is used to printk a processor initialization failure or failure to find initramfs. Some embedded systems might have to build one of these into the kernel but not a normal desktop machine. This is the kind of console you use to write grub/lilo. It looks like all non-x86, non-Mac archs already have this. We can't use the BIOS that late. Currently the setup we have is that the normal console kicks in after PCI setup, and might be vga text mode, frame buffer or whatever. This is your system console and probably where predefined modes are used for non-vga devices, no acceleration and so on. We also have an early_vga PC console hack, and firmware console drivers that can kick in earlier (normally for debug) depending upon the platform. In the PC case the 16-bit BIOS console services go away too early but EFI might provide help here if it's ever adopted. That's analogous to your BIOS console I guess? Boot console is a platform-specific problem. Its only purpose is to get out an error message saying that the system console can't be found or some other similar type of error. Agree with this level. The kernel provides the tty layer (Unix 98 ttys) and might need some userspace apps tweaking a little too - no big problems. I was thinking ptmx/pty, or do you want to use tty? With ptmx/pty you can get rid of the tty devices. When User console is up it is using the full OpenGL driver. xserver would use the same OpenGL driver. The User console app and xserver could even be the same program. If User console/xserver dies, you can always use SAK to relaunch if it doesn't happen automatically. s/OpenGL/Some drawing library/ - providing it's using the kernel interfaces we don't care what. (e.g. the bogl console driver is very small, the OpenGL one would probably be rather larger and nicer) I wasn't thinking that the kernel interface was standardized.
For example DRM has some common IOCTLs and then hundreds of per-chipset ones. There is no standard bitblt or draw-char IOCTL. I definitely don't want to try sharing the device driver on VT switch; that will take us right back to where we are. Each device should have a single client library driving it at a time. But this works if the program implementing VT switch is the owner of the device. So you don't have any problem with pulling VT support out of the kernel? = Jon Smirl [EMAIL PROTECTED]
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
On Maw, 2004-05-18 at 21:14, Jon Smirl wrote: I was thinking ptmx/pty, or do you want to use tty? With ptmx/pty you can get rid of the tty devices. You need the tty devices for the boot/kernel console and the code specific to them is tiny. For the usermode one it's clearly ptmx/pty. I wasn't thinking that the kernel interface was standardized. For example DRM has some common IOCTLs and then hundreds of per-chipset ones. There is no standard bitblt or draw-char IOCTL. The DRM layer provides the needed basic kernel services, be they standardised or not. The question of what library is used really doesn't matter. Yes someone would have to write a lot of code for many chips with DRI and chip-specific code - but that's up to them. I definitely don't want to try sharing the device driver on VT switch; that will take us right back to where we are. Each device should have a single client library driving it at a time. But this works if the program implementing VT switch is the owner of the device. So you don't have any problem with pulling VT support out of the kernel? You need the code to handle video context switches. You also need VTs because you have multiple security contexts on the PC console and good reason to keep that when using SELinux. VT switch is easy however. DRI+X already handles that, and we never have two people using the VT at once. It's one device, multiple handles, only one currently active - like many other drivers.
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
Around 19 o'clock on May 18, Alan Cox wrote: VT switch is easy however. DRI+X already handles that, and we never have two people using the VT at once. It's one device, multiple handles, only one currently active - like many other drivers. No thoughts of supporting multiple sets of VTs, one per physical device then? -keith
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
On Iau, 2004-05-13 at 01:58, Jon Smirl wrote: --- Alan Cox [EMAIL PROTECTED] wrote: argument for _good_ console support becomes that after boot you run a graphical user space console app built with OpenGL, antialiased true When I proposed this a couple of months back both you and Linus called me insane. I need to go find those posts. At the time you seemed to want to get rid of the actual in-kernel basic console - that is why. There is still a master kernel-based console that handles boot, printk, oops and kdbg. Each head will use the kernel-based console to implement SAK. Ctrl-Alt-Del gets you SAK, SysRq gets you the kernel console. No logins on the kernel console; it is write only. No, it needs to be read/write. You want it for embedded setups, for debugging and for all those inconvenient rescue-the-computer situations. (and for kernel debuggers..)
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
--- Alan Cox [EMAIL PROTECTED] wrote: On Iau, 2004-05-13 at 01:58, Jon Smirl wrote: --- Alan Cox [EMAIL PROTECTED] wrote: argument for _good_ console support becomes that after boot you run a graphical user space console app built with OpenGL, antialiased true When I proposed this a couple of months back both you and Linus called me insane. I need to go find those posts. At the time you seemed to want to get rid of the actual in-kernel basic console - that is why. There is still a master kernel-based console that handles boot, printk, oops and kdbg. Each head will use the kernel-based console to implement SAK. Ctrl-Alt-Del gets you SAK, SysRq gets you the kernel console. No logins on the kernel console; it is write only. No, it needs to be read/write. You want it for embedded setups, for debugging and for all those inconvenient rescue-the-computer situations. (and for kernel debuggers..) I was too strict in saying write only; I listed kdbg as a user and it obviously needs read/write. Same for a system that needs fsck. If you booted in single user mode it would probably come up on this console too. I've been reading through the code for some of the other archs and this is the model I'm currently thinking about: 1) Boot console. This is implemented via BIOS support. It is used to printk a processor initialization failure or failure to find initramfs. Some embedded systems might have to build one of these into the kernel but not a normal desktop machine. This is the kind of console you use to write grub/lilo. It looks like all non-x86, non-Mac archs already have this. 2) System console. As soon as initramfs is up and early user space is active this console initializes. When it starts, Boot console disappears. This console is used for: SAK, kdbg, OOPs, printk, single, fsck, etc. The user space code is used to probe monitor DDC. The kernel driver for this is based on a DRM/fbdev merge. You can use SysRq to hotkey to this console at any time.
This console is implemented in the kernel. 3) User console. A normal login gets one of these. It will probably be generated by a user space app implemented on OpenGL with fancy fonts, unicode, etc. The user is assigned ownership of their video device which allows direct rendering. Multiple copies of this app can be running, each providing an independent user console on each video device. Kernel VT and terminal emulation support is pushed into this user space app. When User console is up it is using the full OpenGL driver. xserver would use the same OpenGL driver. The User console app and xserver could even be the same program. If User console/xserver dies, you can always use SAK to relaunch if it doesn't happen automatically. = Jon Smirl [EMAIL PROTECTED]
Mode manager / Framebuffer management (WAS: Re: [Dri-devel] Memory management of AGP and VRAM)
Mike Mestnik wrote: Let me start off by saying I think you are on the right track and all of your ideas look good. --- David Bronaugh [EMAIL PROTECTED] wrote: Mike Mestnik wrote: This is very good. - To accommodate mergedfb the number of FBs should be allowed to be 0. How does mergedfb work internally? I don't know. However we need it to? I think if we cripple X the current mergedfb will also have to be crippled. I think this design allows for mergedfb to work however we need it to by encapsulating all that (setup) logic within the Extended DRM module. Do you disagree? Alternatively to this, maybe the best way to do this would be to specify a double-width mode (eg 2048x768) and an extra feature parameter of MERGEDFB or some such -- that might work. However, I can't claim to understand mergedfb (as in, how it's implemented) yet, so this is probably a naive solution. I see it more as just a way of pointing a viewport to a framebuffer, like a screen(FB) swap. What I see is that a FB gets allocated and then modes get set, with its viewport looking into this FB. This can all be part of the modesetting code, then the FB and the viewport should be returned. That way the FB can be deallocated, after a successful FB change. There will be rare cases where the card can't handle both FBs; then the FB allocate code might need to handle this NEEDED deallocate/change in order to allocate the new/replacing FB. Hmm, I don't really see any disagreement here. In fact, I think you're filling in useful missing details in my understanding. The framebuffer alloc code will be part of the proposed Extended DRM infrastructure -- which I would rather not be part of (beyond what I want to do). Hopefully modes can be set without FBs; this cuts down on the FB {a,de}lloc code. However, in order to cut down on card-specific code, it may be best for all cards to deal with worst-case FB alloc, if this is to be a feature. All allocation of framebuffers will happen within the kernel.
None will be requested by the mode manager. - Sharing of FBs should be allowed, for heads on the same card. Same deal, except instead of a feature of MERGEDFB, the feature should be CLONE. I don't like the idea of having things so static. Attaching and detaching modes(viewports) from FBs should be done via a full API. If this is at all possible? I don't know whether it is or not. I kind of like this kind of simplicity, though -- I think it allows for simply adding numbers if needed, which would be - There is no way to ?change?(read as specify) the size of a FB. If you can specify the resolution, you can specify the size of the framebuffer. What else did you have in mind? Size of viewport != FB size, though I think you got that by the end of my msg. Yeah, I understood that. - Allocating the second/... FB may be difficult, My comments above and below, as two different cases. Shouldn't be a problem. - Have mem free as well as mem total. This helps with multi-tasking, i.e. two apps sharing the same VT(context). For multi-headed cards they will have to share FB resources. - Returning hardware capabilities (like in a termcap type way), not just mem sizes. I.e. zbuffer type (how to know its size). Allocating a FB on some cards may not be as simple as L*H*D. As I'm not an expert on hardware I don't know what snags you might hit on, that are not version- but card-dependent. Hmm... I'd love for you to elaborate here, though I -think- I know what you're getting at. I wish I could but I really don't know; it's just something I think the design might need. I used the source and saw into the future. OK; elaborate? Virtual fb vs Actual FB. IMHO Actual FB is the monitor's mode and not the allocated size of the FB (Virtual fb). That's the idea. This is what the mode manager receives: struct ms_mode { __u32 xres; __u32 yres; __u32 bpp; __u32 refresh; }; No FB? This may be positive. I -did- forget to specify the PCI dev, and the CRTC number (screen number, whatever).
That should be added to the struct -- maybe something like this: struct ms_mode { char *pci_dev; __u32 crtc_no; __u32 xres; __u32 yres; __u32 bpp; __u32 refresh; }; I can't think of any cases where you'd actually -need- to call the video BIOS in order to change the framebuffer pointer or offsets (if this were true, GL would be a nightmare on those cards). Please. I need more feedback here, people. SPEAK! Destroy my ideas! Nitpick until this proposal looks like Swiss cheese! The more feedback there is, the better it will be. David Bronaugh
Re: [Dri-devel] Memory management of AGP and VRAM
Mike Mestnik wrote: This is very good. - To accommodate mergedfb the number of FBs should be allowed to be 0. How does mergedfb work internally? I don't know. Alternatively to this, maybe the best way to do this would be to specify a double-width mode (eg 2048x768) and an extra feature parameter of MERGEDFB or some such -- that might work. However, I can't claim to understand mergedfb (as in, how it's implemented) yet, so this is probably a naive solution. - Sharing of FBs should be allowed, for heads on the same card. Same deal, except instead of a feature of MERGEDFB, the feature should be CLONE. - There is no way to ?change?(read as specify) the size of a FB. If you can specify the resolution, you can specify the size of the framebuffer. What else did you have in mind? - Allocating the second/... FB may be difficult, - Have mem free as well as mem total. - Returning hardware capabilities (like in a termcap type way), not just mem sizes. I.e. zbuffer type (how to know its size). Hmm... I'd love for you to elaborate here, though I -think- I know what you're getting at. The more I think about this, the more sense it makes to have the apps talking to the kernel and requesting things via ioctl(s), the kernel communicating with userspace to do mode management, and the kernel communicating back to the app. Having the mode manager be the thing communicated with is getting to be a huge mess. That being said, having the kernel calling userspace via whatever method can be ugly. Here's the suggestion I got from hpa (which is roughly what I was thinking of; but he filled in some important bits): - At startup, a pipe is opened which the mode manager can read from and the kernel can write to - When the kernel needs to set a mode, it locks the dev, feeds the pipe, and waits a predefined period of time (0.5s?)
- Once the kernel's sure it can set the mode, it feeds the pipe to the mode manager a serialized version of the mode params
- The mode manager sets the mode, then goes back to waiting on the pipe
- The kernel returns from the ioctl 0.5s (or however much time) after it called the mode manager
Bad things about this approach:
- Can't tell if setting the mode succeeded (see below about fail-over)
- There is an assumption made about how long it will take to set modes -- would probably have to run with realtime priority to ensure setting the mode happened quickly enough (it already runs as root; why not)
Dubious things:
- Have to have mode knowledge gleaned from DDC or whatnot in kernel - Alternative: Have also do
- There might be problems if the entire device needed to be locked while the mode was being set - My question: Is this necessary?
Good things:
- Don't have to have the kernel know about MergedFB, clone mode, etc because it's *simply* setting mode timings -- nothing more
- Still moves that important chunk of code out of the kernel
- Keeps locking entirely in kernel (if it is needed)
Here's the call chain for the mode manager (when it starts up):
- Sets up pipe to kernel
- Tries each DRM device, and finds out what's at each one
- Opens a config file, loads in user-specified modelines (necessary?)
- Queries each of them for DDC data or whatnot (using the specific driver) and stores it associated with that device
- Calls an ioctl on each live DRM device and informs it of the available modes (simply xres, yres, refresh; nothing more)
- Could fire up a thread to watch for i2c data or whatever
- Waits on the pipe
Here's my current call chain for setting a mode:
- App requests mode from Extended DRM via ioctl (same sort of format, except packed into a struct instead of a string)
- Extended DRM checks if the requested mode is available
- Extended DRM locks the device
- Extended DRM checks if there is enough memory to set the specified mode
- If not enough memory, Extended DRM returns -ENOMEM or something
- Otherwise, continue
- Extended DRM frees previous framebuffer (if applicable), allocates new framebuffer(s) (including Z buffer)
- This could be where the device could be unlocked, all depending - If the Extended DRM ioctl could have a safe way to set registers, it would be true.
- Extended DRM cooks up a serialized version of the mode string
- Extended DRM feeds pipe to userspace application the serialized mode string
- Extended DRM waits 0.5s or whatever timeout is decided upon
- When this is done (if we don't have a safe way to unlock registers and setting modes while other stuff is going on is not safe) the Extended DRM unlocks the device
- Mode manager receives serialized mode, parses it (or whatever; could simply shove it into a struct; it's trusted code)
- Mode manager gets best-match mode (you want something usable at least if bad things happen; screen corruption's ugly)
- Mode manager sets appropriate registers
- Sets timing registers
- Does _NOT_ set base address registers or anything to do with tiling modes, or
Re: [Dri-devel] Memory management of AGP and VRAM
Let me start off by saying I think you are on the right track and all of your ideas look good. --- David Bronaugh [EMAIL PROTECTED] wrote: Mike Mestnik wrote: This is very good. - To accommodate mergedfb the number of FBs should be allowed to be 0. How does mergedfb work internally? I don't know. However we need it to? I think if we cripple X the current mergedfb will also have to be crippled. Alternatively to this, maybe the best way to do this would be to specify a double-width mode (eg 2048x768) and an extra feature parameter of MERGEDFB or some such -- that might work. However, I can't claim to understand mergedfb (as in, how it's implemented) yet, so this is probably a naive solution. I see it more as just a way of pointing a viewport to a framebuffer, like a screen(FB) swap. What I see is that a FB gets allocated and then modes get set, with its viewport looking into this FB. This can all be part of the modesetting code, then the FB and the viewport should be returned. That way the FB can be deallocated, after a successful FB change. There will be rare cases where the card can't handle both FBs; then the FB allocate code might need to handle this NEEDED deallocate/change in order to allocate the new/replacing FB. Hopefully modes can be set without FBs; this cuts down on the FB {a,de}lloc code. However, in order to cut down on card-specific code, it may be best for all cards to deal with worst-case FB alloc, if this is to be a feature. - Sharing of FBs should be allowed, for heads on the same card. Same deal, except instead of a feature of MERGEDFB, the feature should be CLONE. I don't like the idea of having things so static. Attaching and detaching modes(viewports) from FBs should be done via a full API. If this is at all possible? - There is no way to ?change?(read as specify) the size of a FB. If you can specify the resolution, you can specify the size of the framebuffer. What else did you have in mind?
Size of viewport != FB size, though I think you got that by the end of my msg. - Allocating the second/... FB may be difficult, My comments above and below, as two different cases. - Have mem free as well as mem total. This helps with multi-tasking, i.e. two apps sharing the same VT(context). For multi-headed cards they will have to share FB resources. - Returning hardware capabilities (like in a termcap type way), not just mem sizes. I.e. zbuffer type (how to know its size). Allocating a FB on some cards may not be as simple as L*H*D. As I'm not an expert on hardware I don't know what snags you might hit on, that are not version- but card-dependent. Hmm... I'd love for you to elaborate here, though I -think- I know what you're getting at. I wish I could but I really don't know; it's just something I think the design might need. I used the source and saw into the future. This is sorta what I had in mind for modes. The first part is a blatant rip of linux/fb.h:

struct mode {
    __u32 xres;         /* Actual FB width */
    __u32 yres;         /* Actual FB height */
    __u32 xres_virtual; /* Virtual fb width */
    __u32 yres_virtual; /* Virtual fb height */
    __u32 xoffset;      /* Offset of actual from top-left corner of virtual */
    __u32 yoffset;
    __u32 bpp;          /* Bits per pixel */
    __u32 refresh;      /* Preferred refresh rate (0 for no preference) */
    __u32 fb_mode;      /* Example: various tiled modes versus linear; defined as integers (LINEAR, TILED4K, etc) */
    __u32 feature;      /* Numeric feature code (eg MERGEDFB, CLONE) */
};

Virtual fb vs Actual FB. IMHO Actual FB is the monitor's mode and not the allocated size of the FB (Virtual fb). This is what the mode manager receives: struct ms_mode { __u32 xres; __u32 yres; __u32 bpp; __u32 refresh; }; No FB? This may be positive.
Re: [Dri-devel] Memory management of AGP and VRAM
David Bronaugh wrote: Anyhow... to recap the ideas thus far: I'm going to elaborate considerably at this point, probably dragging in lots of stuff no one wants to see in here, etc -- all to try and figure out what's wanted/needed. The following is about the mode setter itself. One weird thought I had was that in the end, this mode setting system would probably have to handle monitor hotplug events. I don't know how useful this feature is, etc. I'm not sure exactly how this would work, or -if- it works, or if it belongs in this. But -- there are some interesting consequences of it: - Monitors could be unplugged at any time, and swapped -- and things like refresh rates could be changed if needed (an interesting possibility). I'm not sure, but I -think- this might be good support infrastructure for various power management suspend options. - This would allow for monitor swaps during operating system operation to be rather painless, which I can state for certain is not true on some other platforms *cough* Winders *cough* Putting monitor parameter read code (DDC or whatever it ends up being) into this -does- make sense in my opinion (hotplug or none). Feedback on why this is a good/bad thing to do would be nice, since I really don't know. Another weird idea I had involved Linux's hotplug system -- there are some interesting possibilities there. I'm not sure where it'd go though. But -- the end result of it is that there should be more than one way to get output out of this code when a mode change is requested, or other events happen (like monitors being plugged in). Possibly have a message bus to throw stuff like this onto (D-Bus anyone?). This might be useful for a future X server, for example, to be able to change configuration on the fly as the hardware configuration changes. Random question: How close a correspondence is there currently between userspace (X server) and kernelspace (fbdev) drivers? 
- Userspace application (mode setter) which holds all mode setting code, which waits on a FIFO for input
- Informs kernel of mode changes (and other things? what else?) via ioctl
- Informs other listening applications of mode changes, monitor connect events, etc. (D-Bus?)
- (What about hotplug video?) - How should the kernel inform this of a video card addition? Is this meaningful/relevant?
- Should call ioctl to get list of available Extended DRM devices and associated drivers (don't want to duplicate code)
- It seems like there would be exact pairing between kernel-side Extended DRM devs and mode setting drivers
- This sounds to me like a very useful thing for the Extended DRM being proposed (could be used in X servers, maybe; see random question above)
- Small userspace library to format mode requests to be sent to the mode setter via FIFO - Don't reinvent the wheel
- Format of messages might be something like device identifier, resx, resy, colour depth, refresh (optional), extra_params (optional)
- Kernel Extended DRM driver (for lack of a better term) handles ioctl informing kernel of mode changes
Anyhow, there's the weird-thoughts-and-fleshing-out for the day. David Bronaugh
Re: [Dri-devel] Memory management of AGP and VRAM
--- David Bronaugh [EMAIL PROTECTED] wrote: - Format of messages might be something like device identifier, resx, resy, colour depth, refresh (optional), extra_params (optional)

Did you talk at all about memory management? For instance, when setting a mode, is it necessary to have a frame buffer, or at least enough memory for that mode? The reason I ask is that in the VGA days one could have 6 FBs. I know for most things 2 will be sufficient, but 3 is also good, though it's true you probably rarely need a full-screen FB.

- Kernel Extended DRM driver (for lack of a better term) handles ioctl informing kernel of mode changes Anyhow, there's the weird-thoughts-and-fleshing-out for the day. David Bronaugh
Re: [Dri-devel] Memory management of AGP and VRAM
Egbert Eich wrote: David Bronaugh writes: Egbert Eich wrote: I don't think you want to call user mode code from inside the kernel. The kernel could take a passive role and use the mode that a userland program tells it is set. If all the kernel needs is a linear framebuffer of size x * y and depth d there is no problem. Things get a little more complicated if the kernel wants to set the fb start address for scrolling, use acceleration for faster drawing, or the framebuffer is not really fully linear. I was talking about the userspace code -only- doing mode setting. It would take the parameters passed to it via a FIFO or whatever, in whatever format, and set that mode on the specified device. Nothing more. It wouldn't have state (if at all possible). One thing I'm not at all sure about is how to have bidirectional communication between kernel and userspace. The idea I had was for the userspace mode-setting program to open a block device-file (like /dev/drmctl0 (just making up names here)) and wait for input in the form of a string (there's no reason to go with the formats I've suggested here; they're just for the sake of argument). On receipt of that input, it would either set the requested mode and tell the kernel exactly what mode it set, or not set the requested mode and tell the kernel it didn't set the mode (both via the same device-file; maybe an ioctl?). That is pretty much like how power management is handled today. A user daemon sits there and waits for events. It will act accordingly and return a status to the kernel. But why do you want to do mode setting from inside the kernel anyhow? If we can make the kernel do its output on whatever video mode it gets, we should be fine. This way the user app sets the video mode and the kernel can still print emergency messages (well, in theory - as writing to the fb will definitely collide with active accelerator hardware).
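The stateless daemon Egbert compares to power-management handling reduces to a single decision step: take a parsed request, ask a driver-supplied check whether it can be honored, and report back either the exact mode that was set or a refusal. A minimal sketch in C; the type names and the `basic_valid` predicate are invented for illustration, not part of any proposed interface:

```c
/* Sketch of the stateless mode-setter's core step.  All names here
 * are illustrative; real code would talk to a device file and ioctl. */
struct mode { int resx, resy, depth; };

enum mode_status { MODE_SET, MODE_REJECTED };

/* Driver-supplied predicate: can this mode be programmed right now? */
typedef int (*mode_valid_fn)(const struct mode *m);

enum mode_status handle_request(const struct mode *req, mode_valid_fn valid,
                                struct mode *actual)
{
    if (!valid(req))
        return MODE_REJECTED;   /* tell the kernel no mode was set */
    *actual = *req;             /* real code would program the CRTC here */
    return MODE_SET;            /* report back the exact mode that was set */
}

/* Example predicate: accept anything up to 1600x1200 at common depths. */
int basic_valid(const struct mode *m)
{
    return m->resx <= 1600 && m->resy <= 1200 &&
           (m->depth == 8 || m->depth == 16 ||
            m->depth == 24 || m->depth == 32);
}
```

The important property is that the daemon holds no state of its own: everything it reports to the kernel is derived from the request plus the driver's answer.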
My only argument for having any of the mode setting kernel-side, really, was for bootstrap/initial mode setting. Other than that, I don't care. And the initial mode setting for boot messages needs to happen way earlier - possibly in a bootstrap manager. As I see it, this'd basically get around all the license problems with the mode setting code (it could still be GPL, yet since it isn't in a position to taint, that's OK) and it would result in -one- location, guaranteed, for mode setting code. I don't know whether the one location thing'd be a good idea, but it sounds like one to me. Here my point is that the world is not Linux only (although I use Linux myself) and it would make sense to make this code portable across OSes. In this case GPL may be a problem - especially if the code needs to go into the kernel. The userspace mode-setting program (what I'm talking about here), which would be doing any more tricky mode setting, would have -no- hooks into the kernel. None. Thus, even if it were GPL, it wouldn't be a problem for it to be running on a *BSD. It'd ensure that the mode setting code was -entirely- separate from the X server, any other libraries, etc. It'd also allow the driver writer, at their discretion, to put the code in the kernel (in which case the userspace code would never be used) or in userspace (in which case the kernel would simply request that the userspace code do its bidding). You mean code that could be put either into the kernel or live in userland - depending on the requirements of the underlying OS? Or the requirements of the hardware, or the decisions of the driver creator -- whichever. Of course, the kernel portion would potentially still have license problems... it's not a total solution to that. But -- it does get as much code as you want into userspace, without enforcing policies. Right. If this proposal were to follow your idea, and encompass all mode setting code, then the only code needed -should- be trivial. 
In fact (interesting idea) there's no reason a userspace application couldn't feed a mode change request to the daemon via a FIFO, and the mode setter merely inform the kernel (sounds a bit more sane, even). That would almost totally decouple mode setting from the kernel -- which, by the sounds of it, would be a good thing. If an OS wants to do things differently, it shouldn't be a big deal. Right, however there are people who like to have a more fine grained control over things than just accepting what the driver considers the best-match. Right... what this says to me is that there have to be more possible parameters in this string. And some may even be driver dependent. Absolutely.

Anyhow... to recap the ideas thus far:
- Userspace application (mode setter) which holds all mode setting code which waits on a FIFO for input
- Informs kernel of mode changes via ioctl
- Small userspace library to format mode requests to be sent to
Re: [Dri-devel] Memory management of AGP and VRAM
Mike Mestnik wrote: --- David Bronaugh [EMAIL PROTECTED] wrote: - Format of messages might be something like device identifier, resx, resy, colour depth, refresh (optional), extra_params (optional) Did you talk at all about memory management? For instance, when setting a mode, is it necessary to have a frame buffer, or at least enough memory for that mode? The reason I ask is that in the VGA days one could have 6 FBs. I know for most things 2 will be sufficient, but 3 is also good, though it's true you probably rarely need a full-screen FB.

I didn't talk at all about memory management -- I guess I should, because obviously that ground's not covered. First, you'd have to add a parameter specifying how many framebuffers you wanted at that res -- add that after 'depth'. Next, you'd use a kernel-side ioctl to query whether there is enough memory to set a mode -- almost identical to the ioctl to actually set the mode.

So the sequence would go:
- Userspace program requests mode by feeding string into FIFO of mode setter (generated using library)
- Mode setter makes sure that the mode is possible given the current monitor setup and CRTC
- Mode setter locks device (so that nothing funny can happen)
- Mode setter asks kernel if mode is possible
- If mode is not possible, mode setter releases lock and feeds "couldn't do this" type response to program requesting mode
- If mode is possible, continue
- Mode setter tries to set mode
- If somehow this fails, mode setter releases lock and feeds "couldn't do this" type response to program requesting mode (I don't know how this would ever happen; if it can't, this code can be simplified.)
- If this succeeds, continue
- Mode setter informs kernel of mode change, new parameters
- Mode setter releases lock

If the actual act of setting a mode can never fail, then this can be simplified -- we don't need an ioctl to ask if a mode is possible (from the kernel's perspective; aka, is there enough memory), we simply need a way to find out if setting that mode in kernel was successful. Essentially, this would be the set of events then:
- Userspace program requests mode by feeding string into FIFO of mode setter (generated using library)
- Mode setter makes sure that the mode is possible given the current monitor setup and CRTC
- Mode setter locks device (so that nothing funny can happen)
- Mode setter informs kernel of mode change, new parameters
- If mode is not possible, mode setter releases lock and feeds "couldn't do this" type response to program requesting mode
- If mode is possible, continue
- Mode setter sets mode
- Mode setter releases lock

When the mode setter informs the kernel of a mode change, it can do whatever it wants, but the logical action is for it to allocate framebuffer memory. This is one possible route, where the kernel is minimally involved. I don't like it all that much, for the following reasons:
- It's illogical to open a FIFO which ends in some userspace program to set modes (shouldn't you call an ioctl to the driver to do this?)

The alternative is to have the kernel somehow call this code (can it feed a FIFO?). I don't know if this is a better approach -- Egbert Eich doesn't seem to think so. Feedback more than welcome.

David Bronaugh
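The simplified sequence above can be written down as straight-line logic, which makes the lock discipline (every exit path releases the lock) easy to check. A sketch in C, where the two boolean inputs stand in for the monitor check and the kernel's memory check; all names are illustrative:

```c
/* The simplified mode-set sequence from the mail.  The inputs stand in
 * for the monitor check and the kernel's memory check; real code would
 * query the CRTC and call an ioctl. */
enum seq_result { SEQ_OK, SEQ_BAD_MODE, SEQ_NO_MEMORY };

struct seq_state { int locked; };     /* tracks the device lock */

enum seq_result run_modeset_sequence(struct seq_state *st,
                                     int monitor_ok, int kernel_has_memory)
{
    if (!monitor_ok)
        return SEQ_BAD_MODE;          /* "couldn't do this" response */
    st->locked = 1;                   /* lock device: nothing funny happens */
    if (!kernel_has_memory) {
        st->locked = 0;               /* release lock before refusing */
        return SEQ_NO_MEMORY;
    }
    /* ... set mode, inform kernel of the new parameters ... */
    st->locked = 0;                   /* release lock */
    return SEQ_OK;
}
```

Note that the monitor/CRTC check happens before the lock is taken, matching the ordering in the list: only operations that can race with other clients are performed under the lock.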
Re: [Dri-devel] Memory management of AGP and VRAM
This is very good.
- To accommodate mergedfb, the number of FBs should be allowed to be 0.
- Sharing of FBs should be allowed, for heads on the same card.
- There is no way to change (read as: specify) the size of a FB.
- Allocating the second/... FB may be difficult.
- Have mem free as well as mem total.
- Returning hardware capabilities (like in a termcap type way), not just mem sizes. I.e. zbuffer type (and how to know its size).

--- David Bronaugh [EMAIL PROTECTED] wrote: I didn't talk at all about memory management -- I guess I should, because obviously that ground's not covered. First, you'd have to add a parameter specifying how many framebuffers you wanted at that res -- add that after 'depth'. Next, you'd use a kernel-side ioctl to query whether there is enough memory to set a mode -- almost identical to the ioctl to actually set the mode.
David Bronaugh
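Mike's points above (mem free as well as mem total, termcap-style capability reporting) suggest a small query structure plus a fit check. A sketch in C; the struct fields and the z-buffer encoding are invented for illustration, not a proposed ABI:

```c
/* Termcap-style capability report suggested in the thread.  Field
 * names and the zbuffer encoding are illustrative only. */
struct gfx_caps {
    unsigned long mem_total;   /* bytes of video memory on the card */
    unsigned long mem_free;    /* bytes currently unallocated */
    int max_heads;             /* how many heads can scan out at once */
    int zbuffer_bits;          /* 0 = no hw z-buffer, else bits/sample */
};

/* Would nbufs framebuffers of the given mode fit in free memory?
 * This is the check behind the "is this mode possible" ioctl. */
int mode_fits(const struct gfx_caps *caps, int resx, int resy,
              int bytes_per_pixel, int nbufs)
{
    unsigned long need =
        (unsigned long)resx * resy * bytes_per_pixel * nbufs;
    return need <= caps->mem_free;
}
```

Allowing `nbufs == 0` falls out naturally here, which covers the mergedfb case where a head shares another head's framebuffer rather than allocating its own.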
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
Jon Smirl wrote: --- Alan Cox [EMAIL PROTECTED] wrote: argument for _good_ console support becomes that after boot you run a graphical user space console app built with OpenGL, antialiased true When I proposed this a couple of months back both you and Linus called me insane. I need to go find those posts. don't worry, sometimes it takes a while for the right ideas to evolve. :) Remember that a while ago everybody thinking about anything like a graphics interface in kernel space was disparaged as a fool... Holger
RE: [Dri-devel] Memory management of AGP and VRAM
Sottek, Matthew J writes: Boy, I haven't really been following this too closely, but surely this sort of thing can be resolved with an extension mechanism or api versioning? An extension mechanism is fine for eventually extending the basic functionality, but a driver writer should not have to wait for consensus to add required features to their driver.

Right. Therefore I would call for an extendable API with driver private parts. There is a basic API to handle a minimal set required to make a dumb fb work. Something that can be supported by almost any chip there is. Then augment this with an 'optional' part which handles stuff that is beyond the basics but well understood and supported by more than one driver. Above this put a driver private API. Stuff in there may over time be merged into the optional part when things are well understood and we can find a common denominator.

Currently I don't think we could get very much consensus on anything other than a very basic API so saying that advanced features can be defined extensions is perhaps too optimistic. If the advanced features can just remain device dependent extensions, at least in the beginning, then we can probably make some actual progress in getting to a design. API versioning is a must no matter what.

Right.

Egbert.

--- This SF.Net email is sponsored by Sleepycat Software Learn developer strategies Cisco, Motorola, Ericsson Lucent use to deliver higher performing products faster, at low TCO. http://www.sleepycat.com/telcomwpreg.php?From=osdnemail3 -- ___ Dri-devel mailing list [EMAIL PROTECTED] https://lists.sourceforge.net/lists/listinfo/dri-devel
Re: [Dri-devel] Memory management of AGP and VRAM
David Bronaugh writes: Egbert Eich wrote: I don't think you want to call user mode code from inside the kernel. The kernel could take a passive role and use the mode that a userland program tells it is set. If all the kernel needs is a linear framebuffer of size x * y and depth d there is no problem. Things get a little more complicated if the kernel wants to set the fb start address for scrolling, use acceleration for faster drawing, or the framebuffer is not really fully linear. I was talking about the userspace code -only- doing mode setting. It would take the parameters passed to it via a FIFO or whatever, in whatever format, and set that mode on the specified device. Nothing more. It wouldn't have state (if at all possible). One thing I'm not at all sure about is how to have bidirectional communication between kernel and userspace. The idea I had was for the userspace mode-setting program to open a block device-file (like /dev/drmctl0 (just making up names here)) and wait for input in the form of a string (there's no reason to go with the formats I've suggested here; they're just for the sake of argument). On receipt of that input, it would either set the requested mode and tell the kernel exactly what mode it set, or not set the requested mode and tell the kernel it didn't set the mode (both via the same device-file; maybe an ioctl?). That is pretty much like how power management is handled today. A user daemon sits there and waits for events. It will act accordingly and return a status to the kernel. But why do you want to do mode setting from inside the kernel anyhow? If we can make the kernel do its output on whatever video mode it gets, we should be fine. This way the user app sets the video mode and the kernel can still print emergency messages (well, in theory - as writing to the fb will definitely collide with active accelerator hardware). And the initial mode setting for boot messages needs to happen way earlier - possibly in a bootstrap manager.
As I see it, this'd basically get around all the license problems with the mode setting code (it could still be GPL, yet since it isn't in a position to taint, that's OK) and it would result in -one- location, guaranteed, for mode setting code. I don't know whether the one location thing'd be a good idea, but it sounds like one to me. Here my point is that the world is not Linux only (although I use Linux myself) and it would make sense to make this code portable across OSes. In this case GPL may be a problem - especially if the code needs to go into the kernel. The userspace mode-setting program (what I'm talking about here), which would be doing any more tricky mode setting, would have -no- hooks into the kernel. None. Thus, even if it were GPL, it wouldn't be a problem for it to be running on a *BSD. It'd ensure that the mode setting code was -entirely- separate from the X server, any other libraries, etc. It'd also allow the driver writer, at their discretion, to put the code in the kernel (in which case the userspace code would never be used) or in userspace (in which case the kernel would simply request that the userspace code do its bidding). You mean code that could be put either into the kernel or live in userland - depending on the requirements of the underlying OS? Or the requirements of the hardware, or the decisions of the driver creator -- whichever. Of course, the kernel portion would potentially still have license problems... it's not a total solution to that. But -- it does get as much code as you want into userspace, without enforcing policies. Right. Right, however there are people who like to have a more fine grained control over things than just accepting what the driver considers the best-match. Right... what this says to me is that there have to be more possible parameters in this string. And some may even be driver dependent. Egbert. 
Re: [Dri-devel] Memory management of AGP and VRAM
On Maw, 2004-05-11 at 19:48, Egbert Eich wrote: For the text console to be usable you possibly want to be able to 1. move the fb start address for scrolling 2. do some basic 2D accel for fast text drawing. Also your framebuffer may not be completely linear, in which case bank switching is needed. It's actually not clear you need 2D accel. It's *nice* but not essential. It's becoming more and more the case that console mode is the debug/boot interface for a device. (I'm not talking about VGA banking, but it seems like modern HW may not be able to map in all video memory at the same time). We hit this with the vesa driver - we just use the first 8Mb or so nowadays.
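The bank-switched case Alan describes (only part of video memory CPU-visible at once) is the classic reason direct framebuffer pokes from device-independent code are fragile. A sketch of how a driver might hide the window behind a write helper; the 8 MB window size, the struct, and the `fake_select_bank` stand-in are all made up for illustration:

```c
/* Hypothetical bank-switched framebuffer: only WINDOW_SIZE bytes of
 * video memory are CPU-visible at a time. */
#define WINDOW_SIZE (8UL * 1024 * 1024)

struct banked_fb {
    unsigned char *window;     /* the currently mapped aperture */
    unsigned long cur_bank;
    void (*select_bank)(struct banked_fb *fb, unsigned long bank);
};

/* All writes go through this helper, so the bank register is always
 * programmed before touching memory outside the current window. */
void fb_write8(struct banked_fb *fb, unsigned long off, unsigned char v)
{
    unsigned long bank = off / WINDOW_SIZE;
    if (bank != fb->cur_bank) {
        fb->select_bank(fb, bank);   /* hardware-specific register poke */
        fb->cur_bank = bank;
    }
    fb->window[off % WINDOW_SIZE] = v;
}

/* Illustration-only bank selector: a real driver pokes a chip register;
 * here we just remember which bank was asked for. */
static unsigned long last_bank_selected;
static void fake_select_bank(struct banked_fb *fb, unsigned long bank)
{
    (void)fb;
    last_bank_selected = bank;
}
```

Capping usage at the first 8 MB, as the vesa driver does, is the degenerate case of this: the driver simply never asks for a bank other than 0.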
Re: [Dri-devel] Memory management of AGP and VRAM
On Maw, 2004-05-11 at 19:48, Egbert Eich wrote: For the text console to be usable you possibly want to be able to 1. move the fb start address for scrolling 2. do some basic 2D accel for fast text drawing. I thought about this a bit more. Let me propose a different viewpoint as a result. That viewpoint is that there is no reason for any acceleration. Scroll at most. If the video mode switching is done right and apps can handle graphics nicely then you need a kernel mode text console at boot, but thinking about Jon's ideas and the GL without X and other work, the rational argument for _good_ console support becomes that after boot you run a graphical user space console app built with OpenGL, antialiased true type font handling, megabyte scrollback, hotkeys, URL detection/menus, googlizer and the like. On that basis the kernel driver really can be argued to be boot/debug only.
RE: [Dri-devel] Memory management of AGP and VRAM
I thought about this a bit more. Let me propose a different viewpoint as a result. That viewpoint is that there is no reason for any acceleration. Scroll at most. If the video mode switching is done right and apps can handle graphics nicely then you need a kernel mode text console at boot, but thinking about Jon's ideas and the GL without X and other work the rational argument for _good_ console support becomes that after boot you run a graphical user space console app built with OpenGL, antialiased true type font handling, megabyte scrollback, hotkeys, URL detection/menus, googlizer and the like. I agree! The 99% case should just be a user-space console. It is a much more efficient design because currently the console renders rather synchronously with the data being generated, which is unnecessary. But back to the banked memory problem: the issue is that writing pixels is a device dependent operation. Just because there are only a handful of ways it is done does not make it device independent. You should be calling a put function rather than drawing characters in memory from DI code. The put function's implementation would probably be to call a generic helper, but any implementation could be used. The kernel-proper then never draws to memory directly. This also allows any locking or hardware idling to be done transparently in the driver where it belongs. Yes, there is a speed disadvantage, but if we are going toward doing 99% of console drawing from a user-space client it is not a concern. On that basis the kernel driver really can be argued to be boot/debug only. I don't see this leap? Hardware touching still needs a privileged home. That is either a *.so linked to a root app, a root daemon, or a kernel driver. (Perhaps you meant the kernel-proper-driver interface is only used for boot/debug.) I still think the solution is user-space API backed by whatever the driver writer wants/needs.
My prediction is that you end up with some small kernel driver doing the hardware touching with a thin DD user-space API called from a corresponding DD layer within OGL, X server, whatever. So I think we are jelling around this concept. (Speak up if this doesn't jell with you)

1: A user mode library interface for basic mode setting that does not require elevated privileges. Library is backed by whatever technical means suits your fancy.
2: Some optional components to the mode setting interface to deal with some more advanced but still device independent concepts.
3: Any number of device dependent interfaces.
4: A kernel level API so that the kernel-proper can draw in a device independent manner for slow consoles, oopsen, debuggers, and booting.
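The put-function idea behind point 4 is that device-independent code never writes video memory itself: it calls a per-driver ops table, so locking, idling, and bank switching stay inside the driver. A minimal sketch with invented names; the in-memory character grid is a test stand-in for real hardware:

```c
/* Invented per-driver ops table; a real one would carry many more entries. */
struct console_ops {
    void (*put_glyph)(void *hw, int row, int col, unsigned char ch);
    void (*scroll)(void *hw, int lines);
};

/* Device-independent helper: draws a string purely through the ops
 * table, so the same code works over linear or banked framebuffers. */
void con_puts(const struct console_ops *ops, void *hw, int row, const char *s)
{
    for (int col = 0; s[col] != '\0'; col++)
        ops->put_glyph(hw, row, col, (unsigned char)s[col]);
}

/* Illustration-only backend: "hardware" is just a character grid. */
static char grid[4][16];
static void grid_put(void *hw, int row, int col, unsigned char ch)
{
    (void)hw;
    if (row < 4 && col < 16)
        grid[row][col] = (char)ch;
}
static void grid_scroll(void *hw, int lines) { (void)hw; (void)lines; }
```

The speed cost of the indirect call is exactly the trade-off the mail accepts: negligible once 99% of console drawing moves to a user-space client.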
Re: [Dri-devel] Memory management of AGP and VRAM
Alan Cox writes: In which case bank switch is needed. Its actually not clear you need 2D accel. Its *nice* but not essential. Its becoming more and more the case that console mode is the debug/boot interface for a device. OK. (I'n not talking about VGA banking, but it seems like modern HW may not be able to map in all video memory at the same time). We hit this with the vesa driver - we just use the first 8Mb or so nowdays. That should work. The kernel could use whatever bank is currently mapped. Or the banking is done inside the kernel - as it may be needed anyway to do DRI. Egbert.
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
--- Alan Cox [EMAIL PROTECTED] wrote: argument for _good_ console support becomes that after boot you run a graphical user space console app built with OpenGL, antialiased true When I proposed this a couple of months back both you and Linus called me insane. I need to go find those posts. The logical conclusion at the end of this is a user space console. It makes the problem of multi-user Linux simple to implement. Via OpenGL you also get full acceleration. If it is coordinated with the X server you can make your VTs appear as windows. There is still a master kernel based console that handles boot, printk, oops and kdbg. Each head will use the kernel based console to implement SAK. Ctrl-Alt-Del gets you SAK, SysRq gets you the kernel console. No logins on the kernel console, it is write only; SAK will start the user space console. The kernel console and SAK display are implemented in the driver. The trick to understanding this is that you have to understand the concept of how direct rendering works. With direct rendering there is no requirement to send everything to another process or the kernel. You can if you want, but you don't have to. --- Alan Cox [EMAIL PROTECTED] wrote: On Maw, 2004-05-11 at 19:48, Egbert Eich wrote: For the text console to be usable you possibly want to be able to 1. move the fb start address for scrolling 2. do some basic 2D accel for fast text drawing. I thought about this a bit more. Let me propose a different viewpoint as a result. That viewpoint is that there is no reason for any acceleration. Scroll at most.
If the video mode switching is done right and apps can handle graphics nicely then you need a kernel mode text console at boot, but thinking about Jon's ideas and the GL without X and other work the rational argument for _good_ console support becomes that after boot you run a graphical user space console app built with OpenGL, antialiased true type font handling, megabyte scrollback, hotkeys, URL detection/menus, googlizer and the like. On that basis the kernel driver really can be argued to be boot/debug only.

Jon Smirl
RE: [Dri-devel] Memory management of AGP and VRAM
Sottek, Matthew J writes: I agree. I think we are on the same page. A minimal set of features is all that would be part of the defined mode setting API. It is just a question of whether some of the multi-head concepts are generic enough to be part of that defined set. That's exactly the problem. My experience is that many things in mode setting are just too interrelated. You can easily design a very tiny API to set up a mode to get something drawn to the screen; however, if you want to make use of all the nice features your hardware offers, you will find out that this tiny API is more in your way than it is useful and you'll end up duplicating everything. You have a valid point: making a small mode API will guarantee that the most advanced drivers are going to have a duplicate API to reach their most advanced features. I don't see this as a problem. We must have a device independent API such that software _can_ be written that will set modes and/or do some minimal drawing. Maybe the oops displayer or an OS installer etc. We also must not make the API so advanced that it is hopelessly mangled with one-off features of rare hardware. Those one-off features will have to be reached only by applications with device dependent knowledge. So I see the small duplication of API features as the solution, not the problem.
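The "API versioning is a must" point elsewhere in this thread amounts to a small handshake: a client states the version it was written against, and the provider accepts only if the contract is compatible. A sketch of one common rule (same major = compatible ABI, provider's minor must cover the client's); the struct and rule are illustrative, not anything the thread agreed on:

```c
/* Hypothetical version handshake for the mode setting API. */
struct api_version { int major, minor; };

/* Same major means a compatible contract; the provider's minor must be
 * at least what the client asks for (minors only ever add features). */
int api_compatible(struct api_version have, struct api_version want)
{
    return have.major == want.major && have.minor >= want.minor;
}
```

Device dependent extensions then live outside this check entirely: a client that uses them queries for them by name or ID and must tolerate their absence.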
Re: [Dri-devel] Memory management of AGP and VRAM
Sottek, Matthew J wrote: You have a valid point, making a small mode API will guarantee that the most advanced drivers are going to have a duplicate API to reach their most advanced features. I don't see this as a problem. We must have a Device independent API such that software _can_ be written that will set modes and/or do some minimal drawing. So I see the small duplication of API features as the solution, not the problem.

Boy, I haven't really been following this too closely, but surely this sort of thing can be resolved with an extension mechanism or api versioning?

Keith
RE: [Dri-devel] Memory management of AGP and VRAM
Boy, I haven't really been following this too closely, but surely this sort of thing can be resolved with an extension mechanism or api versioning? An extension mechanism is fine for eventually extending the basic functionality, but a driver writer should not have to wait for consensus to add required features to their driver. Currently I don't think we could get very much consensus on anything other than a very basic API so saying that advanced features can be defined extensions is perhaps too optimistic. If the advanced features can just remain device dependent extensions, at least in the beginning, then we can probably make some actual progress in getting to a design. API versioning is a must no matter what.
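The extension-plus-versioning idea can be sketched in a few lines. This is an illustrative Python sketch, not any real DRI or kernel interface; all names here (`ModeDriver`, `query_extension`, the "tv-out" extension) are hypothetical:

```python
# Sketch: a tiny versioned core API, with advanced features reachable
# only through named, device-dependent extensions.

CORE_VERSION = (1, 0)

class ModeDriver:
    """A driver exposes the small core API plus named vendor extensions."""
    def __init__(self, name, extensions):
        self.name = name
        self._extensions = dict(extensions)  # extension name -> callable

    def core_version(self):
        return CORE_VERSION

    def set_mode(self, width, height):
        # The guaranteed, device-independent entry point.
        return f"{self.name}: set {width}x{height}"

    def query_extension(self, ext):
        # Advanced features stay device dependent until (if ever) they
        # are promoted into the core API.
        return self._extensions.get(ext)

radeon = ModeDriver("radeon", {"tv-out": lambda std: f"tv-out {std}"})
assert radeon.core_version() >= (1, 0)
assert radeon.set_mode(1024, 768) == "radeon: set 1024x768"
tv = radeon.query_extension("tv-out")
assert tv is not None and tv("NTSC") == "tv-out NTSC"
assert radeon.query_extension("overlay-scaler") is None
```

Applications that only need a mode set never touch the extension table; only device-aware code asks for more.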
RE: [Dri-devel] Memory management of AGP and VRAM
Alan Cox writes: On Gwe, 2004-05-07 at 21:59, Egbert Eich wrote: However chipset probing/display device probing and mode setting isn't required to live in kernel space. Portability and system stability arguments speak against it. In fact only Apple Mac users seem to advocate this idea, to be able to set an initial video mode on their systems. Lots of minor systems from mobile phones to supercomputer systems don't have a text mode. That still only requires some predefined mode tables in the kernel in the form of register setting lists - aka the way BIOS tables generally work. Right. On the other hand, on a lot of systems we need to rely on some firmware to do some basic setup (i.e. initialize the graphics chip for memory clock and characteristics of the video memory). I would expect that this firmware can do (or can be told to do) some basic mode setup, too. If there is no such firmware and the initialization needs to be handled early during the pre-boot process then we are talking about a real custom case anyway. In this case an initial mode would certainly have to be set up alongside the low-level programming. Wouldn't it be appropriate to leave this to a 'boot loader' - whatever this may be in this case - as the boot loader would like to display an initial logo or menu? The kernel could take whatever mode there is (provided it is passed all the information it needs) and put its output there - much as is proposed for the Xserver. (Of course the Xserver could not provide meaningful output on a true text mode.) Therefore the mode setting API should provide a minimal set of standard features, a set of optional features (which may evolve over time), and allow a chipset-specific API that may - over time - move into the optional features. The mode setting interface should probably be userspace. How the user space talks to the kernel module behind it is entirely its own business (or even if it does). The mode setting interface itself needs to have a common API above it however.
This is how ALSA handles audio and aspects of video4linux2 work. I can see a user space interface which takes the existing XFree86 type mode line structure (timings, hsync +/- etc.) being reasonably sane. The X server can compute the modes it needs through this, the kernel can use precomputed modes for text and for SAK, or can use the same interface via hotplug. If we can move mode setting to a library (or a daemon, as we may need some persistent data), the Xserver as well as the kernel would be passed information about the mode and would passively make use of it. Egbert.
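The XFree86-style mode line structure Alan mentions carries a pixel clock plus horizontal and vertical timings and sync flags. A sketch (the Python struct layout is illustrative; the timing numbers are the standard VESA 1024x768@60 mode line):

```python
# Sketch of an XFree86-style mode line and the refresh rate it implies.
from dataclasses import dataclass

@dataclass
class ModeLine:
    clock_khz: int        # pixel clock in kHz
    hdisplay: int
    hsync_start: int
    hsync_end: int
    htotal: int
    vdisplay: int
    vsync_start: int
    vsync_end: int
    vtotal: int
    flags: frozenset      # e.g. {"-hsync", "-vsync"}

    def refresh_hz(self):
        # vertical refresh = pixel clock / total pixels per frame
        return self.clock_khz * 1000.0 / (self.htotal * self.vtotal)

# Modeline "1024x768" 65.00 1024 1048 1184 1344 768 771 777 806 -hsync -vsync
m = ModeLine(65000, 1024, 1048, 1184, 1344, 768, 771, 777, 806,
             frozenset({"-hsync", "-vsync"}))
assert round(m.refresh_hz(), 1) == 60.0
```

A userspace library can validate such a structure against monitor limits before any driver ever sees it; the kernel side only needs to consume the result.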
Re: [Dri-devel] Memory management of AGP and VRAM
David Bronaugh writes: Egbert Eich wrote: I don't only want to see mode selection in user land but also mode programming. I keep reiterating the reasons: 1. Mode programming for a number of chips needs to be done through the BIOS. Unless one wants to stick a complete x86 emulator into the kernel this needs to be done from user land. 2. HW programming (especially programming around hw quirks) is a hard job, and you need the hardware - if possible every flavor of it. No need to duplicate this for different OSes - not speaking of the support nightmare that is involved. I don't know if someone else has suggested this (if so, I apologize for stealing your idea, random person), but is there any reason you can't have the more complicated mode programming code (the non-bootstrap variety) as a userspace program which the kernel somehow calls (however it ends up; via FIFO communication, whatever; I'm not a kernel guru), and which does all the mode setting work? I don't think you want to call user mode code from inside the kernel. The kernel could take a passive role and use the mode that a userland program tells it is set. If all the kernel needs is a linear framebuffer of size x * y and depth d there is no problem. Things get a little more complicated if the kernel wants to set the fb start address for scrolling, use acceleration for faster drawing, or the framebuffer is not really fully linear. As I see it, this'd basically get around all the license problems with the mode setting code (it could still be GPL, yet since it isn't in a position to taint, that's OK) and it would result in -one- location, guaranteed, for mode setting code. I don't know whether the one location thing'd be a good idea, but it sounds like one to me. Here my point is that the world is not Linux-only (although I use Linux myself) and it would make sense to make this code portable across OSes. In this case GPL may be a problem - especially if the code needs to go into the kernel.
It'd ensure that the mode setting code was -entirely- separate from the X server, any other libraries, etc. It'd also allow the driver writer, at their discretion, to put the code in the kernel (in which case the userspace code would never be used) or in userspace (in which case the kernel would simply request that the userspace code do its bidding). You mean code that could be put either into the kernel or live in userland - depending on the requirements of the underlying OS? You could simply pass something like this (using an arbitrary text format) to userspace: radeon:1024x768 and have it set the best-match mode. The 'radeon' part, of course, would make sure that the wrong code wasn't used. Likewise, the userspace program could be fed any data it needed this way. Right, however there are people who like to have more fine-grained control over things than just accepting what the driver considers the best match. Cheers, Egbert.
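David's text-format request and "best-match mode" idea could look roughly like the following. The matching policy shown (exact match if available, otherwise the largest supported mode that still fits) is just one assumption; as Egbert notes, some users will want finer control than any automatic policy:

```python
# Sketch: parse a "driver:WxH" request and pick a best-match mode.
def parse_request(req):
    driver, _, mode = req.partition(":")
    w, _, h = mode.partition("x")
    return driver, (int(w), int(h))

def best_match(requested, supported):
    if requested in supported:
        return requested
    fitting = [m for m in supported
               if m[0] <= requested[0] and m[1] <= requested[1]]
    return max(fitting, key=lambda m: m[0] * m[1]) if fitting else None

supported = [(640, 480), (800, 600), (1024, 768)]
driver, mode = parse_request("radeon:1024x768")
assert driver == "radeon"
assert best_match(mode, supported) == (1024, 768)
assert best_match((900, 700), supported) == (800, 600)  # nearest fit below
```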
Re: [Mesa3d-dev] RE: [Dri-devel] Memory management of AGP and VRAM
Jon Smirl writes: Can you run grub or lilo on these machines? Also, there is no rule saying a device driver can't have several tables of _init register values that can be used to set the mode on a primary monitor at boot. I would just like to see all of the code that does DDC decoding and modeline computations moved to user space. When you add up that code there is about 40K of it in the fbdriver and about 50K in the radeon driver. When the fbdev drivers start probing for multiple heads, TV support, etc. that code is going to get even larger. Since the code is used only rarely (in kernel terms) this code should be in user space instead of the driver. I've also proposed that if you really, really want to you could do the DDC probing in the driver at boot and mark all of the code _init. Then the user space code would take over after that. Note that I'm talking about the early userspace (initrd) timeframe, not normal user space. Wouldn't it be the job of the kernel bootstrap process to do this initial setup? This bootstrap code would be wiped once the kernel starts up. Allow me to speak up for users of IBM pSeries hardware or Sun SPARC hardware. Users of those systems face exactly the same issues as Mac users. I imagine most embedded systems will be in the same boat. Being forced to use a serial console for early boot messages is so 1980's. ;) The kernel doesn't need to have support for everything, but I think it's important to have at least minimal support. I'm not speaking about a text mode. I would think on most systems the firmware would provide some reasonable initial mode that the kernel can use. If there is no such firmware one would expect there is some preboot software that is used to bootstrap the kernel that could do such a setup - using a number of fixed modes hard coded in tables. (It is a pain to debug, though.) Cheers, Egbert.
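The DDC/EDID decoding Jon wants moved to user space begins with a sanity check on the 128-byte EDID base block: a fixed 8-byte header, and a checksum byte chosen so that all 128 bytes sum to 0 mod 256. A minimal sketch of that check:

```python
# Sketch: validate an EDID base block (header + checksum) in user space.
EDID_HEADER = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00])

def edid_block_valid(block):
    return (len(block) == 128
            and block[:8] == EDID_HEADER
            and sum(block) % 256 == 0)

# Build a fake block: header, zero padding, then a fitted checksum byte.
body = EDID_HEADER + bytes(119)
checksum = (256 - sum(body) % 256) % 256
assert edid_block_valid(body + bytes([checksum]))
assert not edid_block_valid(b"\x00" * 128)        # bad header
assert not edid_block_valid(body + bytes([1]))    # bad checksum
```

Everything past this check (manufacturer ID, detailed timing descriptors, the quirk/override tables benh mentions for lying monitors) is pure parsing with no reason to live in the kernel.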
RE: [Dri-devel] Memory management of AGP and VRAM
Sottek, Matthew J writes: I agree. I think we are on the same page. A minimal set of features is all that would be part of the defined mode setting API. It is just a question of whether some of the multi-head concepts are generic enough to be part of that defined set. That's exactly the problem. My experience is that many things in mode setting are just too interrelated. You can easily design a very tiny API to set up a mode to get something drawn to the screen, however if you want to make use of all the nice features your hardware offers you will find out that this tiny API is more in your way than it is useful and you'll end up duplicating everything. Regards, Egbert.
Re: [Dri-devel] Memory management of AGP and VRAM
Around 11 o'clock on May 10, Egbert Eich wrote: Therefore any application on top of the driver should be prepared that video mode parameters it cares about (like fb location, fb stride, fb size, resolution, depth (?)) may change underneath its feet. X can handle any size/location changes, but it really doesn't want the depth of the 'screen' (aka 'root window') to change. I suspect this isn't really a likely constraint with reasonably modern video cards (like within the last 10 years or so). If it is, we may indeed need to provide some fallbacks, either across the mode API or within the X server, to handle this case. -keith
Re: [Dri-devel] Memory management of AGP and VRAM
Jon Smirl writes: --- Egbert Eich [EMAIL PROTECTED] wrote: I fear that we will get a very Linux-centric view on device drivers. This will leave us with device drivers for Linux and different ones Tell me the right non-Linux lists and I will post there too. There have been significant complaints from the Linux kernel developers over the current DRM code. Most of it centers around the cross-platform support. Better division of the platform-specific code from the generic code should address these. Right. However I've got the impression that this has improved already. As far as DRM is concerned only a few OSes/platforms matter. Mode setting and 2D accel is an issue on many more OSes. Many of those don't have as vital a community as Linux does. Individuals and groups interested in X (until now XFree86) on these platforms usually gather on the project-related mailing lists. kernel space. One thing that - in my opinion - should *not* live in there is mode detection and initialization. I'm making some progress on this front. I think I've talked benh into it, and he's started talking to Linus about it. If Linux goes this path then is someone going to move the other platforms onto this path too? Support is starting to grow for merging FB/DRM into a single driver. --- Benjamin Herrenschmidt [EMAIL PROTECTED] wrote: The proposal is for a user space library that does mode setting as well as other common things like cursor support. This library would in turn load plug-in libraries for each card. Ok, we have been discussing all of these points over and over again, and I will be at KS, so I didn't want to restart the whole thing on this list, but I wanted to note a few things though: For the mode setting case the library would read the DDC data for each head using the existing I2C drivers or the driver plug-in lib for non-standard cases. This data would then be combined with config files where the user can override modelines and such. Most of this exists inside of X already.
These pieces could easily be separated out of the server. I agree with the idea of moving the EDID decoding and mode selection to userland. In this regard, though, I believe we should aim toward some I don't only want to see mode selection in user land but also mode programming. I keep reiterating the reasons: 1. Mode programming for a number of chips needs to be done through the BIOS. Unless one wants to stick a complete x86 emulator into the kernel this needs to be done from user land. 2. HW programming (especially programming around hw quirks) is a hard job, and you need the hardware - if possible every flavor of it. No need to duplicate this for different OSes - not speaking of the support nightmare that is involved. 3. Quality of video driver code is often not what we expect from kernel code. The focus of the developer is often clearly upon getting the hardware to work. Graphics driver programmers shouldn't be forced to have to deal with kernel interfaces. 4. Debugging mode setting code involves a lot of round trips (edit-build-test-edit...). This can be done more effectively from user space. 5. Having this code in user mode in a separate project allows deployment of support for new chipsets. simple library that sits with the kernel, eventually distributed with the kernel tree, to live in initramfs optionally since it may be required to even get a console at boot (which is fine, initramfs is available early). The video cards themselves have PCI drivers that can trigger detection by the library via hotplug, the library could manage things like persistent configuration, either separate desktops or geometry of a complex desktop, etc... and eventually notification of userland clients of mode changes. There are competing requirements here: Libs that whack the hardware should be as OS/platform-independent as possible. This is also a license issue.
Yes, we will most likely need OS-dependent non-chipset-specific wrappers, but those are cheap to do - a lot cheaper than code dealing directly with chipset quirks. One reason for that is lots of monitors lie about their capabilities in their EDID block, so we want override files. The kernel driver in this case doesn't need to be that much different from the current fbdev's though, except that we want to move the HW access for graphics commands to the kernel too, which basically turns into merging the DRI driver and the fbdev. There is no need, I think, to re-invent the wheel from scratch here, it would be a lot more realistic to build on top of those existing pieces. The modelines would be passed into the plug-in libs which would turn them into register values. Finally the plug-in lib would use a private IOCTL to set the state into the video hardware. It's not that easy. Modelines are
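The plug-in step described here - a per-chipset library turning a mode into register values that the kernel then writes via a private ioctl - might be shaped like the following. Chip name, register names, and values are all made up for illustration:

```python
# Sketch: per-chipset plug-ins that translate a mode request into
# register name/value pairs for the kernel side to apply.
PLUGINS = {}

def register_plugin(chip):
    def wrap(fn):
        PLUGINS[chip] = fn
        return fn
    return wrap

@register_plugin("fakechip")
def fakechip_mode_to_regs(width, height):
    # Real code would derive full CRTC timings from a mode line; this
    # only shows the shape of the output the kernel would consume.
    return [("CRTC_H_DISP", width - 1), ("CRTC_V_DISP", height - 1)]

def set_mode(chip, width, height):
    regs = PLUGINS[chip](width, height)
    # In the real design this list would be handed to the kernel driver
    # here, via the private ioctl the mail describes.
    return dict(regs)

assert set_mode("fakechip", 1024, 768) == {"CRTC_H_DISP": 1023,
                                           "CRTC_V_DISP": 767}
```

The kernel side stays dumb: it applies register lists but never computes them, which is the split Egbert and benh are both arguing toward.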
Re: [Dri-devel] Memory management of AGP and VRAM
Alan Cox writes: On Iau, 2004-05-06 at 09:39, Egbert Eich wrote: Furthermore I'd argue that as little as necessary should live in the kernel space. One thing that - in my opinion - should *not* live in there is mode detection and initialization. There is a need to handle some mode setup/init in the kernel (think about non-text mode hardware) but the hotplug interface allows most cards to do that in userspace, and all the discussion so far seems keen on that [Kernel folk believe lots should be done in user space too!] Alan, That sounds good! (But we never had problems agreeing on things ;-) ) Whatever code we decide to put into the kernel, we should provide an abstraction layer to not expose the driver writer to arbitrary kernel interfaces. This aids portability, helps to keep things stable and makes this code independent of changes in other parts of the kernel. My experience at least with video driver code is that it is 'expensive' compared to most other software. So people working on such code have a different focus than the rest of the world. Cheers, Egbert.
Re: [Linux-fbdev-devel] Re: [Mesa3d-dev] RE: [Dri-devel] Memory management of AGP and VRAM
Also, there is no rule saying a device driver can't have several tables of _init register values that can be used to set the mode on a primary monitor at boot. I would just like to see all of the code that does DDC decoding and modeline computations moved to user space. When you add up that code there is about 40K of it in the fbdriver and about 50K in the radeon driver. When the fbdev drivers start probing for multiple heads, TV support, etc. that code is going to get even larger. Since the code is used only rarely (in kernel terms) this code should be in user space instead of the driver. I've also proposed that if you really, really want to you could do the DDC probing in the driver at boot and mark all of the code _init. Then the user space code would take over after that. Note that I'm talking about the early userspace (initrd) timeframe, not normal user space. DDC probing is still new. I doubt it will be that large in the long run. I bet we will see redundancy in the drivers that we separate out. Give me time and I can show that it will be minimal code.
Re: [Dri-devel] Memory management of AGP and VRAM
Egbert Eich wrote: I don't only want to see mode selection in user land but also mode programming. I keep reiterating the reasons: 1. Mode programming for a number of chips needs to be done through the BIOS. Unless one wants to stick a complete x86 emulator into the kernel this needs to be done from user land. 2. HW programming (especially programming around hw quirks) is a hard job, and you need the hardware - if possible every flavor of it. No need to duplicate this for different OSes - not speaking of the support nightmare that is involved. I don't know if someone else has suggested this (if so, I apologize for stealing your idea, random person), but is there any reason you can't have the more complicated mode programming code (the non-bootstrap variety) as a userspace program which the kernel somehow calls (however it ends up; via FIFO communication, whatever; I'm not a kernel guru), and which does all the mode setting work? As I see it, this'd basically get around all the license problems with the mode setting code (it could still be GPL, yet since it isn't in a position to taint, that's OK) and it would result in -one- location, guaranteed, for mode setting code. I don't know whether the one location thing'd be a good idea, but it sounds like one to me. It'd ensure that the mode setting code was -entirely- separate from the X server, any other libraries, etc. It'd also allow the driver writer, at their discretion, to put the code in the kernel (in which case the userspace code would never be used) or in userspace (in which case the kernel would simply request that the userspace code do its bidding). You could simply pass something like this (using an arbitrary text format) to userspace: radeon:1024x768 and have it set the best-match mode. The 'radeon' part, of course, would make sure that the wrong code wasn't used. Likewise, the userspace program could be fed any data it needed this way. Anyhow, just an idea.
No idea if it's a good one or not, really; but I'd love to hear feedback. David Bronaugh
Re: [Dri-devel] Memory management of AGP and VRAM
David Bronaugh wrote: You could simply pass something like this (using an arbitrary text format) to userspace: radeon:1024x768 and have it set the best-match mode. The 'radeon' part, of course, would make sure that the wrong code wasn't used. Likewise, the userspace program could be fed any data it needed this way. Clarification here: I said something which didn't cover the ground very well. You'd probably want something more like radeon;pci::01:01.1;1024x768 You'd want to identify which subdevice of which card was involved. I don't know how far this would go, but without identifying the specific device, it'd be pretty useless. David Bronaugh
Re: [Dri-devel] Memory management of AGP and VRAM
On Llu, 2004-05-10 at 12:46, Egbert Eich wrote: Yes, we will most likely need OS dependent non-chipset specific wrappers, but those are cheap to do - a lot cheaper than code dealing directly with chipset quirks. Well the minimal kernel side stuff required to make hot plug work is going to be - Use the kernel to do PCI stuff (esp VGA_EN !) - Map PCI objects (video ram, registers, etc) from files that allow the kernel to know what PCI device is in use - Handle the device vanishing rudely (ie polls for busy need to time out etc) With frame buffers the ability to switch mode cleanly is helpful because of two things - SAK (Secure Attention Key) - need to get back to a sane mode (Can be mostly done in user space) - Panic/Crash cases Clean context switching also makes VT switch a lot neater AGP/3D obviously adds a ton more, but if you were simply wanting to get hotplug friendly X working with a generic old 2D card very little is needed kernel side
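Alan's "map PCI objects from files" point corresponds to the Linux sysfs layout, where each PCI device directory exposes a `resource` file of "start end flags" hex triples, one line per BAR. A sketch that parses a captured sample rather than touching live hardware (the sample addresses are invented):

```python
# Sketch: parse a sysfs-style PCI `resource` file listing BARs.
def parse_pci_resources(text):
    bars = []
    for line in text.strip().splitlines():
        start, end, flags = (int(field, 16) for field in line.split())
        size = (end - start + 1) if (start or end) else 0  # all-zero = unused BAR
        bars.append({"start": start, "size": size, "flags": flags})
    return bars

sample = (
    "0x00000000e0000000 0x00000000e7ffffff 0x0000000000000200\n"
    "0x0000000000000000 0x0000000000000000 0x0000000000000000\n"
)
bars = parse_pci_resources(sample)
assert bars[0]["start"] == 0xE0000000
assert bars[0]["size"] == 128 * 1024 * 1024   # e.g. a 128 MB framebuffer BAR
assert bars[1]["size"] == 0                    # unused BAR
```

A userspace mode-setting library would mmap the corresponding `resource0`-style file to reach registers or video RAM, letting the kernel keep track of which PCI device is in use, exactly as Alan asks.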
Re: [Dri-devel] Memory management of AGP and VRAM
Keith Packard writes: Around 16 o'clock on May 6, Sottek, Matthew J wrote: 1) If the mode setting can be removed from the X server then we can leverage that module for whatever graphics system is required. Sometimes we need an X server, sometimes we need something more like a framebuffer. Putting this in one place is a must. 'one place' appears to mean a common library API and not a common kernel API; some cards require extensive code to manage mode selection which can't be effectively implemented in kernel mode (like the current X i810 drivers). Right. There are other reasons that speak for leaving mode setting code out of the kernel: 1. Unlike most other code, mode setting is not 'cheap'. Getting this code to work right on all flavors of cards using a specific chip is not an easy task. Keeping this independent of the OS as much as possible removes the need for duplication of this 'blood sweat' code. 2. Mode setting code is oftentimes the result of trial and error, trying to tiptoe around undocumented 'features' in the hardware. Although this code often does amazing things, its quality is often not very good. It is usually not code one wants to have in the kernel. One may say: this can be fixed! Trust me, experience has shown this doesn't happen! 3. Faster deployment of new chipset support on a wider range of OSes. 2) Providing one place for rendering code would be nice too. For cards which can support it, I'd like to suggest that the GL API seems a natural fit here. Retargeting the X server to GL appears possible, and I hope to have a proof of concept running by OLS to show people. I'm sure this works well when COMPOSITE is enabled and OpenGL is used to accelerate COMPOSITE. I would like to see the performance of COMPOSITE + OpenGL vs. no COMPOSITE + XAA.
For other cards, I suggest that there aren't a whole lot of useful accelerated operations; 2D-only cards generally don't support general image compositing, so the only critical operations for modern applications are video-video blt and (optionally) solid fill. I've implemented rather a lot of X servers in this way to good effect. With XAA we already have an abstraction model for X that only requires programming the basic hardware-dependent functionality in the chipset driver itself and setting a bitmask telling the abstraction layer above which functionalities are supported. I'm sure this layer could be generalized. 1) A small, device-independent, API that can be used to set modes and do some very simple rendering. Yes, the lowest level graphics driver needs to be able to request a specific mode and find out how that affects the hardware. I would suggest that the 'mode' selected here be indirect -- a 'symbolic' mode which reflects a more sophisticated configuration as specified by some device-specific mechanism. For instance, it would be nice to start a graphics application in TV mode without that application needing to know about all of the underlying complexities. This is similar to how standard modes are specified in X today -- you request a resolution, which is really just a symbolic name for a list of modes. The driver then selects one of those modes which the monitor can support. This doesn't really work well with 'TV' as this may require a specific resolution that's dependent on the underlying hardware. Your permitted resolutions are often tightly coupled to a lot of hardware-dependent parameters. Only the chipset-dependent driver can decide which modes it is capable of after it has knowledge about these other parameters. More so, these parameters may change during runtime when for example the output device is to be changed.
Therefore any application on top of the driver should be prepared that video mode parameters it cares about (like fb location, fb stride, fb size, resolution, depth (?)) may change underneath its feet. 2) A mechanism to make all the device dependent extensions your heart desires. Absolutely -- both for driver writers and mode selection mechanisms. Of course, one thing here is to make sure the kernel API isn't just a 'bag of bits in an ioctl'. Perhaps the kernel API could accept a list of register name/value pairs for the desired mode; the kernel driver would then be responsible for setting the register values appropriately. On some platforms this is already done on the PIO level. Egbert.
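The XAA scheme Egbert describes reduces to: the chip driver implements a few primitives and sets capability bits, and the layer above dispatches to hardware when the corresponding bit is set, falling back to software otherwise. A toy sketch (the names are illustrative, not the actual XAA API):

```python
# Sketch: capability-bitmask dispatch with software fallback, in the
# spirit of XAA's per-driver accelerated primitives.
HW_SOLID_FILL = 1 << 0
HW_SCREEN_COPY = 1 << 1

class Accel:
    def __init__(self, caps, hw_fill=None):
        self.caps = caps
        self.hw_fill = hw_fill

    def solid_fill(self, rect, color):
        if self.caps & HW_SOLID_FILL and self.hw_fill:
            return self.hw_fill(rect, color)      # driver-provided hook
        return ("sw-fill", rect, color)           # generic software path

fast = Accel(HW_SOLID_FILL, hw_fill=lambda r, c: ("hw-fill", r, c))
dumb = Accel(0)
assert fast.solid_fill((0, 0, 64, 64), 7)[0] == "hw-fill"
assert dumb.solid_fill((0, 0, 64, 64), 7)[0] == "sw-fill"
```

Generalizing this layer beyond X, as Egbert suggests, would mean moving the bitmask and fallback table into the shared mode-setting/acceleration library rather than the server.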
Re: [Mesa3d-dev] RE: [Dri-devel] Memory management of AGP and VRAM
On Sul, 2004-05-09 at 17:45, Jon Smirl wrote: Also, there is no rule saying a device driver can't have several tables of _init register values that can be used to set the mode on a primary monitor at boot. I would just like to see all of the code that does DDC decoding and modeline computations moved to user space. But there should also be no rule that says it cannot be in kernel space. Let's take the Voodoo2 again: the mode computation is *tiny*. Or many embedded devices where the modes are very simple to set up. The API at user space for the driver modules has to leave the question of *who* does what private to the driver.
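For hardware like the Voodoo2 or fixed-panel embedded devices, "mode computation" can collapse to a static table of precomputed register settings, cheap enough to keep kernel-side, which is Alan's point. A sketch with made-up register indices and values:

```python
# Sketch: a static per-mode register table for simple hardware, the
# kernel-side equivalent of the _init tables Jon describes.
FIXED_MODES = {
    "640x480": ((0x00, 0x5F), (0x01, 0x4F), (0x12, 0xDF)),
    "800x600": ((0x00, 0x7F), (0x01, 0x63), (0x12, 0x57)),
}

def lookup_mode(name):
    regs = FIXED_MODES.get(name)
    if regs is None:
        raise ValueError(f"unsupported mode {name!r}")
    return regs

assert lookup_mode("640x480")[0] == (0x00, 0x5F)
assert "1024x768" not in FIXED_MODES
```

The userspace API above such a driver never needs to know whether the register list came from a table like this or from a full modeline computation - which is exactly the "who does what stays private to the driver" rule.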
Re: [Mesa3d-dev] RE: [Dri-devel] Memory management of AGP and VRAM
Can you run grub or lilo on these machines? Also, there is no rule saying a device driver can't have several tables of _init register values that can be used to set the mode on a primary monitor at boot. I would just like to see all of the code that does DDC decoding and modeline computations moved to user space. When you add up that code there is about 40K of it in the fbdriver and about 50K in the radeon driver. When the fbdev drivers start probing for multiple heads, TV support, etc. that code is going to get even larger. Since the code is used only rarely (in kernel terms) this code should be in user space instead of the driver. I've also proposed that if you really, really want to you could do the DDC probing in the driver at boot and mark all of the code _init. Then the user space code would take over after that. Note that I'm talking about the early userspace (initrd) timeframe, not normal user space. --- Ian Romanick [EMAIL PROTECTED] wrote: Egbert Eich wrote: However chipset probing/display device probing and mode setting isn't required to live in kernel space. Portability and system stability arguments speak against it. In fact only Apple Mac users seem to advocate this idea, to be able to set an initial video mode on their systems. Allow me to speak up for users of IBM pSeries hardware or Sun SPARC hardware. Users of those systems face exactly the same issues as Mac users. I imagine most embedded systems will be in the same boat. Being forced to use a serial console for early boot messages is so 1980's. ;) The kernel doesn't need to have support for everything, but I think it's important to have at least minimal support.
= Jon Smirl [EMAIL PROTECTED]
Re: [Mesa3d-dev] RE: [Dri-devel] Memory management of AGP and VRAM
Holger Waechtler wrote: Jon Smirl wrote: Can you run grub or lilo on these machines? The equivalent loader is called MILO for SPARC and Yaboot for PowerPC. oops -- the SPARC image loader was called SILO. MILO was the mini image loader for Alpha. sorry for confusion, Holger
Re: [Mesa3d-dev] RE: [Dri-devel] Memory management of AGP and VRAM
Jon Smirl wrote: Can you run grub or lilo on these machines? The equivalent loader is called MILO for SPARC and Yaboot for PowerPC. The BIOS equivalent is called OpenFirmware and provides a helper API for mode setting and graphics card initialisation. There are comments in the drivers which mark other than the default modes as TODO (e.g. 'these are somewhat sane defaults for Mac boards, we will need to find a good way of getting these from OpenFirmware'); I don't know whether they are rooted in missing docs or in technical troubles communicating with OpenFirmware. Holger
Re: [Dri-devel] Memory management of AGP and VRAM
On Iau, 2004-05-06 at 09:39, Egbert Eich wrote: Furthermore I'd argue that as little as necessary should live in the kernel space. One thing that - in my opinion - should *not* live in there is mode detection and initialization. There is a need to handle some mode setup/init in the kernel (think about non-text mode hardware) but the hotplug interface allows most cards to do that in userspace, and all the discussion so far seems keen on that [Kernel folk believe lots should be done in user space too!]
Re: [Mesa3d-dev] RE: [Dri-devel] Memory management of AGP and VRAM
Egbert Eich wrote: However chipset probing/display device probing and mode setting isn't required to live in kernel space. Portability and system stability arguments speak against it. In fact only Apple Mac users seem to advocate this idea, to be able to set an initial video mode on their systems. Allow me to speak up for users of IBM pSeries hardware or Sun SPARC hardware. Users of those systems face exactly the same issues as Mac users. I imagine most embedded systems will be in the same boat. Being forced to use a serial console for early boot messages is so 1980's. ;) The kernel doesn't need to have support for everything, but I think it's important to have at least minimal support.
RE: [Dri-devel] Memory management of AGP and VRAM
Moving mode setting from the X server does not necessarily mean it has to go into the kernel. I agree. The thing I am worried about (just speaking about the mode setting part here) is that we end up with 2 defined APIs. One for the mode setting, done as a user library, and another for the kernel part that is really tiny. That assumes a split that will be hard to agree on. As long as we only define one entry point, everyone can be happy. i.e. define a pretty small library API in user space and let the driver writer decide where to split the unprivileged library code from the privileged (kernel in most cases) part. But that means that there is no guarantee what the actual kernel API looks like; it becomes driver dependent. In my mind you have 3 modules. 1: Privileged code that can access hardware directly 2: Unprivileged code that provides a minimal known mode API and any number of unknown APIs specific to hardware. 3: DD code that is part of the higher level library. XFree driver or Mesa driver etc. The API between 1 and 2 is undefined. Any kernel/user split could be used. The goal for the driver writer is to get as much as possible into module #2, but you could put all the code kernel-side and just make module #2 a function wrapper for your ioctls if you wanted. The API between 2 and 3 has only a tiny definition. Small mode setting, maybe some tiny 2d drawing just to be nice, and nothing else. The rest is driver dependent and only gets used by module 3. Module 3 is just the DD layer of XFree, or the DD layer of Mesa. Doesn't matter what high level design ends up winning in the end. People seem to advocate utilizing OpenGL for 2D rendering on modern chipsets. It remains to be seen how feasible this alternative is. However a solution for this already exists: if we are talking about 3D rendering, it is already in the making with standalone Mesa. 
For 2D rendering X has a rather smart infrastructure to map X drawing requests onto those 2D primitives that are commonly provided by chipsets. The driver part there is rather lightweight as most of the work is done by this abstraction layer. It would be great if this interface could be kept for chipsets that need 2D acceleration. I agree. The high level stuff will probably need several code paths depending on chip capabilities. You are talking about APIs used above module 3, which should all be possible given the correct 1 and 2. 1) A small, device-independent, API that can be used to set modes and do some very simple rendering. I would suggest get, set, put, copy. Do you suggest to accelerate these and put the acceleration for them into the kernel? This would mean a longer path from user space. Since these operations typically don't deal with huge areas this may mean a significant performance penalty. Not making any claim as to where they would be (module 1 or 2). Just indicating that there would be a small defined API and everything else is undefined. Having a defined way to do some primitives would allow a completely device independent X server written today to limp along on hardware of tomorrow. Some minimal mode setting is a must-have part of that defined API. Get, set, put, copy seems a safe set of helpers. Just a way to get some minimal performance until you can get an actual DD driver created or installed. Experience has shown that there is almost no way to design an API so generic that it can effectively deal with new features that come along in the future. I agree. I think we are on the same page. A minimal set of features is all that would be part of the defined mode setting API. It is just a question of whether some of the multi-head concepts are generic enough to be part of that defined set. 
RE: [Dri-devel] Memory management of AGP and VRAM
Sottek, Matthew J writes: I for one have been waiting to see much of the graphics driver moved to the kernel as well. From a vendor perspective there is quite a lot to be gained. 1) If the mode setting can be removed from the X server then we can leverage that module for whatever graphics system is required. Sometimes we need an X server, sometimes we need something more like a framebuffer. Putting this in one place is a must. Moving mode setting from the X server does not necessarily mean it has to go into the kernel. 2) Providing one place for rendering code would be nice too. We cannot assume that X is the way to go for all customers. If there were a place to put the device dependent rendering code (kernel module or low level library) then we could write X servers or custom graphics interfaces to use that library. People seem to advocate utilizing OpenGL for 2D rendering on modern chipsets. It remains to be seen how feasible this alternative is. However a solution for this already exists: if we are talking about 3D rendering, it is already in the making with standalone Mesa. For 2D rendering X has a rather smart infrastructure to map X drawing requests onto those 2D primitives that are commonly provided by chipsets. The driver part there is rather lightweight as most of the work is done by this abstraction layer. It would be great if this interface could be kept for chipsets that need 2D acceleration. 3) Sometimes you can just do the job easier or better from kernel space. Trapping interrupts instead of polling can save huge amounts of CPU cycles for some usage scenarios. Power management is easier. Sometimes the hardware needs some special memory considerations etc. No need to really harp on any of the details, it is just nice to have the full power of the OS when/if you need it. I think the best way to make everyone happy is to try to achieve two things. I would argue that as little as possible should go into the kernel. 
There is no question that the resource handling for buses, DMA and irq needs to live within the kernel. The same is true for code that uses DMA. However chipset probing/display device probing and mode setting isn't required to live in kernel space. Portability and system stability arguments speak against it. In fact only Apple Mac users seem to advocate this idea, to be able to set an initial video mode on their systems. 1) A small, device-independent, API that can be used to set modes and do some very simple rendering. I would suggest get, set, put, copy. Do you suggest to accelerate these and put the acceleration for them into the kernel? This would mean a longer path from user space. Since these operations typically don't deal with huge areas this may mean a significant performance penalty. That would allow the kernel to render consoles or oopsen regardless of the mode (debugging the kernel on top of your X session?), and allow for any API of the month to make use of some very basic functionality. Mode setting should just be small as well, leave all the one-off features for extensions. No need to clutter an API with features that are rare. Although the fbdev is already available, I wouldn't suggest that it is a great platform to build on. The mode setting API is really not very good and it does not have modern concepts of twin, clone etc. I think a clean slate design that didn't try to accomplish everything in a device independent manner could be a much more attractive target. Experience has shown that there is almost no way to design an API so generic that it can effectively deal with new features that come along in the future. Soon after XFree86 4 came out, graphics cards with what you call twin view became available. We had to kludge around to make this work in XFree86. It was difficult but it was possible because the 4.x driver design was such that the driver was the controlling instance of everything - well, almost everything. 
All the pieces where this was not entirely true - and the number of heads per chipset was an example here - proved to be nightmares. Therefore the mode setting API should provide a minimal set of standard features, a set of optional features (which may evolve over time), and allow a chipset-specific API that may - over time - move into the optional features. 2) A mechanism to make all the device dependent extensions your heart desires. Then the X servers, opengl libs, etc can just have a DD component to access the hardware specific API. The more things you try to have a device independent API for, the more problems you will have trying to get agreement. Leave the APIs to themselves. We should be trying to create a driver model, not a new graphics API. Ogl, X11, DirectFB, etc should be out of scope. Right. My experience has certainly shown that almost no assumption we have made in the past remained
Re: [Dri-devel] Memory management of AGP and VRAM
Jon Smirl writes: I'm putting together a document for Kernel Summit that describes the issues around graphics device drivers. The kernel developers are currently making first pass comments on it. As soon as I fold their comments in I'll post it to fb-dev, dri-dev and wherever else is appropriate for the next round of comments. Nobody is proposing final solutions yet, I'm just trying to collect everyone's opinion. I fear that we will get a very Linux-centric view on device drivers. This will leave us with device drivers for Linux and different ones (or none!) for the rest of the world. From an X developer's point of view this is a support nightmare as he is the first one users will turn to if things don't work as expected. Furthermore I'd argue that as little as necessary should live in the kernel space. One thing that - in my opinion - should *not* live in there is mode detection and initialization. First of all, we will not be able to do generic VESA mode initialization in the kernel (unless we decide to stick a complete x86 emulator into the kernel). Then many driver developers often take a very naive approach at things and produce code that I know I don't want to see in my kernel. One can try to educate them, which may not always be possible - especially in the case of closed source drivers. Egbert.
Re: [Dri-devel] Memory management of AGP and VRAM
--- Egbert Eich [EMAIL PROTECTED] wrote: I fear that we will get a very Linux-centric view on device drivers. This will leave us with device drivers for Linux and different ones Tell me the right non-Linux lists and I will post there too. There have been significant complaints from the Linux kernel developers over the current DRM code. Most of it centers around the crossplatform support. Better division of the platform-specific code from the generic code should address these. kernel space. One thing that - in my opinion - should *not* live in there is mode detection and initialization. I'm making some progress on this front. I think I've talked benh into it, and he's started talking to Linus about it. If Linux goes this path then is someone going to move the other platforms onto this path too? Support is starting to grow for merging FB/DRM into a single driver. --- Benjamin Herrenschmidt [EMAIL PROTECTED] wrote: The proposal is for a user space library that does mode setting as well as other common things like cursor support. This library would in turn load plug-in libraries for each card. Ok, we have been discussing all of these points over and over again, and I will be at KS, so I didn't want to restart the whole thing on this list, but I wanted to note a few things though: For the mode setting case the library would read the DDC data for each head using the existing I2C drivers or the driver plug-in lib for non-standard cases. This data would then be combined with config files where the user can override modelines and such. I agree with the idea of moving the EDID decoding and mode selection to userland. In this regard, though, I believe we should aim toward some simple library that sits with the kernel, eventually distributed with the kernel tree, to live in initramfs optionally since it may be required to even get a console at boot (which is fine, initramfs is available early). 
The video cards themselves have PCI drivers that can trigger detection by the library via hotplug; the library could manage things like persistent configuration, either separate desktops or geometry of a complex desktop, etc., and eventually notification of userland clients of mode changes. One reason for that is lots of monitors lie about their capabilities in their EDID block, so we want override files. The kernel driver in this case doesn't need to be that much different than the current fbdev's though, except that we want to move the HW access for graphics commands to the kernel too, which basically turns into merging the DRI driver and the fbdev. There is no need, I think, to re-invent the wheel from scratch here, it would be a lot more realistic to build on top of those existing pieces. The modelines would be passed into the plug-in libs which would turn them into register values. Finally the plug-in lib would use a private IOCTL to set the state into the video hardware. Note that there are side effects. Changing the mode on a head can trigger a mode change on another. That typically happens when doing things like mirroring, in which case you may want to turn your mirrored screen into a mode whose aspect ratio is compatible with the second screen. (Typically the case with non-4:3 LCD laptop screens mirroring to 4:3 CRTs). In general, the thing should be designed so that clients can be notified at any time of configuration changes. That along with a lower level arbitration mechanism on actual hardware access. There are numerous pros and cons for both schemes. The user space code is swappable, easier to debug, and does not need to be run as root. Cons are that these are more pieces to track. Device driver code is minimized. On the other hand boot time mode setting forces the code back into the kernel. Early user space should also be considered. It may be possible to use the BIOS for display until early user space is there, then change the mode. 
There's only about a screenful of display before early user space starts. If the userland code is in initramfs, it can be run very early; we can use a small text engine like pmac btext for early debugging if necessary. A side effect of the whole mode setting issue is dual/tri head cards. Once there are multiple heads with multiple framebuffers, FB is going to have to start memory managing the VRAM, which it does not currently do. DRI runs a memory manager over the same VRAM and this is a conflict. Yes, that, access arbitration, and config change notifications are the main issues at this time. Another conflict is that the OpenGL/xserver can move the display framebuffer around in memory, for example when going full screen on video. It will be complicated to coordinate the location of the current scan buffers between xserver and fb. Currently XFree86 can't do this so it isn't a problem. Ben. = Jon Smirl [EMAIL PROTECTED]
Re: [Dri-devel] Memory management of AGP and VRAM
--- Egbert Eich [EMAIL PROTECTED] wrote: Jon Smirl writes: I'm putting together a document for Kernel Summit that describes the issues around graphics device drivers. The kernel developers are currently making first pass comments on it. As soon as I fold their comments in I'll post it to fb-dev, dri-dev and wherever else is appropriate for the next round of comments. Nobody is proposing final solutions yet, I'm just trying to collect everyone's opinion. I fear that we will get a very Linux-centric view on device drivers. This will leave us with device drivers for Linux and different ones (or none!) for the rest of the world. From an X developer's point of view this is a support nightmare as he is the first one users will turn to if things don't work as expected. We also have to consider the trade-off between the interfaces a modern graphics driver needs versus maintaining multi-platform availability. If Linux merges the FB/DRM drivers and moves certain things to the kernel, there is nothing stopping other OS kernel developers from adding similar features to their kernels, potentially even re-using the Linux fb/drm model (pending licenses). If X standardizes on an interface to hardware, we can leave it up to the kernel people to implement that interface. X developers won't have to worry about re-implementing support for various buses and quirks that the OS already handles. OSes that choose not to support the new interfaces can always fall back on the older releases of X. Future chipsets may not even be usable down the road in the current model. Furthermore I'd argue that as little as necessary should live in the kernel space. One thing that - in my opinion - should *not* live in there is mode detection and initialization. First of all, we will not be able to do generic VESA mode initialization in the kernel (unless we decide to stick a complete x86 emulator into the kernel). 
Then many driver developers often take a very naive approach at things and produce code that I know I don't want to see in my kernel. One can try to educate them, which may not always be possible - especially in the case of closed source drivers. Egbert.
Re: [Dri-devel] Memory management of AGP and VRAM
Alex Deucher writes: We also have to consider the trade-off between the interfaces a modern graphics driver needs versus maintaining multi-platform availability. If Linux merges the FB/DRM drivers and moves certain things to the kernel, there is nothing stopping other OS kernel developers from adding similar features to their kernels, potentially even re-using the Linux fb/drm model (pending licenses). If X standardizes on an 1. We are not only supporting OS kernels. And if we do there may be license problems (as you've noted already). Disenfranchising these OSes and showing them the finger would be extremely rude. 2. A single code base with thin abstraction wrappers will help to reduce the support burden. Requiring code to be duplicated in different kernels will introduce different errors on every OS. interface to hardware, we can leave it up to the kernel people to implement that interface. X developers won't have to worry about re-implementing support for various buses and quirks that the OS already handles. OSes that choose not to support the new interfaces I agree that we should get rid of this crap. When we were finalizing XFree86 4.x I already suggested moving a lot of the functionality that currently exists in the Xserver to the kernel. At that time I was stonewalled by people saying that we will have to be able to support older kernels anyway. Finally I gave up and stuck everything into the Xserver (even duplicating stuff that was already in the newer kernel - because people wanted to use the old cruft) can always fall back on the older releases of X. Future chipsets may not even be usable down the road in the current model. That would be a support nightmare. We still occasionally see bug reports for XFree86 3.x. We probably can dump a lot of the stuff that is currently in the Xserver into an external lib and not worry about it much any more. This lib can be used by everybody who - for whatever reason - doesn't have the kernel interfaces. Egbert. 
RE: [Dri-devel] Memory management of AGP and VRAM
I for one have been waiting to see much of the graphics driver moved to the kernel as well. From a vendor perspective there is quite a lot to be gained. 1) If the mode setting can be removed from the X server then we can leverage that module for whatever graphics system is required. Sometimes we need an X server, sometimes we need something more like a framebuffer. Putting this in one place is a must. 2) Providing one place for rendering code would be nice too. We cannot assume that X is the way to go for all customers. If there were a place to put the device dependent rendering code (kernel module or low level library) then we could write X servers or custom graphics interfaces to use that library. 3) Sometimes you can just do the job easier or better from kernel space. Trapping interrupts instead of polling can save huge amounts of CPU cycles for some usage scenarios. Power management is easier. Sometimes the hardware needs some special memory considerations etc. No need to really harp on any of the details, it is just nice to have the full power of the OS when/if you need it. I think the best way to make everyone happy is to try to achieve two things. 1) A small, device-independent, API that can be used to set modes and do some very simple rendering. I would suggest get, set, put, copy. That would allow the kernel to render consoles or oopsen regardless of the mode (debugging the kernel on top of your X session?), and allow for any API of the month to make use of some very basic functionality. Mode setting should just be small as well, leave all the one-off features for extensions. No need to clutter an API with features that are rare. Although the fbdev is already available, I wouldn't suggest that it is a great platform to build on. The mode setting API is really not very good and it does not have modern concepts of twin, clone etc. 
I think a clean slate design that didn't try to accomplish everything in a device independent manner could be a much more attractive target. 2) A mechanism to make all the device dependent extensions your heart desires. Then the X servers, opengl libs, etc can just have a DD component to access the hardware specific API. The more things you try to have a device independent API for, the more problems you will have trying to get agreement. Leave the APIs to themselves. We should be trying to create a driver model, not a new graphics API. Ogl, X11, DirectFB, etc should be out of scope. -Matt
Re: [Dri-devel] Memory management of AGP and VRAM
Around 16 o'clock on May 6, Sottek, Matthew J wrote: 1) If the mode setting can be removed from the X server then we can leverage that module for whatever graphics system is required. Some times we need an X server, some times we need something more like a framebuffer. Putting this in one place is a must. 'one place' appears to mean a common library API and not a common kernel API; some cards require extensive code to manage mode selection which can't be effectively implemented in kernel mode (like the current X i810 drivers). 2) Providing one place for rendering code would be nice too. For cards which can support it, I'd like to suggest that the GL API seems a natural fit here. Retargeting the X server to GL appears possible, and I hope to have a proof of concept running by OLS to show people. For other cards, I suggest that there aren't a whole lot of useful accelerated operations; 2D only cards generally don't support general image compositing, so the only critical operations for modern applications are video-video blt and (optionally) solid fill. I've implemented rather a lot of X servers in this way to good effect. 1) A small, device-independent, API that can be used to set modes and do some very simple rendering. Yes, the lowest level graphics driver needs to be able to request a specific mode and find out how that affects the hardware. I would suggest that the 'mode' selected here be indirect -- a 'symbolic' mode which reflects a more sophisticated configuration as specified by some device-specific mechanism. For instance, it would be nice to start a graphics application in TV mode without that application needing to know about all of the underlying complexities. This is similar to how standard modes are specified in X today -- you request a resolution, which is really just a symbolic name for a list of modes. The driver then selects one of those modes which the monitor can support. 
2) A mechanism to make all the device dependent extensions your heart desires. Absolutely -- both for driver writers and mode selection mechanisms. Of course, one thing here is to make sure the kernel API isn't just a 'bag of bits in an ioctl'. Perhaps the kernel API could accept a list of register name/value pairs for the desired mode; the kernel driver would then be responsible for setting the register values appropriately. -keith
RE: [Dri-devel] Memory management of AGP and VRAM
1) If the mode setting can be removed from the X server then we can leverage that module for whatever graphics system is required. Sometimes we need an X server, sometimes we need something more like a framebuffer. Putting this in one place is a must.

'One place' appears to mean a common library API and not a common kernel API; some cards require extensive code to manage mode selection which can't be effectively implemented in kernel mode (like the current X i810 drivers).

Exactly. I have no real concern about where 'one place' is, as long as all the clients (and the kernel at boot is a client) can access it. I would contend that it is perhaps just a long-held fear that mode setting is too big and complex for the kernel. It will be big, complex, and highly privileged code no matter where it lives. At some point we will cross the line into 'too complex'; if we aren't there now, we will be some day soon. Lots of OS designs have had mode setting kernel-side for a long time. The driver contains register-level hardware knowledge and elevated privileges. The client has neither; it knows only the driver's API. Same as DD OGL (client) and DRM (driver).

2) Providing one place for rendering code would be nice too. For cards which can support it, I'd like to suggest that the GL API seems a natural fit here. Retargeting the X server to GL appears possible, and I hope to have a proof of concept running by OLS to show people.

I think you are straying into the area I wanted to stay away from. The driver model will have some chunk of device-dependent code talking to the hardware that knows nothing of the high-level API. It isn't X related, it isn't OGL related. It is hardware related only. Then there are corresponding DD parts for X, OGL or whatever that know this hardware-specific API. As a vendor I want to write this low-level component once, no matter what the future design of the higher-level clients.
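The hardware-only driver interface described here could look something like the vtable below. This is a sketch only, with every name invented for illustration; the point is that the interface mirrors what the chip can do, and a 2D-only part simply leaves most entry points NULL so the DD layer above knows to fall back to software.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical HAL vtable: device-dependent code on one side, DD layers
 * (X, OGL, fbcon) on the other.  Nothing here knows about X or OpenGL. */
struct gfx_hal {
    const char *chip_name;
    /* A 2D-only card may implement only these; NULL means "punt to
     * software" in the DD layer above. */
    int (*solid_fill)(void *hw, int x, int y, int w, int h, unsigned color);
    int (*screen_blt)(void *hw, int sx, int sy, int dx, int dy, int w, int h);
};

/* Trivial stub standing in for a real engine entry point. */
static int noop_blt(void *hw, int sx, int sy, int dx, int dy, int w, int h)
{
    (void)hw; (void)sx; (void)sy; (void)dx; (void)dy; (void)w; (void)h;
    return 0;
}

/* The DD layer probes capabilities instead of assuming them. */
static int hal_can_blt(const struct gfx_hal *hal)
{
    return hal->screen_blt != NULL;
}
```

A vendor writes one such table per chip; X, OGL, or a framebuffer console all consume the same table without the HAL ever learning which of them is on top.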
You cannot predict with any degree of accuracy what the future of graphics is going to be, but that doesn't preclude getting the driver model right. If you want to stack X on top of OGL... fine. I'm talking about the API between the DD parts of OGL and the hardware driver.

For other cards, I suggest that there aren't a whole lot of useful accelerated operations; 2D-only cards generally don't support general image compositing, so the only critical operations for modern applications are video-video blt and (optionally) solid fill. I've implemented rather a lot of X servers in this way to good effect.

Exactly. The driver model interface (maybe call it a HAL) looks like the hardware. For older-generation cards there would be very little here, just some 2d stuff. The DD portion of OGL or whatever would be doing it all in software (probably punting).

This is similar to how standard modes are specified in X today -- you request a resolution, which is really just a symbolic name for a list of modes. The driver then selects one of those modes which the monitor can support.

We have correctly moved most of the X world away from modelines and toward a symbolic name; however, the DI-DD split is still broken. X tells the driver the set of timings, but X knows nothing about the hardware, so the more advanced drivers throw those timings away and use a correct set that most closely matches. It should be the DD component telling the DI component what is possible, and the DI component choosing from that list.

2) A mechanism to make all the device-dependent extensions your heart desires. Absolutely -- both for driver writers and mode selection mechanisms. Of course, one thing here is to make sure the kernel API isn't just a 'bag of bits in an ioctl'.

Correct. Bag-o'-bits is what I'd like to see removed from the fbdev. I think a command interface would be a good design, i.e. write commands rather than ioctls.
(or one ioctl that takes a command buffer). That way you can get one context switch (all privilege elevations will require this, even if it isn't a Ring3-Ring0 thing) for a lot of commands. Plus it is easy to version, and to support multiple versions concurrently:

command0 == set command interface to version 1.0
command1 == draw foo
command2 == dispatch dma buffer xyz

and another client...

command0 == set command interface to version 1.2
command1 == draw foo (slightly different interface than 1.0)

Perhaps the kernel API could accept a list of register name/value pairs for the desired mode; the kernel driver would then be responsible for setting the register values appropriately.

I would actually hate to see this for the mode setting part. This is a complete over-simplification of modern hardware. You would end up defining a whole language of sleeps, wait_for_bits, and register writes to get the job done correctly. You cannot just blast register values to hardware. On Intel hardware you will need to wait for vsyncs, talk to external devices over i2c, etc. Trying to let user-space determine all the register sequencing is asking for trouble.
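The batched, versioned command interface sketched above might look like the following. The command names and this userspace simulation are invented for illustration, but the shape is the point: one submission crosses the privilege boundary once, and the first command in a stream can pin the interface version the client was built against.

```c
#include <assert.h>

/* Hypothetical command stream: one "ioctl" takes a whole buffer. */
enum {
    CMD_SET_VERSION = 0,   /* arg = interface version, e.g. 0x0102 for 1.2 */
    CMD_DRAW_FOO    = 1,   /* stand-in for a real drawing operation */
};

struct gfx_cmd {
    unsigned op;
    unsigned arg;
};

struct gfx_context {
    unsigned version;      /* negotiated per client, so a 1.0 client and
                            * a 1.2 client can run concurrently */
    unsigned draws;
};

/* One privilege transition processes many commands. */
static int submit(struct gfx_context *ctx, const struct gfx_cmd *cmds, int n)
{
    for (int i = 0; i < n; i++) {
        switch (cmds[i].op) {
        case CMD_SET_VERSION:
            ctx->version = cmds[i].arg;
            break;
        case CMD_DRAW_FOO:
            ctx->draws++;
            break;
        default:
            return -1;     /* unknown command for this version */
        }
    }
    return 0;
}
```

Because the version is just another command, extending the interface never breaks old clients: the driver interprets each buffer according to the version its owner declared.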
Re: [Dri-devel] Memory management of AGP and VRAM
Around 18 o'clock on May 6, Sottek, Matthew J wrote:

I would contend that it is perhaps just a long-held fear that mode setting is too big and complex for the kernel.

With a library API instead of a kernel API, each driver author can choose precisely where the split belongs.

I think you are straying into the area I wanted to stay away from. The driver model will have some chunk of device-dependent code talking to the hardware that knows nothing of the high-level API.

Yes, that's certainly true -- the kernel shouldn't know anything about the machinations of user-mode API layering.

It should be the DD component telling the DI component what is possible and the DI component choosing from that list.

Yes, surely -- the graphics application has no business asserting what video timings the monitor can accept. The rest of your comments seem quite reasonable to me; let's keep discussion on this list focused on the kernel aspects of video card support and ignore what's going on in userland.

-keith
Re: [Dri-devel] Memory management of AGP and VRAM
--- Jon Smirl [EMAIL PROTECTED] wrote:

Is there a document describing how memory management is handled for the overall AGP/VRAM space? I've found where texture memory is handled, but who is allocating space for framebuffers on multi-head cards?

Right now the framebuffer is managed in the DDX. Alan Hourihane and Ian Romanick have both done some work on new, improved memory managers for X and the DRI.

If we were to redo the memory management code to support mesa-solo (i.e. no X present), what would need to be changed? Should this code be in the driver or user space?

That's a good question.

Alex

--
Dri-devel mailing list [EMAIL PROTECTED] https://lists.sourceforge.net/lists/listinfo/dri-devel
Re: [Dri-devel] Memory management of AGP and VRAM
I'm putting together a document for Kernel Summit that describes the issues around graphics device drivers. The kernel developers are currently making first-pass comments on it. As soon as I fold their comments in I'll post it to fb-dev, dri-dev and wherever else is appropriate for the next round of comments. Nobody is proposing final solutions yet; I'm just trying to collect everyone's opinion.

Memory management of AGP/VRAM space is identified as a problem area, but nobody has proposed any solution for it. Any solution needs to take into account FB, DRM, mesa-solo and existing XFree86. There have been a few minor comments both ways on doing it in a driver versus a library.

--- Alex Deucher [EMAIL PROTECTED] wrote:

--- Jon Smirl [EMAIL PROTECTED] wrote: Is there a document describing how memory management is handled for the overall AGP/VRAM space? I've found where texture memory is handled, but who is allocating space for framebuffers on multi-head cards?

Right now the framebuffer is managed in the DDX. Alan Hourihane and Ian Romanick have both done some work on new, improved memory managers for X and the DRI.

If we were to redo the memory management code to support mesa-solo (i.e. no X present), what would need to be changed? Should this code be in the driver or user space?

That's a good question.

Alex
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
Jon Smirl wrote:

I'm putting together a document for Kernel Summit that describes the issues around graphics device drivers. [...] Memory management of AGP/VRAM space is identified as a problem area, but nobody has proposed any solution for it. Any solution needs to take into account FB, DRM, mesa-solo and existing XFree86. There have been a few minor comments both ways on doing it in a driver versus a library.

That's not entirely true. I made a proposal last February (search the dri-devel archives for texmem-0-0-2) that used a combination of in-kernel and user-space. Basically, the memory management mechanism is implemented in-kernel, but the policy is implemented in user-space.
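The mechanism/policy split described here can be illustrated with a toy allocator. The "kernel" side below implements only the mechanism (hand out a free block, or reuse whichever victim it is told), while the eviction choice is a callback standing in for user-space policy. All names are hypothetical, not taken from the texmem proposal.

```c
#include <assert.h>

#define N_BLOCKS 4

struct mem_block {
    int in_use;
    unsigned age;              /* bumped by the client on each use */
};

/* Policy lives outside the mechanism: a callback picks the victim. */
typedef int (*evict_policy)(const struct mem_block *blocks, int n);

/* Mechanism: grab a free block, or evict per the supplied policy. */
static int alloc_block(struct mem_block *blocks, int n, evict_policy pick)
{
    for (int i = 0; i < n; i++)
        if (!blocks[i].in_use) {
            blocks[i].in_use = 1;
            return i;
        }
    int victim = pick(blocks, n);       /* defer the decision */
    if (victim < 0 || victim >= n)
        return -1;                      /* policy gave a bogus answer */
    blocks[victim].in_use = 1;          /* reuse the evicted slot */
    return victim;
}

/* One possible policy: evict the block with the highest age. */
static int evict_oldest(const struct mem_block *b, int n)
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (b[i].age > b[best].age)
            best = i;
    return best;
}
```

Swapping `evict_oldest` for a different callback changes the policy without touching the allocation mechanism, which is the property the in-kernel/user-space split is after.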
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
--- Ian Romanick [EMAIL PROTECTED] wrote:

That's not entirely true. I made a proposal last February (search the dri-devel archives for texmem-0-0-2) that used a combination of in-kernel and user-space. Basically, the memory management mechanism is implemented in-kernel, but the policy is implemented in user-space.

Here's a link to it: http://www.mail-archive.com/[EMAIL PROTECTED]/msg09472.html

Do you have any updates to it? We can put a copy up on fd.o and I'll link it into the next round of discussions.

Can any of the kernel memory management code be reused instead of building our own? Obviously this is a different pool, but maybe we could use existing allocators.

Are there any more design documents like this floating around that should be referenced?

Jon Smirl [EMAIL PROTECTED]
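On the question of reusing existing allocators: the job itself is small. A first-fit carve of a linear aperture, roughly what DDX drivers were doing for framebuffer memory, fits in a few dozen lines. This is a sketch with invented names, not a proposal for the actual interface.

```c
#include <assert.h>

#define MAX_EXTENTS 8

/* Hypothetical linear-aperture allocator for a VRAM/AGP range. */
struct extent { unsigned start, size; int used; };

struct aperture {
    struct extent ext[MAX_EXTENTS];   /* kept sorted by start offset */
    int n;
};

static void aperture_init(struct aperture *a, unsigned size)
{
    a->ext[0].start = 0;
    a->ext[0].size = size;
    a->ext[0].used = 0;
    a->n = 1;
}

/* First fit: split the first free extent large enough, return its
 * offset into the aperture, or -1 if nothing fits. */
static long aperture_alloc(struct aperture *a, unsigned size)
{
    for (int i = 0; i < a->n; i++) {
        struct extent *e = &a->ext[i];
        if (e->used || e->size < size)
            continue;
        if (e->size > size && a->n < MAX_EXTENTS) {
            /* carve the remainder into a new free extent after this one */
            for (int j = a->n; j > i + 1; j--)
                a->ext[j] = a->ext[j - 1];
            a->ext[i + 1].start = e->start + size;
            a->ext[i + 1].size = e->size - size;
            a->ext[i + 1].used = 0;
            a->n++;
            e->size = size;
        }
        e->used = 1;
        return e->start;
    }
    return -1;
}
```

Whether to use something like this or an allocator already in the kernel is exactly the open question in the thread; the sketch only shows how little mechanism the simplest answer would need.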
Re: [Dri-devel] Memory management of AGP and VRAM
On Wed, 5 May 2004, Jon Smirl wrote:

I'm putting together a document for Kernel Summit that describes the issues around graphics device drivers. The kernel developers are currently making first-pass comments on it. As soon as I fold their comments in I'll post it to fb-dev, dri-dev and wherever else is appropriate for the next round of comments. Nobody is proposing final solutions yet; I'm just trying to collect everyone's opinion. Memory management of AGP/VRAM space is identified as a problem area, but nobody has proposed any solution for it. Any solution needs to take into account FB, DRM, mesa-solo and existing XFree86. There have been a few minor comments both ways on doing it in a driver versus a library.

This affects video capture as well. It would be nice to be able to reserve chunks of video ram from kernel-space.

best

Vladimir Dergachev
Re: [Mesa3d-dev] Re: [Dri-devel] Memory management of AGP and VRAM
Jon Smirl wrote:

--- Ian Romanick [EMAIL PROTECTED] wrote: That's not entirely true. I made a proposal last February (search the dri-devel archives for texmem-0-0-2) that used a combination of in-kernel and user-space. Basically, the memory management mechanism is implemented in-kernel, but the policy is implemented in user-space.

Here's a link to it: http://www.mail-archive.com/[EMAIL PROTECTED]/msg09472.html Do you have any updates to it? We can put a copy up on fd.o and I'll link it into the next round of discussions.

There was one posted after that, on 3-Mar-2003. For some reason, the attachment isn't on marc. http://marc.theaimsgroup.com/?l=dri-devel&m=104673516801006&w=2

Since that point the design has changed some, but the document has not. I started writing a simulation of the design using pthreads. Some actual implementation experience exposed some problems in the design. Looking at the modification times on the files, I haven't worked on any of it since 27-May-2003. I *did* start looking at it again today. :)

Can any of the kernel memory management code be reused instead of building our own? Obviously this is a different pool, but maybe we could use existing allocators.

That's a good question. I'd probably have to talk to someone who knows better what is available in the kernel.

Are there any more design documents like this floating around that should be referenced?