Re: [Dri-devel] GL_NV_texture_rectangle on radeon
Ian Romanick wrote: Keith Whitwell wrote: Here's a patch that mainly works. I've still seen the odd case of the texture apparently getting uploaded to the backbuffer. ...but *only* if you have a kernel module installed that understands rectangle state. As it is, the code in radeon_state_init.c allows texture rectangle state to be emitted even if the DRM cannot handle the RADEON_PP_TEX_SIZE_? packets. The result is an immediate drmCommandWrite: -22 for any GL program. :( I copied over the test from radeon_context.c to the two places in radeon_state_init.c that set up the state atoms, but that's not a very pretty solution. OK, my bad -- I'll fix this up. Keith --- This SF.Net email is sponsored by: INetU Attention Web Developers Consultants: Become An INetU Hosting Partner. Refer Dedicated Servers. We Manage Them. You Get 10% Monthly Commission! INetU Dedicated Managed Hosting http://www.inetu.net/partner/index.php ___ Dri-devel mailing list [EMAIL PROTECTED] https://lists.sourceforge.net/lists/listinfo/dri-devel
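The check Keith mentions copying from radeon_context.c amounts to gating each state atom on the DRM minor version, so rectangle-texture packets are never emitted to a kernel that cannot parse them. A minimal sketch of that gating; the atom struct, function names, and the minor-version constant are illustrative stand-ins, not the actual radeon_state_init.c symbols:

```c
/* Hypothetical: rectangle-texture packets need DRM minor >= 9
 * (the version check quoted elsewhere in this thread). */
#define DRM_MINOR_TEX_RECT 9

typedef struct {
    int emitted;   /* whether this atom may be sent to the DRM */
} state_atom;

/* Only allow the atom if the kernel understands the packet;
 * otherwise drmCommandWrite() fails with -EINVAL (-22), as seen above. */
static void init_tex_rect_atom(state_atom *atom, int drm_minor)
{
    atom->emitted = (drm_minor >= DRM_MINOR_TEX_RECT);
}
```

The same predicate would guard both places in radeon_state_init.c that set up the rectangle state atoms.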
[Dri-devel] Your buglist for The XFree86 Project's Bugzilla needs attention.
[This e-mail has been automatically generated.] You have one or more bugs assigned to you in the Bugzilla bug system (http://bugs.xfree86.org/) that require attention. All of these bugs are in the NEW state, and have not been touched in 7 days or more. You need to take a look at them, and decide on an initial action. Generally, this means one of three things: (1) You decide this bug is really quick to deal with (like, it's INVALID), and so you get rid of it immediately. (2) You decide the bug doesn't belong to you, and you reassign it to someone else. (Hint: if you don't know who to reassign it to, make sure that the Component field seems reasonable, and then use the Reassign bug to owner of selected component option.) (3) You decide the bug belongs to you, but you can't solve it this moment. Just use the Accept bug command. To get a list of all NEW bugs, you can use this URL (bookmark it if you like!): http://bugs.xfree86.org//cgi-bin/bugzilla/buglist.cgi?bug_status=NEW[EMAIL PROTECTED] Or, you can use the general query page, at http://bugs.xfree86.org//cgi-bin/bugzilla/query.cgi. Appended below are the individual URLs to get to all of your NEW bugs that haven't been touched for a week or more. You will get this message once a day until you've dealt with these bugs! 
http://bugs.xfree86.org/cgi-bin/bugzilla/show_bug.cgi?id=25 http://bugs.xfree86.org/cgi-bin/bugzilla/show_bug.cgi?id=62 http://bugs.xfree86.org/cgi-bin/bugzilla/show_bug.cgi?id=78 http://bugs.xfree86.org/cgi-bin/bugzilla/show_bug.cgi?id=98 http://bugs.xfree86.org/cgi-bin/bugzilla/show_bug.cgi?id=118 http://bugs.xfree86.org/cgi-bin/bugzilla/show_bug.cgi?id=131 http://bugs.xfree86.org/cgi-bin/bugzilla/show_bug.cgi?id=185 http://bugs.xfree86.org/cgi-bin/bugzilla/show_bug.cgi?id=271 http://bugs.xfree86.org/cgi-bin/bugzilla/show_bug.cgi?id=314 http://bugs.xfree86.org/cgi-bin/bugzilla/show_bug.cgi?id=344
Re: [Dri-devel] Only normal DRI and Mesa CVS access as developer, now?
On Monday, 23 June 2003 at 20:24, José Fonseca wrote: On Mon, Jun 23, 2003 at 06:58:39PM +0200, Dieter Nützel wrote: This sourceforge backup server move is very annoying. It's hindering open source so much. Do we have other options? I've never tried, but isn't it possible for non-developers to also access the SF CVS repository via SSH (read-only, of course)? If not, then the solution would imply moving the CVS repository to a new machine, but where would that be? XFree86.org? Some SF look-alike? Maybe at www.berlios.de? mesa3D (cvs.mesa3d.sourceforge.net) seems to be affected, too. Regards, Dieter
Re: [Dri-devel] Last part of GLX_SGI_make_current_read support
Ian Romanick wrote: Brian Paul wrote: Ian Romanick wrote: I'm rounding out the final bits of support for SGI_make_current_read. I've hit a (hopefully) minor snag, and I'd like some advice on how to proceed. At this point, *all* of the client-side support, both in libGL.so and in at least one driver, is in place. I'm now working on the server-side support. My approach in libGL was to make a single master routine that glXMakeCurrent, glXMakeContextCurrent, and glXMakeCurrentReadSGI all call. That greatly simplified the code. I would like to duplicate that model on the server side. My biggest uncertainty is where the API boundaries are (i.e., where the binary compatibility problems are). I believe that it is safe for me to modify the __GLXcontextRec (in programs/Xserver/GL/glx/glxcontext.h). I need to replace pGlxPixmap glxPriv with separate read and draw pointers. I have also modified __glXMakeCurrent (in programs/Xserver/GL/glx/glxcmds.c) to be a new function called DoMakeCurrent that takes a client state pointer, read and draw drawable pointers, a context ID, and a context tag, and does the right thing. Things are pretty smooth up to this point. The problem comes in gc->exports.makeCurrent (called by __glXMakeCurrent in the original code). As near as I can tell, this is set to __MESA_makeCurrent (located in programs/Xserver/GL/mesa/src/X/xf86glx.c, line 775). This function makes an imported call to get the current drawable and calls XMesaMakeCurrent. Herein lies the problem. I need to add a new imported call (something like getReadablePrivate) and change __MESA_makeCurrent to call XMesaMakeCurrent2. The server-side code for MakeCurrent always struck me as being kind of weird. Instead of passing the new context and drawable to the core renderer's MakeCurrent function, __glXMakeCurrent() binds the drawable to the context (line 514), then calls the core renderer's MakeCurrent with just a context parameter. 
Just by looking at how things are called, making these changes would seem to be a binary compatibility problem. Is that assessment correct? It seems like I can, with a certain amount of pain and suffering, work around the problem *if* I can detect when the different binaries are expecting different interfaces. Does anyone have any advice on how to do that? Is there any way for libglx.a and libGLcore.a to tell which version the other is? Is it safe to expect that both will always be in sync? After a quick code review, I think you'll have to add a new 'readbuffer' field to the end of the __GLXcontext struct in glxcontext.h. Right. I believe that there also needs to be a readPixmap field. Then, in __glXMakeCurrentRead (doesn't exist yet, but would wind up in glxcmds.c) you'd set that new field before calling gc->exports.makeCurrent() (i.e. __MESA_makeCurrent). Right. Then gc->exports.makeCurrent calls back into the libglx.a side of things to get the drawable data structures it needs. I coded it up so that __glXMakeCurrent and friends are just stubs that call a master function called DoMakeCurrent. __glXMakeCurrent looks like:

int __glXMakeCurrent(__GLXclientState *cl, GLbyte *pc)
{
    xGLXMakeCurrentReq *req = (xGLXMakeCurrentReq *) pc;
    return DoMakeCurrent( cl, req->drawable, req->drawable,
                          req->context, req->oldContextTag );
}

__glXMakeContextCurrent and __glXMakeCurrentReadSGI look very similar. All of the real work happens in DoMakeCurrent. I do this because glXMakeCurrent is the same as calling one of the other two functions with the same parameter for the drawable and read drawable. I didn't want to duplicate the code. Next, I think you'll need to add a new getReadablePrivate() function pointer to the __GLimports structure (in glcore.h). That's another ABI issue. Right. This is the same logic I was following. :) In __MESA_makeCurrent you'll call gc->imports.getReadablePrivate to get the read buffer to pass to XMesaMakeCurrent2(). 
Before doing this call, you'll have to somehow determine if the imports structure has the new field. Yes. Not only that, the libglx.a side has to know if the libGLcore.a side will want getReadablePrivate to be there. That is, libglx.a has to know if libGLcore.a can support having the drawable and the readable be different. Another approach might be to add a new makeCurrentRead() function to the __GLexports structure (glcore.h) and call that from __glXMakeCurrentRead(), instead of gc->exports.makeCurrent. This would just move the ABI issue from one place to another, though. I had thought about going that route as well. My main problem is that it adds another nearly complete code path, but doesn't buy us anything. I'm not sure what to do about version/ABI checking. The server-side interaction between libglx.a and libGLcore.a is an area that I'm not especially knowledgeable about. Ugh. So I guess that makes me the de facto expert? That's scary! :)
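The stub-plus-master pattern described above can be boiled down to a few lines. This is a simplified sketch, not the actual glxcmds.c code: the types are stand-ins, and the point is only that glXMakeCurrent is the read == draw special case of one shared routine:

```c
typedef unsigned XID;

typedef struct {
    XID draw;   /* drawable bound for drawing */
    XID read;   /* drawable bound for reading */
} bound_drawables;

/* The master routine: all the real work happens here, once,
 * for all three GLX entry points. */
static int do_make_current(bound_drawables *out, XID draw, XID read)
{
    out->draw = draw;
    out->read = read;
    return 1;   /* success */
}

/* glXMakeCurrent: read and draw are the same drawable. */
static int make_current(bound_drawables *out, XID draw)
{
    return do_make_current(out, draw, draw);
}

/* glXMakeContextCurrent / glXMakeCurrentReadSGI: they may differ. */
static int make_current_read(bound_drawables *out, XID draw, XID read)
{
    return do_make_current(out, draw, read);
}
```

The stubs cost one function call each and remove the duplicated binding logic, which is the simplification Ian describes on the client side.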
Re: [Dri-devel] Last part of GLX_SGI_make_current_read support
On Tue, 2003-06-24 at 23:42, Ian Romanick wrote: Just by looking at how things are called, making these changes would seem to be a binary compatibility problem. Is that assessment correct? It seems like I can, with a certain amount of pain and suffering, work around the problem *if* I can detect when the different binaries are expecting different interfaces. Does anyone have any advice on how to do that? Is there any way for libglx.a and libGLcore.a to tell which version the other is? Is it safe to expect that both will always be in sync? I can't think of a good reason why they would be out of sync. Just bump the module versions so we can diagnose when they aren't and tell people to unbreak their systems. :) -- Earthling Michel Dänzer \ Debian (powerpc), XFree86 and DRI developer Software libre enthusiast \ http://svcs.affero.net/rm.php?r=daenzer
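Michel's suggestion amounts to a load-time handshake: each module carries a version, and a mismatch is reported instead of crashing later. A sketch under assumed names and numbers (nothing here is the real XFree86 module-versioning API):

```c
/* Hypothetical interface version exported by libglx; the minor would
 * be bumped when make_current_read support lands. */
#define GLX_MODULE_MAJOR 1
#define GLX_MODULE_MINOR 1

/* Returns nonzero when the core renderer's reported version is usable:
 * same major, and at least the minor libglx was built against. */
static int modules_compatible(int core_major, int core_minor)
{
    return core_major == GLX_MODULE_MAJOR &&
           core_minor >= GLX_MODULE_MINOR;
}
```

At startup, libglx would query libGLcore's version and refuse to initialize (with a clear log message) when this returns zero.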
Re: [Dri-devel] GL_NV_texture_rectangle on radeon
Ian Romanick wrote: I only noticed this because when I try to insmod the new radeon.o, it fails on an undefined symbol mmu_cr4_features. The weird part is that /boot/System.map-2.4.21-rc2-ac2 shows it as being in global BSS (c02ccd0c B mmu_cr4_features), *but* /proc/ksyms doesn't show it. I don't get it. I upgraded to 2.4.21-ac3, but I kept getting the same problem. To work around it, I had to apply the following patch. Alan, any chance of getting this in -ac4? :)

--- arch/i386/kernel/i386_ksyms.c.orig	Thu Jun 26 07:42:06 2003
+++ arch/i386/kernel/i386_ksyms.c	Thu Jun 26 07:33:21 2003
@@ -77,6 +77,7 @@
 EXPORT_SYMBOL(apm_info);
 EXPORT_SYMBOL(gdt);
 EXPORT_SYMBOL(empty_zero_page);
+EXPORT_SYMBOL_NOVERS(mmu_cr4_features);
 #ifdef CONFIG_DEBUG_IOVIRT
 EXPORT_SYMBOL(__io_virt_debug);
Re: [Dri-devel] 3rd TMU on radeon
here is a patch that works at least for multiarb.c. It is against HEAD from 19 June 2003 (I cleaned it up a bit, but it's not ready for merge: still some questions...) 1) could someone with an 8MB or 16MB Radeon check if the resulting max_texturesize is big enough? (just use mesa's glxinfo: glxinfo -l) 2) could someone try it out with a game/demo that makes use of the 3rd TMU? 3) could someone with knowledge about the vfmt and codegen stuff have a look at it? especially whether we need those dummies and what should be done in the fast path and with vertex3f. best regards, Andreas

diff -ru trunk_20030619/xc/xc/extras/Mesa/src/tnl_dd/t_dd_vbtmp.h tex3_20030619/xc/xc/extras/Mesa/src/tnl_dd/t_dd_vbtmp.h
--- trunk_20030619/xc/xc/extras/Mesa/src/tnl_dd/t_dd_vbtmp.h	Fri Apr 4 19:31:02 2003
+++ tex3_20030619/xc/xc/extras/Mesa/src/tnl_dd/t_dd_vbtmp.h	Tue Jun 24 23:39:04 2003
@@ -381,6 +381,7 @@
 	    v->pv.q0 = 1.0;
 	    v->pv.q1 = 0;	/* radeon */
+	    v->pv.q2 = 0;	/* should we do it this way or the other below? */
 	 }
 	 else if (tc0_size == 4) {
 	    float rhw = 1.0 / tc0[i][3];
@@ -390,6 +391,9 @@
 	    }
 	 }
       }
+      else if (DO_PTEX && HAVE_PTEX_VERTICES) {
+	 v->pv.q0 = 0;	/* do we need this, too, for radeon ? */
+      }
       if (DO_TEX1) {
 	 if (DO_PTEX) {
 	    v->pv.u1 = tc1[i][0];
@@ -403,6 +407,43 @@
 	    v->v.u1 = tc1[i][0];
 	    v->v.v1 = tc1[i][1];
 	 }
+      }
+      else if (DO_PTEX && HAVE_PTEX_VERTICES) {
+	 v->pv.q1 = 0;	/* do we need this, too, for the radeon ? */
+      }
+      if (DO_TEX2) {
+	 if (DO_PTEX) {
+	    v->pv.u2 = tc2[i][0];
+	    v->pv.v2 = tc2[i][1];
+	    if (tc2_size == 4)
+	       v->pv.q2 = tc2[i][3];
+	    else
+	       v->pv.q2 = 1.0;
+	 }
+	 else {
+	    v->v.u2 = tc2[i][0];
+	    v->v.v2 = tc2[i][1];
+	 }
+      }
+      else if (DO_PTEX && HAVE_PTEX_VERTICES) {
+	 v->pv.q2 = 0;	/* do we need this, too, for the radeon ? */
+      }
+      if (DO_TEX3) {
+	 if (DO_PTEX) {
+	    v->pv.u3 = tc3[i][0];
+	    v->pv.v3 = tc3[i][1];
+	    if (tc3_size == 4)
+	       v->pv.q3 = tc3[i][3];
+	    else
+	       v->pv.q3 = 1.0;
+	 }
+	 else {
+	    v->v.u3 = tc3[i][0];
+	    v->v.v3 = tc3[i][1];
+	 }
+      }
+      else if (DO_PTEX && HAVE_PTEX_VERTICES) {
+	 v->pv.q3 = 0;	/* do we need this, too, for the radeon ? */
       }
    }
 }
diff -ru trunk_20030619/xc/xc/lib/GL/mesa/src/drv/radeon/radeon_compat.c tex3_20030619/xc/xc/lib/GL/mesa/src/drv/radeon/radeon_compat.c
--- trunk_20030619/xc/xc/lib/GL/mesa/src/drv/radeon/radeon_compat.c	Mon Nov 25 21:20:09 2002
+++ tex3_20030619/xc/xc/lib/GL/mesa/src/drv/radeon/radeon_compat.c	Tue Jun 24 23:56:46 2003
@@ -77,6 +77,7 @@
    radeon_context_regs_t *ctx = &sarea->ContextState;
    radeon_texture_regs_t *tex0 = &sarea->TexState[0];
    radeon_texture_regs_t *tex1 = &sarea->TexState[1];
+   radeon_texture_regs_t *tex2 = &sarea->TexState[2];
    int i;
    int *buf = state->cmd;
@@ -180,14 +181,25 @@
 	 tex1->pp_border_color = buf[i++];
 	 sarea->dirty |= RADEON_UPLOAD_TEX1;
 	 break;
+      case RADEON_EMIT_PP_TXFILTER_2:
+	 tex2->pp_txfilter = buf[i++];
+	 tex2->pp_txformat = buf[i++];
+	 tex2->pp_txoffset = buf[i++];
+	 tex2->pp_txcblend = buf[i++];
+	 tex2->pp_txablend = buf[i++];
+	 tex2->pp_tfactor = buf[i++];
+	 sarea->dirty |= RADEON_UPLOAD_TEX2;
+	 break;
+      case RADEON_EMIT_PP_BORDER_COLOR_2:
+	 tex2->pp_border_color = buf[i++];
+	 sarea->dirty |= RADEON_UPLOAD_TEX2;
+	 break;
       case RADEON_EMIT_SE_ZBIAS_FACTOR:
 	 i++;
 	 i++;
 	 break;
-      case RADEON_EMIT_PP_TXFILTER_2:
-      case RADEON_EMIT_PP_BORDER_COLOR_2:
       case RADEON_EMIT_SE_TCL_OUTPUT_VTX_FMT:
       case RADEON_EMIT_SE_TCL_MATERIAL_EMMISSIVE_RED:
       default:
diff -ru trunk_20030619/xc/xc/lib/GL/mesa/src/drv/radeon/radeon_context.c tex3_20030619/xc/xc/lib/GL/mesa/src/drv/radeon/radeon_context.c
--- trunk_20030619/xc/xc/lib/GL/mesa/src/drv/radeon/radeon_context.c	Wed Jun 11 00:06:16 2003
+++ tex3_20030619/xc/xc/lib/GL/mesa/src/drv/radeon/radeon_context.c	Thu Jun 26 17:15:36 2003
@@ -301,7 +301,10 @@
     */
    ctx = rmesa->glCtx;
-   ctx->Const.MaxTextureUnits = 2;
+   if ( getenv( "RADEON_NO_3RD_TMU" ) )
+      ctx->Const.MaxTextureUnits = 2;
+   else
+      ctx->Const.MaxTextureUnits = RADEON_MAX_TEXTURE_UNITS; /* 3 */
    driCalculateMaxTextureLevels( rmesa->texture_heaps,
 				 rmesa->nr_heaps,
@@ -314,6 +317,8 @@
 				 12,
 				 GL_FALSE );
+   /* FIXME: we should verify that we dont get limits below the minimum requirements of OpenGL */
+   ctx->Const.MaxTextureMaxAnisotropy = 16.0;
    /* No wide points.
@@ -374,13 +379,15 @@
    _math_matrix_ctr( &rmesa->TexGenMatrix[0] );
    _math_matrix_ctr( &rmesa->TexGenMatrix[1] );
+   _math_matrix_ctr( &rmesa->TexGenMatrix[2] );
    _math_matrix_ctr( &rmesa->tmpmat );
    _math_matrix_set_identity( &rmesa->TexGenMatrix[0] );
    _math_matrix_set_identity( &rmesa->TexGenMatrix[1] );
+   _math_matrix_set_identity( &rmesa->TexGenMatrix[2] );
    _math_matrix_set_identity( &rmesa->tmpmat );
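The MaxTextureUnits change in the radeon_context.c hunk above boils down to a small gate on an environment variable. A sketch with the getenv() lookup factored out so the logic can be exercised directly; RADEON_MAX_TEXTURE_UNITS (3) is taken from the patch, the function name is hypothetical:

```c
#define RADEON_MAX_TEXTURE_UNITS 3

/* Mirrors the patch: any setting of RADEON_NO_3RD_TMU falls back to
 * the old two-unit behaviour; otherwise the third TMU is exposed.
 * The caller passes the result of getenv("RADEON_NO_3RD_TMU"). */
static int radeon_max_texture_units(const char *no_3rd_tmu_env)
{
    return no_3rd_tmu_env ? 2 : RADEON_MAX_TEXTURE_UNITS;
}
```

In the driver this value then feeds driCalculateMaxTextureLevels(), which is why question 1 above asks whether the per-unit max_texturesize stays big enough on 8MB/16MB cards.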
[Dri-devel] DRM errors on an i830
I just tried to fire up Quake III on a laptop with an Intel 830 chipset and the i830 DRI snapshot from a couple of days ago. The game started, but only displayed one frame (I think). I switched consoles, killed Q3, and then got this message on the console: [drm:i830_wait_ring] *ERROR* space: 131048 wanted 131064 [drm:i830_wait_ring] *ERROR* lockup I've seen similar errors before, but I don't know if they were exactly the same. I can get NWN to play on here at about 13 fps with those drivers, but there are some other problems there. (It has stuttering audio, at roughly the frequency corresponding to the game's frame rate. It also crashes eventually. I was trying out Q3 for the first time on this laptop to see if it experiences the same problems. I haven't been able to see if the same error appears on the console in NWN, though.) Glxgears runs fine on here, at 148 fps. (Slow, but is this normal for this chipset?) Glxinfo reports that direct rendering _is_ working. The X server, by the way, is the XFree86 4.3.0 shipped w/ Red Hat 9, and the kernel is the Red Hat 9 kernel w/ the Dec. 12 ACPI patches applied. As far as I can tell, the kernel drm module and server module for the Intel 830 driver I downloaded are installed correctly. I was getting DRM errors with Red Hat 8.0, too, but I don't know whether they were the same ones or not. They occurred using Galeon, rather than a 3D client. I didn't write down the errors there, because I thought that upgrading to Red Hat 9 and the latest drivers would probably make them go away. But, they sure looked similar to what I'm seeing now. Does anyone have an idea of where the problem might be? (I've done a little bit of work on the R100 driver myself, and wanted to look through the i830 source myself, but the anonymous CVS servers have been a wreck for the last few days.) Thanks for the help! John
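For reference, the arithmetic behind that wait_ring error: free ring space is head minus tail modulo the ring size, less a small safety gap (8 bytes in the i830 DRM) that keeps the tail from catching the head. A sketch of the computation; note that with a 128KB ring, head 0, and tail 16 it yields exactly the 131048 in the log, and "wanted 131064" is the full ring minus the gap, i.e. the driver was waiting for an almost-empty ring that never drained (the "lockup"):

```c
/* Sketch of the free-space computation the i830 DRM uses while
 * waiting on the ring buffer. */
static int ring_space(unsigned head, unsigned tail, unsigned size)
{
    int space = (int)head - (int)(tail + 8);  /* 8-byte safety gap */
    if (space < 0)
        space += (int)size;                   /* wrap around the ring */
    return space;
}
```

The head/tail values here are illustrative; the real ones come from the chip's ring registers, which is why a wedged engine shows up as space that never grows.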
Re: [Dri-devel] SiS news
I just saw this on extremetech today: http://www.extremetech.com/article2/0,3973,1101038,00.asp Looks like SiS is spinning off its graphics chip division. Perhaps this could mean better access to databooks! Now might be a good time to ask if they've considered it, get the idea out there while everything is being re-arranged... Liam

it depends
[Dri-devel] Want to help enable DRI on multiple-head cfg's
I would like to help enable DRI for multiple head configurations, but I need a little direction to get started. I understand from previous postings and comments I have seen in the source that DRI is currently disabled in multiple head cfg's because of sync problems, but I am not sure of the exact technical meaning of this. My development platform is an iBook w/ an ATI Radeon Mobility 7500, Debian (sid) (2.4.19 kernel), XFree86 v4.3.0 and Michel Dänzer's DRI trunk. I did some initial work on the 2D radeon driver to enable dual independent heads and this works ok for now. I then backed off to just the internal panel and, using Michel's DRI trunk, got DRI enabled. Now I would like to take the last step and get DRI going w/ this dual independent head configuration. I am not sure what modules to start looking at for this effort... i.e. are the changes required in the XFree86 3D drivers in xc/lib/GL/mesa/src/drv/radeon/ and/or in the kernel-mode drm modules in xc/programs/Xserver/hw/xfree86/os-support/linux/drm/kernel/? Also, some direction from a design standpoint and hints/tips on where to look would be useful. -dean andreakis
Re: [Dri-devel] Want to help enable DRI on multiple-head cfg's
I don't know what's involved in making the DRI work on multiple physical graphics cards, but if you want HW accelerated 3D on a dualheaded radeon card, check out my mergedfb patch: http://bugs.xfree86.org//cgi-bin/bugzilla/show_bug.cgi?id=276 I'm not sure you'll be able to get dual independent 3D accelerated heads from a dualheaded card, since there is only one 3D engine. I don't know if it could be made to work with things like different color depths. Then there is the issue of 2 heads fighting for the same graphics engine. There would have to be some sort of coordination. Perhaps you could somehow make the second head a 3D client of the first? I could be wrong though; I'm still learning the 3D side of things. Actually, thinking about it more, you could take the mergedfb code and rearrange it a bit to give the impression of being two separate independent heads, but actually sharing a framebuffer on the backend. Just divide the big framebuffer into two virtual framebuffers that correspond to each independent head. I think both heads would have to be the same color depth. Alex --- Andreakis, Dean (MED) [EMAIL PROTECTED] wrote: I would like to help enable DRI for multiple head configurations but I need a little direction to get started. I understand from previous postings and comments I have seen in the source that DRI is currently disabled in multiple head cfg's because of sync problems, but I am not sure of the exact technical meaning of this. My development platform is an iBook w/ an ATI Radeon Mobility 7500, Debian (sid) (2.4.19 kernel), XFree86 v4.3.0 and Michel Dänzer's DRI trunk. I did some initial work on the 2D radeon driver to enable dual independent heads and this works ok for now. I then backed off to just the internal panel and using Michel's DRI trunk got DRI enabled. Now I would like to take the last step and get DRI going w/ this dual independent head configuration. I am not sure what modules to start looking at for this effort...i.e. 
are the changes required in the XFree86 3D drivers in xc/lib/GL/mesa/src/drv/radeon/ and/or in the kernel-mode drm modules in xc/programs/Xserver/hw/xfree86/os-support/linux/drm/kernel/. Also, some direction from a design standpoint and hints/tips on where to look would be useful. -dean andreakis __ Do you Yahoo!? SBC Yahoo! DSL - Now only $29.95 per month! http://sbc.yahoo.com
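Alex's "one big framebuffer, two virtual heads" idea is mostly offset arithmetic: stack the heads in the shared buffer so each head is described by a byte offset and a shared pitch. A sketch with hypothetical names (the real layout would live in the 2D driver's memory setup):

```c
typedef struct {
    unsigned offset;   /* byte offset of this head's origin in the shared buffer */
    unsigned pitch;    /* bytes per scanline, shared by both heads */
} virtual_head;

/* Divide one framebuffer vertically into two virtual framebuffers:
 * head 0 gets the top half, head 1 the bottom half. As noted above,
 * this only works if both heads use the same color depth (and here,
 * the same width/pitch). */
static void split_framebuffer(unsigned pitch, unsigned height,
                              virtual_head heads[2])
{
    heads[0].offset = 0;
    heads[0].pitch  = pitch;
    heads[1].offset = pitch * (height / 2);
    heads[1].pitch  = pitch;
}
```

Each pScrn would then treat its (offset, pitch) pair as if it owned a whole card, which is the "impression of two independent heads" Alex describes.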
RE: [Dri-devel] Want to help enable DRI on multiple-head cfg's
Alex, Thanks for the information. I will look thru the mergedfb patch code asap. I think your idea of sharing one framebuffer as 2 virtual fb's could work... thanks! -dean andreakis
RE: [Dri-devel] Want to help enable DRI on multiple-head cfg's
you might want to move the framebuffer info up to the entity level and then move the initialization of the DRI to the entity level rather than the pScrn level. Then when the individual pScrns set up their framebuffers and initialize the DRI, it would basically just be a call to some resource manager that would allocate a chunk of framebuffer for that head. Individual pScrn DRI initializations would just be stub functions. Stuff like pixmap caches and such could get hairy... Alex
Re: [Dri-devel] Only normal DRI and Mesa CVS access as developer, now?
On Tue, 2003-06-24 at 14:44, Michel Dänzer wrote: On Tue, 2003-06-24 at 17:37, José Fonseca wrote: But we really need to get a solution around the backup CVS server, as it really damages the beneficial contribution that non-committers can have, since they are days behind what the committers are doing. I wonder if it's really such a pressing problem though. How many people have complained? Allow me to voice my complaints. After reading some of the previous posts on the list, I decided to check out the trunk to test the 3rd TMU patch on a 16 MB radeon and to try the MergeFB patch. For eight hours now, I have not been able to get the trunk. It took about 25 tries to log in and even more for the checkout to start (and then the checkout hangs forever). I tried yesterday, and maybe the day before also, with the same results. --Jonathan Thambidurai
Re: [Dri-devel] 3rd TMU on radeon
Andreas Stenglein wrote: here is a patch that works at least for multiarb.c It is against HEAD from 19 June 2003 (I cleaned it up a bit but its not ready for merge: still some questions...) I've taken a quick peek at this patch, and I have a couple comments. I hope to be able to look at it in more detail in the next week or so, but I can't make any promises. 1) could someone with an 8MB or 16MB Radeon check if the resulting max_texturesize is big enough? (just use mesa's glxinfo: glxinfo -l) I should be able to test this on an M6 w/8MB next. Keep in mind that the amount of available AGP memory also plays a role. If the card only has 1MB of available memory for textures, but there is 256MB of AGP memory, it probably won't be a limitation. 2) could someone try it out with a game/demo that makes use of the 3rd TMU? Other than multiarb, I think UT2k3 is the only option. Did anyone ever get that working on R100? I know that someone (Keith?) finally got it working on R200. 3) could someone with knowledge about the vfmt and codegen stuff have a look on it? especially whether we need those dummys and what should be done in the fast-path and with vertex3f. That's where I'll try to focus my attention when I look at it deep. At first glance (see my notes below), it look pretty good in this respect. diff -ru trunk_20030619/xc/xc/lib/GL/mesa/src/drv/radeon/radeon_context.c tex3_20030619/xc/xc/lib/GL/mesa/src/drv/radeon/radeon_context.c --- trunk_20030619/xc/xc/lib/GL/mesa/src/drv/radeon/radeon_context.c Wed Jun 11 00:06:16 2003 +++ tex3_20030619/xc/xc/lib/GL/mesa/src/drv/radeon/radeon_context.c Thu Jun 26 17:15:36 2003 @@ -314,6 +317,8 @@ 12, GL_FALSE ); + /* FIXME: we should verify that we dont get limits below the minimum requirements of OpenGL */ + ctx-Const.MaxTextureMaxAnisotropy = 16.0; /* No wide points. Yes and no. 
If we persist with the current memory sizing scheme, we'll need to disable texture units if there is not enough memory to provide the required minimum texture size. My opinion, however, is that we should bag that scheme altogether. I think we'll leave that particular issue for a (slightly) later date...

> @@ -374,13 +379,15 @@
>     _math_matrix_ctr( &rmesa->TexGenMatrix[0] );
>     _math_matrix_ctr( &rmesa->TexGenMatrix[1] );
> +   _math_matrix_ctr( &rmesa->TexGenMatrix[2] );
>     _math_matrix_ctr( &rmesa->tmpmat );
>     _math_matrix_set_identity( &rmesa->TexGenMatrix[0] );
>     _math_matrix_set_identity( &rmesa->TexGenMatrix[1] );
> +   _math_matrix_set_identity( &rmesa->TexGenMatrix[2] );
>     _math_matrix_set_identity( &rmesa->tmpmat );
>
>     driInitExtensions( ctx, card_extensions, GL_TRUE );
> -   if ( rmesa->dri.drmMinor >= 9 || getenv( "RADEON_RECTANGLE_FORCE_ENABLE" ) ) /* FIXME! a.s. */
> +   if ( rmesa->dri.drmMinor >= 9 )
>        _mesa_enable_extension( ctx, "GL_NV_texture_rectangle" );
>     radeonInitDriverFuncs( ctx );
>     radeonInitIoctlFuncs( ctx );

The various bits of texture-rectangle cleanup like this should be committed separately from the 3rd TMU stuff. It will make it easier to grok the log messages and roll stuff back if needed.

> diff -ru trunk_20030619/xc/xc/lib/GL/mesa/src/drv/radeon/radeon_context.h tex3_20030619/xc/xc/lib/GL/mesa/src/drv/radeon/radeon_context.h
> --- trunk_20030619/xc/xc/lib/GL/mesa/src/drv/radeon/radeon_context.h	Wed Jun 11 00:06:16 2003
> +++ tex3_20030619/xc/xc/lib/GL/mesa/src/drv/radeon/radeon_context.h	Thu Jun 26 17:22:54 2003
> @@ -635,30 +636,37 @@
>     GLuint prim;
>  };
>
> +/* FIXME: do we really need add. 2 to prevent segfault if someone */
> +/* specifies GL_TEXTURE3 (esp. for the codegen-path) ? */
> +#define RADEON_MAX_VERTEX_SIZE 19 /* 17 + 2 */
> +

If you're going to use a define for this, which is a good idea, you should move the commentary about the chosen value out of radeon_vbinfo and put it next to the define.

I've done some thinking about this particular fast-path. I believe that we should try to keep it as much as possible.
I don't think there's any compelling reason not to. I would point out two things here.

1. The extra 2 elements (or MAX_TEXTURE_COORD_ELEMENTS elements, which is 2 right now, but will eventually grow to 3) are there to give the appearance of having a power-of-two number of texture units. This allows some optimizations.

2. This is safe because the texture coordinates are always the last elements in a vertex. Therefore the actual vertex data is packed into the start of the buffer.

> struct radeon_vbinfo {
>    GLint counter, initial_counter;
>    GLint *dmaptr;
>    void (*notify)( void );
>    GLint vertex_size;
>
> -   /* A maximum total of 15 elements per vertex: 3 floats for position, 3
> +   /* A maximum total of 17 elements per vertex: 3 floats for position, 3
>     * floats for normal, 4 floats for color, 4 bytes for secondary color,
> -   * 2 floats for each texture unit (4 floats total).
> +   * 2 floats for each texture unit (6 floats total).
>     *
> -   * As soon as the 3rd TMU is supported or cube maps (or 3D
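As a quick sanity check (mine, not from the thread) of the element counts discussed above -- 17 data elements per vertex, padded to a RADEON_MAX_VERTEX_SIZE of 19:

```shell
# Per-vertex element count from the radeon_vbinfo comment:
# 3 floats position + 3 floats normal + 4 floats color
# + 1 element (4 packed bytes) secondary color
# + 2 coords * 3 texture units
elements=$((3 + 3 + 4 + 1 + 2 * 3))
echo "max elements per vertex: $elements"      # 17

# Two dummy texture-coordinate slots round the unit count up to a
# power of two (4), so a stray GL_TEXTURE3 write stays in bounds:
echo "padded vertex size: $((elements + 2))"   # 19
```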
Re: [Dri-devel] Only normal DRI and Mesa CVS access as developer,now?
Jonathan Thambidurai wrote:
> On Tue, 2003-06-24 at 14:44, Michel Dänzer wrote:
> > On Tue, 2003-06-24 at 17:37, José Fonseca wrote:
> > > But we really need to get a solution around the backup CVS server, as it really hurts the beneficial contribution that non-committers can make, since they are days behind what the committers are doing.
> >
> > I wonder if it's really such a pressing problem though. How many people have complained?
>
> Allow me to voice my complaints. After reading some of the previous posts on the list, I decided to check out the trunk to test the 3rd TMU patch on a 16 MB Radeon and to try the MergeFB patch. For eight hours now, I have not been able to get the trunk. It took about 25 tries to log in, and even more for the checkout to start (and then the checkout hangs forever). I tried yesterday, and maybe the day before as well, with the same results.

That's an orthogonal issue to the one being discussed. The problem that they were discussing is the 24-hour lag between the contents of the main CVS server and the contents of the backup server. The fact that the backup server is still so loaded that it's virtually unavailable is another issue. :(

One piece of advice that I might offer is to write a script that will do all the CVS steps you need. At each step, have it touch a different file; if that file already exists before starting the step, skip the step. Then start this script once every hour or two during off-peak hours (i.e., between 9PM PDT and 5AM EDT) from cron. When sf.net was having so many problems with developer access before, I found that accessing CVS either very early in the morning (before eastern US people got to work) or very late in the evening (after western US people went home for the night) gave the best results. YMMV.
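The stamp-file approach Ian describes could be sketched as a small shell script. This is only an illustration: the stamp directory, step names, and the commented-out CVS root and module are hypothetical placeholders, not anything from the thread.

```shell
#!/bin/sh
# Checkpointed CVS fetch: each completed step leaves a stamp file behind,
# so when cron re-runs the script after a failure, finished steps are
# skipped and only the remaining work is retried.

STAMPS=${STAMPS:-$HOME/.cvs-stamps}   # where per-step stamp files live
mkdir -p "$STAMPS"

# step NAME CMD...: run CMD unless NAME's stamp already exists;
# create the stamp only if CMD succeeds, so failed steps are retried.
step() {
    name=$1; shift
    if [ -f "$STAMPS/$name" ]; then
        echo "skipping $name (already done)"
        return 0
    fi
    "$@" && touch "$STAMPS/$name"
}

# Hypothetical usage -- substitute the real CVS root and module:
# step login    cvs -d :pserver:anonymous@cvs.example.net:/cvsroot/dri login
# step checkout cvs -d :pserver:anonymous@cvs.example.net:/cvsroot/dri co xc
```

Scheduled from cron during off-peak hours (e.g. a crontab entry running it at 3AM and 5AM, per Ian's timing suggestion), each run picks up wherever the previous one got stuck.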