By the way, if you want a simple, limited and temporary, but very effective, way to optimize shaders, here it is:

1. Trivially convert the TGSI to GLSL
2. Feed the GLSL to the nVidia Cg compiler, telling it to produce optimized output in ARB_fragment_program format
3. Ask the Mesa frontend/state tracker to parse the ARB_fragment_program and give you back TGSI
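As a rough sketch, step 2 could be a plain cgc invocation (this assumes NVIDIA's cgc is installed; the file names are illustrative, and the exact mapping of cgc profiles to the NV extensions is from memory, so check `cgc -help` before relying on it):

```shell
# Sketch only: assumes the TGSI from step 1 has already been dumped as
# GLSL into shader.frag.

# Compile the GLSL and ask for ARB_fragment_program output.
#   -oglsl     treat the input as GLSL rather than Cg
#   -profile   output profile: arbfp1 = ARB_fragment_program
cgc -oglsl -profile arbfp1 -o shader.fp shader.frag

# On NV hardware, the fp30/fp40 profiles target the NV fragment program
# extensions instead, which expose condition codes and (for fp40)
# real control flow.
cgc -oglsl -profile fp40 -o shader.fp shader.frag
```

The resulting shader.fp is ordinary ARB/NV fragment program text, which is what step 3 hands to the Mesa frontend to get TGSI back.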
This does actually optimize the program well, and performs all the desired control-flow transformations. If your GPU supports predicates or condition codes, you can also ask the Cg compiler for NV_fragment_program_option output, which uses them efficiently. If it also supports control flow, you can ask for NV_fragment_program2 and get control flow where appropriate.

Of course, if this does not happen to do exactly what you want, you are totally out of luck, since it is closed source.

With an ad-hoc TGSI optimizer you can at least modify it, but that will often mean rearchitecting the module, since it may be too primitive for the new feature you want, and then implementing everything from scratch with no supporting tools to help you. With a real compiler framework, you either already have the optimization ready for use, or you at least have a comprehensive conceptual framework, an IR, and a full set of analyses and tools to build on, not to mention a whole community of compiler developers who can at least tell you the best way of doing what you want (and actually give competent advice), if they have not already done it or planned to do it themselves.

_______________________________________________
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev