Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-25 Thread Gareth Hughes
Keith Whitwell wrote:
> Yes, very nice.
>
> Utah did have some stuff going for it.  It was designed as a
> server-side-only accelerated indirect renderer.  My "innovation" was to
> figure out that the client could pretty easily play a few linker tricks
> & load that server module with dlopen(), and then with minimal
> communication with the server, do 90% of the direct rendering tasks
> itself.  (This was after my first encounter with PI, I think, until then
> I hadn't heard of direct rendering).
>
> The nice thing about this was that the same binary was running the show
> on both the client and the server.  That really was obvious in the
> communication between them -- all the protocol structs were private to
> one .c file.

That's what we do -- the NVIDIA libGLcore.so driver backend does both 
client-side direct rendering and server-side indirect rendering. 
libGL.so or libglx.so does the necessary work to allow the main driver 
to have at it.

> It really shouldn't be that hard.  Against it are:
>
> - XFree's dislike of native library functions, which the 3d driver
> uses with abandon.

You can avoid these issues by using imports -- the server-side native 
library function imports would just call the appropriate XFree86 
routine, while the client-side imports would just call the regular C 
library versions.  I think Brian added stuff like this at some point, 
not sure however.
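
To make the imports idea concrete, here's a rough sketch of what such a
table might look like -- the names are made up, not the actual NVIDIA or
XFree86 interfaces:

/* Hypothetical sketch of an "imports" table: the core driver only calls
 * native library functionality through this table, so whoever loads the
 * module decides what backs it.  None of these names are real APIs. */
#include <stdlib.h>
#include <string.h>
#include <stdio.h>

typedef struct {
    void *(*alloc)(size_t size);
    void  (*release)(void *ptr);
    void *(*copy)(void *dst, const void *src, size_t n);
    void  (*log)(const char *msg);
} core_imports;

static void client_log(const char *msg)
{
    fprintf(stderr, "%s\n", msg);
}

/* Client side (direct rendering): bind straight to the C library. */
static const core_imports client_imports = {
    malloc, free, memcpy, client_log,
};

/* Server side (indirect rendering): the loader would instead fill the
 * table with the XFree86 equivalents (xalloc/xfree, ErrorF, ...), so the
 * same driver binary never references libc symbols the server objects to. */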

> - XFree's love of their loadable module format, which the 3d driver
> isn't...

Our libGLcore is a regular shared library (as is our libglx.so, for that 
matter).  Doesn't seem to be an issue, AFAIK.

-- Gareth





Re: [Dri-devel] Fallback in radeon_Materialfv() doesnt work

2003-03-25 Thread Gareth Hughes
Andreas Stenglein wrote:
> Yes, at least the part with GL_TRIANGLE_STRIP.
> In case of "0" you can just return 0, no copying is needed.
> case 0: return 0; break;

You're going to do that, just in a slightly different manner:

switch (nr) {
case 0: ovf = 0; break;
case 1: ovf = 1; break;
default: ovf = 2; break;
}
for (i = 0 ; i < ovf ; i++)
   copy_vertex( rmesa, nr-ovf+i, tmp[i] );
return i;   
When nr == 0, ovf gets set to 0 and you do no iterations of the for 
loop.  You'll then return i, which was initialized to 0.

-- Gareth





Re: [Dri-devel] Fallback in radeon_Materialfv() doesnt work

2003-03-25 Thread Gareth Hughes
Perhaps:

switch (nr) {
case 0:  return 0;
case 1:  ovf = 1; break;
case 2:  ovf = 2; break;
default: ovf = MIN2(nr-1, 2); break;
}
(or similar) would be better, if the code below does indeed fix the problem?

-- Gareth

Andreas Stenglein wrote:
Unfortunately it was only a partial fix
(but it was enough for that particular program).
Here's a new one (and maybe it should be handled this way
for GL_QUAD_STRIP, too):
--- radeon_vtxfmt.c_orig    Fri Mar 21 17:22:23 2003
+++ radeon_vtxfmt.c    Tue Mar 25 07:45:52 2003
@@ -312,7 +312,14 @@
  return 2;
   }
case GL_TRIANGLE_STRIP:
-  ovf = MIN2( nr-1, 2 );
+  if (nr == 0) /* dont let verts go negative! */
+ return 0;
+  if (nr == 1) /* copy the right one ? */
+ ovf = 1;
+  else if (nr == 2) /* copy 2 verts, not only one */
+ ovf = 2;
+  else
+ ovf = MIN2( nr-1, 2 );
   for (i = 0 ; i < ovf ; i++)
  copy_vertex( rmesa, nr-ovf+i, tmp[i] );
   return i;
Could that be a bit faster in the whole thing?

--- radeon_vtxfmt.c    Tue Mar 25 07:57:34 2003
+++ radeon_vtxfmt.c_orig    Fri Mar 21 17:22:23 2003
@@ -312,17 +312,7 @@
  return 2;
   }
case GL_TRIANGLE_STRIP:
-  if (nr < 3)
-  {
- if (nr == 2)   /* copy 2 verts, not only one */
-ovf = 2;
- else if (nr == 1)  /* copy the right one ? */
-ovf = 1;
- else   /* nr==0: dont let verts go negative! */
-return 0;
-  }
-  else
- ovf = MIN2( nr-1, 2 );
+  ovf = MIN2( nr-1, 2 );
   for (i = 0 ; i < ovf ; i++)
  copy_vertex( rmesa, nr-ovf+i, tmp[i] );
   return i;
On 2003.03.24 at 22:13:12 +0100, Keith Whitwell wrote:

Andreas Stenglein wrote:

this patch helps for the demo.
but someone more familiar with radeon_vtxfmt should
check if it really fixes all cases...
I think in case of GL_QUAD_STRIP we should check
for 0, too.
(and maybe for 1?)
--- radeon_vtxfmt.c_orig    Fri Mar 21 17:22:23 2003
+++ radeon_vtxfmt.c    Mon Mar 24 21:52:58 2003
@@ -312,6 +312,8 @@
  return 2;
   }
case GL_TRIANGLE_STRIP:
+  if (nr == 0) /* dont let verts go negative! */
+ return 0;
   ovf = MIN2( nr-1, 2 );
   for (i = 0 ; i < ovf ; i++)
  copy_vertex( rmesa, nr-ovf+i, tmp[i] );


Good catch!

I'll commit fixes for this.

Keith




Re: [Mesa3d-dev] Re: [Dri-devel] Mesa C++ driver framework update

2003-03-10 Thread Gareth Hughes
Keith Whitwell wrote:
> 
> libGL.so provides a dispatch table that can be efficiently switched.  The
> real 'gl' entrypoints basically just look up an offset in this table and
> jump to it.  No new arguments, no new stack frame, nada -- just an
> extremely efficient jump.  Note that this is the library entrypoint, so
> we can't ask the caller to use a function pointer instead:
> 
> 00041930 :
> 41930:   a1 00 00 00 00  mov    0x0,%eax
> 41935:   ff a0 c4 03 00 00   jmp    *0x3c4(%eax)
> 
> unfortunately, that version isn't threadsafe, but Gareth is relentlessly
> pursuing an efficient threadsafe equivalent.

That's certainly one way to put it ;-)  I'll probably send you and Brian an
email about this pretty soon, I'd imagine.  Things are looking good.

-- Gareth




RE: [Dri-devel] RE: future of DRI?

2003-02-28 Thread Gareth Hughes
> NVidia wanted to keep the source code base of the Windows drivers and the
> Linux drivers as close as possible, including what would be considered
> kernel mode stuff.  They started with windows drivers and adapted that to
> linux.  Part of their porting effort was building a kernel level wrapper,
> which "emulated" the minimum win32 kernel service API's the rest of the
> kernel module needed.

I'm always amused by the reasons people come up with for things like this...

Note: In no way am I speaking officially as an employee of NVIDIA
Corporation.

-- 
Gareth Hughes ([EMAIL PROTECTED])
OpenGL Developer, NVIDIA Corporation




Re: [Dri-devel] future of DRI?

2003-02-28 Thread Gareth Hughes
Jon Smirl wrote:
>
> I really don't understand ATI's position on Linux
> drivers. They have better hardware but they are losing
> because of their drivers. I can't think of a better
> solution than having a couple hundred highly skilled,
> performance obsessed, unpaid hackers fixing their code
> for them.
Somehow, I don't think this is an entirely accurate description of the 
situation.  When I was heavily involved in the DRI project, there were 
significant contributions made by perhaps a dozen people, many of whom 
worked for PI/VA (now TG).  By the looks of things, the list of active 
developers has grown some, which is great to see, but there has never 
been anything approaching "a couple hundred highly skilled, performance 
obsessed, unpaid hackers" just waiting to "fix their code for them".

-- Gareth





Re: [Dri-devel] Re: OpenGL and the LinuxThreads pthread_descr structure

2002-05-24 Thread Gareth Hughes

Okay, here's an almost-functional implementation of an OpenGL dispatch 
layer and driver backend.  The dispatching into a dlopened driver 
backend works, the backend just doesn't do anything terribly interesting 
yet (been struggling with bad allergies all week, so I'm not thinking 
very clearly at the moment).

If you have any questions, please don't hesitate to ask.

-- Gareth




sample.tar.bz2
Description: BZip2 compressed data


Re: [Dri-devel] Re: OpenGL and the LinuxThreads pthread_descr structure

2002-05-23 Thread Gareth Hughes

I'm putting the finishing touches on some example asm code that might be
generated at runtime by an OpenGL driver, to go with a sample dispatch
layer, which exercises some of the issues we've been discussing over the
past week.  As it's 6:20am, I might go home and sleep first though ;-)

Thanks to Jakub for clearing up the __thread and -fPIC issue yesterday.
 From memory, this leaves one unaddressed issue:

 - Has the issue with LDT allocation in the kernel, as described by
   Ulrich Drepper here:

http://sources.redhat.com/ml/libc-hacker/2002-02/msg00131.html

   been addressed?  If so, what release(s) of the kernel work
   reliably with __thread?

Also, let's make sure we have a clear understanding about __thread 
variables and dlopenable libraries.  If I declare a __thread variable in 
one dlopenable library (libGL.so) and reference it in another dlopenable 
library (driver.so), is that always a function call per reference (i.e., 
the General Dynamic access model)?
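
To make the question concrete, here's a minimal sketch of the scenario
(the variable name and file split are made up for illustration):

/* --- compiled into libGL.so (dlopenable) --- */
__thread void *__gl_current_context;    /* hypothetical TLS variable */

/* --- compiled into driver.so (also dlopenable) --- */
extern __thread void *__gl_current_context;

void *driver_get_context(void)
{
    /* Under the General Dynamic TLS model this access is expected to be
     * routed through __tls_get_addr(), i.e. a function call for every
     * reference -- which is exactly the cost being asked about above. */
    return __gl_current_context;
}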

Thanks again!

-- Gareth





Re: [Dri-devel] Re: OpenGL and the LinuxThreads pthread_descr structure

2002-05-22 Thread Gareth Hughes

Keith Whitwell wrote:
> 
> Gareth,
> 
> A simplified example of the dispatch & codegen layers sounds like an 
> excellent way to get across the performance environment we're working 
> in.  Let me know if I can help putting this together.

Agreed -- hence the effort to put this together ;-)

I'm expecting to be done in a couple of hours, but if anything comes up 
I'll get in touch with you.

-- Gareth







Re: [Dri-devel] Re: OpenGL and the LinuxThreads pthread_descr structure

2002-05-22 Thread Gareth Hughes

Keith Whitwell wrote:
> 
>>
>> __thread doesn't require -fpic. There are 4 different TLS models (on 
>> IA-32):
>> -ftls-model=global-dynamic
>> -ftls-model=local-dynamic
>> -ftls-model=initial-exec
>> -ftls-model=local-exec
>>
>> Neither of these require -fpic, though the first 3 use pic
>> register (if not -fpic, they just load it into some arbitrary register).
>> The GD model is for dlopenable libraries referencing __thread variables
>> that can be anywhere (and is most expensive, a function call), LD is for
>> dlopenable libraries referencing __thread variables within that library
>> (again, a function call, but can be one per whole function for all __thread
>> vars mentioned in it), IE is for libraries/programs which cannot be dlopened
>> and can reference __thread variables anywhere in the startup program
>> or its dependencies and LE is for programs only, referencing
>> __thread variables in it. IE involves a memory load from GOT and subtracts
>> that value from %gs:0, LE results in immediate being added to %gs:0.
> 
> 
> 
> It doesn't sound like there's anything in there for us that's a real 
> improvement:  Both of the 'dlopenable' variants require a function call? 
> That's a huge overhead for the application we're talking about.

Yes, in coding up an example to send you all, this issue became clear to 
me.  We need to define thread-local variables in libGL.so and reference 
them from a dlopened driver backed.  The important functions that 
reference these variables are often tiny (less than 10 instructions), so 
a function call here is a killer.

However, here's a critical issue that came to my attention over the 
weekend:  How do you generate code at runtime to reference __thread 
variables?  Doing runtime code generation for the immediate mode API 
calls in the driver backend is quite common (there's an example of this 
in Keith's T&L driver for the Radeon, mentioned earlier in this thread). 
  It's not clear to me how a library generating code to dereference a 
__thread variable can know where that variable is.  Am I mistaken?
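
For illustration (this is just a sketch, not our actual generator), baking
a well-known %gs offset into generated code is trivial -- the offset is
simply an immediate in the emitted bytes -- which is exactly what's lost
with a __thread variable under the General Dynamic model:

/* Rough sketch, not the actual driver code generator: emit the bytes for
 *     movl %gs:slot_offset, %eax
 *     jmp  *table_offset(%eax)
 * A well-known TCB slot offset is just an immediate here; a __thread
 * variable under the General Dynamic model would instead require the
 * generated stub to call __tls_get_addr(). */
#include <stdint.h>
#include <string.h>

static size_t emit_dispatch_stub(uint8_t *buf,
                                 uint32_t slot_offset,   /* e.g. 32 */
                                 uint32_t table_offset)  /* per entrypoint */
{
    uint8_t *p = buf;
    *p++ = 0x65; *p++ = 0xa1;              /* gs: mov moffs32, %eax */
    memcpy(p, &slot_offset, 4);  p += 4;
    *p++ = 0xff; *p++ = 0xa0;              /* jmp *disp32(%eax)     */
    memcpy(p, &table_offset, 4); p += 4;
    return (size_t)(p - buf);              /* 12 bytes of code      */
}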

To give you an idea of how important runtime code generation is to 
modern OpenGL drivers, my Viewperf scores are easily three or four times 
faster with an online generated API front end (plus the optimizations 
that this allows further down the pipe).

-- Gareth




Re: [Dri-devel] Re: OpenGL and the LinuxThreads pthread_descr structure

2002-05-21 Thread Gareth Hughes

David S. Miller wrote:
> 
> Even if this were not the case, stupid compilation tools are not an
> excuse to put changes into the C library.  That is a fact.

We've been talking about two completely separate issues:

   - Fast thread-local storage for libGL and GL drivers.
   - PIC for libGL and GL drivers.

The only "changes" being talked about relate to the first of these 
issues, and have nothing to do with whether libGL is a true shared 
library or not.

I'm interested to know if using __thread forces the use of -fPIC, 
because my first reading of Ulrich's document seemed to suggest this was 
the case.  You say it's irrelevant, but I'd still like to know.  If I am 
incorrect, then I have obviously misunderstood some aspects of Ulrich's 
document and would like some clarification.

-- Gareth




Re: [Dri-devel] Re: OpenGL and the LinuxThreads pthread_descr structure

2002-05-21 Thread Gareth Hughes

David S. Miller wrote:
>
> Why does it matter?  Jakub has shown how to get the same kind of
> non-PIC relocations you want in the GL libraries by using private
> versions of symbols.

Using a feature that is "a very new thing" (to quote Jakub) -- only "GCC 
3.2 (mainline CVS), the Red Hat GCC 3.1 package and gcc-2.96-RH >= 
2.96-108" support this.

> Also, the PIC register argument is bogus too.  I know for a fact that
> current GCC will fully allow allocation of the x86 PIC register
> if you make no references to PIC relocatable data.  %99 of functions
> in an OpenGL implementation will get full use of the PIC register,
> _ESPECIALLY_ if you use the privatization symbol tricks Jakub
> mentioned.
> 
> It should be rare to reference PIC symbols from within OpenGL if done
> properly, thus the PIC register and the relocation arguments are null
> and void.

That may be so, using a bleeding-edge version of GCC, but you haven't 
answered my question.  Besides, moving a workstation-quality OpenGL 
driver to a new compiler like this, just to avoid the penalties 
associated with -fPIC, is not something done lightly.  I know for a fact 
that versions of gcc-2.96-RH have produced a non-functional driver for 
me in the past (missing triangles while running Viewperf, etc).

-- Gareth




[Dri-devel] Re: OpenGL and the LinuxThreads pthread_descr structure

2002-05-21 Thread Gareth Hughes

Sorry for the delay in getting back to you, I've been offline since late 
last week moving into a new building at work.

I've been working on some sample code that clearly demonstrates the 
issues we (as in vendors of OpenGL on Linux) face.  I'm hoping to have 
that wrapped up this afternoon and will send it out when it's complete.

In the mean time, a few questions:

   - Does __thread require -fPIC?  From my initial reading of the PDF
 document on your website, I was under the impression that this was
 the case.

   - Has the issue with LDT allocation in the kernel, as described by
 you here:

http://sources.redhat.com/ml/libc-hacker/2002-02/msg00131.html

 been addressed?  If so, what release(s) of the kernel work with
 __thread?

Thanks!

-- Gareth




[Dri-devel] Re: OpenGL and the LinuxThreads pthread_descr structure

2002-05-17 Thread Gareth Hughes

Dieter Nützel <[EMAIL PROTECTED]> wrote:
> Only short question/remark.
> 
> Does this all play nicely with IBM's NGPT 
> http://oss.software.ibm.com/pthreads/ ?

I'm not familiar with that project yet, but there's no reason why it
can't (if they hang the TCB off %gs as well).

-- Gareth





[Dri-devel] Re: OpenGL and the LinuxThreads pthread_descr structure

2002-05-17 Thread Gareth Hughes

Jakub Jelinek wrote:
> On Thu, May 16, 2002 at 08:08:02PM -0700, Gareth Hughes wrote:
> 
>>Let's be clear about what I'm proposing: you agree to reserve an 
>>8*sizeof(void *) block at a well-defined and well-known offset in the 
>>TCB.  OpenGL is free to access that block, but only that block.
> 
> 
> But you define no way how libraries can acquire some offset in
> it for its exclusive use, so basically you want a libGL private TCB block.

Yes, that's exactly what I'm proposing.

Having a fixed-size block at a well-known location allows both libGL.so 
and the driver backends to reference things like the current context at 
p_libgl_specific[0] (p_header.__padding[8] today), or in assembly:

movl %gs:32,%eax

Other variables like the dispatch pointer, GLX context and so on would 
be referenced in the same manner.

The thing is, this works on all implementations from glibc-2.2 onwards. 
  It doesn't require a bleeding-edge binutils or glibc, and it's faster 
than the __thread support.  It requires no functional code changes in 
LinuxThreads (you could even implement the reservation of space with a 
comment specifying that p_header.__padding[8-15] in pthread_descr are 
for libGL internal use only).

You may argue that this places an unmanageable maintenance burden on 
glibc.  This side of the problem boils down to the following: you could 
completely reimplement LinuxThreads, changing the contents of the 
pthread_descr structure (even redo the regular pthreads TLS storage), if 
you just start with the following:

struct _pthread_descr_struct {
  union {
/* use this space if required */
void *__padding[8];
  } p_header;

  /* libGL.so internal use only */
  void *p_libgl_specific[8];

  /* go crazy from here down */
  ...
};

You'll always have some form of TCB, all I'm asking for is that you 
reserve that chunk of space for libGL at the start.

Don't get me wrong -- I think the __thread stuff is great and certainly 
a step in the right direction.  However, as I described in my original 
posting, OpenGL has special requirements when dealing with thread local 
storage.  In such a performance-critical setting, I'd agree with Keith 
when saying a little bit of special treatment goes a long way.

-- Gareth




[Dri-devel] Re: OpenGL and the LinuxThreads pthread_descr structure

2002-05-16 Thread Gareth Hughes

A question about the __thread stuff: does it require -fPIC?  What 
happens if you don't compile a library with -fPIC, and have __thread 
variables declared in that library?

-- Gareth




[Dri-devel] Re: OpenGL and the LinuxThreads pthread_descr structure

2002-05-16 Thread Gareth Hughes

Gareth Hughes wrote:
 >
> Let's be clear about what I'm proposing: you agree to reserve an 
> 8*sizeof(void *) block at a well-defined and well-known offset in the 
> TCB.

Of course, I should add that space for such a block exists, and has 
existed for some time.  My proposal requires no real changes on the 
glibc side of things, other than to set in stone the agreement between 
pthreads and OpenGL to ensure this block is there in the future.

-- Gareth




[Dri-devel] Re: OpenGL and the LinuxThreads pthread_descr structure

2002-05-16 Thread Gareth Hughes

Ulrich Drepper wrote:
> 
> Everything which is not part of glibc is third-party.  It's the same as
> if some program would require access to internal data structures of
> libGL.  There are several different layouts of the thread descriptor and
> it's only getting worse.  The actual layout doesn't matter since
> everything is internal to glibc and the other libraries which come with
> it so this is no problem.

I don't understand the reference to the multiple layouts of the thread 
descriptor structure.  Can you explain this?

> Beside, I don't understand why you react like this.  Using __thread is
> the best you can ever get.  It'll be portable (Solaris 9 already has the
> support as well) and it's faster than anything you can get to access the
> data.

I disagree that __thread is the best you can ever get.  In the best 
case, you have an extra load and subtraction before you have the address 
of a thread-local variable.  In the worst case, you have a function call 
in there as well.

That is:

movl %gs:0,%eax
subl $foo@tpoff,%eax
movl (%eax),%eax
jmp *1234(%eax)

versus:

movl %gs:32,%eax
jmp *1234(%eax)

for instance.  When the function you are jumping to consists of five or 
six instructions, say, an extra two instructions are significant.

Recall that a competing operating system on x86 allows access to the 
context and dispatch pointers with two instructions, so what you are 
suggesting will mean we always have an inferior implementation.

You also need -fpic, which burns a whole register.  This is a 
non-trivial sacrifice, particularly on x86.

Let's be clear about what I'm proposing: you agree to reserve an 
8*sizeof(void *) block at a well-defined and well-known offset in the 
TCB.  OpenGL is free to access that block, but only that block.

-- Gareth





[Dri-devel] Re: OpenGL and the LinuxThreads pthread_descr structure

2002-05-16 Thread Gareth Hughes

Ulrich Drepper wrote:
> 
> This is the only way you'll get access to thread-local storage.  It is
> out of question to allow third party program peek and poke into the
> thread descriptor.

What do you mean, a third party program?  We're talking about a system 
library (libGL.so) here.  There is a similar shortcut for libc 
(p_libc_specific) already in there.

-- Gareth





[Dri-devel] Re: OpenGL and the LinuxThreads pthread_descr structure

2002-05-16 Thread Gareth Hughes

Jakub Jelinek wrote:
> Hi!
> 
> What percentage of applications use different dispatch
> tables among its threads? How often do dispatch table changes
> occur? If both of these are fairly low, computing a dispatch table
> in an awx section at dispatch table switch time might be fastest

I should also point out that display list compilation and playback is 
another place where the dispatch table changes (typically, you have at 
least a dispatch table for regular immediate mode, display list 
compilation and display list playback).  One of the ugliest things about 
the Microsoft Windows implementation of OpenGL is that the driver 
backend must call a function to register a new dispatch table, and the 
OpenGL library then makes several copies of this table internally. 
Being able to switch dispatch tables with a single pointer reassignment 
makes it easy to do very powerful optimizations.
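
As a rough sketch (hypothetical names, not the actual libGL internals),
the switch is literally just a pointer store:

/* Hypothetical sketch: each rendering mode keeps its own pre-built
 * dispatch table, and switching modes is a single pointer store rather
 * than re-registering or copying hundreds of entries. */
struct gl_dispatch {
    void (*Begin)(unsigned mode);
    void (*Vertex3fv)(const float *v);
    void (*End)(void);
    /* ...hundreds more entrypoints... */
};

struct gl_context {
    const struct gl_dispatch *exec;      /* immediate-mode table        */
    const struct gl_dispatch *save;      /* display-list compile table  */
    const struct gl_dispatch *current;   /* what glFoo() jumps through  */
};

static void begin_display_list(struct gl_context *gc)
{
    gc->current = gc->save;              /* one store, no copying */
}

static void end_display_list(struct gl_context *gc)
{
    gc->current = gc->exec;
}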

-- Gareth




Re: [Dri-devel] Re: OpenGL and the LinuxThreads pthread_descr structure

2002-05-16 Thread Gareth Hughes

Keith Whitwell wrote:
 >>
>>2) last time I looked, libGL.so was linked unconditionally against
>>   libpthread. This is punishing all non-threaded apps, weak undefined
>>   symbols work very well
> 
> 
> This is because we currently use the standard way of getting thread-local-data
> and detecting multi-thread situations.  I'm not sure how Gareth is able to
> detect threaded vs. non-threaded situations without making any calls into the
> pthreads library, but once you know which one you're in, with his trick, you
> don't need to make any more.
> 
> Currently we do something like this in MakeCurrent:
> 
> void
> _glapi_check_multithread(void)
> {
> #if defined(THREADS)
>if (!ThreadSafe) {
>   static unsigned long knownID;
>   static GLboolean firstCall = GL_TRUE;
>   if (firstCall) {
>  knownID = _glthread_GetID();
>  firstCall = GL_FALSE;
>   }
>   else if (knownID != _glthread_GetID()) {
>  ThreadSafe = GL_TRUE;
>   }
>}
>if (ThreadSafe) {
>   /* make sure that this thread's dispatch pointer isn't null */
>   if (!_glapi_get_dispatch()) {
>  _glapi_set_dispatch(NULL);
>   }
>}
> #endif
> }
> 
> where _glthread_GetID() is really pthread_self().
> 
> How do you detect threading without making these calls to libpthreads.so?

The important point is that you don't really need to detect threading 
anymore.  The Linux OpenGL ABI states that multithreaded apps must link 
with pthreads.  Thus, at startup, you can detect the presence of 
pthreads or otherwise.  Basically, if pthreads is present, you just use 
the pthread_descr that it set up, otherwise you create a dummy one and 
plug it into the segment registers (or whatever) and be done with it. 
 From that point on, you don't care how many threads there are. 
Accessing "global" data is always done the same way, independant of the 
number of threads running.

In any case, it would be great to remove the need for apps that link with
libGL to also link with pthreads, which currently forces the use of
pthreads even on single-threaded apps.
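
Roughly, the detection could look like this (a sketch using a weak
reference, not the actual libGL startup code):

/* Minimal sketch, not the actual libGL code: use a weak reference to
 * decide whether the application linked libpthread.  If it did not,
 * the library can install a dummy thread-control block and treat the
 * process as single-threaded from then on. */
#include <pthread.h>
#include <stddef.h>

#pragma weak pthread_create

static int app_is_threaded(void)
{
    /* A weak, undefined symbol resolves to NULL when libpthread was
     * never linked into the process. */
    return pthread_create != NULL;
}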

> The thing that really bites with -fpic is the bs you have to go through to get
> access to static symbols (forgive my loose terminology) like static variables
> or other functions you want to call.  Gareth's trick means that two very
> important variables avoid this, but it's still going to be necessary to call
> other functions often enough...

I'd like to hear a strong argument as to why you *would* want to link 
with -fpic.  Like Keith, I'm also not familiar with some of the more 
in-depth aspects w.r.t. relocation/fpic etc, so feel free to enlighten us.

-- Gareth




[Dri-devel] Re: OpenGL and the LinuxThreads pthread_descr structure

2002-05-16 Thread Gareth Hughes

Jakub Jelinek wrote:
> Hi!
> 
> What percentage of applications use different dispatch
> tables among its threads? How often do dispatch table changes
> occur? If both of these are fairly low, computing a dispatch table
> in an awx section at dispatch table switch time might be fastest
> (ie. prepare something like:
> .section dispatch, "awx"
>   .align 8
> .globl glFoobar
> glFooBar:
>   jmp something
>   nop; nop; nop
> 
> and  would be changed whenever a dispatch table switch happens
> for all dispatch table members).

That's not really feasible, as the tables can change very frequently (as 
often as every glBegin/glEnd, or maybe even every function call between 
glBegin and glEnd).

Also, dispatch tables will *always* be different between threads, that's 
why they need to be accessed in a thread-safe manner.

Finally, rewriting the instructions like this will have very bad trace 
cache behaviour on the Pentium 4, where touching instructions that have 
already been decoded causes the entire trace cache to be flushed.

> BTW: Last time I looked at libGL (in March), these were things which I
> came over:
> 1) libGL should IMHO use a version script (at least an anonymous one
>if you want to avoid assigning a specific GL_x.y symbol version to it),
>that way you get rid of thousands of expensive run-time relocations

Can you explain this in more detail?  I'm not sure I understand what 
you're saying.

> 2) last time I looked, libGL.so was linked unconditionally against
>libpthread. This is punishing all non-threaded apps, weak undefined
>symbols work very well

I agree.

> 3) I don't think building without -fpic is a good idea, 1) together with
>other tricks might speed things up while avoiding DT_TEXTREL
>overhead

Again, could you explain this in more detail?  Thanks.

-- Gareth




[Dri-devel] OpenGL and the LinuxThreads pthread_descr structure

2002-05-16 Thread Gareth Hughes

I would like to propose a small change to the pthread_descr structure in
the latest LinuxThreads code, to better support OpenGL on GNU/Linux
systems (particularly on x86, but not excluding other platforms).  The
purpose of this patch is to provide efficient thread-local storage for
both libGL itself and loadable OpenGL driver modules, so that they can
be made thread-safe without any impact on performance.  Indeed, using
this mechanism, an OpenGL driver can ignore the difference between
running with a single thread and running with multiple threads, as
"global" data will be accessed in the same way independent of the number
of threads running.

To understand the need for such a change, one should consider what goes
on inside an OpenGL implementation when an application makes an OpenGL
API call.  One of the primary tasks of the driver-independent libGL is
to dispatch function calls to the driver backend(s), usually through a
large function pointer table containing entries for the several hundred
API entrypoints.  Central to this process is the notion of a rendering
context, or an abstraction of the OpenGL state machine.  A context is
required to perform OpenGL commands.  The GLX specification states:

 Each thread can have at most one current rendering context. In
 addition, a rendering context can be current for only one thread
 at a time.

The dispatch table for a context depends on the current state of OpenGL
for that context, as things like display list compilation, display list
playback and plain old immediate mode rendering change the behaviour of
many API entrypoints.  We see from the quote above that each thread has,
at most, a single context, and this context has a single current
dispatch table.

The top-level API entrypoints can be implemented like the following:

 struct gl_dispatch {
   ...
   void (*Foo)(GLint bar);
   ...
 };

 void glFoo(GLint bar)
 {
   struct gl_dispatch *current = __get_current_dispatch();
   current->Foo(bar);
 }

Similarly, a driver's implementation of the above entrypoint might look
like the following:

 void __my_Foo(GLint bar)
 {
   struct gl_context *gc = __get_current_context();

   /* remember the current setting of bar */
   gc->state.current.bar = bar;

   /* do stuff with bar, like program hardware registers */
   ...
 }

We want __get_current_context() and __get_current_dispatch() (at a
minimum) to be as efficient as possible, while still providing thread
safety.  Suppose we add a libGL-specific area to pthread_descr.  This
would allow us to implement these (and other similar) functions like so:

 void *__get_current_context(void)
 {
   pthread_descr self = thread_self();
   return THREAD_GETMEM(self,
p_libGL_specific[_LIBGL_TSD_KEY_CONTEXT]);
 }

 void *__get_current_dispatch(void)
 {
   pthread_descr self = thread_self();
   return THREAD_GETMEM(self,
p_libGL_specific[_LIBGL_TSD_KEY_DISPATCH]);
 }

This would allow us to hand-code the top-level dispatch functions on x86
as:

 glFoo:
 movl %gs:__gl_context_offset, %eax
 jmp *__glapi_Foo(%eax)

where __gl_context_offset is the byte offset of the thread-local context
pointer and __glapi_Foo is the byte offset of the Foo entry in the
dispatch table.  Clearly this is an efficient implementation of the
dispatch mechanism required by OpenGL, and is completely thread-safe to
boot.

With modern OpenGL applications and benchmarks dealing with datasets
containing over 1 million vertices, with one or more function calls per
vertex, you can see that an efficient dispatching mechanism is crucial
for a high-performance OpenGL implementation.  For example, the SPEC
Viewperf benchmark's Light test (as described at
http://www.spec.org/gpc/opc.static/light05.html) includes a subtest that
renders over half a million wireframe primitives like so:

 GLfloat color[][4];
 GLfloat position[][4];

 glBegin(GL_LINE_LOOP);
   glColor3fv(color[i]);
   glVertex3fv(position[i]);
   glColor3fv(color[i+1]);
   glVertex3fv(position[i+1]);
   glColor3fv(color[i+2]);
   glVertex3fv(position[i+2]);
   glColor3fv(color[i+3]);
   glVertex3fv(position[i+3]);
 glEnd();

With 10 function calls per primitive, this equates to over 5 million
function calls per frame.  This is certainly a worst-case scenario, and
there are certainly more efficient methods of rendering such large
amounts of data, but Viewperf (the industry-standard OpenGL benchmark)
deliberately stresses this path to measure the cost of API calls, as
many workstation OpenGL apps (engineering, CAD and 3D modelling tools)
still operate like this.

An important point to understand is that the round trip through the API,
into the driver and back out again for this immediate mode path can
often be counted in tens of instructions.  State of the art Op

Re: [Dri-devel] New cards (GPU's) from old card makers? DRI support?

2002-04-18 Thread Gareth Hughes

José Fonseca wrote:
> 
> So what do you think the future holds for the open-source OSs? Just
> closed-source drivers, perhaps some Wine-like binary-emulated Windows
> drivers, and a bunch of open-source legacy card drivers..?
> 
> I really hope not... and at least with the cards that I may acquire in
> the future I'll surely avoid that happening.

I don't think anyone can answer that question.  I know I can't, at any 
rate.  It all depends on how important Linux is to the different 
vendors.  You may not believe and/or like it, but those that take it 
seriously are probably more likely to release their own drivers, which 
are more likely to be closed source than open source.  Of course, I 
don't really want to get into a philosophical debate on the matter, but 
a dose of reality every now and then helps, I think...

-- Gareth





Re: [Dri-devel] New cards (GPU's) from old card makers? DRI support?

2002-04-18 Thread Gareth Hughes

Smitty wrote:
 >
> Are there any plans to implement drivers for these cards? 
> Have any of the manufacturers made an approach or started asking
 > questions about DRI?
> 
> Or is this all completely off the radar screen at the moment?

Getting specs (full or otherwise) for DX8.1 and/or DX9.x compliant cards 
may be difficult.  You should also think about the effort it will take 
to support the DX8 and beyond feature sets in a competitive manner.

Now, I don't want to discourage you all from trying, but times have 
changed a lot since we were dealing with rasterization-only cards like 
the G200 and Rage Pro.  I'll certainly be watching with interest :-)

-- Gareth





Re: [Dri-devel] Mach64 PCI support added to 2D driver

2002-04-13 Thread Gareth Hughes

Leif Delgass wrote:
> 
> Do we know for sure that pci gart is supported on mach64?  The rage 128
> and radeon drivers both write to PCI GART registers, but I don't see
> anything analogous in the Rage PRO docs.  My understanding is that to use
> the scatter/gather memory, the card has to implement its own address
> translation table.  Your checkin adds allocation of scatter/gather memory,
> but can PCI mach64 use this memory?

You are correct, there's no such thing as PCI GART on the mach64.  From 
memory, the only difference between AGP and PCI DMA scatter-gather 
tables is you need to set a bit somewhere to specify the pages are AGP 
memory, when required.  Other than that, the DMA mechanism is the same, 
from the driver's point of view.  As far as I can remember, anyway :-)

-- Gareth





Re: [Mesa3d-dev] Re: [Dri-devel] Mesa MMX blend code finished

2002-04-12 Thread Gareth Hughes

Allen Akin wrote:
> 
> If the expected value is 255 and the OpenGL implementation yields 254,
> that's only one LSB of error, so glean probably won't complain about it.
> 
> We could make the test more stringent, but then some reasonable
> implementations (especially some hardware implementations) would fail.
> Also, maintaining enough accuracy to yield results correct to 1 LSB is
> already pretty challenging when color channels are deeper than 8 bits.

Perhaps we need a "-pedantic"-like command-line option to force more 
stringent tests?  It could be used when testing software implementations 
and (perhaps) newer hardware implementations, but not used on older 
cards (I seem to recall the 3dfx cards were a major source of problems 
here...).

-- Gareth




Re: [Dri-devel] mach64: performanc = f (phys. screen res.)

2002-04-11 Thread Gareth Hughes

Felix Kühling wrote:
> Hi,
> 
> I recently found out that the 3d performance of the mach64 branch (in 
> terms of glxgears frame rates) is related to the physical screen 
> resolution. I got the following glxgears frame rates with different 
> resolutions:
> 
> 1152x864: 155.2 fps
> 1024x768: 165.6 fps
>  800x600: 209.6 fps
>  640x480: 229.4 fps

You aren't running the app maximized, are you? :-)

-- Gareth




Re: [Mesa3d-dev] Re: [Dri-devel] Mesa software blending

2002-04-09 Thread Gareth Hughes

Stephen J Baker wrote:
 >
Everything starts out in hardware and eventually moves to software.
>>>
>>>That's odd - I see the reverse happening.  First we had software
>>
>>The move from hardware to software is an industry-wide pattern for all
>>technology.  It saves money.  3D video cards have been implementing new
>>technologies that were never used in software before.  Once the main
>>processor is able to handle these things, they will be moved into software.
>>This is just a fact of life in the computing industry.  Take a look at what
>>they did with "Winmodems".  They removed hardware and wrote drivers to
>>perform the tasks.  The same thing will eventually happen in the 3D card
>>industry.
> 
> 
> That's not quite a fair comparison.

I agree.  You may want to take a look at the following article:

http://www.tech-report.com/reviews/2001q2/tnl/index.x?pg=1

It shows, among other things, a 400MHz PII with a 3dfx Voodoo2 (hardware 
rasterization) getting almost double the framerate of a 1.4GHz Athlon 
doing software rendering with Quake2 -- and the software rendering is 
not even close to the quality of the hardware rendering due to all the 
shortcuts being taken.

What we are seeing, throughout the industry, is a move to programmable 
graphics engines rather than fixed-function ones.  Programmable vertex 
and fragment pipelines are not the same as a software implementation on 
a general purpose CPU, as the underlying hardware still has the special 
functionality needed for 3D graphics.  I suspect that this will continue 
to be true for a very, very long time.

-- Gareth





[Dri-devel] Re: Radeon Converted to DrmCommand Interface

2002-03-31 Thread Gareth Hughes

Sounds great!  I'll be arriving home after several weeks of holidays
next week, but I'm interested to see what you've done and will take a
look soon.

-- Gareth

--- Jens Owen <[EMAIL PROTECTED]> wrote:
> I've checked into the drmcommand-0-0-1-branch the complete conversion
> of
> the Radeon driver suite to the drmCommand interface.  Take a look,
> and
> let me know if you see any problems, or if you have any questions.
> 
> If anyone is interested in converting any of the other drivers,
> please
> let me know and we'll coordinate the work.  I won't be able to get to
> any of the other drivers until sometime next week.
> 
> Regards,
> Jens
> 



[Dri-devel] [Fwd: [Mesa3d-dev] viewperf]

2002-03-12 Thread Gareth Hughes

Forwarding to dri-devel.

 Original Message 
Subject: [Mesa3d-dev] viewperf
Date: Tue, 12 Mar 2002 13:51:52 +0100 (MET)
From: Klaus Niederkrueger <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]

Hi,

In the last week I have been playing with the spec-viewperf programs and
(at least on my computer) I get very strange results with "DRV" (the
oil-drilling-platform):

With Mesa in software mode (latest CVS) I can see almost nothing.  It
looks like the triangle-culling was killing the wrong side of the objects.

If I use XFree-4.2 hw-rendering (Radeon VE), I see many more pipes though
the ladders are not visible until the test uses only lines, instead of
filled polygons. Also here there still seem to be some mistakes either
with clipping (polygons at the border of the screen look strange) or
culling (very small polygons seem to be missing) (or both).

But then I went a step further and compiled the view-perf with the
switches "-DMP" and "-lpthread", which seem to enable threads.
While all other tests work (as far as I can tell), DRV crashes now. In
Software-mode I get a segmentation fault and in hardware mode it says
"RadeonSwapBuffers error: some number".

Can anybody reproduce these problems?

Klaus





Re: [Dri-devel] DRI based X drivers; SMP testing

2002-03-11 Thread Gareth Hughes

Brian Paul wrote:
> 
> One question to ask is: regardless of the vertex buffer size, typically
> how many vertices are issued between glBegin/End or state changes?  Does
> Q3 (for example) render any objects/characters with > 1000 vertices?

Never.  The maximum size of any locked array from the Q3 engine is ~1000 
vertices (probably 1024).  This is well documented in JohnC's "How to 
write an OpenGL driver for Quake 3" doc.  Typically, you see numbers in 
the range of 100-300 or so.

-- Gareth





Re: [Dri-devel] Texture compression on mach64?!?

2002-03-11 Thread Gareth Hughes

Seriously guys, this kind of thing is the last thing you should be 
worrying about, at least until you have a working DMA implementation and 
a fully-featured Mesa 4.x driver...

Just my 2c.

-- Gareth





Re: [Dri-devel] Radeon TCL driver tested

2002-03-10 Thread Gareth Hughes

Trond Eivind Glomsrød wrote:
> Gareth Hughes <[EMAIL PROTECTED]> writes:
>>That's probably more of a rasterization/fill test than a T&L test, so
>>it's not surprising there isn't a more significant increase.
>>
> 
> What tests do you recommend?

Pretty much all of the Mesa/GLUT demos are pointless these days for 
measuring T&L performance.  Enlarging the window to these sizes makes 
them a fill test, and not a very good one at that :-)

For real T&L performance measurements, you need to look at things like 
viewperf, maybe glperf, things like that.  You basically need a 
polygonal model with 100K+ triangles before it gets interesting.  Even 
then, you have to be careful about things like data submission and so 
on, to ensure you're really testing T&L performance and not AGP 
bandwidth, rasterization throughput etc.

-- Gareth







Re: [Dri-devel] Radeon TCL driver tested

2002-03-09 Thread Gareth Hughes

Jeffrey W. Baker wrote:
 >
> glxgears at 1600x1200x24 improved from 804 to 824fps.

That's probably more of a rasterization/fill test than a T&L test, so 
it's not surprising there isn't a more significant increase.

-- Gareth





Re: [Dri-devel] mach64 mipmapping

2002-03-07 Thread Gareth Hughes

Leif Delgass wrote:
> In looking at the docs, I realized that the mach64 seems not to support 
> mipmapping on the secondary texture, as there is only one register for a 
> secondary texture offset, as opposed to the 11 for the primary texture. It 
> seems as though the second "texture unit" isn't really a fully-fledged 
> texture pipeline as it is in other cards (Rage 128 has 2 separate tex_cntl 
> regs and a full set of offsets for the second texture unit).
> 
> And lending credence to the assertion that mipmapping is broken in 
> hardware on mach64, I compiled the mipmap Mesa demo in Windows and the ATI 
> driver doesn't support mipmapping either, you just get a yellow triangle.

Mipmapping on the Rage Pro is busted, period.

-- Gareth





Re: [Dri-devel] Re: GLperf and SPECviewperf

2002-03-03 Thread Gareth Hughes

Dieter Nützel wrote:
> 
> Yes, SPECviewperf worked out of the box, GLperf never tried.
> Got SPECviewperf running several times for testing the tdfx driver since 
> '2000.
> 
> I only changed CDEBUGFLAGS in makefile.linux for better Athlon optimization.
> 
> CDEBUGFLAGS = -O -mcpu=k6 -pipe -mpreferred-stack-boundary=2 
> -malign-functions=4 -fschedule-insns2 -fexpensive-optimizations

Hmmm, I'm not really sure that's legal (in terms of having reportable 
results with a binary compiled like that) :-)

-- Gareth





Re: [Dri-devel] Unreal

2002-02-21 Thread Gareth Hughes

Bill Currie wrote:
> 
> I can only speak about the quake 1/quakeworld source (I haven't studied the
> quake2 code enough yet), but it's actually nothing that complex. In fact,
> it's the opposite. Quake doesn't do the integration properly at all. It just
> adds the gravity acceleration to the velocity then the velocity to the
> location. I'm not sure if t squared shows up in the physics code or not (it
> looks like it does, but indirectly), but the 1/2 for the 1/2at**2 doesn't.
> 
> Basically, a projectile (player, grenade) has a piece-wise linear trajectory
> that doesn't touch the correct parabola except at the start point.
> 
> Quake: the universe where G depends on how fast you can blink your eyes.

I'm pretty sure Dave was talking about Quake3 ;-)  He certainly
plays it enough...

-- Gareth





Re: [Dri-devel] Unreal [was: Mach 64 success and problems]

2002-02-20 Thread Gareth Hughes

Keith Whitwell wrote:
>>
>>What is the point of sustaining such a frame rate that has no pratical
>>advantage?
>>
> 
> You do "see" the partial frames, it seems.  The eye seems to do a reasonable
> job of integrating it all, providing you with a low-latency view of the game
> world.

Hardcore gamers want ~100fps so the game clock is updated enough to
allow smooth gameplay.  This is particularly important with
network-based deathmatch games.

-- Gareth





Re: [Dri-devel] Unreal [was: Mach 64 success and problems]

2002-02-20 Thread Gareth Hughes

Jose Fonseca wrote:
> 
> The maximum framerate you'll ever get is limited by your screen refresh
> rate.

If you implement sync-to-vblank, which no DRI driver other than tdfx does...

-- Gareth





Re: [Dri-devel] libGL(U?) problem

2002-02-20 Thread Gareth Hughes

Brian Paul wrote:
> 
> OK, it looks like the templatized code for texture image conversion is
> the problem.  It's using the CONVERT_TEXEL_DWORD macro even when the
> texture width is one, causing an out-of-bounds write.
> 
> I'll fix this up, Gareth :)

Hmmm, looks like my assumption that allocations are dword-aligned was 
false and actually caused crashes...  My bad, indeed.  Is this for 1x1 
textures only?
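
For illustration (these are not the actual Mesa macros), the failure mode
would look something like:

/* Illustrative sketch, not the actual Mesa code: converting two 16-bit
 * texels at a time as one 32-bit store runs past the end of a row that
 * holds an odd number of texels -- e.g. a 1x1 texture. */
#include <stdint.h>

static void convert_row_as_dwords(uint16_t *dst, const uint16_t *src,
                                  int width)
{
    for (int i = 0; i < (width + 1) / 2; i++) {
        /* width == 1: reads and writes 4 bytes where only 2 are valid. */
        ((uint32_t *)dst)[i] = ((const uint32_t *)src)[i];
    }
}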

-- Gareth





Re: [Dri-devel] Complain and request for Mach64 binaries a la Gatos

2002-02-18 Thread Gareth Hughes

Sergey V. Udaltsov wrote:
> Hello all
> 
> Just tried to build mach64 branch. Got an error:
> 
> make[4]: Entering directory `/db2/xfree/xc/xc/lib/GL/mesa/src/drv/sis'
> make[4]: *** No rule to make target
> `../../../../../../lib/GL/dri/drm/sis_drm.h', needed by `sis_alloc.o'. 
> Stop.
> make[4]: Leaving directory `/db2/xfree/xc/xc/lib/GL/mesa/src/drv/sis'
> make[3]: *** [all] Error 2
> make[3]: Leaving directory `/db2/xfree/xc/xc/lib/GL/mesa/src/drv'
> make[2]: *** [all] Error 2
> make[2]: Leaving directory `/db2/xfree/xc/xc/lib/GL'
> make[1]: *** [all] Error 2
> make[1]: Leaving directory `/db2/xfree/xc/xc/lib'
> make: *** [all] Error 2
> 
> Any clue? Is it FAQ?

No idea...

> The main point of this letter: could someone please consider the
> possibility of periodically publishing mach64.tar.gz using the method of
> the Gatos project: just the XFree modules and drm kernel modules (I think
> libGL.* would go there too). Building the whole tree is a time- and
> space-consuming task, so these builds could simplify life for
> "ordinary but adventurous people" like me. Is that a bad idea? I do not
> think it is very difficult to hack up a little shell script which makes
> this archive...

Why limit this to the mach64 driver?  We don't build binaries for 
anything else, and some might argue that other drivers are used more 
than this one and are thus more worthy of having pre-built binaries :-)

-- Gareth





RE: [Dri-devel] Pseudo DMA?

2002-02-10 Thread Gareth Hughes

Gareth Hughes wrote:
> 
> It's a damned slow chip.  With anything over about a 4-500MHz
> processor, you'll still be able to saturate the chip -- we got
> it to hit the hardware limit doing PIO with a 600MHz PIII, I
> seem to recall...

Actually, no, that was with the Utah-GLX style DMA with a much
less efficient version of Mesa (that did extra copies of all
vertex array data).  I'd err on the side of caution here.  Why
don't you try the secure way and see what kind of performance
you get?

-- Gareth




RE: [Dri-devel] Pseudo DMA?

2002-02-10 Thread Gareth Hughes

Frank C. Earl wrote:
> 
> The command pathway doesn't seem to allow for that.  Only the blit
> pathway.  I've coded only inbound to the aperture writes with that
> pathway, but not outbound (there's very little that anything other than
> the X server needs to do that sort of thing).

How do you prevent a client-side "driver" from sending down blit
commands, without inspecting the DMA buffer?

> OK, that's going to make the machine do the work twice -- once for filling
> the buffer with vertices and then again to unpack them and submit commands
> to the engine.  I was trying to avoid a bottleneck -- but, okay, if that's
> not acceptable, I'll do it the other way.

You can't allow the user-space portion of the driver to fill in
the register programming words in the DMA buffer.  Vertex data,
sure, that's okay.  The registers are mapped into two separate
pages for PIO access, but all that goes away when you're doing
DMA transfers...
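
As a rough sketch of that split (entirely hypothetical structures, not the
actual DRM interface), the client would submit raw vertex data only and
the kernel module would emit the packet words itself:

/* Rough sketch with hypothetical names: userspace hands over raw vertex
 * data only; the kernel module validates it and writes the command and
 * register words into the DMA buffer itself, so a client can never
 * construct arbitrary engine commands (e.g. blits to system memory). */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define MAX_VERTS 1024

struct client_prim {                  /* what userspace submits */
    uint32_t prim_type;               /* triangle list, strip, ...     */
    uint32_t vertex_count;
    float    verts[MAX_VERTS][4];     /* vertex data only, no commands */
};

/* Kernel side: validate the submission, then build the packet ourselves. */
static int emit_prim(uint32_t *dma_buf, size_t dma_words,
                     const struct client_prim *prim)
{
    size_t needed;

    if (prim->vertex_count == 0 || prim->vertex_count > MAX_VERTS)
        return -1;                    /* reject malformed submissions */

    needed = 1 + (size_t)prim->vertex_count * 4;
    if (needed > dma_words)
        return -1;

    /* Hypothetical packet header, written by trusted code only. */
    dma_buf[0] = (prim->prim_type << 16) | prim->vertex_count;
    memcpy(&dma_buf[1], prim->verts,
           (size_t)prim->vertex_count * 4 * sizeof(float));

    return (int)needed;               /* number of words emitted */
}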

> The engine doesn't seem to allow for that if you disallow the user to set
> up blit operations.  I'd have to run a test to be sure, but I think the
> engine would lock up if you messed with the setup registers and tried to
> transform the gui-master into a blit, and that'd be the only way what you
> suggest could happen with the RagePRO, from what I can tell.

Exactly how do you do this?

> The other cards seem to mainly have nice pathways for submitting
> vertices, etc.  This one doesn't.  I'll recode it to accommodate no
> commands but pure data -- it's not just a pain, it's horribly inefficient
> with this card the way it's designed.

It's a damned slow chip.  With anything over about a 4-500MHz
processor, you'll still be able to saturate the chip -- we got
it to hit the hardware limit doing PIO with a 600MHz PIII, I
seem to recall...

-- Gareth




RE: [Dri-devel] Pseudo DMA?

2002-02-10 Thread Gareth Hughes

Frank C. Earl wrote:
> 
> On Friday 08 February 2002 07:09 pm, José Fonseca wrote:
> 
> > Does this mean that client code can lock the card but is not really
> > capable of putting the security of the system in danger?
> 
> Depends on what you define as "in danger".  It won't allow a user to
> commit local or remote exploits to gain root, etc., but it could be used
> to lock up the console such that it requires a remote session of some
> sort to do a reboot.  This is why, when it passes muster, I'm putting it
> in a separate branch -- it's usable, and should be fairly stable, but
> it's not protected from malicious clients, etc.  To be a "proper" secure
> driver, it HAS to be such that you can't DoS the box via the driver if at
> all possible.  Since there are no specialized groupings of commands (such
> as one to push a lot of vertices to the card), I would have to create a
> shorthand system that the module would interpret and then generate the
> right commands for the DMA channel to use, or come up with a way to
> detect a lockup on the chip and do a proper reset.  The first option has
> an extra inner loop to process against (it took cycles to FILL the list,
> now you're taking cycles to unpack and re-express them in a form that the
> card can handle...), eating cycles that could be used elsewhere in the
> system.  The second option presumes one can always cleanly detect a
> locked-up chip and always do a reset.

I don't think you should count on being able to reset the chip once
you "detect" a lockup.  Things like bus lockups are pretty fatal
events.  Besides, I've yet to see the sample code for resetting
the ATI chips actually work reliably, if at all :-)

W.r.t. your first comment, think long and hard about what you could
do with a user-space programmable DMA engine that can read and write
arbitrary locations in system memory.  It may be hard, but it's entirely
within the realm of possibility that you can become root.

Similarly, you should not design a mechanism that allows the chip to
lock up for any other reason than a bug in the driver.  There is
nothing but "proper" security -- it's either a secure driver, or it's
not.  It is unacceptable to have a non-alpha quality driver that has
backdoors like this.

> > Sorry for the dumb question (my knowledge of the DMA mechanism is still
> > rather incomplete..), but in what way does the distinction of the set of
> > commands (in the DMA buffers, I presume) affect security?
> 
> Depends on the chip.  In the case of the RagePRO, there is literally
> nothing keeping you from submitting commands in the DMA stream that can
> lock up the chip.

Again, there is nothing stopping you from submitting commands that change
your UID to '0', hence becoming root...  Difficult, but not
impossible.

> Not very many (if any of them...) of the routines in the XAA driver for
> the RagePRO expect a hung card (because they're not doing anything that
> COULD lock the card), and there are a couple of locked up states from
> the DMA pass that leave the engine state as being busy "forever"- and
> end up being deadlocked, thus hanging the console but good.  If the XAA
> driver and the kernel module could check unobtrusively for a locked up
> state and do a reset if it is locked up, the situation would go from
> being an insecure one to a relatively secure one (as secure as it's
> going to get without impairing performance...)

Please think about what you're suggesting.  These chips can read
and write arbitrary locations in system memory.  For all chips that
have this feature, the only safe way to program them is from within
a DRM kernel module.  Only clients that have been authenticated via
the usual (X auth) means are able to talk to such modules.  There is
simply no other way to do it.  You can trust the X server and the
kernel module.  You CANNOT trust anything else -- a client-side 3D
driver, something masquerading as one, whatever...

There is a reason why all the DRI drivers for commodity cards are
designed like this.  It's a pain, but that's the price you pay for
a secure system.
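
For reference, the client side of that authentication is roughly the
following (a sketch from memory of the xf86drm/xf86dri entry points --
check the headers for the exact signatures):

/* Sketch from memory -- drmGetMagic()/drmAuthMagic() live in xf86drm.c,
 * XF86DRIAuthConnection() in the DRI extension library.
 */
#include <X11/Xlib.h>
#include "xf86drm.h"
#include "xf86dri.h"

static int authenticate_with_server(Display *dpy, int screen, int fd)
{
        drm_magic_t magic;

        /* Ask the kernel module for a per-open magic cookie... */
        if (drmGetMagic(fd, &magic) != 0)
                return 0;

        /* ...and hand it to the X server over the DRI protocol.  The
         * server, which is already trusted, calls drmAuthMagic() on its
         * own connection; only then will the kernel accept rendering
         * ioctls from this file descriptor.
         */
        if (!XF86DRIAuthConnection(dpy, screen, magic))
                return 0;

        return 1;
}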

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



RE: [Dri-devel] How to build DRI to use gprof see profile data

2002-02-04 Thread Gareth Hughes

> Shouldn't it work for the whole tree if we provided a wrapper for mcount
> and whatever for the modules?

Possibly not, given the method we use to actually process the profiling
data.  Besides, it's never been a priority -- the current method works
great for the 3D drivers.

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



RE: [Dri-devel] How to build DRI to use gprof see profile data

2002-02-04 Thread Gareth Hughes

> Hello, Every body:
>  
>  Does anyone know how to build the Xserver and DRI so that profiling
>  data can be seen through gprof?? I've seen there is an option in
>  host.def (in DRI cvs):
> 
> /* To do profiling of the dynamically loaded 'xyz_dri.so' object, turn
>  * this on.
>  * Use 'xc/lib/GL/makeprofile.sh' to make it work.
>  */
> /* #define GlxSoProf YES */
> 
> #ifdef GlxSoProf
> #  undef DefaultCCOptions
> #  define DefaultCCOptions -ansi GccWarningOptions -pipe -g -p
> #endif
> 
>  but if I enable #define GlxSoProf YES, then try to build the Xserver
>  (after making lowpc.o, highpc.o by hand), then when I start the Xserver
>  it causes an unresolved symbol and a core dump.
> 
> Symbol mcount from modules /usr/X11R6/lib/modules/fonts/libbitmap.a is
> unresolved!
> 
>  Has anyone ever tried this??  Can anyone tell me how to profile
>  xxx_dri.so??

You can only build what's inside xc/lib/GL with GlxSoProf set to YES.
What you'll need to do is build the tree as usual, then do the
following:


cd xc/lib/GL
make 
make Makefile
make Makefiles
make
make install

Keith Whitwell can correct me if I'm wrong (it's been a while since I've
done this).  I think there's some documentation about this somewhere...

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



RE: [Dri-devel] Re: GL testing Re: [Xpert]problem with ati and mandrake 8.1

2002-02-03 Thread Gareth Hughes

> On Sun, 3 Feb 2002, Dr Andrew C Aitchison wrote:
> 
> >
> > On Sun, 3 Feb 2002, Vladimir Dergachev wrote:
> >
> > >  In fact, I do not even know a good app to test GL functionality,
> > > not performance (i.e. that all calls do as they should and do not
> > > crash the machine).
> >
> > I'm fairly sure that there is an OpenGL test suite; although I doubt
> > that it is freely available. Should XFree86 (or another organization)
> > have a licence that can be used to test DRI ?
> 
> Well, I tried searching for one and asking on the DRI list.. I was not able to
> compile the GL benchmark (viewperf ?) and there was nothing better, so in
> the end I settled for testing with Quake, Descent and glxgears.

Try glean: http://glean.sourceforge.net/

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



RE: [Dri-devel] DRI development IRC meeting

2002-02-03 Thread Gareth Hughes

> Just a reminder to all developers interested in contributing to
> the DRI project that there is an developmental IRC meeting
> scheduled for Monday February the 4th 2002 (today) at 2100h UTC
> (4:00pm EST).

Does this mean the time has changed again?  Is this going to be a
reasonably final time?  Just want to make sure I can attend this
every week :-)

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



RE: [Dri-devel] DRI / IHV Contact people.

2002-02-02 Thread Gareth Hughes

Smitty wrote:
> 
> Maybe a bit of a strange question, but I think it should be asked.
> 
> Does the DRI project have a contact person at each of the IHV's?
> 
> Specifically ATI, Nvidia seems to be more of a closed source 
> house, and 3dfx is now defunct. 

Alexander Stohr <[EMAIL PROTECTED]> is subscribed to the list, he
might be a good point of contact -- or could point you in the right
direction.

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



RE: [Xpert]RE: [Dri-devel] Re: [GATOS]Re: Bugfix for br0ken DMA on r128 and possibly radeon DMA functions

2002-02-01 Thread Gareth Hughes

> I can't believe I am hearing this. The major benefit of free software is
> that if there is a new and better design you can try it out and then
> everyone can upgrade. It's not as if we are charging them money the way
> Microsoft does.
> 
> Are you saying that progress stops with the inclusion of the driver into
> the kernel tree ? Could it be that you misunderstood Linus and he meant:
> during stable series ?

No, there was no misunderstanding.  You can't update the DRM code in
the kernel in such a way that it breaks older versions of the 2D and
3D drivers.  Simple as that.

> As for understanding the hardware - you can't possibly understand which
> has not been released yet. Heck, with the state of things that is now,
> we often don't know details about the hardware that was just released.

That's my point.  We didn't know how to program the offsets, hence
the current situation.

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



RE: [Xpert]RE: [Dri-devel] Re: [GATOS]Re: Bugfix for br0ken DMA on r128 and possibly radeon DMA functions

2002-02-01 Thread Gareth Hughes

> That's too bad because this will imply a _lot_ of hair in the drivers.

That's the way it has to be, for the DRM code to remain in the stock
kernel distro.  Linus has made this crystal clear.

> The fact is that we have a driver split several ways: 3 portions from
> XFree tree (2d, 2d and drm), capture (km, GATOS) and kernel framebuffer.
> 
> The only right way, IMO, is to simply request that all driver versions
> must match. Maybe it is a good idea to change drm to allow driver libraries,
> where we do not simply request radeon driver but, "radeon driver version
> X.Y.Z"

The only safe way to ensure this is to distribute the drivers (all parts)
as a separate package.  There are obviously pros and cons to doing it
that way.
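
For what it's worth, the "radeon driver version X.Y.Z" idea boils down to
a check like the sketch below -- drmGetVersion() is the existing xf86drm.c
call, while the required 2.1 here is just a placeholder:

/* Sketch only: refuse to enable the DRI if the kernel module is too old.
 * drmGetVersion()/drmFreeVersion() are the existing xf86drm.c calls; the
 * required numbers below are placeholders, not a real radeon requirement.
 */
#include <stdio.h>
#include "xf86drm.h"

#define WANT_MAJOR  2
#define WANT_MINOR  1

static int drm_version_ok(int fd)
{
        drmVersionPtr ver = drmGetVersion(fd);
        int ok;

        if (!ver)
                return 0;

        /* A major mismatch means an incompatible interface; an older
         * minor just means some newer ioctls are missing.
         */
        ok = (ver->version_major == WANT_MAJOR &&
              ver->version_minor >= WANT_MINOR);
        if (!ok)
                fprintf(stderr, "[dri] need DRM %d.%d.x, found %d.%d.%d\n",
                        WANT_MAJOR, WANT_MINOR, ver->version_major,
                        ver->version_minor, ver->version_patchlevel);

        drmFreeVersion(ver);
        return ok;
}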

> Now, I'll be the first to agree that for this particular change (memory
> controller) we can get by with one extra IOCTL or poking in the card's
> registers or even passing information in the lower bits of aperture
> addresses MS-Windows style.
> 
> The problem is what the code is going look like.. And the more important
> question is: what it will look like after another change like that ?
> 
> This memory controller patch is not the last change that would make DRM
> incompatible with older drivers. Let me see:
>* TV out might cause it to happen again (I don't know as this code has
>   not been written yet)
>* 8500 3d driver might do it too.
>* whatever ATI might come up with next.

Perhaps, if we were able to start from scratch, we could come up with a
clean way to avoid these problems.  Unfortunately, a lot of the early
design decisions were made when, quite frankly, we didn't understand
the current -- and future -- hardware well enough.

> So, it is possible to make this change work, but I do not see this worth
> it in the end.

What you're suggesting boils down to shipping the DRI drivers (incl. the
DRM portions) as a separate package.  If you can't maintain backwards
compatibility, this is the way it will be.  End of story.

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



RE: [Dri-devel] Re: [GATOS]Re: Bugfix for br0ken DMA on r128 and possibly radeon DMA functions

2002-01-30 Thread Gareth Hughes

> Gareth, the current driver is broken. If someone wants to use video
> capture they _need_ both GATOS 2d driver and GATOS drm driver, period.
> 
> What's so wrong about upgrading ?

Guaranteed, someone will get a mismatch -- your changes may go back
into the stock kernel, breaking DRI CVS or whatever, who knows.  Forcing
everyone to upgrade their kernel, 2D and 3D drivers to the right magic
revision is a recipe for disaster, one that the kernel people have
already kicked our arses over (rightly so).

> Also, I can make drm driver work nice with older 2d drivers - as soon as
> someone will show me a way to tell the version of the 2d driver that is
> accessing the drm driver.

Sounds like it'll need a 2D driver upgrade :-)

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



RE: [Dri-devel] Re: [GATOS]Re: Bugfix for br0ken DMA on r128 and possibly radeon DMA functions

2002-01-30 Thread Gareth Hughes

> The assumption was only made for experimental GATOS drivers. It is a
> practical one. More people come and ask: "I upgraded to GATOS driver and
> DRI won't work anymore !" Answer: RTFM, upgrade drm driver.

It's already been determined that:

"I just upgraded my kernel, and DRI won't work anymore!"
"RTFM, upgrade your X server"

"I just upgraded my X server, and DRI doesn't work anymore!"
"RTFM, upgrade your kernel"

just doesn't cut it.  You aren't allowed to do anything that
requires a response of "RTFM, upgrade ..."

Start thinking of alternatives...

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



[Dri-devel] IRC meeting is on NOW

2002-01-28 Thread Gareth Hughes

In case you missed it, or forgot, the IRC meeting is taking place
right now on #dri-devel.

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



RE: [Dri-devel] IRC meeting log

2002-01-22 Thread Gareth Hughes

Thanks for this, I skimmed through it and will take some time later
today (yes, something other than 4:18am) to read it properly.  I look
forward to participating in next week's discussion!

And now, back to hacking code...

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



[Dri-devel] IRC logs?

2002-01-22 Thread Gareth Hughes

I had a meeting last night over an early dinner, so was unable to
attend yesterday's IRC session.  Does anyone have a log they could
send me, or post on the web somewhere?  I do plan on attending
these sessions every week, for those that were interested...

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



RE: [Dri-devel] SGI transfers 3D graphics patents to MS

2002-01-21 Thread Gareth Hughes

Frank C. Earl wrote:
> 
> On Monday 21 January 2002 09:21 am, Mike Westall wrote:
> > Conversely, if  "MS considers OpenGL to be dead and buried,
> > period", it seems that Bill would be "bit silly" to want to
> > spend $62.5 to become the owner of said dead + buried
> > technology!!
> 
> OpenGL is not really technology- it's an API that drives the technology.
MS 
> is very likely shopping more tech for DirectX.

Yes, owning SGI patents != owning the OpenGL trademark.

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



RE: [Dri-devel] dri-devel FAQ

2002-01-20 Thread Gareth Hughes

Very nicely done!  Hopefully, this will be expanded upon in the future,
but it looks great already.

-- Gareth

> -Original Message-
> From: José Fonseca [mailto:[EMAIL PROTECTED]]
> Sent: Sunday, January 20, 2002 10:20 AM
> To: [EMAIL PROTECTED]
> Subject: [Dri-devel] dri-devel FAQ
> 
> 
> Hi all,
> 
> I've finished compiling the information gathered from the dri-devel
> archives into the FAQ. Since my university network was again down I was
> not able to put it on my workstation's web server, so I took the liberty
> of attaching it to this mail. I'll publish it on the same site
> (http://mefriss1.swan.ac.uk/~jfonseca/dri/ ) in the meanwhile anyway.
> 
> 
> I hope that you don't get disappointed - it's not yet complete but has
> several pieces of wisdom. I'm sure that some of the original authors
> will get nostalgic feelings when reading it.. :-)
> 
> I would like to get feedback on it. Either personally or to the
> dri-devel mailing list (to receive peer review).
> 
> 
> I especially want you to make corrections on:
> 
> - Incorrect information: e.g., I've taken some assumptions in the
> questions as right since they weren't refuted in the answers, but that
> is not necessarily true.
> 
> - Out of date information: e.g., there are references to branches which
> I don't know if they were merged in the trunk.
> 
> Please don't bother yourself to make comments/suggestions on:
> 
> - FAQ structure: there is obviously stuff misplaced but the structure
> will evolve as I include stuff from the DRI original documents.
> 
> - Style or typos: only once the whole information is gathered will I
> start looking at this.
> 
> 
> If you have memory of interesting emails in the dri-devel that weren't
> included please tell me so. You may feel free to give more FAQs (with
> answers of course!) to include.
> 
> In summary, now it only matters that the information is _here_ and is 
> _correct_, as much as possible.
> 
> Regards,
> 
> Jose Fonseca
> 
> 

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



RE: [Dri-devel] [PATCH] 4.2.0 DRM fixes to delay loops

2002-01-20 Thread Gareth Hughes

Mike A. Harris wrote:
> 
> The i830 DRM driver contains empty for loops used for short
> delays.  Modern gcc and other compilers, when used with
> optimization switches, will optimize these empty for loops out,
> leaving no delay.  In addition, CPUs such as the Pentium 4 will
> needlessly overheat when executing empty for loops such as this
> (assuming they're not optimized out by the compiler), which can
> cause the chip to kick in its thermal protection and lower the
> CPU speed.

You've got to be kidding...  The P4 docs suggest using a PAUSE
insn to reduce the otherwise-huge branch misprediction penalty
associated with busy-wait loops like this.  It also allows the
processor to consume less power, which is handy for mobile
platforms.  To suggest that a busy-wait loop will cause the
processor to overheat (given a functioning -- and attached --
heatsink/fan combo) and kick in the thermal protection is
absurd! :-)

> Instead of using empty for loops for delays, udelay() should be 
> used to provide delays.

Either way, this is A Good Thing(TM).
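
For anyone following along, the change amounts to something like this
(the delay value is illustrative only, not what the i830 patch uses):

/* Illustrative only -- the 20us value here is made up.
 */
#include <linux/delay.h>

static void short_engine_delay(void)
{
        /* Before:  for (i = 0 ; i < 20000 ; i++) ;
         * which the compiler is free to throw away entirely.
         */
        udelay(20);
}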

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



RE: [Dri-devel] SGI transfers 3D graphics patents to MS

2002-01-20 Thread Gareth Hughes

Philip Brown wrote:
> 
> but I would say that microsoft DOES want to kill OpenGL, 
> since then they
> would control the only useful 3D API.
> It's all about creating monopolies. (so he can build hotels?)

Allen's original statement made the point that MS considers OpenGL
to be dead and buried, period.  They've fought that battle, and in
their mind, won.  If this is the case, suggesting MS is out buying
patents to kill off the DRI seems a bit silly...

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



RE: [Dri-devel] SGI transfers 3D graphics patents to MS

2002-01-18 Thread Gareth Hughes

> I think microsoft is trying to kill DRI. It is a big threat 
> to all their products. If the open source community can offer 
> good 3d graphics at low cost then their system will suffer a 
> good loss in market share.

Ummm, somehow I don't think so...

The DRI is encompassed by OpenGL (as a whole), and if Microsoft
isn't interested in killing OpenGL because they don't consider
it a threat (*), one would reach the conclusion they don't care
about the DRI either.

-- Gareth

(*) Based on Allen Akin's original comment.

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



RE: [Dri-devel] DRM/DRI porting guide?

2002-01-17 Thread Gareth Hughes

Brian Paul wrote:
> 
> Even before VA Linux laid-off everyone we were losing momentum on the
> DRI project because the engineers had to work on other projects that
> generated revenue.  After everyone was laid-off we all went in different
> directions.  I think I'm one of the few who still reads this list.

It seems that most of us are still around...

> Back when we were actively writing the DRI drivers we were working
> our asses off.  Gareth, for example, was routinely working 80+ hours
> per week on this stuff.  We thought it was more important to invest
> our time in the drivers and infrastructure code than writing/updating
> design documents.  The DRI is very complicated and takes a lot of
> time to understand.  We didn't try to make it complicated - that's
> just the reality of it.

80?  Huh, that was a light week :-)

Regarding documentation, and the comparisons to other large projects
like the Linux kernel, think back to what it must have been like in
the early 90s.  Do you think there was anywhere near the amount of
documentation there is now on kernel development?  Do you think it
was easy for people to jump in and start making serious contributions?
Writing a whole driver is a *serious* contribution.  I'd say it took
me about a year of working with various drivers, and watching people
like Keith and Brian do their magic, before I felt comfortable in my
level of understanding of how the Mesa-based DRI drivers worked and
how all the pieces fit together.

You really are kidding yourself if you think you'll be able to pick
this up in a matter of weeks.  Not that this should discourage you
from working on the DRI or Mesa...  While we're making comparisons
with the Linux kernel, you need to think in terms of porting the
kernel to a new architecture, or maintaining one of the core bits
like the VM, networking, VFS etc, instead of a character device
driver for the real-time clock, say, when you think about writing
a solid, fully-featured driver for a modern 3D graphics processor.

To continue the comparisons with the kernel, how much documentation
do you think there is on the bleeding-edge work being done by the 
core developers?  Is there a "How To Hack On The New 2.4.10 VM"
document anywhere?  Or "Ten Easy Steps For Writing Your Own
Kernel-Based HTTP Server"?  "DaveM's Guide To The TCP/IP Stack"?

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



RE: [Dri-devel] [goran@ucw.cz: drm_agpsupport.h]

2002-01-15 Thread Gareth Hughes

> I would be quite surprised if the two chipsets had the same PCI id (have 
> a look at the pci.ids in the linux kernel)... they should only share the 
> same vendor id, which makes the agpgart code work properly (I think Via 
> is less silly than Intel that has the nasty habit of changing the 
> addresses of AGP registers throughout their chipset releases)

Perhaps it was different revisions of the same chipset.  I'm not sure
of the exact details, however I do remember someone complaining about
this fact.

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



RE: [Dri-devel] [goran@ucw.cz: drm_agpsupport.h]

2002-01-15 Thread Gareth Hughes

> I've found something strange in reporting the chipset:
> 
> I've got ATI R128 on VIA kt 266 chipset, Yet the driver writes:
> 
> [drm] AGP 0.99 on VIA Apollo KT133 @ 0xe000 64MB
> [drm] Initialized r128 2.1.6 20010405 on minor 0
> 
> the chipset IS kt266
> goran@glaugrung:~\>> /sbin/lspci
> 00:00.0 Host bridge: VIA Technologies, Inc. VT8367 [KT266]
> 00:01.0 PCI bridge: VIA Technologies, Inc. VT8367 [KT266 AGP]
> 
> I couldn't find why it doesn't report KT 266 as it should.
> any ideas?
> please cc: me, as I'm not on the list.

The two chipsets have the same PCI ID, as far as I know.  Thus,
AGPGART will think it's a KT133.  This shouldn't be a problem
in general.

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



RE: [Dri-devel] Mach64 DMA

2002-01-13 Thread Gareth Hughes

Frank C. Earl wrote:
> 
> While we're discussing things here, can anyone tell me why 
> things like the emit state code is in the DRM instead of in
> the Mesa drivers?  It looks like it could just as easily be
> in the Mesa driver at least in the case of the RagePRO code-
> is there a good reason why it's in the DRM?

Security.

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



RE: [Dri-devel] weird modulepath problem with DRI

2002-01-10 Thread Gareth Hughes

Sounds like you haven't set the permissions on /dev/dri to allow user
access.

From the DRI User's Guide:


If you want all of the users on your system to be able to use
direct-rendering, then use a simple DRI section like this: 

Section "DRI"
 Mode 0666
EndSection

This section will allow any user with a current connection to the X server
to use direct rendering. 


That might be worth a try...

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



[Dri-devel] RE: bug in drivers/char/drm/drm_vm.h?

2002-01-08 Thread Gareth Hughes

Forwarding this to a more appropriate discussion forum...

-- Gareth

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, January 08, 2002 5:06 AM
> To: [EMAIL PROTECTED]
> Cc: [EMAIL PROTECTED]
> Subject: bug in drivers/char/drm/drm_vm.h?
> 
> 
> in 2.4.17 we have this in drivers/char/drm/drm_vm.h at line 491:
> 
> if (!capable(CAP_SYS_ADMIN) && (map->flags & 
> _DRM_READ_ONLY)) {
> vma->vm_flags &= VM_MAYWRITE;
> 
> should not this rather be:
> 
> vma->vm_flags &= ~VM_MAYWRITE;
> 
> ?
> 

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



RE: [Dri-devel] Radeon 7200

2002-01-07 Thread Gareth Hughes

> Does anyone know of a OpenGL benchmarking program I can use to test
> compare various facets of the performance of graphics cards?

Quake 3, Viewperf (and/or GLperf), Mesa demos etc...

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



Re: [Dri-devel] drmHashCreate() : problems finding it, and a cleanup request

2001-12-27 Thread Gareth Hughes

On Thu, Dec 27, 2001 at 03:27:07AM -0800, Philip Brown wrote:
> I'm finally looking at xf86drm.c again. The first routine that looked
> interesting was  
>   drmGetEntry(int fd)
> The second call in that function is drmHashCreate()
> 
> and it was not picked up with ctags, so I was wondering where the heck it
> was.
> 
> I did a whole lot of global grepping through files.
> FINALLY, I figured out that it was in xf86drmHash.c, hidden from the ctags
> program because of the funky
> #define N(x)  drm##x
> 
> 
> WHY is that there? I can find no reason for it, and it greatly hinders
> people's understanding of the code, since it breaks both greps and ctags.
> How about getting rid of that dumb #define hack in the next release?

If you can think of a way of including the common files in each driver,
and building some drivers into the kernel and some as modules and having
it all Just Work, please feel free to share your ideas with the list.

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



Re: [Dri-devel] agp: what if memory is fragmented?

2001-12-23 Thread Gareth Hughes

On Fri, Dec 21, 2001 at 05:34:43PM -0800, Philip Brown wrote:
> On Sat, Dec 22, 2001 at 02:30:14AM +0100, Alexander Stohr wrote:
> > The GART is the paging unit of the AGP system.
> > 
> > It deals nicely with fragmented chunks of page sized
> > memory chunks. So you only need some sort of memory
> > allocation and a way to determine eachs pages physical
> > adress to use it for those GART purposes.
> 
> thats what I figured. But then what is the point of returning the 
> starting physical address to the user-space caller?

You return the physical address of the *AGP aperture*, not the first
page *in* the aperture.  Remember, the AGP aperture is a
physically-contiguous block of memory that can have scattered pages
mapped into it.  Thus, graphics cards can access the memory as one
physical range, even though the pages in that range aren't really
contiguous.  That's the whole point of having a GART remapper...

> If the user-space asks for 1 meg of memory==256 pages,
> and gets the physical address of the first page back, but
> all the other 255 pages are non-physically contiguous... then what's the
> point of returning the physical address of that first block?

Don't do that.  Return the physical address of the aperture, which the
user-space process can map into it's virtual address space.  Then, both
the client and graphics card can talk to the same block of memory --
client through the virtual mapping, hardware through the AGP GART
remapping.

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



Re: [Dri-devel] Radeon corruption problem in SOF and Descent 3

2001-12-14 Thread Gareth Hughes

On Fri, Dec 14, 2001 at 03:41:01PM -0500, Vladimir Dergachev wrote:
> 
> Yes, but I was looking for something that would allow me to exercise each
> primitive separately - so as not to cause overflowing of dmesg buffer ;)

Try SPEC glperf then.

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



Re: [Dri-devel] Radeon corruption problem in SOF and Descent 3

2001-12-14 Thread Gareth Hughes

On Fri, Dec 14, 2001 at 02:54:33PM -0500, [EMAIL PROTECTED] wrote:
> 
> I am seeing this too. I thought it was from me tweaking
> stuff. Interestingly enough, quake works fine. 
> 
> Is there any kind of DRM/DRI test app along the lines of x11perf ?

Quake3, viewperf... :-)

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



Re: [Dri-devel] can AGP memory be unidirectional?

2001-12-13 Thread Gareth Hughes

On Thu, Dec 13, 2001 at 07:12:31PM -0800, Philip Brown wrote:
> It's turning out to be a real pain to port the current schema of 
> AGP usage, due to memory mapping issues.
> 
> It would be more doable if I could assume that the AGP memory to be
> allocated+bound would ALWAYS, 100% be only read by the device, and
> never written to by the device.
>  (doesnt matter if the user app reads and writes: just the device needs
>   to be unidirectional)
> 
> Can I make that assumption?

No.

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



Re: [Dri-devel] gl extensions on/off

2001-12-12 Thread Gareth Hughes

On Wed, Dec 12, 2001 at 04:30:56PM +, Sergey V. Udaltsov wrote:
> > Why force any application to implement some more or less wide
> > set of external shell varibles to query while the same is much
> > easier to maintain if its part of a "gatekeeper" library?
> Exactly! That's what I meant!

Quake3 allows a user to selectively enable or disable its *use* of
certain GL extensions.  If this is the behaviour you want, then it's
more an application thing than a driver thing.  Look at some of the Mesa
demos -- you may bind the 't' key to toggle multitexturing, for
example.  Environment variables in the driver are good for driver
development -- for instance, when the implementation of a certain
extension may be buggy and is thus disabled by default.

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



Re: [Dri-devel] comments for agpgart.h needed

2001-12-12 Thread Gareth Hughes

On Wed, Dec 12, 2001 at 01:36:10PM +0100, Alexander Stohr wrote:
> 
> Suggestion:
>   typedef unsigned int elcount_t;
> or 
>   #define elcount_t   unsigned int

Ack.  Don't do that.

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



Re: [Dri-devel] gl extensions on/off

2001-12-11 Thread Gareth Hughes

On Tue, Dec 11, 2001 at 07:28:54PM -0500, Leif Delgass wrote:
> 
> I think the point is (but I could be wrong) whether this is
> user-configurable without recoding/recompiling anything, and it seems the
> answer is no.  The driver can enable/disable extensions for all apps using
> the driver, or an app using GL through the driver can choose to enable or
> disable extensions supported by the driver.  So it's up to the application
> (in this case celstia) to let the user configure which are used, right?

Make the enable/disable configurable by an environment variable, like
so:

if ( getenv( "LIBGL_DISABLE_MULTITEXTURE" ) ) {
   gl_extensions_disable( ctx, "GL_ARB_multitexture" );
}
if ( getenv( "LIBGL_ENABLE_TEXTURE_ENV_ADD" ) ) {
   gl_extensions_enable( ctx, "GL_EXT_texture_env_add" );
   gl_extensions_enable( ctx, "GL_ARB_texture_env_add" );
}

Then, a user/app can just do something equivalent to:

export LIBGL_DISABLE_MULTITEXTURE=1
./my_app

And you're done.  Variable naming left as an exercise for the user.

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



Re: [Dri-devel] DRM programming help

2001-12-10 Thread Gareth Hughes

On Mon, Dec 10, 2001 at 07:29:47PM -0800, [EMAIL PROTECTED] wrote:
> > 
> > I'm not sure but maybe this is related to the AGP aperture size as
> > configured in the BIOS setup?
> 
> I found out what it is, but I do not know yet what to do about it.
> 
> Gareth - could you comment please ?

Sure.

> What happens is that Radeon has its internal view of physical space an
> important part of which is the location of framebuffer - determined by
> MC_FB_LOCATION register. Now it so happens that DRI sets this to start at
> 0. Hence when I try to DMA from the card it thinks that any small physical
> address is a place in video ram and not system memory. So any attempt to
> DMA into pages with small physical addresses fail.
> 
> I have tried resetting MC_FB_LOCATION to it's pci aperture as
> recommended in the documentation. The problem is that at the minimum it
> screws up display - so something else needs to be changed. Additionally I
> get a hard lockup first chance DRI code kicks in.
> 
> Gareth, in radeon_cp.c I see the following code:
> 
> static void radeon_cp_init_ring_buffer( drm_device_t *dev,
>                                         drm_radeon_private_t *dev_priv )
> {
>         u32 ring_start, cur_read_ptr;
>         u32 tmp;
> 
>         /* Initialize the memory controller */
>         RADEON_WRITE( RADEON_MC_FB_LOCATION,
>                       (dev_priv->agp_vm_start - 1) & 0x );
> 
>         if ( !dev_priv->is_pci ) {
>                 RADEON_WRITE( RADEON_MC_AGP_LOCATION,
>                               (((dev_priv->agp_vm_start - 1 +
>                                  dev_priv->agp_size) & 0x) |
>                                (dev_priv->agp_vm_start >> 16)) );
>         }
> 
> 
> How do you know that you will never need to use system ram with small
> physical address ? I have looked at agpgart and it allocates memory with
> alloc_page(GFP_KERNEL) and can, in principle, give you any kind of memory.
> 
> Was there any particular reason for this setting ?

I wouldn't know, I didn't write that code and I no longer have access to
Radeon specs.  Perhaps you should talk with Kevin Martin...

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



Re: [Dri-devel] simplified AGP - oops

2001-12-10 Thread Gareth Hughes

On Mon, Dec 10, 2001 at 05:33:09PM -0800, Philip Brown wrote:
> On Mon, Dec 10, 2001 at 08:52:26PM +0100, Benjamin Herrenschmidt wrote:
> > ...
> > Some chipsets (and the original agpgart supported those only) can
> > let the CPU access the AGP aperture directly. All mmap had to do
> > was then to map the user pages to the aperture physical pages,
> > if they had memory bound or not didn't matter.
> 
> waitamint... I just read
> http://developer.intel.com/design/intarch/techinfo/440BX/addrmap.htm
> 
> which describes the AGP Graphics Aperture, as the
> "AGP Dram Graphics Aperture".
> 
> Which means 'A way for the AGP device to access main RAM'.
> All this time I thought it was a way for programs to access the RAM
> on-board the card! 
> Well that clears things up a bit :-/
> 
> But... isn't there a straightforward mmap type way to access the RAM on
> board the card, then?
> Or is it usually "load the data to the aperture, then copy it
>  into card-local RAM from there"  ?
>  [or via the "drmDMA" stuff, I suppose?]

Why do you want to touch the framebuffer (video memory, that is)?
That's one way to make your 3D graphics go really slowly...

AGP was designed to let the graphics processor pull data from system
memory, instead of having the CPU push data out.

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



Re: [Dri-devel] simplified AGP

2001-12-10 Thread Gareth Hughes

On Mon, Dec 10, 2001 at 09:50:42AM -0800, Philip Brown wrote:
> 
> But I thought that GATT is simply a scatter/gather table, so
> you only have to update the GATT when you "allocate and bind" pages.
> Then, if you "allocate and bind" the whole range at once, you're done, and
> you don't have to do any cache flushing from that point on.
> So sounds like you are agreeing with me?? :->

Yes.  Hence, the "allocate and bind" part *is* necessary, which
contradicts your statement that only the mmap() was needed and that
allocate/bind was redundant.

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



Re: [Dri-devel] simplified AGP

2001-12-10 Thread Gareth Hughes

On Mon, Dec 10, 2001 at 12:39:57AM -0800, Philip Brown wrote:
> So I'm looking through the AGP stuff, still learning...
> and it seems that there's a whole lot of redundancy in the current API.
> 
> If I'm understanding the sequence properly, generally programs do the
> following:
> 1. open /dev/agpgart
> 2. ioctl(ACQUIRE)
> 3. ioctl(INFO) to determine amountof memory for AGP
> 4. mmap the device
> 5. ioctl(SETUP)to set the AGP mode
> 6. ioctl(ALLOCATE) a chunk o memory, specifying offset in aperture
> 7. ioctl(BIND) that same chunk o memory
> 
> The allocate and bind parts seem to be useless, since the program has
> to call mmap() anyway [right?] 

Every time you update the GATT, you have to flush the cache and/or
TLBs.  This is expensive.  Therefore, you allocate and bind the pages
you'll use, and mmap() just returns the right pages when needed.

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



Re: [Dri-devel] Fwd: [Newbie]Dell Optiplex GX1 and Accelerated Graphics

2001-12-06 Thread Gareth Hughes

On Thu, Dec 06, 2001 at 11:56:07AM -0600, Frank Earl wrote:
> 
> I looked at it and it didn't seem to me to be optimal compared to what you'd 
> done with Utah-GLX.  I'll go ahead and plug it and the pcigart code sometime 
> in the next couple of days so that we can move forward.  

Optimal or not shouldn't matter at this stage of development.  Just get
DMA working, get it stable, and then move onto optimizing the
implementation.

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



Re: [Dri-devel] Fwd: [Newbie]Dell Optiplex GX1 and Accelerated Graphics

2001-12-06 Thread Gareth Hughes

On Thu, Dec 06, 2001 at 10:53:41AM -0600, Frank Earl wrote:
> 
> Manuel Teira migrated the old codebase to something relatively close to the 
> head of the CVS tree.  This code didn't work very well because the DMA test 
> would hang your machine up hard.  If you didn't load the DRM module, you 
> could get some accelerated 3D support because Gareth Hughes had coded in a 
> kludge in the Mesa driver to verify that the code there was working properly- 
> this is in the form of direct register writes for the 3D operations.  It 
> won't take too much to migrate the placeholder code to the real thing once we 
> have the DRM layer working correctly.
> 
> After getting that going, Manuel and others tinkered with the code to find 
> out what gave with the DMA pass.  Manuel found the problem which was some 
> register setup in the 2D driver layer that broke pretty much everything for 
> the DMA engine use.  Right now, I am working on getting the DRM layer's DMA 
> scheme working- I'm trying to come up with a workable buffer management 
> algorithm, etc. as the RagePRO does things differently than most other cards 
> done to date.  I've got a good idea where I need to go with it, but my time's 
> been rather limited of late because of varying things coming up (Like playing 
> the "scramble to find work" game as my employer's not making payroll...).

Frank, the buffer management problem has largely been solved.  So has
the PCI-based DMA (pcigart in r128).  Just get it working in the context
of the existing code, and then move onto more fancy things that are
perhaps more suited to the Rage Pro.

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



Re: [Dri-devel] Minor remaining problem with DRI on iBook with Mobility M3

2001-11-30 Thread Gareth Hughes

On Fri, Nov 30, 2001 at 09:17:07AM -0700, Derrik Pates wrote:
> On Fri, 30 Nov 2001, Gareth Hughes wrote:
> 
> > If I remember correctly, the hardware requires pitches to be multiples
> > of 64 (that's pixels, not bytes).  It's been a while, but we don't do
> > that sort of thing for nothing...
> 
> Well, how would an 800x600 display work on it then? The iBook's display is
> 800x600, and as it works now (the 64 X pScrn->bitsPerPixel value) the X
> server always gets the wrong line pitch (832) because 800 is not a round
> multiple of 64. With the existing code, the X server has to be forced into
> doing the "right" thing. With my change, it does the right thing
> completely by itself.
> 
> I don't have the Rage128 docs, I can only tell you what I'm seeing here -
> and that's what I'm seeing. If the docs claim that the line pitch must be
> a multiple of 64 pixels, maybe the docs are incorrect. (Would it be the
> first time?)

I'm not familiar with your problem, or your proposed fix.

Two things, however:

1) These pitches may have nothing to do with the current mode.  Mode
   initialization probably touches different registers.

2) If the pitch register looks like this, say:

Bits    Field

0 - 12  Pitch, in multiples of 64 pixels
... ...

(and I seem to remember that all the ATI cards look like this), then
there is no way to program a pitch that *isn't* a multiple of 64
pixels.  Hence:

/* Offset in multiples of some large number.  Radeon has
 * 4k alignment, from memory -- something large in any case.
 */
hw_offset = real_offset / 4096;

/* Pitch in multiples of 64 pixels.
 */
hw_pitch = real_pitch / 64;

reg.src_pitch_offset = ((hw_pitch  <<  0) |
                        (hw_offset << 12));

Of course, it's been almost a year since I worked on that part of the
driver, so I may be completely wrong.

Note that even though a surface is allocated with a pitch that is a
multiple of 64 pixels (832 in this case), the RAMDAC may only scan out
the first 800 pixels or whatever the current mode is set to.

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



Re: [Dri-devel] Minor remaining problem with DRI on iBook with Mobility M3

2001-11-30 Thread Gareth Hughes

On Fri, Nov 30, 2001 at 01:48:16AM -0700, Derrik Pates wrote:
> On 30 Nov 2001, Michel Dänzer wrote:
> 
> > I'll see to it that it gets fixed, but I'd like to check the docs for
> > what the value should really be. I hope I'll get around to it this
> > weekend.
> 
> Well, the tdfx driver uses 16 * pScrn->bitsPerPixel, and that's the value
> I've been using. 64 * pScrn->bitsPerPixel obviously can't be right,
> it's why the X server wants a row width of 832 on this iBook, requiring you
> to jump through weird hoops to force it into doing the right thing. 32 *
> pScrn->bitsPerPixel should also work, unless I'm just not thinking of any
> "unusual" dimension modes that would require 16-pixel resolution to get
> the right line width.
> 
> I'm actually surprised it's not just standardized across all the drivers.

If I remember correctly, the hardware requires pitches to be multiples
of 64 (that's pixels, not bytes).  It's been a while, but we don't do
that sort of thing for nothing...
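
In code terms, the driver just rounds the mode width up to the next
multiple of 64 when it picks the pitch, along the lines of (illustrative
only):

/* Illustrative only: round a mode width up to the 64-pixel granularity
 * the hardware wants.  round_pitch(800) == 832, which is exactly the
 * pitch the X server is picking on the iBook.
 */
static int round_pitch(int width)
{
        return (width + 63) & ~63;
}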

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



Re: [Dri-devel] Mach64

2001-11-20 Thread Gareth Hughes

On Tue, Nov 20, 2001 at 07:18:16PM -0500, Frank C. Earl wrote:
> 
> I don't think there's any more available from ATI than what we already have.  
> If memory serves, Gareth and John worked from the register docs and the 2D 
> coding info from the Programmer's guide.

Yep, that's correct.  Had to deal with missing and/or conflicting info
to boot...

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



Re: [Dri-devel] Mach64: cannot map registers

2001-10-25 Thread Gareth Hughes

Peter Lemken wrote:

> It is, actually. At least if you are stuck with a notebook computer. The
> Rage LT Pro and Rage Mobility are among the most popular graphics
> adapters around. I wish I could just put in a different card...

Yes, I understand that.  That's not the point I was making, or what we 
were talking about.  Frank made the comment that his Rage Pro was "more 
than powerful enough" and that peak performance could be obtained at 
640x480@16/32.

Remember, I have fairly intimate knowledge of this chip.  You will 
never, ever, ever get more than ~41 fps in 'Fastest' mode (512x384@16, 
butt-ugly quality) on a Rage Pro.

Compare that with sub-$150 current generation graphics processors that 
are pushing 60 fps at 1600x1200@32 with max quality settings, and 
sub-$200 chips that are pushing 100 fps.  "More than powerful enough" 
depends on your frame of reference: making gears break 200 fps on a chip 
that's maybe four or five years old is a worthy goal, I'm not denying 
that.  However, you need to keep things in perspective.  Check out the 
latest DOOM engine stuff coming out of id Software, or imagine a Final 
Fantasy demo running at interactive framerates on a desktop PC 
(interactive as in being able to move the camera around the scene while 
it's rendering).  Then come and talk with me about "more than powerful 
enough" :-)

A Rage Pro DRI driver is a great project for people to hack on, 
particularly those who are new to device driver and/or 3D graphics 
programming.  It's great to finally see some community involvement in 
the DRI project, and given the current state of things I hope you guys 
really make some progress.

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



Re: [Dri-devel] Mach64: cannot map registers

2001-10-24 Thread Gareth Hughes

Frank C. Earl wrote:

> Now, now, not everybody can use your employer's gear, Gareth...  :->

It's not hard to get something rather more powerful than a Rage Pro -- 
anandtech.com lists current-generation hardware for under $120.  One 
would guess going back a generation or two would bring the price down to 
under $50.  This is still three or four generations after the mach64.

http://www.anandtech.com/guides/showdoc.html?i=1545&p=4

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



Re: [Dri-devel] Mach64: cannot map registers

2001-10-24 Thread Gareth Hughes

Frank C. Earl wrote:

> On Wednesday 24 October 2001 07:17 pm, Carl Busjahn wrote:
> 
>>Your depth is 24.  3D depths are only 16bit and 32bit.  The Mach64 is
>>really not powerful enough to handle 32bit (which is what 24 yields in
>>XFree86 4.1).  I'm not even sure if the driver supports 32bit depth, but
>>it's not a good idea anyway.  Plus you're going to get better overall
>>quality at 1024x768@16.  I can never notice a difference between 16 and
>>32 bit, but then, I don't have an ugly nvidia ;-)


Care to elaborate on the "ugly"?


> Actually, it's more than powerful enough- I was running with 32-bit color 
> space when I was testing most of the games out there against the Utah-GLX 
> drivers.  Peak performance is from 640x480@16 or 640x480@32- anything higher 
> in pixel resolution is accordingly slower.

Hmm, in a day where you can get PC graphics hardware that can run Quake3 
at 1600x1200@32 with maximum quality settings at around 100 fps, perhaps 
you should reevaluate your idea of "powerful enough"...

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



Re: [Dri-devel] Mach64: BusMastering test seems to work

2001-10-19 Thread Gareth Hughes

Leif Delgass wrote:

> Great work!  I'll check this out soon.
> 
> Once we get DMA working for the 3D operations, I guess the next task is to
> get the 2D acceleration routines synchronizing with the 3D ones so we can
> reenable XAA, right?  Also, it looks as if the AGP setup has not been
> finished yet.  At this point, atidri.c allocates 8M (hardcoded value, but
> the agpgart module tells me I have a 64M AGP aperture) and maps 2M of it
> for vertex buffers, but it never sets AGP_BASE or AGP_CNTL.  There's
> currently no allocation of an AGP region for textures.  I'm seeing
> problems with missing textures in the Quake 3 demo and some other GL apps.

Remember, the chip may only be able access 8 or 16MB of AGP memory.  The 
docs should specify this somewhere -- the r128 could only access 32MB if 
I remember correctly...

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



Re: [Dri-devel] Mach64: Behind DMA BusMastering problem

2001-10-19 Thread Gareth Hughes

Manuel Teira wrote:

> [snip]

> Now, WITHOUT STARTING X, test the GUI DMA transfer, for example, typing:
> # dd if=/dev/atidma of=atiout.dat bs=512 count=1
> 
> It worked, the module dumped:
> (Before DMA Transfer) PAT_REG0 = 0x
> (After DMA Transfer) PAT_REG0 = 0x


Great job!


-- Gareth


___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



Re: [Dri-devel] Using drm

2001-10-08 Thread Gareth Hughes

[EMAIL PROTECTED] wrote:

>  After reading documentation (and confirming that I already know all the
> acronyms) and sifting thru driver code I decided to ask on the list, while
> the people who wrote this stuff can still answer ;)
> 
>  Basically I want to DMA a chunk of video ram into plain RAM. This is
> useful for: video capture, VBI/closed captioning, taking screen snapshots.
> 
> I got drm handle, size of the buffer, AGP offset and pointer to the buffer
> in XFree86. What can I do with this ? At the moment I am mostly concerned
> with Radeon AIW AGP, but this will spread to other (non AGP) cards.
> 
> The docs seem to indicate that both XFree and client can use this - it
> would be really fabulous if clients could just mmap the buffer.

I think it'll be hard to set up and initiate DMA transfers without 
proper hardware documentation.  Do you have Radeon specs?

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



Re: [Dri-devel] Radeon 8500, what's the plan?

2001-10-02 Thread Gareth Hughes

Jeffrey W. Baker wrote:

> On Wed, 3 Oct 2001, David Johnson wrote: 
> 
>>There is some seriously proprietary stuff with idct that for legal
>>reasons ATI wouldn't want to expose.
>>
> 
> That is one of the most ridiculous statements I have heard.  Substitute
> some equivalent terms in there:
> 
> "There is some seriously proprietary stuff with the Pythagorean Theorem
> that for legal reasons ATI wouldn't want to expose."
> 
> "There is some seriously proprietary stuff with the quadratic equation
> that for legal reasons ATI wouldn't want to expose."

Okay then...

I think what David's suggesting is that ATI's implementation of an iDCT 
in hardware is pretty cool, and they're not about to go and tell 
everyone how they did it.  Last time I checked, they were the only 
vendor to offer such a solution, and thus you may want to consider their 
position on the matter.

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



Re: [Dri-devel] Radeon 8500, what's the plan?

2001-09-27 Thread Gareth Hughes

David Johnson wrote:

> 
> They did release specs (under NDA) to many people (including yourself 
> through PI/VA Linux).


Sure, but not to people in the general open source community, and with the demise of 
PI/VA, I would say the chances of a driver done by anyone other than ATI are slim to 
nil.  Isn't that what we're talking about?


-- Gareth


___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



Re: [Dri-devel] Radeon 8500, what's the plan?

2001-09-27 Thread Gareth Hughes

Dacobi Coding wrote:

> 
> But are they planing to, or have they allready releaced the specs 
> for the new Radeon chips? And I mean full specs complete
> with V/P Shaders and TL? 

Did they ever release specs for the original Radeon?  No.  One would 
guess the same policy will apply in this case as well.

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



Re: [Dri-devel] Radeon 8500, what's the plan?

2001-09-27 Thread Gareth Hughes

Dacobi Coding wrote:

> Hello people!
> 
> I was just wondering, what's the plan regarding
> Radeon 8500 DRI support?
> 
> I've been hearing rumors about ATI switching to 
> a unified driver structure much like Nvidia's.
> Can anyone verify whether this is true or not,
> and if it is true, how it will affect the DRI project?


Binary only != unified driver architecture.


Not to say that ATI won't switch to a binary-only driver as well, but 
anyway...

-- Gareth


___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



Re: [Dri-devel] Discussion of donation driven DRI project.

2001-09-18 Thread Gareth Hughes

Mike A. Harris wrote:

>
> After reading some people's postings on donating X amount of 
> money for feature Y, and the like, I thought about it and come to 
> the conclusion that donation driven DRI project even partially is 
> quite unrealistic.  I'd like to discuss why I think that is so.


Mike, couldn't have said it better myself.  Thanks for your thoughtful 
analysis of the situation!

-- Gareth



___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



Re: [Dri-devel] Re: Progress of mesa-3.5 tree? Update to Mesa-3.6/4.0

2001-09-15 Thread Gareth Hughes

Michel Dänzer wrote:

> 
> I certainly don't question your past dedication. I appreciate it very much. I
> was a bit deceived by your abandoning it though.


Mate, if you understood the situation, you wouldn't be saying this.  I will 
let this pass by as a result.


> My point is that nobody is asking anything like this of you. I think you could
> still be of very much help by giving advice, leading others in the right
> direction or whatever.


See my previous comment.  I've posted suggestions on how to get the mach64 
DMA code working on several occasions, in case you'd forgotten.  Are you 
suggesting that writing an almost-fully functional driver isn't leading 
other developers in the right direction?  Wow...


> My concern is primarily about r128 BTW, which suffers from serious instability
> at least on PPC.

Have you forgotten all the work I did to get this working in the first 
place?  I think this was pretty good, considering I've never actually owned 
a PPC machine.

The ball's in your court -- as my good friend David S. Miller says, "show me 
the code"...

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



Re: [Dri-devel] Re: Progress of mesa-3.5 tree? Update to Mesa-3.6/4.0

2001-09-15 Thread Gareth Hughes

Will Newton wrote:

> 
> 4 or 5? Please elaborate.


Very roughly:

ATI:     Rage Pro, Rage 128, Radeon, Radeon 8500
Matrox:  G200, G400, ???
3dfx:    Voodoo3, Voodoo5
NVIDIA:  TNT, TNT2, GeForce, GeForce2, GeForce3, ???

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



Re: [Dri-devel] Re: Progress of mesa-3.5 tree? Update to Mesa-3.6/4.0

2001-09-15 Thread Gareth Hughes

Michel Dänzer wrote:

> 

 > Probably, but I sure don't think it's about them saying "I don't care
 > about problems with my code" ;)

What are you trying to say here?  I'm not sure I see the connection...


> Fair enough, I just wish you'd be a bit more supportive to those of us who are
> trying (with very limited resources, mind you) to fix it, as you're probably
> the man with the single most intimate knowledge of the driver. You don't have
> to do it yourself, just help us do it. :)


What do you want from me?  I've spent a long time:

1) Working on the Utah-GLX driver for the Rage Pro.

2) Working on the DRI drivers for the Rage 128 (including PPC), G400, V3/V5 
and Radeon.

3) Trying to get the DRI driver for the Rage Pro working.

Anyone who questions my dedication to this project can take it up with me 
offline -- I'd be more than happy to discuss it with them.  As for helping 
you to get the driver working, every bit of work I did is available from the 
CVS repository.  Yes, I couldn't get it fully working, but that does not 
mean I didn't try.

I think you are missing my point -- having to maintain a driver for a card 
that's four or five generations old as a full-time job just plain sucks 
arse.  Doing it as a project in your spare time will most likely be a lot of 
fun, particularly if it's something new.  Thus, your efforts to get a driver 
working for the Rage Pro, or adding things to the Rage 128 driver, are quite 
valid and I hope you get something working.

-- Gareth

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel


