On Mon, 27 May 2002, Jens Owen wrote:
>
> This is an education for me, too.  Thanks for the info.  Any idea how
> heavy IOCTL's are on a P4?

Much heavier. For some as yet unexplained reason, a P4 takes about 1us to
do a simple system call. That's on a 1.8GHz system, so it basically
implies that a P4 takes 1800 cycles to do an "int 0x80 + iret", which is
just ludicrous. A 1.2GHz Athlon does the same in 0.2us, i.e. around 250
cycles (the 200+ cycles also match a Pentium reasonably well, so it's
really the P4 that stands out here).

The rest of the ioctl overhead is not really noticeable compared to those
1800 cycles spent on the "enter/exit kernel mode".
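
(If you want to see this yourself: time a cheap system call with the CPU
cycle counter. A minimal sketch, assuming x86 and gcc - I go through
syscall(SYS_getpid) so the C library can't cache the result:)

#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

static inline unsigned long long rdtsc(void)
{
	unsigned int lo, hi;
	__asm__ __volatile__("rdtsc" : "=a" (lo), "=d" (hi));
	return ((unsigned long long) hi << 32) | lo;
}

int main(void)
{
	enum { LOOPS = 100000 };
	unsigned long long start, end;
	int i;

	start = rdtsc();
	for (i = 0; i < LOOPS; i++)
		syscall(SYS_getpid);	/* forces a real kernel entry */
	end = rdtsc();

	printf("%llu cycles per system call\n", (end - start) / LOOPS);
	return 0;
}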

Even so, those memcpy vs pipe throughput numbers I quoted were off my P4
machine: _despite_ the fact that a P4 is inexplicably bad at system
calls, those 1800 CPU cycles are just a whole lot less than a lot of
cache misses on modern hardware. It doesn't take many cache misses to
make 1800 cycles "just noise".

And if the 1800 cycles are less than cache misses on normal non-IO
benchmarks, they are going to be _completely_ swamped by any PCI/AGP
overhead.
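
(The memcpy-vs-pipe comparison is equally easy to reproduce. A rough
sketch - the 64kB buffer and 256MB total are arbitrary choices, and
error handling is stripped:)

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/wait.h>

#define BUF	(64 * 1024)
#define TOTAL	(256L * 1024 * 1024)

static double now(void)
{
	struct timeval tv;
	gettimeofday(&tv, NULL);
	return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void)
{
	static char src[BUF], dst[BUF];
	int fd[2];
	long done;
	double t;

	/* raw memcpy bandwidth, BUF-sized chunks */
	t = now();
	for (done = 0; done < TOTAL; done += BUF)
		memcpy(dst, src, BUF);
	printf("memcpy: %.0f MB/s\n", TOTAL / (now() - t) / 1e6);

	/* same amount of data through a pipe to a child */
	pipe(fd);
	if (fork() == 0) {
		close(fd[1]);
		while (read(fd[0], dst, BUF) > 0)
			;
		_exit(0);
	}
	close(fd[0]);
	t = now();
	for (done = 0; done < TOTAL; done += BUF)
		write(fd[1], src, BUF);
	close(fd[1]);
	wait(NULL);
	printf("pipe:   %.0f MB/s\n", TOTAL / (now() - t) / 1e6);
	return 0;
}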

> You bet--and the real issue we're constantly swimming up stream against
> is "security" in open source.  Most hardware vendors design the hardware
> for closed source drivers and don't put many (or sometimes any) time
> into making sure their hardware is optimized for performance *and*
> security.

I realize this, and I feel for you. It's nasty.

I don't know what the answer is. It _might_ even be something like a
bi-modal system:

 - apps by default get the traditional GLX behaviour: the X server does
   all the 3D for them. No DRI.

 - there is some mechanism to tell which apps are trusted, and trusted
   apps get direct hw access and just aren't secure.
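
(In kernel terms "trusted" could be as simple as a capability check in
the driver ioctl. A sketch - the foo_* names are made up, only capable()
and CAP_SYS_RAWIO are real:)

#include <linux/capability.h>
#include <linux/errno.h>
#include <linux/fs.h>

extern int foo_map_hw(struct file *filp, unsigned long arg);	/* hypothetical */

static int foo_ioctl_map_registers(struct inode *inode, struct file *filp,
				   unsigned int cmd, unsigned long arg)
{
	/* untrusted clients never get here: they go through the X
	   server like a classic indirect GLX client */
	if (!capable(CAP_SYS_RAWIO))
		return -EPERM;

	/* trusted: map the MMIO/DMA ranges straight into the client,
	   and give up on validating what it submits */
	return foo_map_hw(filp, arg);
}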

I actually think that if the abstraction level is just high enough, DRI
shouldn't matter in theory. Use shared memory areas with X for the
high-level data, to avoid the copies for things like the obviously huge
texture data.
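
(That's the same mechanism MIT-SHM already uses. A minimal client-side
sketch with SysV shm - error handling omitted:)

#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

/* put the texture in a segment the server can attach to: one copy at
   upload time, zero copies per frame after that */
void *share_texture(const void *texels, size_t len, int *shmid)
{
	void *mem;

	*shmid = shmget(IPC_PRIVATE, len, IPC_CREAT | 0600);
	mem = shmat(*shmid, NULL, 0);
	memcpy(mem, texels, len);
	return mem;		/* hand *shmid to the server side */
}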

From a game standpoint, think "quake engine". The actual game doesn't
need to tell the GL engine everything over and over again all the time.
It tells it the basic stuff once, and then it just says "render me". You
don't need DRI for sending the "render me" command, you need DRI because
you send each vertex separately.
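
(The per-frame wire traffic would look something like this. Every type
and call below is made up - it only illustrates how little has to cross
the protocol once the big data is uploaded:)

struct gfx_conn;		/* connection to the server */
struct gfx_handle;		/* id of a server-side object */

extern struct gfx_handle *gfx_upload_geometry(struct gfx_conn *c,
					      const float *verts, int n);
extern void gfx_set_camera(struct gfx_conn *c, const float view[16]);
extern void gfx_render(struct gfx_conn *c, struct gfx_handle *world);
extern void gfx_swap(struct gfx_conn *c);
extern void update_view(float view[16]);	/* game logic, also made up */

void game_loop(struct gfx_conn *c, const float *verts, int nverts)
{
	/* once: the expensive upload of the static world data */
	struct gfx_handle *world = gfx_upload_geometry(c, verts, nverts);
	float view[16];

	for (;;) {
		update_view(view);
		gfx_set_camera(c, view);	/* a few bytes */
		gfx_render(c, world);		/* the whole "render me" */
		gfx_swap(c);
	}
}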

In that kind of high-level abstraction, the X client-server model should
still work fine. In fact, it should work especially well on small-scale
SMP (which seems inevitable).

Are people thinking about the "next stage", when 2D just doesn't exist
any more except as a high-level abstraction on top of a 3D model? Where
the X server actually gets to render the world view, and the application
doesn't need to (or want to) know about things like level-of-detail?

                Linus

