Keith Whitwell wrote:
> 
> Linus Torvalds wrote:
> >
> > On Mon, 27 May 2002, Jens Owen wrote:
> >
> >>This is an education for me, too.  Thanks for the info.  Any idea how
>>heavy ioctls are on a P4?
> >>
> >
> > Much heavier. For some yet unexplained reason, a P4 takes about 1us to do
> > a simple system call. That's on a 1.8GHz system, so it basically implies
> > that a P4 takes 1800 cycles to do a "int 0x80 + iret", which is just
> > ludicrous. A 1.2GHz Athlon does the same in 0.2us, i.e. around 250 cycles
> > (the 200+ cycles also matches a pentium reasonably well, so it's really
> > the P4 that stands out here).
> 
> This is remarkable.  I thought things were getting better, not worse.
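For anyone who wants to reproduce the numbers, a quick rdtsc loop around
a near-empty syscall gets into the right ballpark.  A rough sketch, not
a rigorous benchmark (rdtsc isn't serialized with cpuid here, and libc
may satisfy getpid() from a cache, hence the raw syscall()):

    /* Crude syscall-latency microbenchmark for x86. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    static inline unsigned long long rdtsc(void)
    {
        unsigned int lo, hi;
        __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
        return ((unsigned long long)hi << 32) | lo;
    }

    int main(void)
    {
        enum { N = 100000 };
        unsigned long long start, end;
        int i;

        start = rdtsc();
        for (i = 0; i < N; i++)
            syscall(SYS_getpid);        /* force a real kernel entry */
        end = rdtsc();

        printf("~%llu cycles per syscall\n", (end - start) / N);
        return 0;
    }
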
> 
> ...
> >
> >>You bet--and the real issue we're constantly swimming upstream against
> >>is "security" in open source.  Most hardware vendors design the hardware
> >>for closed source drivers and don't put many (or sometimes any) time
> >>into making sure their hardware is optimized for performance *and*
> >>security.
> >>
> >
> > I realize this, and I feel for you. It's nasty.
> >
> > I don't know what the answer is. It _might_ even be something like a
> > bi-modal system:
> >
> >  - apps by default get the traditional GLX behaviour: the X server does
> >    all the 3D for them. No DRI.
> >
> >  - there is some mechanism to tell which apps are trusted, and trusted
> >    apps get direct hw access and just aren't secure.
> >
> > I actually think that if the abstraction level is just high enough, DRI
> > shouldn't matter in theory. Shared memory areas with X for the high-level
> > data (to avoid the copies for things like the obviously huge texture
> > data).
> 
> I like this because it offers a way out, although I would keep the direct,
> secure approach to 3d we currently have for the other clients.  Indirect
> rendering is pretty painful...
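On the shared-memory point: concretely, the client could drop texel data
into a segment both sides map, MIT-SHM style, so an "upload" becomes
passing a name and an offset rather than copying megabytes through the
protocol stream.  A sketch using POSIX shm -- the segment name and the
"tell the server" step are made up for illustration, and error handling
is omitted:

    /* Client side: stage a texture where the X server can map it too. */
    #include <fcntl.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    void *share_texture(const void *texels, size_t len)
    {
        /* "/gl-tex-0" is an example name the client would advertise. */
        int fd = shm_open("/gl-tex-0", O_CREAT | O_RDWR, 0600);
        void *seg;

        ftruncate(fd, len);
        seg = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        memcpy(seg, texels, len);       /* the one unavoidable copy */
        /* ...then hand the server the segment name over the X protocol. */
        return seg;
    }
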

A bi-modal system could be quite feasible from an implementation
perspective in the short term.

We have a security mechanism in place now for validating which processes
are allowed to access the direct rendering mechanism.  It is based on
user IDs, and no process is allowed access to these resources unless:

1) It has access to the X server as an X client.

2) Its user ID is acceptable under the DRI permissions defined in the
XF86Config file (both checks show up in the client-side handshake
sketched below).
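
For reference, both checks are visible on the client side.  This is a
from-memory sketch against the libdrm/libXxf86dri interfaces, not the
literal DRI code, and "radeon" is just an example driver name:

    /* Client-side DRI authorization, roughly:
     *  check 2: open() on the DRM device obeys the owner/group/mode the
     *           X server applied from the XF86Config "DRI" section;
     *  check 1: the per-open magic cookie only gets blessed for a
     *           process that can already talk to the X server. */
    #include <X11/Xlib.h>
    #include <xf86drm.h>            /* drmOpen(), drmGetMagic() */
    #include "xf86dri.h"            /* XF86DRIAuthConnection() */

    int authorize_direct_rendering(Display *dpy, int screen)
    {
        drm_magic_t magic;
        int fd = drmOpen("radeon", NULL);   /* fails on bad permissions */

        if (fd < 0)
            return -1;
        if (drmGetMagic(fd, &magic) != 0)
            return -1;
        if (!XF86DRIAuthConnection(dpy, screen, magic))
            return -1;                      /* not an authorized X client */
        return fd;                          /* cleared for direct rendering */
    }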

Most distributions have picked up on this and now ship with a usage
model that allows the DRI to work for all desktop users.

If we do get some type of indirect rendering path working sooner, then
perhaps we could tighten up these defaults so that the usage model
required explicit administrative permission for a user before being
allowed access to direct rendering.
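
The hook for that is already there: the XF86Config "DRI" section lets an
administrator restrict the device nodes to a specific group.  Something
like the following (the "dri" group name is just an example) would limit
direct rendering to explicitly blessed users:

    Section "DRI"
        Group "dri"    # only members of this group may open the DRI devices
        Mode  0660
    EndSection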

However, after going to all this trouble to provide a decent level of
fallback performance, I would then want to push the performance envelope
for those processes that did meet the criteria for access to direct
rendering resources, and soften the security requirements for just those
processes.  These could be users that have been given explicit
permission, plus the X server itself (doing HW accelerated indirect
rendering).

There would really be three prongs of attack for this approach:

1) Audit the current DRI security model and confirm that it is strong
enough to prevent unauthorized users from gaining access to the DRI
mechanisms.  Work with distros to tighten up the usage model (and
possibly the DRI security mechanism itself) so only explicit desktop
users are allowed access to the DRI.

2) Develop a device independent indirect rendering module that plugs
into the X server to utilize our 3D drivers.  After getting some HW
accel working, look at speeding up this path by utilizing Chromium-like
technologies and/or shared memory for high level data.

3) Transition the direct rendering drivers to take full advantage of
their user-space DMA capabilities (sketched just below).
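
To make step 3 concrete: the win is mapping a DMA buffer into the client
once, writing vertex data straight into it, and entering the kernel only
to fire the buffer -- never per vertex.  A deliberately simplified
sketch against the libdrm interfaces as I remember them; emit_vertices()
and fire_dma_buffer() are stand-ins for the driver-specific pieces:

    #include <xf86drm.h>

    extern int  emit_vertices(float *vbuf);        /* app fills the buffer */
    extern void fire_dma_buffer(int fd, int n);    /* thin submit ioctl */

    void render_loop(int fd, drm_handle_t buf_handle, drmSize buf_size)
    {
        drmAddress addr;
        float *vbuf;

        /* Map the DMA region into the client exactly once. */
        if (drmMap(fd, buf_handle, buf_size, &addr) != 0)
            return;
        vbuf = (float *) addr;

        for (;;) {
            int n = emit_vertices(vbuf);   /* vertices go straight to DMA memory */
            fire_dma_buffer(fd, n);        /* one ioctl per buffer, not per vertex */
        }
    }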

This is a large amount of work, but something we should consider if step
1 can be achieved to the kernel team's satisfaction.  It is even possible
the direct path could be obsoleted over the long term as step 2 becomes
more and more streamlined.

> However:  The applications that most people would want to 'trust' are things
> like quake or other closed source games, which makes the situation a little
> murkier.

Yes, but is this really any worse than a typical install for these apps,
which requires root-level access?
 
> > From a game standpoint, think "quake engine". The actual game doesn't need
> > to tell the GL engine everything over and over again all the time. It
> > tells it the basic stuff once, and then it just says "render me". You
> > don't need DRI for sending the "render me" command, you need DRI because
> > you send each vertex separately.
> 
> You could view the static geometry of quake levels as a single display list
> and ask for the whole thing to be rendered each frame.
> 
> However, the reality of the quake type games is anything but - huge amounts of
> effort have gone into the process of figuring out (as quickly as possible)
> what minimal amount of work can be done to render the visible portion of the
> level at each frame.
> 
> Quake generates very dynamic data from quite a static environment in the name
> of performance...

I think I understand...even though Linus is referring to Quake's wire
protocol here, you are pointing out that the real challenge is the
underlying game engine, which is highly optimized for that specific
application.  Am I correct?
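
Either way, the "tell it the basic stuff once, then just say render me"
pattern maps directly onto GL display lists.  A minimal sketch, where
draw_level_geometry() is a hypothetical helper emitting the actual
vertex calls:

    #include <GL/gl.h>

    extern void draw_level_geometry(void);   /* hypothetical: the glVertex soup */

    static GLuint level_list;

    /* Tell the GL engine the static geometry once... */
    void build_level(void)
    {
        level_list = glGenLists(1);
        glNewList(level_list, GL_COMPILE);
        draw_level_geometry();
        glEndList();
    }

    /* ...then each frame is just "render me": a single cheap call that
     * could cross a client/server boundary without DRI. */
    void draw_frame(void)
    {
        glCallList(level_list);
    }

Of course, as Keith points out, real quake-style engines avoid exactly
this static path in the name of performance.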

> > In that kind of high-level abstraction, the X client-server model should
> > still work fine. In fact, it should work especially well on small-scale
> > SMP (which seems inevitable).
> 
> Games are free to partition themselves in other ways that help SMP but keep
> their ability for a tight binding with the display system -- for example the
> physics (rigid body simulation) subsystem is a big and growing consumer of CPU
> and is quite easily separated out from the graphics engine.  AI is also a
> target for its own thread.
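
That split is easy to picture.  A minimal pthread sketch, with stub
subsystems standing in for the real engines (compile with -lpthread):

    #include <pthread.h>

    /* Stub subsystems -- stand-ins for the real engines. */
    static void step_simulation(void) { /* rigid-body update goes here */ }
    static void render_frame(void)    { /* graphics engine, owns the display */ }

    /* Physics gets its own thread (and, on SMP, its own CPU). */
    static void *physics_loop(void *arg)
    {
        (void) arg;
        for (;;)
            step_simulation();
        return NULL;
    }

    int main(void)
    {
        pthread_t physics;

        pthread_create(&physics, NULL, physics_loop, NULL);
        for (;;)
            render_frame();   /* main thread keeps the tight display binding */
    }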
> 
> > Are people thinking about the "next stage", when 2D just doesn't exist any
> > more except as a high-level abstraction on top of a 3D model? Where the X
> > server actually gets to render the world view, and the application doesn't
> > need to (or want to) know about things like level-of-detail?
> 
> Yes, but there are a few steps between here and there, and there have been a
> few differences of opinion along the way.  It would have been possible to get
> a lot of the X render extension via a client library emitting GL calls, for
> example.

Yes, we're still just at the thinking stage...

--                             /\
         Jens Owen            /  \/\ _    
  [EMAIL PROTECTED]  /    \ \ \   Steamboat Springs, Colorado
