X display locking

2013-03-07 Thread Torsten Jager
Hello!

What is the proper usage of XLockDisplay () / XUnlockDisplay ()
when an application has multiple threads using

  * normal Xlib functions
  * Xitk functions
  * libGL and/or
  * libvdpau ?

On my machine (libX11 1.4.0) bracketing all 4 seems to be necessary
to avoid lockups and stack corruption. At least doing so does
work here.

However, on my mate's box the same code traps both libGL and
libvdpau into an infinite sched_yield () polling loop.

What am I doing wrong?

Torsten


Re: X display locking

2013-03-07 Thread Alan Coopersmith
On 03/ 7/13 06:31 AM, Torsten Jager wrote:
 What is the proper usage of XLockDisplay () / XUnlockDisplay ()
 when an application has multiple threads using

Applications should never call those functions - they are Xlib's
internal locking mechanism for the request buffers.

Applications should only call XInitThreads() to set up the locks
before any Xlib calls are made.

-- 
-Alan Coopersmith-  alan.coopersm...@oracle.com
 Oracle Solaris Engineering - http://blogs.oracle.com/alanc


Re: X display locking

2013-03-07 Thread Aaron Plattner

On 03/07/2013 06:31 AM, Torsten Jager wrote:

Hello!

What is the proper usage of XLockDisplay () / XUnlockDisplay ()
when an application has multiple threads using

   * normal Xlib functions
   * Xitk functions
   * libGL and/or
   * libvdpau ?


XLockDisplay / XUnlockDisplay is only required when you need multiple 
requests to be atomic with respect to requests being sent by other 
threads.  For example, if you have a function like


XGrabServer()
XGetImage()
XUngrabServer()

then you'll probably want to bracket the whole thing with XLockDisplay / 
XUnlockDisplay if you have another thread that could otherwise perform 
rendering during the grab or destroy the window you're trying to 
GetImage or something.
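
For illustration, a minimal sketch of that pattern (a hypothetical helper; it
assumes XInitThreads() was called before any other Xlib call):

#include <X11/Xlib.h>

/* Read back a window's contents with the server grabbed, atomically with
 * respect to requests issued by this client's other threads. */
XImage *grab_window_image(Display *dpy, Window win,
                          unsigned int width, unsigned int height)
{
    XImage *img;

    XLockDisplay(dpy);
    XGrabServer(dpy);
    img = XGetImage(dpy, win, 0, 0, width, height, AllPlanes, ZPixmap);
    XUngrabServer(dpy);
    XFlush(dpy);            /* push the ungrab out promptly */
    XUnlockDisplay(dpy);

    return img;
}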



On my machine (libX11 1.4.0) bracketing all 4 seems to be necessary
to avoid lockups and stack corruption. At least doing so does
work here.


You shouldn't get lockups unless you take the lock in one thread and 
don't release it.  You did call XInitThreads() as the very first thing, 
right?



However, on my mate's box the same code traps both libGL and
libvdpau into an infinite sched_yield () polling loop.


Sounds like a bug.


What am I doing wrong?


My guess would be calling XInitThreads too late.  You have to call it 
before anything else, including libraries like libGL and libvdpau, makes 
any Xlib calls.
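
A minimal sketch of the required ordering (an illustrative skeleton only):

#include <X11/Xlib.h>

int main(void)
{
    /* Must be the very first Xlib-related call in the process, before
     * any library (libGL, libvdpau, a toolkit) touches the Display. */
    if (!XInitThreads())
        return 1;

    Display *dpy = XOpenDisplay(NULL);
    if (!dpy)
        return 1;

    /* ... create windows, start worker threads, run the event loop ... */

    XCloseDisplay(dpy);
    return 0;
}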



Torsten


--
Aaron


[ANNOUNCE] xorg-server 1.13.3

2013-03-07 Thread Matt Dew


Dave Airlie (1):
  randr: cleanup provider properly

Matt Dew (1):
  Bump version to 1.13.3

git tag: xorg-server-1.13.3

http://xorg.freedesktop.org/archive/individual/xserver/xorg-server-1.13.3.tar.bz2
MD5:  63c2530476cba4f0283570bbb650  xorg-server-1.13.3.tar.bz2
SHA1: e6a56b7ece11c1e68b280714c934dd6c602565fa  xorg-server-1.13.3.tar.bz2
SHA256: c9e38eb6404749cab9e3c4a4901d951d4d62958b11f002ce968225ef92902762 
 xorg-server-1.13.3.tar.bz2


http://xorg.freedesktop.org/archive/individual/xserver/xorg-server-1.13.3.tar.gz
MD5:  2d924ba826a9d64330fa2d00280670eb  xorg-server-1.13.3.tar.gz
SHA1: 091ace82f040bd0afeab07daf5b3c81ff364ce62  xorg-server-1.13.3.tar.gz
SHA256: b7ded70682e5b11697449435a498fd9e83ac57da3639c2a08b75855f3a53ca45 
 xorg-server-1.13.3.tar.gz




Re: [PATCH 4/5] glx: Implement GLX_PRESERVED_CONTENTS drawable attribute

2013-03-07 Thread Tomasz Lis
I agree with Ian's proposition to add an integer counter which replaces the
hard-coded index. This also allows adding:

assert(i <= sizeof(attribs)/sizeof(attribs[0]))

While neither the GLX 1.4 spec nor the older GLX_SGIX_pbuffer spec mentions what
value to return if GLX_PRESERVED_CONTENTS is unset on creation, they
clearly indicate that the new buffer should behave as if
GLX_PRESERVED_CONTENTS was set to True:



If this (GLX_PRESERVED_CONTENTS) attribute is not
specified, or if it is specified as True in attrib_list, then when a
resource conflict occurs the contents of the pbuffer will be preserved
(most likely by swapping out portions of the buffer to main memory).


I don't think this code returns the app's settings; I think it is always used to
query a specific buffer (but I may be wrong).

2013/3/1 Adam Jackson a...@redhat.com

 On Thu, 2013-02-28 at 13:55 -0800, Ian Romanick wrote:
  On 02/25/2013 02:04 PM, Adam Jackson wrote:
   We back pixmaps with pbuffers so they're never actually clobbered, so
   we're really just recording the value passed in at create time so that
   you get the same thing when you query.
 
  Is that actually right?  There are some cases where the query returns
  the actual state, and there are some cases where the query returns the
  app's setting.  I haven't checked to see which this is.  In Mesa, we've
  taken to quoting the spec in cases like this.

 If the spec had anything to say about the matter, I would happily quote
 it.  Unfortunately GLX is much less rigorous about this than GL.  My
 reply to Eric was based on a careful reading of GLX 1.4.  While it's
 clearly within the law to treat all pbuffers as preserved, the spec is
 silent on whether the queried attribute value reflects the value given
 at pbuffer creation.

 - ajax


Re: [PATCH] xserver: add monitor Option ZoomModes

2013-03-07 Thread vdb
 On 11/21/2012 04:12 AM, v...@picaros.org wrote:
  Add support for the Option "ZoomModes" in a monitor section:
 
  Section "Monitor"
     Identifier "a21inch"
     Option "PreferredMode" "1600x1200"
     Option "ZoomModes" "1600x1200 1280x1024 1280x1024 640x480"
  EndSection
 
 ZoomModes seems like an unfortunate name to me, but there's precedent 
 for it in the DontZoom option so I won't object to it too strongly.

Well, I settled on ZoomModes since xserver/hw/xfree86 functions with 
'Zoom' in their name relate to the Ctrl+Alt+Keypad-{Plus,Minus} mode 
switch.  The 'Modes' part comes from the Section Screen/Subsection 
Display/Modes statement.  

  diff --git a/hw/xfree86/man/xorg.conf.man b/hw/xfree86/man/xorg.conf.man
  index 5d92bbe..729d301 100644
  --- a/hw/xfree86/man/xorg.conf.man
  +++ b/hw/xfree86/man/xorg.conf.man
  @@ -1676,6 +1676,16 @@ This optional entry specifies a mode to be marked as 
  the preferred initial mode
of the monitor.
(RandR 1.2-supporting drivers only)
.TP 7
  +.BI Option \*qZoomModes\*q \*q name   name   ... \*q
  +This optional entry specifies modes to be marked as zoom modes.
  +It is possible to switch to the next and previous mode via
  +.BR Ctrl+Alt+Keypad\-Plus  and  Ctrl+Alt+Keypad\-Minus .
  +All these keypad available modes are selected from the screen mode
  +list.  This list is a copy of the compatibility output monitor mode
 
 roff requires each sentence to begin on its own line, though you can add 
 additional line breaks in the middle of sentences for readability.
 
  +list.  Since this output is the output connected to the lowest
  +dot\-area monitor, as determined from its largest size mode, that
 
 Should this be a hyphen ('-') rather than a literal dash ('\-')?
 
  +monitor defines the available zoom modes.
 
 This only applies to RandR 1.2 drivers, so it should probably get the 
 RandR 1.2-supporting drivers only text.

Indeed, text modified per suggestions, thank you.

  diff --git a/hw/xfree86/modes/xf86Crtc.c b/hw/xfree86/modes/xf86Crtc.c
  index 154f684..2e46885 100644
  --- a/hw/xfree86/modes/xf86Crtc.c
  +++ b/hw/xfree86/modes/xf86Crtc.c

  @@ -1424,6 +1426,88 @@ preferredMode(ScrnInfoPtr pScrn, xf86OutputPtr 
  output)
return preferred_mode;
}
 
  +/** identify a token
  + * args
  + *   *src a string with zero or more tokens, e.g. tok0 tok1,
  + *   **token  stores a pointer to the first token character,
  + *   *len stores the token length.
  + * return
  + *   a pointer into src[] at the token terminating character, or
  + *   NULL if no token is found.
  + */
  +static const char *
  +gettoken(const char *src, const char **token, int *len)
  +{
  +const char *next, *delim = " \t";
  +int skip;
  +
  +if (!src)
  +return NULL;
  +
  +skip = strspn(src, delim);
  +*token = &src[skip];
  +
  +*len = strcspn(*token, delim);
  +/* Support for backslash escaped delimiters could be implemented
  + * here.
  + */
  +
  +/* (*token)[0] != '\0'  <==>  *len > 0 */
  +next = *len > 0 ? &(*token)[*len] : NULL;
 
 This would probably be clearer written
 
 if (*len > 0)
 return &(*token)[*len];
 else
 return NULL;
 
  +
  +return next;
  +}

Agreed, code changed per suggestion.

 It seems surprising that there isn't already a function to do this.

There used to be a function in xf86-video-ati/ to process the
Option "MetaModes" for the driver's internal multiple output support.  

I checked xserver/hw/xfree86/{ common, parser} but couldn't find 
anything reusable.  An option statement with a list value, for example

  Option "ZoomModes" "1600x1200 1280x1024 1280x1024 640x480"

would be ideal.  It's almost there: add a pointer to a list to ValueUnion
in common/xf86Opt.h, add support for list values in parser/Flag.c
xf86parseOption(), and extend common/xf86Option.c ParseOptionValue().

But this added complexity doesn't seem worthwhile for a single Option 
user.  So I settled for a string of mode names and a gettoken() 
function.  
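
For illustration, a possible loop over such a string, assuming the gettoken()
semantics described in the quoted patch above (process_mode_name() is a
hypothetical consumer, not part of the patch):

/* hypothetical consumer of one whitespace-separated mode name */
static void process_mode_name(const char *name, int len);

static void
parse_zoom_modes(const char *option_str)
{
    const char *s = option_str;   /* e.g. "1600x1200 1280x1024 640x480" */
    const char *token;
    int len;

    /* Each successful call yields one mode name of length len starting at
     * token; gettoken() returns NULL once no further token is found. */
    while ((s = gettoken(s, &token, &len)) != NULL)
        process_mode_name(token, len);
}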

  +/** Check for a user configured zoom mode list, Option "ZoomModes":
  + *
  + * Section "Monitor"
  + *   Identifier "a21inch"
  + *   Option "ZoomModes" "1600x1200 1280x1024 1280x1024 640x480"
  + * EndSection
  + *
  + * Each user mode name is searched for independently so the list
  + * specification order is free.  An output mode is matched at most
  + * once, a mode with an already set M_T_USERDEF type bit is skipped.
  + * Thus a repeat mode name specificaton matches the next output mode.
 
 s/specificaton/specification/
 
 This took me a few reads to make sense.  Maybe add "with the same name"
 to the end of this sentence?

Indeed, text amended per suggestion, thank you.

  + * Ctrl+Alt+Keypad-{Plus,Minus} zooms {in,out} by selecting the
  + * {next,previous} M_T_USERDEF mode in the screen modes list, itself
  + * sorted toward lower dot area or lower dot clock frequency, see
  + *   modes/xf86Crtc.c: xf86SortModes() xf86SetScrnInfoModes(), and
  + *   

Re: Initial DRI3000 protocol specs available

2013-03-07 Thread Aaron Plattner

On 03/06/2013 10:35 PM, Keith Packard wrote:


Owen Taylor otay...@redhat.com writes:


A complex scheme where the compositor and the server collaborate on the
implementation of SwapRegion seems fragile to me, and still doesn't get
some details right - like the swap count returned from SwapRegion.

What if we made SwapRegion redirectable along the lines of
ResizeRedirectMask? Since it would be tricky to block the client calling
SwapRegion until the compositor responded, this would probably require
removing the reply to SwapRegion and sending everything needed back in
events.


When I first read this a week ago, I thought this was a crazy plan; but
upon reflection, I think this is exactly the right direction. I've
written up a blog posting in more detail about that here:

 http://keithp.com/blogs/composite-swap/


  SwapScheduled - whatever information is available immediately on
  receipt of SwapRegion


I think this can still be in the reply to SwapRegion itself; essentially
all we're returning is the swap-hi/swap-lo numbers and a suggestion for
future buffer allocation sizes. We could place the buffer size hints in
a separate event, but I don't think they're that critical; it's just a
hint, and we'll get it right after a couple of swaps once the user stops
moving the window around anyways.


  SwapIdle  - a buffer is returned to the application for rendering
  SwapComplete  - the swap actually happened and we know the
  msc/sbc/ust triple


Yup. The blog posting suggests how the Complete event might be delayed
until the Compositor gets the content up onto the screen itself.


If I'm understanding this correctly, this requires the X server to 
receive a notification from the GPU that the swap is complete so it can 
send the SwapComplete event.  Is there any chance this could be done 
with a Fence instead?  The application could specify the fence in the 
Swap request, and then use that fence to block further rendering on the 
GPU or wait on the fence from the CPU.  We typically try to do the 
scheduling on the GPU when possible because triggering an interrupt and 
waking up the X server burns power and adds latency for no good reason.



I also think that SwapIdle should *not* be an event. Instead, the client
should mark its pixmap as 'becomes idle upon swap'; on redirection, the
compositor ends up holding the last 'its not idle yet' bit, and when it
does the 'becomes idle upon swap', then the buffer goes idle.

The client must then tell the server to un-idle the pixmap, and that
request will return whether the contents were preserved or not. This has
to be synchronous or huge races will persist.


But I don't know that you need that much granularity. I think SwapIdle
and SwapComplete are sufficient.


As above, SwapIdle isn't good enough, an explicit un-idle request is required.


Tricky parts:

  * Not leaking buffers during redirection/unredirection could be tricky.
What if the compositor exits while a client is waiting for a
SwapIdle? An event when swap is redirected/unredirected is probably
necessary.


When the Compositor exits, the X server will know all of the pending
SwapRegion requests and can 'unredirect' them easily enough.

I don't want to tell apps when they're getting redirected/unredirected,
and I don't think it's necessary.


  * To make this somewhat safe, the concept of idle has to be one of
correct display not system stability. It can't take down the system
if the compositor sends SwapIdle at the wrong time.


See above.


  * Because the SBC is a drawable attribute it's a little complex to
continue having the right value over swap redirection.

 When a window is swap-redirected, we say that the SBC is
 incremented by one every time the redirecting client calls
 SwapRegion, and never otherwise. A query is provided for the
 current value.


We could simply decouple these values and just have a 'swap count'
associated with the window which is used to mark pixmap contents when
'UnIdled'.


  * It doesn't make sense to have both the server and the compositor
scheduling stuff. I think you'd specify that once you swap
redirect a window, it gets simple:


Good point. The redirected swap event should contain all of the swap
parameters so that the Compositor can appropriately schedule the window
swap with the matching screen swap.


Actually, from the compositor's perspective, the window's front
buffer doesn't matter, but you probably need to keep it current
to make screenshot tools, etc, work correctly.


My swap redirect plan has that pixmap getting swapped at the same time
the screen pixmap is swapped, so things will look 'right'.


Is this better than a more collaborative approach where the server and
compositor together determine what pixmaps are idle?


Idleness is certainly a joint prospect, but I don't think it's
cooperative. Instead, a pixmap is idle 

Re: Initial DRI3000 protocol specs available

2013-03-07 Thread Keith Packard
Aaron Plattner aplatt...@nvidia.com writes:

 If I'm understanding this correctly, this requires the X server to 
 receive a notification from the GPU that the swap is complete so it can 
 send the SwapComplete event.  Is there any chance this could be done 
 with a Fence instead?  The application could specify the fence in the 
 Swap request, and then use that fence to block further rendering on the 
 GPU or wait on the fence from the CPU.

From what I've heard from application developers, there are two
different operations here:

 1) Throttle application rendering to avoid racing ahead of the screen

 2) Keeping the screen up-to-date with simple application changes, but
not any faster than frame rate.

The SwapComplete event is designed for this second operation. Imagine a
terminal emulator; it doesn't want to draw any faster than frame rate,
but any particular frame can be drawn in essentially zero time. This
application doesn't want to *block* at all, it wants to keep processing
external events, like getting terminal output and user input events. As
I understand it, a HW fence would cause the terminal emulator to stall
down in the driver, blocking processing of all of the events and
terminal output.

For simple application throttling, that wouldn't use these SwapComplete
events, rather it would use whatever existing mechanisms exist for
blocking rendering to limit the application frame rate.

 We typically try to do the scheduling on the GPU when possible because
 triggering an interrupt and waking up the X server burns power and
 adds latency for no good reason.

Right, we definitely don't want a high-performance application to block
waiting for an X event to arrive before it starts preparing the next
frame.

-- 
keith.pack...@intel.com



Re: Initial DRI3000 protocol specs available

2013-03-07 Thread Owen Taylor
On Thu, 2013-02-28 at 16:55 -0800, Keith Packard wrote:

  * It would be great if we could figure out a plan to get to the
point where the exact same application code is going to work for
proprietary and open source drivers. When you get down to the details
of swap this isn't close to the case currently.
 
 Agreed -- the problem here is that except for the nVidia closed drivers,
 everything else implicitly serializes device access through the kernel,
 providing a natural way to provide some defined order of
 operations. Failing that, I'd love to know what mechanisms *could* work
 with that design.

I don't think serialization is actually the big issue - although it's
annoying to deal with fences that are no-op for the open sources, it's
pretty well defined where you have to insert them, and because they are
no-op's for the open source drivers, there's little overhead.

Notification is more of an issue.

- Because swap is handled client side in some drivers, INTEL_swap_event
  is seen as awkward to implement.
 
 I'm not sure what could be done here, other than to have some way for
 the X server to get information about the swap and stuff it into the
 event stream, of course. It could be as simple as having the client
 stuff the event data to the X server itself.

It may be that a focus on redirection makes things easier - once the
compositor is involved, we can't get away from X server involvement. The
compositor is the main case where the X server can be completely
bypassed when swapping. And I'm less concerned about API divergence for
the compositor. (Not that I *invite* it...)

- There is divergence on some basic behaviors, e.g.,  whether
  glXSwapBuffers() + glFinish() waits for the swap to complete or not.
 
 glXSwapBuffers is pretty darn explicit in saying that it *does not* wait
 for the swap to complete, and glFinish only promises to synchronize the
 effects of rendering (contents of the frame buffer), not the actual
 swap operation itself. I'm not sure how we're supposed to respond when
 drivers ignore the spec and do their own thing?

I wish the GLX specification was clear enough so we actually knew who
was ignoring the spec and doing their own thing... ;-) The GLX
specification describes the swap operation as "the contents of the back
buffer become the contents of the front buffer" ... that seems like an
operation on the contents of the frame buffer.

But getting into the details here is a bit of a distraction - my goal is
to try to get us to convergence so we have only one API with well
defined behaviors.

- When rendering with a compositor, the X server is innocent of
  relevant information about timing and when the application should
  draw additional new frames. I've been working on handling this
  via client <=> compositor protocols
 
 With 'Swap', I think the X server should be involved as it is necessary
 to be able to 'idle' buffers which aren't in use after the
 compositor is done with them. I tried to outline a sketch of how that
 would work before.
 
  (https://mail.gnome.org/archives/wm-spec-list/2013-January/msg0.html)
 
  But this adds a lot of complexity to the minimal client, especially
  when a client wants to work both redirected and unredirected.
 
 Right, which is why I think fixing the X server to help here would be better.

If the goal is really to obsolete the proposed WM spec changes, rather
than just make existing GLX apps work better, then there's quite a bit
of stuff to get right. For example, from my perspective, the
OML_sync_control defined UST timestamps are completely insufficient -
it's not even defined what the units are for these timestamps!

I think it would be great if we could sit down and figure out what
the Linux-ecosystem API is for this in a way we could give to
application authors.
 
 Ideally, a GL application using simple GLX or EGL APIs would work
 'perfectly', without the need to use additional X-specific APIs. My hope
 with splitting DRI3000 into separate DRI3 and Swap extensions is to
 provide those same semantics to simple double-buffered 2D applications
 using core X and Render drawing as well, without requiring that they be
 rewritten to use GL, and while providing all of the same functionality
 over the network as local direct rendering applications get today.

The GLX APIs have some significant holes and poorly defined aspects. And
they don't properly take compositing into account, which is the norm
today. So providing those capabilities to 2D apps seems of limited
utility.

[...]

The SwapComplete event is specified as "This event is delivered
when a SwapRegion operation completes", but the specification
of SwapRegion itself is fuzzy enough that I'm unclear exactly what
that means.
 
- The description of SwapRegion needs to define "swap" since the
  operation has only a vague resemblance to the English-language
  meaning of "swap".
 
 Right, SwapRegion can 

Re: Initial DRI3000 protocol specs available

2013-03-07 Thread James Jones

On 03/07/2013 12:49 PM, Keith Packard wrote:


Aaron Plattner aplatt...@nvidia.com writes:


If I'm understanding this correctly, this requires the X server to
receive a notification from the GPU that the swap is complete so it can
send the SwapComplete event.  Is there any chance this could be done
with a Fence instead?  The application could specify the fence in the
Swap request, and then use that fence to block further rendering on the
GPU or wait on the fence from the CPU.


 From what I've heard from application developers, there are two
different operations here:

  1) Throttle application rendering to avoid racing ahead of the screen

  2) Keeping the screen up-to-date with simple application changes, but
 not any faster than frame rate.

The SwapComplete event is designed for this second operation. Imagine a
terminal emulator; it doesn't want to draw any faster than frame rate,
but any particular frame can be drawn in essentially zero time. This
application doesn't want to *block* at all, it wants to keep processing
external events, like getting terminal output and user input events. As
I understand it, a HW fence would cause the terminal emulator to stall
down in the driver, blocking processing of all of the events and
terminal output.


If you associate an X Fence Sync with your swap operation, the driver 
has the option to trigger it directly from the client command stream and 
wake up only the applications waiting for that fence.  The compositor, 
if using GL, could have received the swap notification event and already 
programmed the response compositing based on it before the swap even 
completes, and just insert a token to make the GPU or kernel wait for 
the fence to complete before executing the compositing rendering commands.
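
As a concrete illustration of that mechanism, a hedged sketch of the compositor
side using GL_EXT_x11_sync_object (this assumes the extension is advertised, a
GL context is current, and 'fence' is the XSyncFence associated with the
client's swap; how that fence gets communicated is exactly what is being
designed here):

#define GL_GLEXT_PROTOTYPES 1
#include <X11/Xlib.h>
#include <X11/extensions/sync.h>
#include <GL/glx.h>
#include <GL/glext.h>

static void composite_after_swap(XSyncFence fence)
{
    /* The extension entry point has to be resolved at runtime. */
    PFNGLIMPORTSYNCEXTPROC pglImportSyncEXT = (PFNGLIMPORTSYNCEXTPROC)
        glXGetProcAddress((const GLubyte *) "glImportSyncEXT");

    GLsync sync = pglImportSyncEXT(GL_SYNC_X11_FENCE_EXT, (GLintptr) fence, 0);

    /* Queue the wait on the GPU; the CPU does not block here. */
    glWaitSync(sync, 0, GL_TIMEOUT_IGNORED);

    /* ... issue the compositing draw calls; the GPU will not execute
     * them until the client's swap has signalled the fence ... */

    glDeleteSync(sync);
}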


Thanks,
-James


For simple application throttling, that wouldn't use these SwapComplete
events, rather it would use whatever existing mechanisms exist for
blocking rendering to limit the application frame rate.


We typically try to do the scheduling on the GPU when possible because
triggering an interrupt and waking up the X server burns power and
adds latency for no good reason.


Right, we definitely don't want a high-performance application to block
waiting for an X event to arrive before it starts preparing the next
frame.




Re: Initial DRI3000 protocol specs available

2013-03-07 Thread James Jones

On 03/07/2013 01:19 PM, Owen Taylor wrote:

On Thu, 2013-02-28 at 16:55 -0800, Keith Packard wrote:


* It would be great if we could figure out a plan to get to the
   point where the exact same application code is going to work for
   proprietary and open source drivers. When you get down to the details
   of swap this isn't close to the case currently.


Agreed -- the problem here is that except for the nVidia closed drivers,
everything else implicitly serializes device access through the kernel,
providing a natural way to provide some defined order of
operations. Failing that, I'd love to know what mechanisms *could* work
with that design.


Fence syncs.  Note the original fence sync + multi-buffer proposal 
solved basically the same problems you're trying to solve here, as well 
as everything Owen's WM spec updates do, but more generally, and with 
that, a little more implementation complexity.  It included proposals to 
make minor updates to GLX/EGL as well to tie them in with the newer 
model.  There didn't seem to be much interest outside of NVIDIA, so 
besides fence sync, the ideas are tabled internally ATM.



I don't think serialization is actually the big issue - although it's
annoying to deal with fences that are no-op for the open sources, it's
pretty well defined where you have to insert them, and because they are
no-op's for the open source drivers, there's little overhead.

Notification is more of an issue.


   - Because swap is handled client side in some drivers, INTEL_swap_event
 is seen as awkward to implement.


I'm not sure what could be done here, other than to have some way for
the X server to get information about the swap and stuff it into the
event stream, of course. It could be as simple as having the client
stuff the event data to the X server itself.


It may be that a focus on redirection makes things easier - once the
compositor is involved, we can't get away from X server involvement. The
compositor is the main case where the X server can be completely
bypassed when swapping. And I'm less concerned about API divergence for
the compositor. (Not that I *invite* it...)


   - There is divergence on some basic behaviors, e.g.,  whether
 glXSwapBuffers() + glFinish() waits for the swap to complete or not.


glXSwapBuffers is pretty darn explicit in saying that it *does not* wait
for the swap to complete, and glFinish only promises to synchronize the
effects of rendering (contents of the frame buffer), not the actual
swap operation itself. I'm not sure how we're supposed to respond when
drivers ignore the spec and do their own thing?


I wish the GLX specification was clear enough so we actually knew who
was ignoring the spec and doing their own thing... ;-) The GLX
specification describes the swap operation as "the contents of the back
buffer become the contents of the front buffer" ... that seems like an
operation on the contents of the frame buffer.


The GLX spec is plenty clear here.  It states:

"Subsequent OpenGL commands can be issued immediately, but will not be
executed until the buffer swapping has completed..."


And glFinish, besides the fact that it counts as a GL command, isn't 
defined as simply waiting until effects on the framebuffer land.  All 
rendering, client, and server (GL server, not X server) state side 
effects from previous operations must settle before it returns. 
SwapBuffers affects all three of those.  Same for fence syncs with 
condition GL_SYNC_GPU_COMMANDS_COMPLETE.


So if the drawable swapped is current to the thread calling swap 
buffers, and they issue any other GL commands afterwards, including 
glFinish, glFenceSync, etc., those commands can't complete until after 
the swap operation does.  For glFinish, that means it can't return.  For 
fence, the fence won't trigger until the swap finishes.  If 
implementations aren't behaving that way, it's a bug in the 
implementation.  Not to say our implementation doesn't have bugs, but 
AFAIK, we don't have that one.
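
To make the ordering concrete, a minimal sketch of the case being discussed
(dpy and win are hypothetical; the GLX context is assumed current on this
thread):

#include <GL/glx.h>

void swap_and_wait(Display *dpy, GLXDrawable win)
{
    glXSwapBuffers(dpy, win);   /* returns immediately; the swap is queued */
    glFinish();                 /* a GL command issued after the swap: per the
                                   reading above it cannot return until the
                                   swap operation has completed */
}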


Thanks,
-James


But getting into the details here is a bit of a distraction - my goal is
to try to get us to convergence so we have only one API with well
defined behaviors.


   - When rendering with a compositor, the X server is innocent of
 relevant information about timing and when the application should
 draw additional new frames. I've been working on handling this
 via client <=> compositor protocols


With 'Swap', I think the X server should be involved as it is necessary
to be able to 'idle' buffers which aren't in use after the
compositor is done with them. I tried to outline a sketch of how that
would work before.


(https://mail.gnome.org/archives/wm-spec-list/2013-January/msg0.html)

 But this adds a lot of complexity to the minimal client, especially
 when a client wants to work both redirected and unredirected.


Right, which is why I think fixing the X server to help here would be better.


If the goal is really to obsolete the proposed WM 

Re: how to reduce X building time?

2013-03-07 Thread Peter Hutterer
On Thu, Mar 07, 2013 at 06:32:23AM +0000, wolfking wrote:
 
 hi, all:  I'm building the X on my PowerPC platform and have a problem:
 Every time I use the build.sh script to build the X, build.sh
 restarts from the beginning to compile; it wastes a lot of time rebuilding the
 components that it built in the previous building process. I remember that in the
 previous version of build.sh, I could use the -r option to specify the component
 from which to begin building. But in the current version of build.sh, this option
 is ignored; instead it provides the -o option, which only compiles the specified
 component and ignores the following components. Can someone tell me how to
 reduce the building time?

use --autoresume instead of -r

Cheers,
   Peter




Re: Initial DRI3000 protocol specs available

2013-03-07 Thread Keith Packard
James Jones jajo...@nvidia.com writes:

 If you associate an X Fence Sync with your swap operation, the driver 
 has the option to trigger it directly from the client command stream and 
 wake up only the applications waiting for that fence.

Yeah, right now we're doing some hand-waving about serialization which
isn't entirely satisfying.

 The compositor, 
 if using GL, could have received the swap notification event and already 
 programmed the response compositing based on it before the swap even 
 completes, and just insert a token to make the GPU or kernel wait for 
 the fence to complete before executing the compositing rendering
 commands.

We just don't have these issues with the open source drivers, so it's
really hard for us to reason about this kind of asynchronous
operation. Access to the underlying buffers is mediated by the kernel
which ensures that as long as you serialize kernel calls, you will
serialize hw execution as well.

-- 
keith.pack...@intel.com



Re: Initial DRI3000 protocol specs available

2013-03-07 Thread Keith Packard
James Jones jajo...@nvidia.com writes:

 There didn't seem to be much interest outside of NVIDIA, so 
 besides fence sync, the ideas are tabled internally ATM.

This shouldn't surprise you though -- no-one else needs this kind of
synchronization, so it's really hard for anyone to evaluate it. And,
DRI2 offers 'sufficient' support for the various GL sync extensions.

So, what I'd like to know is if you think nVidia could take advantage of
the Swap extension so that nVidia 3D applications could do the whole
Swap redirect plan? If so, then I'm a lot more interested in figuring
out how we can get apps using the necessary fencing to actually make it
work right.

-- 
keith.pack...@intel.com



Re: [PATCH] fb: Rename wfbDestroyGlyphCache

2013-03-07 Thread Keith Packard
Aaron Plattner aplatt...@nvidia.com writes:

 Renaming this function was missed in commit
 9cbcb5bd6a5360a128d15b77a02d8d3351f74366, so both libfb.so and libwfb.so 
 define
 functions named fbDestroyGlyphCache.

Merged.
   103b77c..5047810  master -> master

-- 
keith.pack...@intel.com



[PULL] -next for 1.15

2013-03-07 Thread Peter Hutterer
Misc patches accumulated during the freeze window. Nothing outrageous and
no big features in here.

The following changes since commit 90642948cc78834d95f7a3bddaac7ff77b68ed7e:

  Merge remote-tracking branch 'jeremyhu/master' (2013-02-14 11:05:48 -0800)

are available in the git repository at:


  git://people.freedesktop.org/~whot/xserver next

for you to fetch changes up to 0f537da72d414ed84e3cd14e3bb7e08565136bd7:

  xkb: Fixes to LatchMods/LatchGroup (2013-03-06 11:22:38 +1000)


Alan Coopersmith (8):
  Handle failure to create counter in init_system_idle_counter
  Stop leaking overlayWin in PanoramiXCompositeGetOverlayWindow error paths
  Free keymap on error in Xephyr's hostx_load_keymap
  Make xf86ValidateModes actually copy clock range list to screen pointer
  Avoid NULL pointer dereference in xf86TokenToOptinfo if token not found
  Avoid memory leak on realloc failure in localRegisterFreeBoxCallback
  xf86XvMCScreenInit: Avoid leak if dixRegisterPrivateKey fails
  Avoid memory leak in ddc resort() if find_header() fails

Andreas Wettstein (1):
  xkb: Fixes to LatchMods/LatchGroup

Daniel Martin (2):
  ephyr: Add -resizeable option
  ephyr: Fix crash on 24bpp host framebuffer

Marcin Slusarz (1):
  os: use libunwind to generate backtraces

Peter Harris (1):
  xkb: Set nIndicators in XkbGetIndicatorMap

Peter Hutterer (13):
  randr: fix set but unused warnings
  xfree86: drop unused prevSIGIO
  fb: drop two unneeded shadowing variables
  Xext: renaming shadowing variable
  Xext: rename two shadowing variables
  xkb: remove unused variable 'names'
  xfree86: remove redundant declaration of inputInfo
  Merge branch 'master' of git+ssh://people.freedesktop.org/~alanc/xserver 
into next
  dix: FreeAllAtoms() on reset
  dix: only show the cursor if a window defines one (#58398)
  os: document pnprintf as sigsafe snprintf
  kdrive: fix set but not used warnings
  xephyr: fix set but not used warnings

 Xext/panoramiX.c   |  14 ++--
 Xext/sync.c|  13 ++--
 Xext/xvdisp.c  |   8 +--
 composite/compext.c|   7 +-
 configure.ac   |   9 ++-
 dix/main.c |   2 +
 dix/window.c   |   4 ++
 fb/fbpict.c|   8 +--
 hw/kdrive/ephyr/ephyr.c|   7 +-
 hw/kdrive/ephyr/ephyrinit.c|   6 ++
 hw/kdrive/ephyr/ephyrvideo.c   |  24 ---
 hw/kdrive/ephyr/hostx.c|  43 
 hw/kdrive/ephyr/hostx.h|   3 +-
 hw/kdrive/fbdev/fbdev.c|  10 ---
 hw/kdrive/linux/mouse.c|   6 --
 hw/kdrive/src/kinput.c |   8 ---
 hw/kdrive/src/kxv.c|   2 -
 hw/xfree86/common/xf86Events.c |   1 -
 hw/xfree86/common/xf86Mode.c   |  17 ++---
 hw/xfree86/common/xf86Option.c |   2 +-
 hw/xfree86/common/xf86fbman.c  |  12 ++--
 hw/xfree86/common/xf86xvmc.c   |   4 +-
 hw/xfree86/ddc/ddc.c   |   7 +-
 hw/xfree86/ramdac/xf86Cursor.c |   1 -
 include/dix-config.h.in|   3 +
 include/input.h|   5 ++
 os/Makefile.am |   5 ++
 os/backtrace.c |  75 +
 os/log.c   |   4 ++
 randr/rrcrtc.c |   9 +--
 xfixes/cursor.c|  10 +--
 xkb/xkb.c  |   3 +-
 xkb/xkbActions.c   | 149 ++---
 33 files changed, 281 insertions(+), 200 deletions(-)


Bug#702480: xserver-xorg-video-radeon: GTK fonts are invisible on PowerPC

2013-03-07 Thread Michel Dänzer
On Mit, 2013-03-06 at 19:04 -0800, Dan DeVoto wrote: 
 Package: xserver-xorg-video-radeon
 Version: 1:6.14.4-7
 Severity: important

FWIW I think this deserves even higher severity, it should definitely be
fixed for wheezy.


 Dear Maintainer,
 
 On new Wheezy installs or after an upgrade, PowerPC Macs using the radeon
 driver have GTK fonts render as very light purple making them invisible 
 against
 gray backgrounds.  Other users have discussed this problem in the following
 threads:
 
 http://lists.debian.org/debian-powerpc/2013/01/msg00028.html
 
 http://www.mintppc.org/forums/viewtopic.php?f=15&t=1205&sid=19248fa6623f340007d9c86ea8f37611
 
 A workaround is to edit xorg.conf, changing AccelMethod to XAA or keeping the
 EXA default but adding the options RenderAcceleration False and
 MigrationHeuristic greedy.
 
 This bug does not affect fonts in a console, xterm, qt apps, or Openbox 
 desktop
 menus, only in GTK apps.

I think the attached patch should fix this. Can you or anyone else on
the debian-powerpc list test it?


   Option  "RenderAcceleration"  "False"
   Option  "MigrationHeuristic"  "greedy"
[...] 
 [45.807] (WW) RADEON(0): Option "RenderAcceleration" is not used

BTW, if you spelled RenderAccel correctly, Option MigrationHeuristic
shouldn't be necessary to work around the problem. In theory the latter
won't avoid the problem in some cases anyway.
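
For reference, a minimal xorg.conf sketch of that workaround with the correctly
spelled option name (the Identifier value is illustrative):

Section "Device"
    Identifier "Radeon"
    Driver     "radeon"
    Option     "RenderAccel" "False"
EndSection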


-- 
Earthling Michel Dänzer   |   http://www.amd.com
Libre software enthusiast |  Debian, X and DRI developer
From f9c21f46e87fb2bf5c6489b2365fc2bba88fd336 Mon Sep 17 00:00:00 2001
From: Michel Dänzer michel.daen...@amd.com
Date: Thu, 7 Mar 2013 09:59:29 +0100
Subject: [PATCH] UMS: Swap bytes when uploading to pixmap for solid picture on
 big endian host
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit


Signed-off-by: Michel Dänzer michel.daen...@amd.com
---
 src/radeon_exa_shared.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/src/radeon_exa_shared.c b/src/radeon_exa_shared.c
index 7af8a52..48c8fcf 100644
--- a/src/radeon_exa_shared.c
+++ b/src/radeon_exa_shared.c
@@ -40,6 +40,7 @@
 #endif
 #include "radeon_macros.h"
 #include "radeon_probe.h"
+#include "radeon_reg.h"
 #include "radeon_version.h"
 #include "radeon_vbo.h"
 
@@ -159,7 +160,12 @@ PixmapPtr RADEONSolidPixmap(ScreenPtr pScreen, uint32_t solid)
 
 /* XXX: Big hammer... */
 info->accel_state->exa->WaitMarker(pScreen, info->accel_state->exaSyncMarker);
+#if X_BYTE_ORDER == X_BIG_ENDIAN
+RADEONCopySwap(info->FB + exaGetPixmapOffset(pPix), (uint8_t*)&solid, 4,
+		   RADEON_HOST_DATA_SWAP_32BIT);
+#else
 memcpy(info->FB + exaGetPixmapOffset(pPix), &solid, 4);
+#endif
 
 return pPix;
 }
-- 
1.8.2.rc1



[Bug 57649] xf86-video-ati: Acceleration of solid pictures broken on big endian

2013-03-07 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=57649

Michel Dänzer mic...@daenzer.net changed:

            What                 |Removed |Added
 -----------------------------------------------------
  Attachment #70730 is obsolete  |0       |1

--- Comment #3 from Michel Dänzer mic...@daenzer.net ---
Created attachment 76114
  --> https://bugs.freedesktop.org/attachment.cgi?id=76114&action=edit
UMS: Swap bytes when uploading to pixmap for solid picture on  big endian host

This patch should address the issues from my previous comments. Please test at
depth 24 and 16.



Bug#702480: xserver-xorg-video-radeon: GTK fonts are invisible on PowerPC

2013-03-07 Thread Michel Dänzer
forwarded 702480 https://bugs.freedesktop.org/show_bug.cgi?id=57649
kthxbye

On Don, 2013-03-07 at 10:07 +0100, Michel Dänzer wrote: 
 On Mit, 2013-03-06 at 19:04 -0800, Dan DeVoto wrote: 
  
  On new Wheezy installs or after an upgrade, PowerPC Macs using the radeon
  driver have GTK fonts render as very light purple making them invisible 
  against
  gray backgrounds.  Other users have discussed this problem in the following
  threads:
  
  http://lists.debian.org/debian-powerpc/2013/01/msg00028.html
  
  http://www.mintppc.org/forums/viewtopic.php?f=15&t=1205&sid=19248fa6623f340007d9c86ea8f37611
  
  A workaround is to edit xorg.conf, changing AccelMethod to XAA or keeping 
  the
  EXA default but adding the options RenderAcceleration False and
  MigrationHeuristic greedy.
  
  This bug does not affect fonts in a console, xterm, qt apps, or Openbox 
  desktop
  menus, only in GTK apps.
 
 I think the attached patch should fix this. Can you or anyone else on
 the debian-powerpc list test it?

I attached a better patch to the upstream bug report above. Please test
when running X at depth 16 as well as 24.


-- 
Earthling Michel Dänzer   |   http://www.amd.com
Libre software enthusiast |  Debian, X and DRI developer



[Bug 57649] xf86-video-ati: Acceleration of solid pictures broken on big endian

2013-03-07 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=57649

--- Comment #4 from Martin Pieuchot m...@openbsd.org ---
Your diff correctly fixes the problem I was seeing. I just tested it on a
PowerBook G4 with a rv350 at depth 24 *and* 16.

Thank you for taking the time to look into this issue, I really appreciate it.



[Bug 57649] xf86-video-ati: Acceleration of solid pictures broken on big endian

2013-03-07 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=57649

--- Comment #5 from Alex Deucher ag...@yahoo.com ---
Might be worth applying to the ums xf86-video-ati git branch.



[Bug 57649] xf86-video-ati: Acceleration of solid pictures broken on big endian

2013-03-07 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=57649

Michel Dänzer mic...@daenzer.net changed:

            What       |Removed |Added
 ---------------------------------------------
  Status               |NEW     |RESOLVED
  Resolution           |---     |FIXED

--- Comment #6 from Michel Dänzer mic...@daenzer.net ---
Fixed on the ums branch, thanks for testing!

commit 96ddc91bfa07d91b412afcf90e13523fe9efaf08
Author: Michel Dänzer michel.daen...@amd.com
Date:   Thu Mar 7 09:59:29 2013 +0100

    UMS: Swap bytes when uploading to pixmap for solid picture on big endian
host



[Bug 61979] New: backlight adjustment doesn't work on HP Pavilion m6-1035dx

2013-03-07 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=61979

  Priority: medium
Bug ID: 61979
  Assignee: xorg-driver-ati@lists.x.org
   Summary: backlight adjustment doesn't work on HP Pavilion
m6-1035dx
QA Contact: xorg-t...@lists.x.org
  Severity: normal
Classification: Unclassified
OS: Linux (All)
  Reporter: s_chriscoll...@hotmail.com
  Hardware: x86-64 (AMD64)
Status: NEW
   Version: unspecified
 Component: Driver/Radeon
   Product: xorg

I am unable to adjust the backlight on my laptop when using the open-source
radeon driver, but backlight adjustments do work correctly when using fglrx.
xbacklight reports that "No outputs have backlight property".

Now, if I recall correctly, the backlight used to work before I updated the
BIOS on this laptop. Prior to the BIOS update, the backlight setting worked in
the radeon driver, but was flaky in fglrx. After the update, the backlight
setting works correctly in fglrx but not radeon.

Please let me know what information I can provide to help resolve this issue.

Here is my system info:

OS: Kubuntu 12.04 amd64 w/ KDE SC 4.10.1
PC: HP Pavilion m6-1035dx
CPU/GPU: AMD A10-4600M APU with Radeon(tm) HD Graphics
RAM: 6GB DDR3 800 MHz
Linux Kernel: 3.5.0-25-generic
Screen Resolution: 1366 x 768
Xserver: 1.12.3+git20120709+server-1.12-branch.60e0d205



[Bug 61979] backlight adjustment doesn't work on HP Pavilion m6-1035dx

2013-03-07 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=61979

--- Comment #1 from Alex Deucher ag...@yahoo.com ---
You are probably using the acpi backlight interface, which seems to have been
broken by the BIOS update.  Assuming your laptop uses the on-GPU backlight
controller, you should be able to use the standard kernel backlight interface
(/sys/class/backlight/) to control it if you have kernel 3.7 or newer.



[Bug 61979] backlight adjustment doesn't work on HP Pavilion m6-1035dx

2013-03-07 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=61979

Alex Deucher ag...@yahoo.com changed:

            What       |Removed                     |Added
 ------------------------------------------------------------------------------
  Assignee             |xorg-driver-ati@lists.x.org |dri-devel@lists.freedesktop.org
  QA Contact           |xorg-t...@lists.x.org       |
  Product              |xorg                        |DRI
  Component            |Driver/Radeon               |DRM/Radeon
