Re: [PATCH mesa 4/7] Add the EGL_MESA_configless_context extension

2014-05-03 Thread John Kåre Alsaker
This commit results in garbage output for r600g when building with
OpenGL (not ES) as enabled by my patches:
https://github.com/Zoxc/weston/commits/gl-rebase. (--enable-opengl on
configure)

It works on i965 so I was wondering how well tested this is on Gallium drivers?

On Fri, Mar 7, 2014 at 7:05 PM, Neil Roberts n...@linux.intel.com wrote:
 This extension provides a way for an application to render to multiple
 surfaces with different buffer formats without having to use multiple
 contexts. An EGLContext can be created without an EGLConfig by passing
 EGL_NO_CONFIG_MESA. In that case there are no restrictions on the surfaces
 that can be used with the context apart from that they must be using the same
 EGLDisplay.

 _mesa_initialize_context can now take a NULL gl_config which will mark the
 context as ‘configless’. It will memset the visual to zero in that case.
 Previously the i965 and i915 drivers were explicitly creating a zeroed visual
 whenever 0 is passed for the EGLConfig. Mesa needs to be aware that the
 context is configless because it affects the initial value to use for
 glDrawBuffer. The first time the context is bound it will set the initial
 value for configless contexts depending on whether the framebuffer used is
 double-buffered.
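
 As a rough illustration (not part of the patch), using the extension from a
 client looks something like the sketch below; the ES 2 attribute list is just
 an assumption for the example.

 /* Sketch: one configless context used with surfaces of different formats.
  * Assumes dpy is an initialized EGLDisplay advertising
  * EGL_MESA_configless_context and that the surfaces already exist. */
 #include <EGL/egl.h>
 #include <EGL/eglext.h>

 #ifndef EGL_NO_CONFIG_MESA
 #define EGL_NO_CONFIG_MESA ((EGLConfig)0)
 #endif

 static EGLContext
 create_configless_context(EGLDisplay dpy)
 {
         static const EGLint attribs[] = {
                 EGL_CONTEXT_CLIENT_VERSION, 2,
                 EGL_NONE
         };

         /* No EGLConfig is passed, so the context is not tied to a format. */
         return eglCreateContext(dpy, EGL_NO_CONFIG_MESA,
                                 EGL_NO_CONTEXT, attribs);
 }

 /* The same context can then be made current with either surface:
  *   eglMakeCurrent(dpy, argb_surface, argb_surface, ctx);
  *   eglMakeCurrent(dpy, xrgb_surface, xrgb_surface, ctx);
  */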
 ---
  docs/specs/MESA_configless_context.spec   | 120 ++
  include/EGL/eglext.h  |   5 ++
  src/egl/drivers/dri2/egl_dri2.c   |   1 +
  src/egl/main/eglapi.c |   2 +-
  src/egl/main/eglcontext.c |  20 -
  src/egl/main/egldisplay.h |   1 +
  src/egl/main/eglmisc.c|   1 +
  src/mesa/drivers/dri/i915/intel_context.c |   6 --
  src/mesa/drivers/dri/i965/brw_context.c   |   6 --
  src/mesa/main/context.c   |  82 +++-
  src/mesa/main/mtypes.h|   6 ++
  11 files changed, 213 insertions(+), 37 deletions(-)
  create mode 100644 docs/specs/MESA_configless_context.spec

 diff --git a/docs/specs/MESA_configless_context.spec b/docs/specs/MESA_configless_context.spec
 new file mode 100644
 index 000..8bed90d
 --- /dev/null
 +++ b/docs/specs/MESA_configless_context.spec
 @@ -0,0 +1,120 @@
 +Name
 +
 +MESA_configless_context
 +
 +Name Strings
 +
 +EGL_MESA_configless_context
 +
 +Contact
 +
 +Neil Roberts neil.s.robe...@intel.com
 +
 +Status
 +
 +Proposal
 +
 +Version
 +
 +Version 1, February 28, 2014
 +
 +Number
 +
 +EGL Extension #not assigned
 +
 +Dependencies
 +
 +Requires EGL 1.4 or later.  This extension is written against the
 +wording of the EGL 1.4 specification.
 +
 +Overview
 +
 +This extension provides a means to use a single context to render to
 +multiple surfaces which have different EGLConfigs. Without this extension
 +the EGLConfig for every surface used by the context must be compatible
 +with the one used by the context. The only way to render to surfaces with
 +different formats would be to create multiple contexts but this is
 +inefficient with modern GPUs where this restriction is unnecessary.
 +
 +IP Status
 +
 +Open-source; freely implementable.
 +
 +New Procedures and Functions
 +
 +None.
 +
 +New Tokens
 +
 +Accepted as config in eglCreateContext
 +
 +EGL_NO_CONFIG_MESA  ((EGLConfig)0)
 +
 +Additions to the EGL Specification section 2.2 Rendering Contexts and Drawing
 +Surfaces
 +
 +Add the following to the 3rd paragraph:
 +
 +   EGLContexts can also optionally be created with respect to an EGLConfig
 +depending on the parameters used at creation time. If a config is provided
 +then additional restrictions apply on what surfaces can be used with the
 +context.
 +
 +Replace the last sentence of the 6th paragraph with:
 +
 +   In order for a context to be compatible with a surface they both must have
 +been created with respect to the same EGLDisplay. If the context was
 +created without respect to an EGLConfig then there are no further
 +constraints. Otherwise they are only compatible if:
 +
 +Remove the last bullet point in the list of constraints.
 +
 +Additions to the EGL Specification section 3.7.1 Creating Rendering Contexts
 +
 +Replace the paragraph starting "If config is not a valid EGLConfig..."
 +with
 +
 +   The config argument can either be a valid EGLConfig or EGL_NO_CONFIG_MESA.
 +If it is neither of these then an EGL_BAD_CONFIG error is generated. If a
 +valid config is passed then the error will also be generated if the config
 +does not support the requested client API (this includes requesting
 +creation of an OpenGL ES 1.x context when the EGL_RENDERABLE_TYPE
 +attribute of config does not contain EGL_OPENGL_ES_BIT, or creation of an
 +OpenGL ES 2.x context when the attribute does not contain
 +EGL_OPENGL_ES2_BIT).
 +
 +Passing EGL_NO_CONFIG_MESA will create a 

Re: Deep Color support

2014-04-27 Thread John Kåre Alsaker
I implemented support for ABGR16161616 framebuffers in mesa/wl_drm.
My patch has bit-rotted a bit now, but it gives you an idea about what to do:
https://github.com/Zoxc/mesa/commit/73f39f1366287bab02c993cb3537980e89b3cdca

My motivation for this was to have clients render with linear gamma.

One thing to note is that EGL clients will usually pick the highest
color depth by default.
We'd likely want to prevent this somehow, even though it goes against the EGL spec.
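
A sketch of one way a client could steer around that (the attribute list is
only an example): eglChooseConfig sorts matching configs with larger color
depths first, so an explicit post-filter on the exact channel size keeps the
client at 8 bits per channel.

#include <EGL/egl.h>

static EGLConfig
pick_8bpc_config(EGLDisplay dpy)
{
        static const EGLint attribs[] = {
                EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
                EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
                EGL_NONE
        };
        EGLConfig configs[64];
        EGLint count = 0, i, red;

        if (!eglChooseConfig(dpy, attribs, configs, 64, &count))
                return 0;

        for (i = 0; i < count; i++) {
                eglGetConfigAttrib(dpy, configs[i], EGL_RED_SIZE, &red);
                if (red == 8)   /* skip deeper configs sorted ahead of it */
                        return configs[i];
        }
        return count ? configs[0] : 0;
}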

On Sun, Apr 27, 2014 at 11:52 AM, Wolfgang Draxinger
wdraxinger.maill...@draxit.de wrote:
 On Sun, 27 Apr 2014 11:35:37 +0200
 Wolfgang Draxinger
 wdraxinger.maill...@draxit.de wrote:

  Btw. weren't FBConfigs a GLX thing?

 No, XRender actually; GLX just piggybacks on that.

 Okay, that needs some explanation. While all functions that have
 FBConfig in their name are part of GLX, you actually have to use some
 XRender functions to select the desired pixel format from the subset of
 FBConfigs offered to you by GLX.

 In my own OpenGL/GLX programs I use the following code for FBConfig
 selection:

 fbconfigs = glXChooseFBConfig(Xdisplay, Xscreen, VisData,
                               &numfbconfigs);

 int need_alpha = 0;
 for(int i = 0; VisData[i] != None; i += 2) {
     if( VisData[i] == GLX_ALPHA_SIZE
         && VisData[i+1] > 0 ) {
         need_alpha = 1;
         break;
     }
 }

 fbconfig = 0;
 if( need_alpha ) {
     for(int i = 0; i < numfbconfigs; i++) {
         visual = (XVisualInfo*)
             glXGetVisualFromFBConfig(Xdisplay, fbconfigs[i]);

         if(!visual)
             continue;

         pict_format =
             XRenderFindVisualFormat(Xdisplay, visual->visual);

         if(!pict_format)
             continue;

         fbconfig = fbconfigs[i];
         if(pict_format->direct.alphaMask > 0) {
             break;
         }
     }
 }
 else {
     fbconfig = fbconfigs[0];
 }

 if(!fbconfig) {
     fatalError("No matching FB config found");
 }

 It took me quite a while to figure out that glXChooseFBConfig returns a
 superset of XRender visuals that you have to comb through in a second
 search to select the subset with the desired properties (in this case
 an alpha channel on the X11 backing store).


 Regards,

 Wolfgang


 ___
 wayland-devel mailing list
 wayland-devel@lists.freedesktop.org
 http://lists.freedesktop.org/mailman/listinfo/wayland-devel
___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


Re: Deep Color support

2014-04-27 Thread John Kåre Alsaker
On Sun, Apr 27, 2014 at 12:30 PM, Wolfgang Draxinger
wdraxinger.maill...@draxit.de wrote:
 On Sun, 27 Apr 2014 12:11:39 +0200
 John Kåre Alsaker
 john.kare.alsa...@gmail.com wrote:

 I implemented support for ABGR16161616 framebuffers in mesa/wl_drm.
 My patch has bit-rotted a bit now, but it gives you an idea about
 what to do:
 https://github.com/Zoxc/mesa/commit/73f39f1366287bab02c993cb3537980e89b3cdca

 My motivation for this was to have clients render with linear gamma.

 Well, unless there's a nonlinear scanout LUT, so far with current
 technology and software the values in the backing store go to the
 display linearly.

 Not that this was particularly sane; the sane thing would be to
 associate a color profile with the backing store and have the graphics
 system apply a color transform close to scanout – a task trivial to
 implement in a compositor.
I have patches which will do the linear to sRGB gamma conversion for
the gl-renderer in weston, which could be trivially extended to use a
3D LUT for a full color conversion. It will also do the sRGB to linear
gamma conversion for clients not rendering with linear gamma, possibly
using an EGL extension I wrote to do this on hardware.

I'd also like to see support for EGL 1.5 (which includes sRGB
framebuffers) in clients so they
can render in linear gamma without the cost of higher bit depths.
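
For reference, the conversions in question are the standard sRGB transfer
functions (IEC 61966-2-1); this is a generic sketch, not the actual weston
shader code:

#include <math.h>

/* Decode an sRGB-encoded value to linear light. */
static float
srgb_to_linear(float s)
{
        return s <= 0.04045f ? s / 12.92f
                             : powf((s + 0.055f) / 1.055f, 2.4f);
}

/* Encode a linear-light value back to sRGB. */
static float
linear_to_srgb(float l)
{
        return l <= 0.0031308f ? l * 12.92f
                               : 1.055f * powf(l, 1.0f / 2.4f) - 0.055f;
}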


 I recently double checked with an oscilloscope, that all GPU/driver
 combinations I own (Intel/intel, NVidia/nvidia, NVidia/nouveau,
 AMD/fglrx, AMD/radeon), are well behaved, and they are. But of
 course you can see the staircase with only 8 bits per channel.

 One of the scope traces ended up in the StackOverflow answer
 http://stackoverflow.com/a/23030225/524368
Since VGA is a thing of the past, I'm more interested in what the display does.
I wonder if there's a tool to measure this with my Spyder3, even though
it's a moot point after calibration.


 One thing to note is that EGL clients usually will pick the highest
 color depth by default.

 Which is a very sane rationale.

 We'd likely want to prevent this somehow, even though it goes against
 EGL spec.

 Why? Today we're no longer constrained by lack of graphics memory. The
 only valid argument would be the performance hit caused by increased
 memory bandwidth load. However you're normally not limited by
 framebuffer fill bandwidth, but by texture fetch bandwidth (and modern
 GPUs' memory controllers are full-duplex capable). And in
 render-to-texture situations you normally explicitly select the desired
 internal format for the textures, as the task at hand demands it.

We still are limited by both graphics memory and bandwidth.
A fullscreen 64bpp 4K client will use about 128 MB for the framebuffer.
On a high-DPI variant of my monitor that would be 225 MB. It would not take
many clients to fill my 1GB of VRAM.
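
A back-of-the-envelope check of those figures, assuming 8 bytes per pixel,
double buffering and a 5120x2880 panel for the high-DPI case (the exact
resolutions are an assumption):

#include <stdio.h>

int main(void)
{
        const double mib = 1024.0 * 1024.0;

        printf("4K:       %.0f MiB\n", 3840.0 * 2160 * 8 * 2 / mib); /* ~127 */
        printf("high-DPI: %.0f MiB\n", 5120.0 * 2880 * 8 * 2 / mib); /* ~225 */
        return 0;
}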

Do note that with wayland, what clients really do is render to a texture,
which then gets sampled by the compositor. So even if the GPU could hide
the framebuffer writes, sampling performance would likely be worse.

Another point is that clients usually will supply 32 bpp data, so the higher
bit depth will probably be wasted.



 Regards,

 Wolfgang


 ___
 wayland-devel mailing list
 wayland-devel@lists.freedesktop.org
 http://lists.freedesktop.org/mailman/listinfo/wayland-devel
___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


[PATCHv2 weston] gl-renderer: Remove gl_renderer_interface from gl-renderer.h

2014-04-10 Thread John Kåre Alsaker
The rationale here is that this line would create an instance of
gl_renderer_interface in every compilation unit that included
gl-renderer.h. This is not necessary, and it can actually be harmful by
masking the real exported gl_renderer_interface symbol, if you added
another compilation unit to gl-renderer.so, causing a runtime failure in
loading it.

gl-renderer.c already creates the exported symbol.
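
For illustration only (made-up names, not the weston sources), the usual
pattern is to declare the symbol with extern in the header and define it in
exactly one .c file:

/* iface.h */
struct iface {
        void (*hello)(void);
};
extern struct iface the_iface;  /* declaration only, no storage */

/* iface.c */
struct iface the_iface = { 0 }; /* the single real, exported definition */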
---
 src/gl-renderer.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/src/gl-renderer.h b/src/gl-renderer.h
index db42f6c..6cd5f54 100644
--- a/src/gl-renderer.h
+++ b/src/gl-renderer.h
@@ -101,4 +101,3 @@ struct gl_renderer_interface {
void (*print_egl_error_state)(void);
 };
 
-struct gl_renderer_interface gl_renderer_interface;
-- 
1.9.1

___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


[PATCH] gl-renderer: Remove gl_renderer_interface from gl-renderer.h

2014-04-08 Thread John Kåre Alsaker
---
 src/gl-renderer.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/src/gl-renderer.h b/src/gl-renderer.h
index db42f6c..6cd5f54 100644
--- a/src/gl-renderer.h
+++ b/src/gl-renderer.h
@@ -101,4 +101,3 @@ struct gl_renderer_interface {
void (*print_egl_error_state)(void);
 };
 
-struct gl_renderer_interface gl_renderer_interface;
-- 
1.9.1

___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


Re: [PATCH] gl-renderer: Add flexible shader generation.

2014-04-08 Thread John Kåre Alsaker
I need to add lots of shader variants to do color conversions.

On Tue, Apr 8, 2014 at 9:56 PM, Jasper St. Pierre jstpie...@mecheye.net wrote:
 The code looks good, but I have one question: why? What's the motivation for
 adding a generic shader generator? Does this fix any bugs? Add new hardware
 support? Do you have new features lined up that rely on this?

 I always found that debugging generated shaders was fairly difficult and
 annoying, and while the old code isn't great, I find it's a lot simpler to
 understand.


 On Tue, Apr 8, 2014 at 3:26 PM, John Kåre Alsaker
 john.kare.alsa...@gmail.com wrote:

 This adds a more flexible way of generating shaders. It generates all valid
 combinations of different input, conversion and output pipelines, which
 can easily be extended with more if desired. It adds gl-internal.h for
 shared definitions between the gl-renderer.c and gl-shaders.c.
 ---
  Makefile.am   |   2 +
  src/gl-internal.h | 238 ++
  src/gl-renderer.c | 512 +-
  src/gl-shaders.c  | 596 ++
  4 files changed, 881 insertions(+), 467 deletions(-)
  create mode 100644 src/gl-internal.h
  create mode 100644 src/gl-shaders.c

 diff --git a/Makefile.am b/Makefile.am
 index a247c3d..8952d1a 100644
 --- a/Makefile.am
 +++ b/Makefile.am
 @@ -158,6 +158,8 @@ gl_renderer_la_CFLAGS = \
 $(EGL_CFLAGS)   \
 $(GCC_CFLAGS)
  gl_renderer_la_SOURCES =   \
 +   src/gl-shaders.c\
 +   src/gl-internal.h   \
 src/gl-renderer.h   \
 src/gl-renderer.c   \
 src/vertex-clipping.c   \
 diff --git a/src/gl-internal.h b/src/gl-internal.h
 new file mode 100644
 index 000..15f8c8a
 --- /dev/null
 +++ b/src/gl-internal.h
 @@ -0,0 +1,238 @@
 +/*
 + * Copyright © 2012 Intel Corporation
 + * Copyright © 2012 John Kåre Alsaker
 + *
 + * Permission to use, copy, modify, distribute, and sell this software
 and
 + * its documentation for any purpose is hereby granted without fee,
 provided
 + * that the above copyright notice appear in all copies and that both
 that
 + * copyright notice and this permission notice appear in supporting
 + * documentation, and that the name of the copyright holders not be used
 in
 + * advertising or publicity pertaining to distribution of the software
 + * without specific, written prior permission.  The copyright holders
 make
 + * no representations about the suitability of this software for any
 + * purpose.  It is provided as is without express or implied warranty.
 + *
 + * THE COPYRIGHT HOLDERS DISCLAIM ALL WARRANTIES WITH REGARD TO THIS
 + * SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
 + * FITNESS, IN NO EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY
 + * SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER
 + * RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF
 + * CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
 + * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
 + */
 +
 +#ifndef _GL_INTERNAL_H_
 +#define _GL_INTERNAL_H_
 +
 +#include "config.h"
 +
 +#include <GLES2/gl2.h>
 +#include <GLES2/gl2ext.h>
 +
 +#include <stdlib.h>
 +#include <string.h>
 +#include <ctype.h>
 +#include <float.h>
 +#include <assert.h>
 +#include <linux/input.h>
 +
 +#include "gl-renderer.h"
 +#include "vertex-clipping.h"
 +
 +#include <EGL/eglext.h>
 +#include "weston-egl-ext.h"
 +
 +#define MAX_PLANES 3
 +
 +enum gl_shader_attribute {
 +   ATTRIBUTE_INPUT,
 +   ATTRIBUTE_OUTPUT,
 +   ATTRIBUTE_CONVERSION,
 +   ATTRIBUTE_COUNT
 +};
 +
 +enum gl_conversion_attribute {
 +   CONVERSION_NONE,
 +   CONVERSION_COUNT
 +};
 +
 +enum gl_output_attribute {
 +   OUTPUT_BLEND,
 +   OUTPUT_COUNT
 +};
 +
 +enum gl_input_attribute {
 +   INPUT_RGBX,
 +   INPUT_RGBA,
 +   INPUT_EGL_EXTERNAL,
 +   INPUT_Y_UV,
 +   INPUT_Y_U_V,
 +   INPUT_Y_XUXV,
 +   INPUT_SOLID,
 +   INPUT_COUNT
 +};
 +
 +struct gl_shader {
 +   GLuint program;
 +   GLint projection_uniform;
 +   GLint color_uniform;
 +   GLint alpha_uniform;
 +};
 +
 +#define BUFFER_DAMAGE_COUNT 2
 +
 +enum gl_border_status {
 +   BORDER_STATUS_CLEAN = 0,
 +   BORDER_TOP_DIRTY = 1 << GL_RENDERER_BORDER_TOP,
 +   BORDER_LEFT_DIRTY = 1 << GL_RENDERER_BORDER_LEFT,
 +   BORDER_RIGHT_DIRTY = 1 << GL_RENDERER_BORDER_RIGHT,
 +   BORDER_BOTTOM_DIRTY = 1 << GL_RENDERER_BORDER_BOTTOM,
 +   BORDER_ALL_DIRTY = 0xf,
 +   BORDER_SIZE_CHANGED = 0x10
 +};
 +
 +struct gl_border_image {
 +   GLuint tex;
 +   int32_t width, height;
 +   int32_t tex_width;
 +   void *data;
 +};
 +
 +struct gl_output_state {
 +   EGLSurface egl_surface;
 +   pixman_region32_t buffer_damage

Re: [RFC PATCH 1/6] Add colorcorrection protocol

2014-04-03 Thread John Kåre Alsaker
On Thu, Apr 3, 2014 at 11:12 AM, Niels Ole Salscheider
niels_...@salscheider-online.de wrote:
 Zoxc did not like the way I handle the uncorrected case because it is a bit
 ugly and breaks blending in the uncorrected area. Instead of using a mask to
 opt-out from color management, he suggests to just attach the output color
 space to the surface so that it is converted from output to blending and again
 to the output color space. This however makes it necessary for the client to
 know its output and might cause a loss of precision (which might not be too
 bad in the 16 bit LUT case). What do you think?
I'm not sure there's a use case for having an uncorrected area.
Calibration will require a separate API to reset the GPU LUT. Also my
plan was to tag a surface as uncorrected and let the compositor convert
it from the output color space. The
compositor could also
just copy the content without conversion when the surface is
unobscured, removing
rounding errors, but as I said I don't see a use case for it.



 Unfortunately, I was not aware that Zoxc was also working on color
 management before writing these patches. I do not know what his plans are for
 his code but he seems to have some useful patches...
My plot is to coerce Kristian to merge them at some point.


 Regards,

 Ole
 ___
 wayland-devel mailing list
 wayland-devel@lists.freedesktop.org
 http://lists.freedesktop.org/mailman/listinfo/wayland-devel

Since you all seem keen to discuss a color correction protocol, here's
my WIP one:
https://github.com/Zoxc/weston/blob/cms/protocol/cms.xml

In particular I'm wondering what the meaning should be when one or
both of compositing_color_space
and color_space is pass-through/ignorant in
wl_cms_output_info.output_color_space.
We might want to indicate that:
- The compositor doesn't do color correction and the output has a
profile (so clients can do color correction themselves)
- The compositor doesn't do color correction and the output doesn't
have a profile (we're screwed, should clients assume sRGB?)
- The compositor does color correction and the output doesn't have a
profile, but we have a compositing space (should the compositor assume
the output is sRGB here and give us a sRGB output color space in this
case?)
- The compositor does color correction and the output does have a
profile and we have a compositing space
___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


Re: [PATCH] protocol: replace the usage of selection with clipboard

2014-03-13 Thread John Kåre Alsaker
On Wed, Mar 12, 2014 at 7:32 PM, Bill Spitzak spit...@gmail.com wrote:
 Isn't Wayland differentiating between the selection and the clipboard?
It is not doing that at the moment. Only explicit clipboard actions
are supposed to be used.


 The selection is changed when the user selects an object. The clipboard is
 changed only when the user does a cut or copy operation.
There is no such thing as a selection in the Wayland protocol, even
though things are named 'selection'. It actually just refers to the
clipboard (or so I'm told).


 There is also drag & drop. Though in most cases this can be the same as the
 selection, I think there were some pathological examples showing that it has
 to be its own piece of data.
Drag and drop is separate from the clipboard in the protocol.


 One of the patches changed "selection" to "data source", maybe "data source"
 is a better name?
That was because it was referring to a data source being cancelled,
not a clipboard data source specifically. It might be used for drag
and drop too.


 I also am trying to reattach the patch here because for some reason it did
 not show up as text in my email:

 From abe49cf93128063d55cf881658b30b99d892ae53 Mon Sep 17 00:00:00 2001
 From: =?UTF-8?q?John=20K=C3=A5re=20Alsaker?= john.kare.alsa...@gmail.com
 Date: Wed, 12 Mar 2014 15:28:52 +0100
 Subject: [PATCH] protocol: replace the usage of selection with clipboard

 Make it clear that we're dealing with a clipboard and not X-style selections
 in the protocol.
 ---
  protocol/wayland.xml | 18 +-
  1 file changed, 9 insertions(+), 9 deletions(-)

 diff --git a/protocol/wayland.xml b/protocol/wayland.xml
 index 3aa89af..7a1efdd 100644
 --- a/protocol/wayland.xml
 +++ b/protocol/wayland.xml
 @@ -508,7 +508,7 @@
      </event>

      <event name="cancelled">
 -      <description summary="selection was cancelled">
 +      <description summary="data source was cancelled">
        This data source has been replaced by another data source.
        The client should clean up and destroy this data source.
        </description>
 @@ -559,11 +559,11 @@
      </request>

      <request name="set_selection">
 -      <description summary="copy data to the selection">
 -      This request asks the compositor to set the selection
 +      <description summary="copy data to the clipboard">
 +      This request asks the compositor to change the clipboard data
        to the data from the source on behalf of the client.

 -      To unset the selection, set the source to NULL.
 +      To empty the clipboard, set the source to NULL.
        </description>
        <arg name="source" type="object" interface="wl_data_source" allow-null="true"/>
        <arg name="serial" type="uint" summary="serial of the event that triggered this request"/>
 @@ -574,7 +574,7 @@
        The data_offer event introduces a new wl_data_offer object,
        which will subsequently be used in either the
        data_device.enter event (for drag-and-drop) or the
 -      data_device.selection event (for selections).  Immediately
 +      data_device.selection event (for clipboard data).  Immediately
        following the data_device_data_offer event, the new data_offer
        object will send out data_offer.offer events to describe the
        mime types it offers.
 @@ -626,14 +626,14 @@
      </event>

      <event name="selection">
 -      <description summary="advertise new selection">
 +      <description summary="advertise new clipboard data">
        The selection event is sent out to notify the client of a new
 -      wl_data_offer for the selection for this device.  The
 +      wl_data_offer for the clipboard for this device.  The
        data_device.data_offer and the data_offer.offer events are
        sent out immediately before this event to introduce the data
        offer object.  The selection event is sent to a client
 -      immediately before receiving keyboard focus and when a new
 -      selection is set while the client has keyboard focus.  The
 +      immediately before receiving keyboard focus and when new
 +      clipboard data is set while the client has keyboard focus.  The
        data_offer is valid until a new data_offer or NULL is received
        or until the client loses keyboard focus.
        </description>
 --
 1.8.4.msysgit.0


___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


[PATCH] protocol: replace the usage of selection with clipboard

2014-03-12 Thread John Kåre Alsaker



0001-protocol-replace-the-usage-of-selection-with-clipboa.patch
Description: Binary data
___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


Re: Proposal to change subsurface extension

2013-06-18 Thread John Kåre Alsaker
I don't see a way to have a group of surfaces in sync with a video
surface operating independently on top work in the current
implementation without locking. It seems to only work for a single
surface and not a group of surfaces. The current implementation then
also requires the ability to map a surface with no content. If we
can't avoid doing that, there's not much reason to not change the
current implementation to a cleaner scheme.

If we absolutely can't add the wl_egl_window_* functions, an
alternative in my proposal would be making wl_surface.commit
configurable (sounds familiar?). It would either commit itself and all
children recursively or do nothing depending on what was configured.
This allows eglSwapBuffers to work, but I don't find it very clean.
___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


Wayland color management protocol proposal

2013-06-18 Thread John Kåre Alsaker
Here is my current color management protocol for Wayland clients. The
basic idea is that the compositor sends two color spaces to the
clients. A compositing color space which the compositor prefers since
it would require no conversions. A full-screen color space which the
compositor can be able to scan out from. The full-screen color space
is intended to used by games which are able to do color conversions as
part of their post-processing. It's also possible it could be used by
video players.

Different rendering intents and more advanced color spaces have not
explicitly been considered here, but I hope the use of ICC profiles
will cover these.
Calibration tools will require explicit privileges in order to do
their job. Using the full-screen color space won't be enough since
video card gamma correction can apply to it. I also wonder if we can
just remove the VCGT tag from the ICC profiles to indicate that the
client doesn't have to correct for that, but I'm not sure that anyone
would use the tag anyway.

<protocol name="cms">
  <interface name="wl_cms_factory" version="1">
    <request name="destroy" type="destructor"/>

    <enum name="error">
      <entry name="bad_surface" value="0"
             summary="the given wl_surface is invalid"/>
      <entry name="bad_output" value="1"
             summary="the given wl_output is invalid"/>
    </enum>

    <request name="get_cms_surface">
      <arg name="id" type="new_id" interface="wl_cms_surface"
           summary="the new cms-surface object id"/>
      <arg name="surface" type="object" interface="wl_surface"
           summary="the surface to be turned into a cms-surface"/>
    </request>

    <request name="get_cms_output">
      <arg name="id" type="new_id" interface="wl_cms_output"
           summary="the new cms-output object id"/>
      <arg name="surface" type="object" interface="wl_output"
           summary="the output to be turned into a cms-output"/>
    </request>

    <request name="create_rgb_color_space">
      <description summary="create a RGB color space">
        Creates a new color space from a RGB model.
        If the compositor is unable to use the RGB model, an undefined
        color space will be created instead.
      </description>
      <arg name="id" type="new_id" interface="wl_color_space"
           summary="the new color space object id"/>
      <arg name="gamma" type="int"/>
      <arg name="gamut" type="int"/>
      <arg name="parameters" type="array"/>
    </request>

    <request name="create_profile_color_space">
      <description summary="create color space from profile">
        Creates a new color space from a profile.
        If the compositor is unable to use the profile, an undefined
        color space will be created instead.
      </description>
      <arg name="id" type="new_id" interface="wl_color_space"
           summary="the new color space object id"/>
      <arg name="mime_type" type="string"/>
      <arg name="fd" type="fd"/>
    </request>
  </interface>

  <interface name="wl_cms_output" version="1">
    <request name="destroy" type="destructor"/>

    <event name="compositing_color_space">
      <description summary="update the output's compositing color space">
        This event is sent when wl_cms_output is created and when the
        compositor's compositing color space for the output changes.

        This is the color space the compositor uses internally when
        compositing surfaces on this output. Clients should try to render
        content in this space as it's the most efficient when surfaces
        are not fullscreen and compositing is required.
        However surfaces using this color space may not be able to be
        scanned out.

        If the client doesn't intend to use the passed color space, it
        must destroy it when receiving this event.
      </description>
      <arg name="color_space" type="new_id" interface="wl_color_space"/>
    </event>

    <event name="fullscreen_color_space">
      <description summary="update the output's fullscreen color space">
        This event is sent when wl_cms_output is created and when the
        compositor's fullscreen color space for the output changes.

        This is the native color space of the output where the
        compositor doesn't have to use any conversions to display it.
        The compositor may not be able to do compositing in this color
        space without expensive conversions.

        If the client doesn't intend to use the passed color space, it
        must destroy it when receiving this event.
      </description>
      <arg name="color_space" type="new_id" interface="wl_color_space"/>
    </event>
  </interface>

  <interface name="wl_cms_surface" version="1">
    <request name="destroy" type="destructor"/>

    <request name="set_color_space">
      <description summary="set the color space of a surface">
        This request sets the color space of a wl_surface. If
        color_space is NULL or undefined, the default color space will be
        used instead.
        Its state is buffered so triggering wl_surface.commit is
        required for it to take effect.
      </description>
      <arg name="color_space" type="object" interface="wl_color_space"
           allow-null="true"/>
    </request>
  </interface>

  <interface 

Proposal to change subsurface extension

2013-06-13 Thread John Kåre Alsaker
I propose that we should change the commit behavior to having commit on the
toplevel wl_surface commit itself and all its subsurfaces atomically.
Committing on subsurfaces should be a no-op.
That is to allow eglSwapBuffers to be used in subsurfaces, should you
manage to get it to be non-blocking.
This isn't a huge loss since clients only need to atomically commit all
surfaces anyway.

This has a number of benefits:
- We can get rid of the commit modes (and the set_sync/set_desync request)
and wl_surface.commit becomes easy to reason about
- Committing all surfaces becomes a single request instead of having to
walk the surface tree bottom up, committing surfaces in synchronized
mode (see the sketch below)
- The compositor doesn't have to manage more than 2 states per surface
(pending and current)

I don't see any drawbacks from this or the reason for the complexity in the
current extension.
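
For comparison, this is roughly what the bottom-up flow looks like under the
current protocol (a sketch with assumed objects, not code from weston or any
toolkit):

#include <wayland-client.h>

/* With the child in synchronized mode, the client has to commit bottom-up
 * so the child's cached state is applied together with the parent's commit. */
static void
commit_window(struct wl_subcompositor *subcompositor,
              struct wl_surface *parent, struct wl_surface *child,
              struct wl_buffer *parent_buf, struct wl_buffer *child_buf)
{
        struct wl_subsurface *sub =
                wl_subcompositor_get_subsurface(subcompositor, child, parent);

        wl_subsurface_set_sync(sub);            /* child commits are cached */

        wl_surface_attach(child, child_buf, 0, 0);
        wl_surface_commit(child);               /* 1: cache the child state */

        wl_surface_attach(parent, parent_buf, 0, 0);
        wl_surface_commit(parent);              /* 2: apply both together */
}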
___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


Re: Proposal to change subsurface extension

2013-06-13 Thread John Kåre Alsaker
On Thu, Jun 13, 2013 at 1:04 PM, Pekka Paalanen ppaala...@gmail.com wrote:

 On Thu, 13 Jun 2013 12:35:36 +0200
 John Kåre Alsaker john.kare.alsa...@gmail.com wrote:

  I propose the we should change the commit behavior to having commit
  on the toplevel wl_surface
  commit itself and all it's subsurfaces atomically. Commiting on
  subsurfaces should be a no-op.

 When a component is running asynchronously, constantly sending new
 state to a sub-surface, how do you guarantee, that a toplevel commit
 uses a consistent set of state for the sub-surface?

We either introduce libwayland functions to send requests as a group (which
basically just holds the wl_display mutex) or hold a mutex while we're
playing with pending surface state.
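
A sketch of the second option, holding a client-side mutex around the pending
state (the names and structure are only illustrative):

#include <pthread.h>
#include <stdint.h>
#include <wayland-client.h>

static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;

/* Called from the component thread: update the child's pending state. */
static void
update_child(struct wl_surface *child, struct wl_buffer *buf)
{
        pthread_mutex_lock(&state_lock);
        wl_surface_attach(child, buf, 0, 0);
        wl_surface_damage(child, 0, 0, INT32_MAX, INT32_MAX);
        pthread_mutex_unlock(&state_lock);
}

/* Called from the main thread: under the proposal this one commit would
 * atomically latch the toplevel and all of its subsurfaces. */
static void
commit_toplevel(struct wl_surface *toplevel)
{
        pthread_mutex_lock(&state_lock);
        wl_surface_commit(toplevel);
        pthread_mutex_unlock(&state_lock);
}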


  That is to allow eglSwapBuffers to be used in subsurfaces, should you
  manage to get it to be non-blocking.
  This isn't a huge loss since clients only need to atomically commit
  all surfaces anyway.

 No, they don't. We want a component to be able to run independently in
 normal circumstances when it does not absolutely require
 synchronization to any other surfaces forming the window.

 Playing a video should not require the decorations to be committed for
 each video frame.

We don't need to avoid synchronization. It's simply desirable. However you
do realize that almost every call into libwayland requires synchronization
anyway? All rendering, memory management, resizing will
require synchronization. Why are you so reluctant to introduce some
insignificant synchronization given how much it simplifies the protocol?
It's not for performance, since libwayland is filled with synchronization
and probably many other operations are too.
It's because it's too hard to use: resizing will require much more
complex and expensive synchronization to work.

If you absolutely can't tolerate any synchronization in that case (which we
won't be able to avoid at the libwayland level anyway) a simpler solution
is to let wl_surface.commit atomically commit itself and all children
recursively. You then commit to two separate wl_surfaces in the main and
the component thread both of which have the top-level wl_surface as a
parent. However to do this we need a way to map a surface without a buffer
(or use a transparent one...) since the top-level wl_surface can't have
content. It also won't allow eglSwapBuffers to work in subsurfaces (if we
could even guarantee that it's non-blocking), but that will be rectified by
the wl_egl_window_* additions we discussed.
___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


Re: Proposal to change subsurface extension

2013-06-13 Thread John Kåre Alsaker
On Thu, Jun 13, 2013 at 2:42 PM, Pekka Paalanen ppaala...@gmail.com wrote:

 Libwayland does not synchronize, it only protects the queues for the
 very short moment each time they are modified. It does not cause one
 application component to stall and wait for another component to wake
 up, realize there is something to do, and then do it. Libwayland also
 does not force other threads to wake up, if one thread wants to do
 something. And with the fd locking patches for libwayland-client, that
 will actually be true all the way.

The sort of synchronization required is exactly the same as what is needed
by libwayland-client. It needs to send some bytes to the compositor without
being interrupted by other threads. My point here holds unless
libwayland-client gets a communication channel per thread (instead of per
wl_display).

It is very much about performance, and also about avoiding the need for
 component APIs to have a hook for "I want to update my output now"
 which crosses thread boundaries.

I'd say adding a mutex (or reusing one in libwayland-client) is probably
faster than doing the additional work required by the compositor in the
current implementation, both being insignificant.


  If you absolutely can't tolerate any synchronization in that case
  (which we won't be able to avoid at the libwayland level anyway) a
  simpler solution is to let wl_surface.commit atomically commit itself
  and all children recursively. You then commit to two separate

 Committing children is a no-go, unless you have a mechanism to
 guarantee a consistent snapshot of the child surface state.

Each thread gets its own wl_surface and possible children. It's up to each
thread to make sure their tree of surfaces is consistent. Each thread would
call commit on their wl_surfaces. You'd only call commit on the top-level
surface when resizing.


  wl_surfaces in the main and the component thread both of which have
  the top-level wl_surface as a parent. However to do this we need a
  way to map a surface without a buffer (or use a transparent one...)
  since the top-level wl_surface can't have content. It also won't

 Funny how no toolkit developer has brought that problem up yet.

I'm not quite sure which problem you refer to here. The problem of using a
mapped wl_surface with no content to group up wl_surfaces belonging to
separate threads is rather unique to my solution.

The problem of using a mapped wl_surface with no content and an input
region of non-infinite size is a more general one, which I suspect you're
referring to. I assume you have solved that by clipping the input region to
the extents of the surface and children and moving the content of one of
the subsurfaces into the parent. That solution won't work for my proposal
since it would require synchronization when committing the parent. I
instead have to pick between using a 1x1 buffer (ugh), explicit map/unmap
requests (ugh) and introducing a wl_empty_buffer (meh). I prefer the
wl_empty_buffer option. It also allows me to have the input region
clipped to the surface content without including children, although I don't
see any reason to do that.


  allow eglSwapBuffers to work in subsurfaces (if we could even
  guarantee that it's non-blocking), but that will be rectified by the
  wl_egl_window_* additions we discussed.

 You have pretty high hopes it would go through... shall see, when you
 have an implementation.

It's too nice and useful to not go through. It's very easy to implement in
the current form. The question is if we want something more advanced...




 - pq

___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


Re: [fbdev_backend]: support of DIRECTCOLOR Frame buffer

2013-06-07 Thread John Kåre Alsaker
You should execute the ioctl in fbdev_set_screen_info.

I don't see the codepath which results in cmap.transp being not set in the
test for it.

You don't initialize cmap.red so the test for it gives an undefined result.

Don't define macros in the middle of code with a literal that's only used
once.
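
For reference, a sketch of how a default color map for a DIRECTCOLOR
framebuffer could be set up with every fb_cmap field initialized before use
(illustrative only, not the patch under review):

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/fb.h>

static int
set_linear_cmap(int fd, const struct fb_var_screeninfo *vinfo)
{
        uint16_t red[256], green[256], blue[256], transp[256];
        struct fb_cmap cmap;
        unsigned int i, len = 1u << vinfo->red.length;

        if (len < 2)
                return 0;
        if (len > 256)
                len = 256;

        /* Identity ramp for each channel, fully opaque alpha. */
        for (i = 0; i < len; i++) {
                red[i] = green[i] = blue[i] = (uint16_t)(i * 0xffff / (len - 1));
                transp[i] = 0xffff;
        }

        memset(&cmap, 0, sizeof cmap);
        cmap.start = 0;
        cmap.len = len;
        cmap.red = red;
        cmap.green = green;
        cmap.blue = blue;
        cmap.transp = transp;

        return ioctl(fd, FBIOPUTCMAP, &cmap);
}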

On Fri, Jun 7, 2013 at 12:18 PM, Marc Chalain marc.chal...@gmail.com wrote:

 this patch allows using direct color and sets the color map to default
 values.


 ___
 wayland-devel mailing list
 wayland-devel@lists.freedesktop.org
 http://lists.freedesktop.org/mailman/listinfo/wayland-devel


___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


Re: [PATCH] gl-renderer: Always release previous EGL images on attach

2013-06-05 Thread John Kåre Alsaker
May I suggest
https://github.com/Zoxc/weston/commit/062f5ca5dc5809c027f693f2d642bc24f568e348
instead?


On Wed, Jun 5, 2013 at 11:21 AM, Ander Conselvan de Oliveira 
conselv...@gmail.com wrote:

 From: Ander Conselvan de Oliveira ander.conselvan.de.olive...@intel.com

 When attaching a new buffer, the EGL images created for the previous one
 would be released if this new buffer was an EGL or NULL buffer, but not
 if it was an SHM buffer. This wouldn't cause the resources to be leaked
 because they are free()'d when the surface is destroyed, but they
 would linger around for longer than necessary.

 Note that this changes the behaviour when attaching an unknown buffer
 type. Before, the EGL images would still be around, but now that would
 cause them to be destroyed.
 ---
  src/gl-renderer.c |   14 ++
  1 file changed, 6 insertions(+), 8 deletions(-)

 diff --git a/src/gl-renderer.c b/src/gl-renderer.c
 index d783a0b..13c0fa9 100644
 --- a/src/gl-renderer.c
 +++ b/src/gl-renderer.c
 @@ -1192,12 +1192,13 @@ gl_renderer_attach(struct weston_surface *es, struct wl_buffer *buffer)

        weston_buffer_reference(&gs->buffer_ref, buffer);

 +      for (i = 0; i < gs->num_images; i++) {
 +              gr->destroy_image(gr->egl_display, gs->images[i]);
 +              gs->images[i] = NULL;
 +      }
 +      gs->num_images = 0;
 +
        if (!buffer) {
 -              for (i = 0; i < gs->num_images; i++) {
 -                      gr->destroy_image(gr->egl_display, gs->images[i]);
 -                      gs->images[i] = NULL;
 -              }
 -              gs->num_images = 0;
                glDeleteTextures(gs->num_textures, gs->textures);
                gs->num_textures = 0;
                return;
 @@ -1219,9 +1220,6 @@ gl_renderer_attach(struct weston_surface *es, struct wl_buffer *buffer)
                gs->shader = &gr->texture_shader_rgba;
        } else if (gr->query_buffer(gr->egl_display, buffer,
                                    EGL_TEXTURE_FORMAT, &format)) {
 -              for (i = 0; i < gs->num_images; i++)
 -                      gr->destroy_image(gr->egl_display, gs->images[i]);
 -              gs->num_images = 0;
                gs->target = GL_TEXTURE_2D;
                switch (format) {
                case EGL_TEXTURE_RGB:
 --
 1.7.9.5

 ___
 wayland-devel mailing list
 wayland-devel@lists.freedesktop.org
 http://lists.freedesktop.org/mailman/listinfo/wayland-devel

___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


Re: [PATCH 00/15] weston scaling support

2013-06-04 Thread John Kåre Alsaker
 On Mon, Jun 3, 2013 at 10:21 PM, Bill Spitzak spit...@gmail.com wrote:

 + Clients get input in exact pixels (no rounding errors).
 + Clients doesn't have to transform input events in order to match the
  sizes used by buffers.

 You said pels don't match buffer pixels. Therefore a transformation is
 needed to get to buffer pixels.

Clients can choose if this transformation is used.


 And I'm worried that high-resolution pointing devices will be ignored by
 clients that immediately round to the nearest buffer pixel.

Presumably they wouldn't use the high-resolution for anything then. I'm not
sure how this relates to the topic either.



  - Clients changing scaling factors will have older input reported in the
 old size. This will be quite rare although clients can deal with it if
 desired.
 - Requires an implementation :)


 This seems to be the biggest actual objection, other than apparent
 misunderstanding of the proposal.

 However the same problem applies for when a buffer attach is done with an
 x and y offset. The client does not know exactly what mouse events are
 pointing at unless it knows whether they are before or after the commit.

 So I now think a better solution is to have the compositor echo the commit
 in the event stream, so the client can easily tell whether events are
 before or after it. It is possible that existing events can be used to
 determine this.

I believe the frame callback event can be used to tell when you get the
events from the new frame.
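
A sketch of that idea (illustrative only): request a frame callback together
with the commit that changes the scale, and treat its done event as the point
after which input refers to the new size.

#include <stdbool.h>
#include <wayland-client.h>

static bool new_scale_active;

static void
frame_done(void *data, struct wl_callback *cb, uint32_t time)
{
        /* Input events handled after this point belong to the new frame. */
        new_scale_active = true;
        wl_callback_destroy(cb);
}

static const struct wl_callback_listener frame_listener = { frame_done };

static void
commit_with_marker(struct wl_surface *surface)
{
        struct wl_callback *cb = wl_surface_frame(surface);

        new_scale_active = false;
        wl_callback_add_listener(cb, &frame_listener, NULL);
        wl_surface_commit(surface);
}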

On Mon, Jun 3, 2013 at 11:56 PM, Jason Ekstrand ja...@jlekstrand.net wrote:

 I think I'm beginning to see what you and John have been suggesting.  While
 I think Alexander's proposal will work well enough for the standard case, I
 think this also has merit.  If I were to sum up, I'd say I was cautiously
 supportive of the above.  It does seem to solve some of the edge-cases.
 That said, there are a few things I'd like to note.

 1. This requires a concept of a window which is one we currently don't
 have.  If we put it in wl_surface, we have to define how it works for
 subsurfaces.  We could say that the scale factor on subsurfaces gets
 ignored.  However, this means that we have defined an extension-specific
 exception.  What happens if we add another future extension that alters the
 window's size or orientation?  This will get messy.  We could put this in
 wl_shell_surface but then we are mixing the global (wl_output) with the
 desktop-specific (wl_shell_surface).

The concept of a window I refer to is just a regular wl_surface really. With
the subsurface extension added in it breaks this concept so an exception in
this extension is required. This is just a side effect of adding
subsurfaces as an afterthought and not having a separate window object in
the protocol. I'm sure there's quite a few other things we can't do with
subsurfaces already.


 2. This restricts subsurfaces to the same scaling factor as the original
 surface.  Probably not a huge problem, but I'm not convinced it's a smaller
 restriction than requiring subsurfaces on pel-boundaries.

I don't think that's going to be a problem. If the client for some reason
wants to mix scaling factors, it can resize the subsurface itself.



 3. This makes the subsurface case of handing off to a library more
 complicated.  In Alexander's implementation, the library would simply
 render at native resolution or not.  With this, the client has to tell the
 library what surface scale to use and the library has to *EXACTLY* match.
 Maybe this isn't a huge issue, but there's no opportunity for a client to
 embed an older library.

The client can always resize the output of the library. It's just another
side of point 2 really.


 4. If a client presents a scale_factor to the compositor that is not an
 integer multiple of one of the ouput_scale factors provided by the
 compositor, it should not expect the compositor to do anything
 intelligent.  There are similar problems with integer proposal, but this
 one makes it more interesting.

It should expect the compositor to resize the window according to its
policies. The client remains blissfully unaware of whatever has been
done. I don't see how this is an issue with either proposal.



 Food for thought,
 --Jason Ekstrand

___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


Re: [PATCH 00/15] weston scaling support

2013-06-04 Thread John Kåre Alsaker
On Tue, Jun 4, 2013 at 10:24 PM, Bill Spitzak spit...@gmail.com wrote:

 John Kåre Alsaker wrote:

  And I'm worried that high-resolution pointing devices will be
 ignored by clients that immediately round to the nearest buffer pixel.

 Presumably they wouldn't use the high-resolution for anything then. I'm
 not sure how this relates to the topic either.


 A client has to round even if it wants to do high-resolution, because it
 wants to align positions with things in the buffer. This gets pretty
 important with 3D modelling, lots of users and software assume that the
 mouse will stop on exactly the same fractional positions, and under your
 proposal this would change.

 For instance if there is a low-res device and a scale-3 output, then a
 client doing a hi-res display will get mouse positions of 1, 1.333..,
 1.666..., in that window. If there is no low-res device and the hi-res one
 now has a scale of 1, then the client will get positions of 3,4,5.

 Multiplying the 24.8 fixed-point representation of 1.333.. by 3 gets
 3.996. The client has to round this to 4 or it will align things slightly
 different on one system than the other. I can assure you that this
 difference is very annoying in 3D software.

 The only solution is to figure out the resolution of the pointing device,
 transform the fixed-point event positions to the buffer positions, and then
 round to the nearest multiple of the pointing device resolution. I think a
 lot of software will ignore the figure out the resolution of the pointing
 device step and just assume it is 1 pixel. Therefore any possible
 high-resolution pointing will be thrown away.

 Getting the events in buffer pixels would make it a lot easier as they
 could be used unchanged. Clients could then just assume any fraction is due
 to a high-resolution pointing device.

This is still a problem without any high-DPI stuff. You could just as
easily have an input device with a resolution 3 times higher than the
display.
___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


Re: [PATCH 00/15] weston scaling support

2013-06-04 Thread John Kåre Alsaker
On Wed, Jun 5, 2013 at 3:11 AM, Bill Spitzak spit...@gmail.com wrote:

 John Kåre Alsaker wrote:

  This is still a problem without any high-DPI stuff. You could just as
 easily have an input device with a resolution 3 times higher than the
 display.


 I am worried that most clients will work around the problem of the scale
 proposal by rounding all numbers from 3.5 to 4.5 to 4, thus throwing away
 the high resolution input device.

 Lets assume a screen has scale 3 and there is a pointing device with a
 resolution of 1/2 of these pixels. And the client is drawing a full-res
 surface.

 (note the numbers here look strange but they are the closest decimal to
 the 24.8 fixed-point values. That is why it is .332, not .333):

 Under your proposal a normal pointing device will return 1, 1.332, 1.664,
 2, ... as coordinates. If the client just blindly multiplies by 3 they will
 get 3, 3.996, 4.992, 6. I think most clients will fix this by rounding to
 the nearest integer to get 3, 4, 5, 6.

 Now the high-resolution device will return 1, 1.164, 1.332, 1.5, 1.664,
 1.832, 2. If the client blindly multiplies these by 3 they will get 3,
 3.492, 3.996, 4.5, 4.992, 5.496, 6. However if the client rounds (possibly
 because the author never saw a high-resolution pointer) they will get 3, 3,
 4, 4, 5, 5, 6, thus losing the high-resolution.

 In my proposal a normal pointing device will return 3, 4, 5, 6. The client
 will not need to round. The high-resolution device will return 3, 3.5, 4,
 4.5, 5, 5.5, 6. Since the client does not round, it will get these values
 correctly.

 Of course the client could only round numbers that are close to
 integers, but I still feel my solution is simpler.

You might have me confused with Pekka or Alexander. Our proposals should be
identical here. I'm just saying that it will increase the possible
resolution of input events on higher scaling factors, which reduces issues
with rounding, but doesn't solve it altogether.
___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


Re: [PATCH 00/15] weston scaling support

2013-05-28 Thread John Kåre Alsaker
On Thu, May 23, 2013 at 2:57 AM, Kristian Høgsberg hoegsb...@gmail.com wrote:

 I read through the latest wayland protocol patches and the discussion
 around them and didn't seen anything I didn't like.  I think the
 approach here is good and agree with the consensus.  This patch series
 looks great too and I like the improvements to the pixman renderer and
 the compositor-x11.c optimization.

Where did the consensus come from?

I think the lack of fractional scale factors will result in further
extensions to support that in the future. I really also dislike how this
implementation won't allow clients to draw with pixel precision on scaling
factors above 1.
___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


Re: [PATCH 00/15] weston scaling support

2013-05-28 Thread John Kåre Alsaker
My proposal is simply to let the compositor tell the client how much larger
it wants the client to render content. The client can then tell the
compositor how much larger it made content in the window. Armed with this
information the compositor can size the window accordingly.

Note that this implies no further change in the protocol or concepts.
Surface coordinates are still of the same size as buffer coordinates.

This has a number of benefits over the current implementation:
+ Rational scaling factors have no issues (the most important benefit).
+ Output sizes do not have to be a multiple of the scaling factor.
+ Clients can specify their size in pixels exactly.
+ Clients get input in exact pixels (no rounding errors).
+ Clients don't have to transform input events in order to match the
sizes used by buffers.
+ Conceptually simple to implement. Compositors don't have to deal with
scaling for subsurfaces and they are probably already able to resize whole
windows.
+ Can support clients with integer scaling factors, although these raise
the same edge cases as the current implementation (they may not be able to
fit the screen exactly).

There are however some downsides:
- Clients may need to transform input events in order to match the
coordinates used by applications.
- Clients changing scaling factors will have older input reported in the
old size. This will be quite rare although clients can deal with it if
desired.
- Requires an implementation :)

I don't expect GTK or other (ancient) toolkits to do fractional scaling and
reap all the benefits listed here. I don't want anything in the core
protocol blocking the ability of clients to scale up smoothly, though, once
clients are actually able to do this.
___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


Output modes and scaling

2013-05-23 Thread John Kåre Alsaker
I see Alexander Larsson added a scaled flag to the output modes. Presumably
this is because he scales down the width and height of the mode by the
scaling factor of the output.

This probably shouldn't be done. The scaling factor only works for the
current mode. It isn't representative of the other available modes.

The mode list is probably supposed to be used by games which can't render
at the native resolution and pick one of those instead. Games going
fullscreen using these resolutions will usually fill the screen, making
further attempts at scaling up useless.

When a client tries to go fullscreen it will get a configure event with a
request for width and height. Those dimensions should be scaled down
instead of the output modes.
___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


Re: [PATCH 2/2] protocol: Support scaled outputs and surfaces

2013-05-21 Thread John Kåre Alsaker
On Tue, May 21, 2013 at 5:35 PM, Bill Spitzak spit...@gmail.com wrote:
 However both proposals have this problem if pre-compositing is not done,
and most practical shells I can figure out can't do pre-compositing because
that requires another buffer for every parent, so maybe this is not a big
deal.
Pre-compositing or compositing of individual windows into buffers will be
required to be done for transparent subsurfaces which overlap another
subsurface if the compositor wants to change the opacity of the window (a
common effect).

On Mon, May 20, 2013 at 11:23 AM, Pekka Paalanen ppaala...@gmail.com wrote:

 Actually, it means that the surface coordinate system can change
 dramatically when a client sends a new buffer with a different scale,
 which then raises a bucketful of races: is an incoming event using new
 or old surface coordinates? That includes at least all input events
 with a surface position, and the shell geometry event.

This is not a new race. Resizing and surface content changing have the same
problem. Changing the scaling factor would be a relatively rare event too.
I believe I was told that the frame callback was usable as a separator of
events for frames. That could allow clients which are changing scaling
factors to translate old input correctly or simply ignore it.
___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


Re: [PATCH 0/2] Support for high DPI outputs via scaling

2013-05-15 Thread John Kåre Alsaker
On Tue, May 14, 2013 at 9:46 PM, Bill Spitzak spit...@gmail.com wrote:



 John Kåre Alsaker wrote:


 I expect a compositor to render deviations of the desired scaling
 factor without scaling windows. The range when this is allowed is
 reported to clients so they can try to render at a size which will
 avoid scaling.

 For example a compositor may want to use a 1-1.2 range with 1.1 as
  the desired scaling factor. A client which is only able to draw at
  integer scaling factors would round that up to 2 and let the
  compositor downscale it. When the range for which the compositor won't
  scale is sent to clients we can avoid this.


 I don't think a range is necessary. The client can just claim that it's
 window is scaled at 1.1 even though it drew it at 1. Or at 2.2 even though
 it drew it at 2. Nothing stops the client from doing this so you might as
 well make that the way integer scales are done.

Then the user drags the window that claims to be drawn at 1.1 over to a
monitor with scaling factor 1. It will be downscaled even though it's a
perfect match for the monitor.



 With the range, what happens to a surface with a scale of 1.3? Is it
 scaled by 1.3? Or should it be 1.3/1.2 times larger than the one scaled at
 1.2, which is actually 1.191666? For this reason I think any scale wanted
 by the client should be obeyed literally.

It will be scaled by client_scaling_factor/output_scaling_factor, which is
1.3/1.1.



  We may also allow scaling factors below 1.


 I think scaling factors less than 1 are going to be a requirement.
 Otherwise the units have to be for the lowest-resolution device, which
 seems silly if you have a huge hi-res screen and a small lcd low-res
 display on your keyboard.

Perhaps, but I'd expect clients to do a lot of rounding up, making the
scaling not very linear.
___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


Re: [PATCH 0/2] Support for high DPI outputs via scaling

2013-05-14 Thread John Kåre Alsaker
On Mon, May 13, 2013 at 8:52 PM, Jason Ekstrand ja...@jlekstrand.net wrote:

 On Mon, May 13, 2013 at 9:54 AM, Alexander Larsson al...@redhat.com wrote:


 On mån, 2013-05-13 at 14:40 +0200, John Kåre Alsaker wrote:

 
  I don't think this will work in practice. I know for sure that
  e.g. Gtk
  is not set up to do any reasonable non-integer scaling. It
  will just
  scale up all drawing by a fractional factor without any
  rounding
  anywhere, causing everything to become fuzzy. We will
  eventually have
  alternative icons for higher resolutions, but these will be at
  2x scale,
  not at generic fractional factor (that is not really doable
  without
  using pure-vector icons with some complex hinting method).
  Although, I
  guess that in the case of text the scaling will be ok, so for
  some
  usecases it might be OK.
  It will work very well for things designed to be scalable, browsers
  are an example. GTK could just fall back to integer scaling.

 Browsers are not really scalable like that, css page layout is generally
 based on
 the CSS Px definition, and per e.g.
 http://static.zealous-studios.co.uk/projects/web_tests/PPI%20tests.html:

 For lower-resolution devices [i.e. non-print], and devices with
 unusual
 viewing distances, it is recommended instead that the anchor unit
 be the
 pixel unit. For such devices it is recommended that the pixel
 unit refer
 to the whole number of device pixels that best approximates the
 reference pixel.

 I.e. whole number of pixels = scale by integer matches best what CSS
 wants.

 My point was that browsers can scale up websites by fractional amounts
even though it will look somewhat better with integer factors.



  So, it's my opinion that supporting fractional scaling is an
  unnecessary
  burden on compositors for something that is not very useful in
  practice.
  Thats just my opinion though, and the proposal as such can
  handle
  fractional scaling fine by just changing the scale factors to
  fixed
  type.
  It is of no burden on compositors at all, only for clients which
  choose to implement it.

 It is; as long as clients are allowed to specify fractional scaling
 factors, compositors need to be able to handle scaling by them (for
 instance if the window appears partially on a non-highres monitor).


 While this isn't a huge problem on GL-based compositors it will cause a
 problem for software compositors.  Any scaling for that matter is a
 potential problem there.  However, a factor of two or something shouldn't
 be too bad in software.

Compositors do not have to scale anything at all. Even with fractional
scaling factors a compositor could limit itself to just scaling with
integer factors. I don't think software compositors are a case worth
considering either. Nobody wants those, especially with high-DPI
displays.




   In mirrored/clone mode only a single wl_output would be presented to
   clients with a single scale factor, so priorities won't matter in that
   case.

  That is not necessarily so. We might very well want to know that there
  are two displays, with say different RGB subpixel order, etc.
 The compositor could unify this information into a single wl_output.

 And what would it then report for physical width/height, subpixel, make,
 model, etc? Seems pretty weird.

 Even if two wl_outputs were presented, they would have the same
resolution and scaling factor, so priorities still wouldn't matter.



  I suggest we send 3 scaling factors to clients: a lower bound below which
  factors will be scaled up, the desired scaling factor, and an upper bound
  above which factors will be scaled down. Clients should find the scaling
  factor between the lower and upper bound that they are able to render at
  which is closest to the desired factor. Failing that, they should find
  the first factor they are able to render at above the upper bound. When
  displayed on multiple monitors a client could try to remain sharp on as
  many monitors as possible.

 I don't quite understand this. Any factor lower than the desired
 factor has to be scaled up, no?

 I expect a compositor to render deviations of the desired scaling factor
without scaling windows. The range when this is allowed is reported to
clients so they can try to render at a size which will avoid scaling.

For example a compositor may want to use a 1-1.2 range with 1.1 as the
desired scaling factor. A client which is only able to draw at integer
scaling factors would round that up to 2 and let the compositor downscale
it. When the range for which the compositor won't scale is sent to clients
we can avoid this.


 Lets take a concrete example. The macbook pro 15 has a 2880 x 1800
 native panel

Re: [PATCH 0/2] Support for high DPI outputs via scaling

2013-05-14 Thread John Kåre Alsaker
 I expect a compositor to render deviations of the desired scaling factor
 without scaling windows. The range when this is allowed is reported to
 clients so they can try to render at a size which will avoid scaling.

 For example a compositor may want to use a 1-1.2 range with 1.1 as the
 desired scaling factor. A client which is only able to draw at integer
 scaling factors would round that up to 2 and let the compositor downscale
 it. When the range for which the compositor won't scale is sent to clients
 we can avoid this.

Giving this range to clients also tells them whether they will be scaled on
the targeted output. If they are, they can ignore information about the pixel
layout of the output.

We may also allow scaling factors below 1. At 0, clients should draw using
the smallest number of pixels possible while still being understandable. I
doubt that will be very useful, and clients could just have 1 as a lower
bound.


Re: [PATCH 0/2] Support for high DPI outputs via scaling

2013-05-14 Thread John Kåre Alsaker
On Tue, May 14, 2013 at 11:25 AM, Alexander Larsson al...@redhat.com wrote:

 On tis, 2013-05-14 at 07:11 +0200, John Kåre Alsaker wrote:

 
  While this isn't a huge problem on GL-based compositors it
  will cause a problem for software compositors.  Any scaling
  for that matter is a potential problem there.  However, a
  factor of two or something shouldn't be too bad in software.
 
  Compositors do not have to scale anything at all. Even with fractional
  scaling factors a compositor could limit itself to just scaling with
  integer factors. I don't think software compositor is a case worth
  considering either. Nobody wants to have those, especially with
  high-DPI displays.

 I don't think it should really be allowed to not apply the scaling.

Users may want to disable scaling as it's not very good looking. I don't
think we should enforce a policy about that.


 Right now buffer sizes define surface sizes, and that makes surface
  sizes not really well defined.

They're defined as the same as buffer sizes (ignoring the trivial
transformations). They should still be the same if the scaling factor is
not 1.0. Clients should scale both buffers and surfaces. The only thing
that should decouple these sizes further is the scaling extension.


 For instance, if you maximize a surface,
 or resize it you will get a configure request with a surface size
 specified. You then want to make your surface that size, so you create a
  buffer with that size, scaled by the factor you pick.

You can't scale the input from configure. You are required to create a
surface of that size in surface pixels in order for it to match the screen.
Clients should still scale their content based on the scaling factor even if
they aren't in control of the surface size.


 If the compositor
 chooses not the given factor but rather a somewhat smaller one then your
 surface will be larger than expected, and extend outsize of the monitor
 in the maximized case, or not hit the pointer in the drag resize case.

 Also, it makes surfaces different sizes on different monitors. If an
 output has some specified size/scale factor (1400x900@1.5x say) and you
 move a 1400x900 surface from that monitor (fully or partially) to
 another with a different scale (1400x900@1x) you want the window to be
 the same size, so the scaling factor shouldn't affect surface sizes.

It is much more important that the content remains sharp than that it stays
exactly the same size. When the client is moved to another output it can
resize itself to better suit that output's scaling factor, if it's able to.

 The size of a window should always be in pixels at the protocol level.


 What do you mean by window and pixel here. There are multiple pixels and
 sizes here:

 compositor pixels:
 The 'compositor coordinate system' is an global space where the
 wl_outputs are laid out.
 framebuffer pixels:
 These correspond to the native LCD panel pixel.
 buffer pixels:
 These are the individual pixels the clients rendered
 surface pixels:
 These are pixels in what the protocol calls surface local
 coordinates.

Most things in the protocol use surface pixels, which is what I usually
mean by pixels.


Re: [PATCH] protocol: Add buffer_scale to wl_surface and wl_output

2013-05-14 Thread John Kåre Alsaker
On Tue, May 14, 2013 at 2:51 PM, Alexander Larsson al...@redhat.com wrote:

 On tis, 2013-05-14 at 13:44 +0200, John Kåre Alsaker wrote:
  I'd only accept a proposal which makes the clients tell the compositor
  how much they increased the size of their window. If the compositor
  wants to resize anything it should resize the entire window and not
  individual surfaces/buffers. This also avoids having to position
  subsurfaces and have configure requests at fractional surface pixels.

 I don't quite understand what you mean by this (like, how is window
 different from surface). Can you give an example of how the API would
 look for what you mean.

A window can have multiple surfaces (see the subsurface extension).

The API would be the same, the documentation different.



  set_buffer_scale should be renamed to set_scaling_factor. Otherwise it
  could be easily confused with the scaling extension which just scales
  up buffers. My suggestion also renders it unrelated to buffers. The
  geometry2 event should be named scaling_factor. A done event should
  also be added. I'd also like a lower and upper bound for the scaling
  factor as argument to that.

 I think a done event is silly, as it is not needed and complicates
 clients (since the done event is not always sent for older servers), but
 sure I can add one.

Clients can easily require a compositor which implements the done event,
although they may also support older versions. It's just the compositor
that has to take care not to send done (and geometry2) to old clients.

If you add one, it should be in a separate commit as it's unrelated to this
work.


  On Tue, May 14, 2013 at 12:26 PM,  al...@redhat.com wrote:
 
  From: Alexander Larsson al...@redhat.com
 
  This adds wl_surface_set_buffer_scale() to set the buffer
  scale of a
  surface.
 
  It is similar to set_buffer_transform that the buffer is
  stored in a
  way that has been transformed (in this case scaled). This
  means that
  if an output is scaled we can directly use the pre-scaled
  buffer with
  additional data, rather than having to scale it.
 
  It also adds a geometry2 event with a scale member to
  wl_output that
  specifies the scaling of an output.
 
  This is meant to be used for outputs with a very high DPI to
  tell the
  client that this particular output has subpixel precision.
  Coordinates
  in other parts of the protocol, like input events, relative
  window
  positioning and output positioning are still in the compositor
  space
  rather than the scaled space. However, input has subpixel
  precision
  so you can still get input at full resolution.
 
  This setup means global properties like mouse
  acceleration/speed,
  pointer size, monitor geometry, etc can be specified in a
  mostly
  similar resolution even on a multimonitor setup where some
  monitors
  are low dpi and some are e.g. retina-class outputs.
  ---
   protocol/wayland.xml | 41
  +++--
   1 file changed, 39 insertions(+), 2 deletions(-)
 
  diff --git a/protocol/wayland.xml b/protocol/wayland.xml
  index 3bce022..e5744c7 100644
  --- a/protocol/wayland.xml
  +++ b/protocol/wayland.xml
  @@ -876,7 +876,7 @@
       </event>
     </interface>
 
  -  <interface name="wl_surface" version="2">
  +  <interface name="wl_surface" version="3">
       <description summary="an onscreen surface">
         A surface is a rectangular area that is displayed on the screen.
         It has a location, size and pixel contents.
  @@ -1110,6 +1110,30 @@
       </description>
       <arg name="transform" type="int"/>
     </request>
  +
  +    <!-- Version 3 additions -->
  +
  +    <request name="set_buffer_scale" since="3">
  +      <description summary="sets the buffer scale">
  +       This request sets an optional scaling factor on how the compositor
  +       interprets the contents of the buffer attached to the surface. A
  +       value larger than 1, like e.g. 2 means that the buffer is 2 times the
  +       size of the surface.
  +
  +       Buffer scale is double-buffered state, see wl_surface.commit.
  +
  +       A newly created surface has its buffer scale set to 1.
  +
  +       The purpose of this request is to allow clients to supply higher resolution
  +       buffer data for use on high-resolution outputs where the output itself
  +       has a scaling factor set. For instance, a laptop

Re: [PATCH 0/2] Support for high DPI outputs via scaling

2013-05-14 Thread John Kåre Alsaker
On Tue, May 14, 2013 at 3:19 PM, Alexander Larsson al...@redhat.com wrote:

 On tis, 2013-05-14 at 13:19 +0200, John Kåre Alsaker wrote:
  On Tue, May 14, 2013 at 11:25 AM, Alexander Larsson
  al...@redhat.com
 
  I don't think it should really be allowed to not apply the
  scaling.
  Users may want to disable scaling as it's not very good looking. I
  don't think we should enforce a policy about that.

 What do you mean? Scaling is done by the compositor. The user is who
 tells the compositor to use scaling (by setting a scale on the
 wl_outputs), just like he sets the resolution.

I might have been unclear. Users may want to disable resizing of buffer
pixels in the compositor. However, clients which are able to draw at the
correct scaling factor should still do so and appear pixel perfect.


  Right now buffer sizes define surface sizes, and that makes
  surface
   sizes not really well defined.
  They're defined as the same as buffer sizes (ignoring the trivial
  transformations). They should still be the same if the scaling factor
  is not 1.0. Clients should scale both buffers and surfaces. The only
  thing that should decouple these sizes further is the scaling extension.

 Obviously they are defined as such right now, since there is no scaling.
  But, the patch I proposed changes this,



 because otherwise we can't
 cleanly handle backwards compatibility (and also it makes sense).

Backwards compatibility with what?

How would you handle subsurfaces with different scaling factors? What
coordinates would the subsurface's position be in?


 In the canonical example of 2x scaling you have some lcd panel at say
 2000x2000, and we want naive apps to be automatically upscaled as if it
 was a 1000x1000 monitor. If a window in the naive app is maximized this
 means it must get a configure reply from maximize that says 1000x1000,
 which cause it to attach a buffer of 1000x1000 (with scale factor 1) to
 it. This is how existing wayland clients work.

In my proposal the compositor will scale down configure arguments by the
same amount it resizes the window up.


 Now, a non-naive client will get the same maximize reply of 1000x1000,
 but in this case we want it to create a buffer of 2000x2000 (with scale
 2) since it knows the window is on a high-dpi monitor. So, there is a
 natural separation of the surface size (1000x1000) and the buffer size
 (2000x2000).

 We can't have the configure event say 2000x2000, because then old
 clients will not be auto-upscaled as we want.

In this case our proposals differ. The compositor knows it's not resizing
the window, so it simply passes 2000x2000 along in the configure arguments.

It's also worth considering the edge cases where pixel sizes don't divide
evenly. Say we have a workspace of 2000x1901, a scaling factor of 2, and we
want to maximize the client.
With a scale-factor-unaware legacy client we'd have to send 1000x950.5,
which would have to be rounded to 1000x951. We could compensate by cropping
or stretching to fill the workspace.
With a client using a scaling factor of 2 I'd send 2000x1901, which the
client might be able to fill exactly, or it might just pretend it's
2000x1902 and crop the result. In your proposal this would still have to be
rounded to 1000x951, and clients able to fill 2000x1901 exactly would render
at 2000x1902 instead.
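A tiny standalone C program just to restate the two cases (assuming ordinary
round-half-away-from-zero rounding for the legacy case):

#include <math.h>
#include <stdio.h>

int
main(void)
{
    int workspace_w = 2000, workspace_h = 1901;
    int scale = 2;

    /* legacy client: workspace size divided by the factor, rounded */
    int legacy_w = (int)round(workspace_w / (double)scale); /* 1000 */
    int legacy_h = (int)round(workspace_h / (double)scale); /* 951, from 950.5 */

    printf("legacy configure:      %dx%d\n", legacy_w, legacy_h);
    printf("scale-aware configure: %dx%d\n", workspace_w, workspace_h);
    return 0;
}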


Re: [PATCH 0/2] Support for high DPI outputs via scaling

2013-05-14 Thread John Kåre Alsaker
On Tue, May 14, 2013 at 5:49 PM, Alexander Larsson al...@redhat.com wrote:

 On tis, 2013-05-14 at 17:00 +0200, John Kåre Alsaker wrote:


  Obviously they are defined as such right now, since there is
  no scaling.
   But, the patch I proposed changes this,
 
  because otherwise we can't
  cleanly handle backwards compatibility (and also it makes
  sense).
  Backwards compatibility with what?

 With existing wayland clients that don't do subpixel accuracy. Did you
 ever try a current wayland app on a 240 dpi display?

I thought you were referring to something else since there still won't be
any scaling from buffer to surface coordinates in the legacy client case.


  How would you handle subsurfaces with different scaling factors? What
  coordinates would the subsurface's position be in?

 All positions are in the global compositor coordinate space. The one
 that e.g. different outputs are positioned in, etc.

That won't work. Currently a subsurface's position is relative to the
parent surface in the parent's surface coordinates.


  We can't have the configure event say 2000x2000, because then
  old
  clients will not be auto-upscaled as we want.
  In this case our proposals differs. The compositor knows it's not
  resizing the window so it simply passes 2000x2000 along to the
  configure arguments.

 How can the compositor know its not resizing the window? Only the app
 knows if it can render at a different scaling factor, as it is the app
 that allocates and draws to the buffer.

The compositor knows if it's drawing pixels at a 1:1 ratio or if it's
resizing them.


 Anyway, it's pretty obvious that you're talking about a different design.
 I'm growing tired of this discussion, I will just shut up and implement
 my design in weston. If you want to implement a competing design, feel
 free.





Re: [PATCH 0/2] Support for high DPI outputs via scaling

2013-05-14 Thread John Kåre Alsaker
On Tue, May 14, 2013 at 9:16 PM, Bill Spitzak spit...@gmail.com wrote:

 John Kåre Alsaker wrote:

  The size of a window should always be in pixels at the protocol level.


 Note that this is in direct contradiction to the proposed scaler api. In
 it the size of the window is in transformed pixels.

 I think now the scaler proposal should be changed.

I was referring to surface pixels here, which is what you get after the
transformation of the scaler extension.


Re: [PATCH 0/2] Support for high DPI outputs via scaling

2013-05-13 Thread John Kåre Alsaker
On Mon, May 13, 2013 at 11:56 AM, Alexander Larsson al...@redhat.com wrote:

 On ons, 2013-05-08 at 22:58 +0200, John Kåre Alsaker wrote:
  I think we should allow fractional scale factors. Clients which only
  support integer scale factors should round the scale factor they get
  to a whole number.

 I don't see how this is useful? The scaling has two main benefits:

 1) It defines a common global coordinate system in which to specify
global things like window sizes, mouse speed, pointer size, relative
monitor positions, etc. For these it doesn't necessarily matter that
the DPI of these match exactly, only that they are of the same
magnitude. I personally have a desktop setup with two different
monitors of different DPI, which works fine. However, I also have
a chromebook pixel with 239 dpi, and using the same settings for
these things *definitely* is a problem on that.

The scaling factor should not have any effect on any coordinate system or
anything else in the protocol. It should only affect the size in pixels of
the clients' surfaces. Clients should do any coordinate transformations they
want themselves.
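As a rough illustration of that last point, a client that chose to render at
some factor can map incoming surface-pixel coordinates to its own logical
units itself. The struct and function names below are made up purely for the
sketch, they are not an existing API:

#include <stdio.h>

struct app_window {
    double scale; /* the factor this client chose to render at */
};

/* stand-in for the application's own input handling */
static void
handle_motion(double x, double y)
{
    printf("logical position: %.1f, %.1f\n", x, y);
}

static void
pointer_motion(struct app_window *win, double surface_x, double surface_y)
{
    /* input arrives in surface pixels; divide by the client's own factor
     * to get back to its logical units */
    handle_motion(surface_x / win->scale, surface_y / win->scale);
}

int
main(void)
{
    struct app_window win = { 2.0 };
    pointer_motion(&win, 200.0, 100.0); /* logical 100, 50 */
    return 0;
}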



 2) It allows reusing all existing code and assets that specify things in
pixels. The fact that the theme specifying a one-pixel border on
buttons end up with a 3% wider line on some displays is not really
a problem. However, on the pixel it is again a real problem. Surface
scaling lets us fix this without modifying and duplicating all widget
 code, themes, svgs, etc.


 A fractional scale factor is always going to cause visually poor
 experience due to nr 2. I.e. themes will use an integer multiple wide
 lines and padding in order to look good on current resolution screens,
 which will always end up looking blurry if you then scale by a fraction.
 In fact, if the scaling factor is e.g. 1.5, there is no way for a theme
 to get crisp button borders. It could specify a line of width 0.6667
 and hope it gets rounded to exactly 1 on the output, but if some padding
 above it made it accidentally end up on an uneven row it would become
 blurred between two rows on the screen.

 It also causes issues because buffers are by necessity of integer sizes,
 and an integer buffer size times a fractional scale will cause the
 surface to have a fractional size in the global coordinate space. How
 would you handle this if the window overlaps an output with a different
 scaling factor? You'll end up with either a cut off window not showing
 the last row or a blurred partial pixel rendered with AA. How would you
 maximize a window when the integer output size results in a fractional
 buffer size for the window?

Clients can easily scale larger features like icons, padding and fonts,
round them to pixel sizes, and draw sharp output even with a fractional
scaling factor. This allows users to pick a size suitable for them; the jump
from a scaling factor of 1 to 2 is too large.
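For example, a client could scale its metrics and snap them to whole pixels
along these lines (a minimal sketch with made-up base sizes):

#include <math.h>
#include <stdio.h>

/* scale a feature size and snap it to whole pixels */
static int
scaled_px(int base_px, double scale)
{
    int px = (int)round(base_px * scale);
    return px > 0 ? px : 1; /* keep hairlines visible */
}

int
main(void)
{
    double scale = 1.3;

    printf("padding: %d px\n", scaled_px(8, scale));  /* 10 */
    printf("border:  %d px\n", scaled_px(1, scale));  /* 1 */
    printf("icon:    %d px\n", scaled_px(24, scale)); /* 31 */
    return 0;
}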



  I don't think we need a scale priority on wl_outputs, clients should
  pick the highest scale factor of the outputs it's on.

 I don't agree. This is a policy decision, as it will always result in
 the best looking output on the highest dpi monitor. The lower one would
 get a downscale of the hires rendering, which will look worse than e.g.
 a properly hinted text rendered for the lower resolution, and I can
 easily imagine situations where you want the output to look sharpest on
 the lower resolution monitor, like when you're using a projector.

 What to pick in this case should be up to the user and/or
 compositor/desktop-environment, so we should allow the compositor to
 tell clients what to do in this case. This is what e.g. OSX does (i.e.
 you get a popup that asks if you want best looking result on the monitor
  or on the laptop display).

I think the case when a window is displayed on multiple monitors is pretty
rare or short in duration (the dragging across monitor case when not using
a workspace per monitor). I suppose we could leave this up to the
compositor.




Re: [PATCH 1/2] protocol: Allow versioned message arguments

2013-05-13 Thread John Kåre Alsaker
For the wl_output case I suggest we add a 'done' event which signals that
the compositor is done sending a batch of events for a wl_output and
related extension objects (which versioned message arguments won't handle).
This would be analogous to wl_surface.commit, only coming from the server
side.
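A sketch of how a client could consume such a batch: cache the individual
events in pending state and only apply them when the proposed done event
arrives. The struct and handler names are illustrative only, not an existing
API:

#include <stdint.h>
#include <stdio.h>

/* handlers only touch 'pending'; 'done' commits it to 'current' */
struct output_info {
    int32_t scale;
    int32_t width, height;
};

struct output {
    struct output_info pending;
    struct output_info current;
};

static void
handle_geometry(struct output *o, int32_t width, int32_t height)
{
    o->pending.width = width;
    o->pending.height = height;
}

static void
handle_scale(struct output *o, int32_t scale)
{
    o->pending.scale = scale;
}

static void
handle_done(struct output *o)
{
    o->current = o->pending; /* apply the whole batch at once */
    printf("output now %dx%d at scale %d\n",
           o->current.width, o->current.height, o->current.scale);
}

int
main(void)
{
    struct output o = { { 1, 0, 0 }, { 1, 0, 0 } };

    handle_geometry(&o, 2880, 1800);
    handle_scale(&o, 2);
    handle_done(&o); /* only now does the application act on the new state */
    return 0;
}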


Re: [PATCH 1/2] protocol: Allow versioned message arguments

2013-05-13 Thread John Kåre Alsaker
On Mon, May 13, 2013 at 1:28 PM, Alexander Larsson al...@redhat.com wrote:

 On mån, 2013-05-13 at 13:26 +0200, John Kåre Alsaker wrote:
  For the wl_output case I suggest we add a 'done' event which signals
  that the compositor is done sending a batch of events for an wl_output
  and related extension objects (which versioned message arguments won't
  handle). This would be analogous to wl_surface.commit, only coming
  from the server side.

 As i said in another mail, an even easier approach is to use the gemetry
 event itself as the done event, and just send the additions (if any)
 *before* the geometry event.

That would also work, but is uglier. The geometry event is rather large
too. Since currently events can happen at any time, adding a done event
won't break backwards compatibility.


Re: [PATCH 0/2] Support for high DPI outputs via scaling

2013-05-13 Thread John Kåre Alsaker
On Mon, May 13, 2013 at 1:26 PM, Alexander Larsson al...@redhat.com wrote:

 On mån, 2013-05-13 at 12:40 +0200, John Kåre Alsaker wrote:

  The scaling factor should not have any effect on any coordinate system
  or anything else in the protocol. It should only affect the size in
  pixels of the clients' surfaces. Clients should do any coordinate
  transformations they want themselves.

 I'm not sure if we agree or not here. But, lemme have an example. If you
 have a setup with two monitors using mirrored mode, one with scale == 2.
 Then I want both monitors to show the same thing, one with more detail.
 If I then put the mouse cursor over the bottom right corner of the
 surface i want to get one mouse event, and the coordinates of it should
 be the size of the window in the unscaled coordinates, i.e. the
 width/height of the window in the lower resolution.

The client should get the width and height of the surface in pixels. If the
compositor doesn't display pixels in a 1:1 ratio on the monitor, it should
translate events into pixels in surface coordinates before sending them to
a client. The scaling factor should have no effect on anything (from a
protocol perspective) other than which pixel sizes the client picks.


 Is this what you meant?

  Clients can easily scale larger features like icons, padding and fonts
  and round them to pixel sizes and draw sharp output even with a
  fractional scaling factor. This allows users to pick a size suitable
  for them. The difference between a 1 and 2 scaling factor is too huge.

 Some things can be easily scaled. Like text and photorealistic images.
 Other things can't necessarily be scaled and still look nice. Like line
 art or solid line borders (like e.g. the boxes around a button). Sure,
 you can round everything to nearest int, but that will cause
 unevenness in the rounding (buttons sometimes get a 1 pixel border,
 sometimes 2 depending on the fractional part of the button coords, etc).
 So, I don't think this is ever useful in practice.

Even if things aren't perfectly even, it will still be much better than the
alternative, which is to round up to the next scaling factor and let the
compositor scale the surface down to a suitable size again. This is what
you have to do on Mac OS X to get a reasonable workspace on higher-DPI
displays. Windows get 4 times bigger going from scaling factor 1 to 2; that
isn't nearly granular enough.



  I think the case when a window is displayed on multiple monitors is
  pretty rare or short in duration (the dragging across monitor case
  when not using a workspace per monitor). I suppose we could leave this
  up to the compositor.

 Totally, i don't mean that the priority is useful for that case.
 Normally clients should just pick the scale based on which output the
 surface has the most pixels in. However, the one case where that
 breaks is mirrored/clone mode, where two monitors show the same thing.
 This is where the priority can be useful.

In mirrored/clone mode only a single wl_output would be presented to
clients with a single scale factor, so priorities won't matter in that
case.


Re: [PATCH 0/2] Support for high DPI outputs via scaling

2013-05-13 Thread John Kåre Alsaker
On Mon, May 13, 2013 at 2:19 PM, Alexander Larsson al...@redhat.com wrote:

 On mån, 2013-05-13 at 14:00 +0200, John Kåre Alsaker wrote:

   Clients can easily scale larger features like icons, padding
  and fonts
   and round them to pixel sizes and draw sharp output even
  with a
   fractional scaling factor. This allows users to pick a size
  suitable
   for them. The difference between a 1 and 2 scaling factor is
  too huge.
 
 
  Some things can be easily scaled. Like text and photorealistic
  images.
  Other things can't necessarily be scaled and still look nice.
  Like line
  art or solid line borders (like e.g. the boxes around a
  button). Sure,
  you can round everything to nearest int, but that will cause
  unevenness in the rounding (buttons sometimes get a 1 pixel
  border,
  sometimes 2 depending on the fractional part of the button
  coords, etc).
  So, I don't think this is every useful in practice.
  Even if things aren't perfectly even, it will still be much better
  than the alternative which is round up to the next scaling factor and
  let the compositor scale the surface down to a suitable size again.
  This is what you have to do on Mac OS X to get a reasonable workspace
  on higher DPI displays. Windows gets 4 times bigger from scaling
  factor 1 to 2. That isn't near enough granular.

 I don't think this will work in practice. I know for sure that e.g. Gtk
 is not set up to do any reasonable non-integer scaling. It will just
 scale up all drawing by a fractional factor without any rounding
 anywhere, causing everything to become fuzzy. We will eventually have
 alternative icons for higher resolutions, but these will be at 2x scale,
 not at generic fractional factor (that is not really doable without
 using pure-vector icons with some complex hinting method). Although, I
 guess that in the case of text the scaling will be ok, so for some
 usecases it might be OK.

It will work very well for things designed to be scalable, browsers are an
example. GTK could just fall back to integer scaling.


 So, it's my opinion that supporting fractional scaling is an unnecessary
 burden on compositors for something that is not very useful in practice.
 Thats just my opinion though, and the proposal as such can handle
 fractional scaling fine by just changing the scale factors to fixed
 type.

It is of no burden on compositors at all, only for clients which choose to
implement it.



  In mirrored/clone mode only a single wl_output would be presented to
  clients with a single scale factor, so priorities won't matter in that
  case.

 That is not necessarily so. We might very well want to know that there
 are two displays, with say different RGB subpixel order, etc.

The compositor could unify this information into a single wl_output.

I suggest we send 3 scaling factors to clients: a lower bound below which
factors will be scaled up, the desired scaling factor, and an upper bound
above which factors will be scaled down. Clients should find the scaling
factor between the lower and upper bound that they are able to render at
which is closest to the desired factor. Failing that, they should find the
first factor they are able to render at above the upper bound. When displayed
on multiple monitors a client could try to remain sharp on as many monitors
as possible.
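A minimal sketch of that selection rule for a client that can only render at
integer factors; the three bounds are plain function arguments here since no
protocol for them exists yet:

#include <math.h>
#include <stdio.h>

static int
pick_integer_scale(double lower, double desired, double upper)
{
    int best = 0;
    int s;

    /* prefer an integer factor inside [lower, upper], closest to desired */
    for (s = (int)ceil(lower); s <= (int)floor(upper); s++)
        if (best == 0 || fabs(s - desired) < fabs(best - desired))
            best = s;

    if (best != 0)
        return best;

    /* failing that, the first integer factor above the upper bound */
    return (int)floor(upper) + 1;
}

int
main(void)
{
    printf("%d\n", pick_integer_scale(1.0, 1.1, 1.2)); /* 1 */
    printf("%d\n", pick_integer_scale(1.3, 1.5, 1.8)); /* 2 */
    return 0;
}

With a 1-1.2 range and 1.1 as the desired factor this picks 1 instead of
rounding up to 2, which is the whole point of sending the range.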


Re: [PATCH 0/2] Support for high DPI outputs via scaling

2013-05-08 Thread John Kåre Alsaker
I think we should allow fractional scale factors. Clients which only
support integer scale factors should round the scale factor they get to a
whole number.

We should implement this by adding a scale_factor event on wl_output (which
clients can get at any time).
When clients have rendered their contents at the new scale factor they
should call set_scale_factor on wl_surface (which should only work for
windows/top-level surfaces, not subsurfaces).
set_scale_factor should take two arguments: the scale factor of the output
the client used and the rounded scale factor it used for rendering.
The reason the rounded scale factor is sent is so that when the compositor
decides to resize the window, it can use the actual scale factor of the
window. The scale factor of the output is sent so the compositor knows
whether the client rounded the scale factor and can use a different policy
for resizing the window.
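To illustrate why both values are useful, a compositor-side sketch; nothing
here is weston code, the names are made up:

/* bookkeeping for the proposed set_scale_factor(output_factor, rounded_factor) */
struct surface_scale_info {
    double output_factor_seen; /* factor of the output the client used, e.g. 1.1 */
    int    rendered_factor;    /* rounded factor it rendered at, e.g. 1 or 2 */
};

/* did the client round?  if so the compositor may pick a different
 * policy when resizing the window */
static int
client_rounded(const struct surface_scale_info *s)
{
    return (double)s->rendered_factor != s->output_factor_seen;
}

/* ratio by which the buffer is larger than the output needs; this is what
 * the compositor would scale the buffer down by if it decides to scale */
static double
downscale_ratio(const struct surface_scale_info *s, double current_output_factor)
{
    return (double)s->rendered_factor / current_output_factor;
}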

I don't think we need a scale priority on wl_outputs, clients should pick
the highest scale factor of the outputs it's on.

Bill Spitzak: This can't be done with the scaler API because it refers to
scaling of subsurfaces which are positioned on pixel boundaries. It would
also be prone to rounding errors, resulting in scaling when we don't want
any (which is most cases).


Re: [patches] Add a color management framework to weston

2013-05-02 Thread John Kåre Alsaker
I'd suggest that clients should use subsurfaces if they want multiple
colorspaces in a window. They might have to do that anyway since they may
want a different pixel format.


On Thu, May 2, 2013 at 8:58 AM, Pekka Paalanen ppaala...@gmail.com wrote:

 On Wed, 1 May 2013 17:03:30 -0400
 Kristian Høgsberg hoegsb...@gmail.com wrote:

  On Mon, Apr 22, 2013 at 01:30:28PM +0300, Pekka Paalanen wrote:
   On Fri, 5 Apr 2013 14:23:50 -0500
   Thomas Daede daede...@umn.edu wrote:
  
I am not sure that doing the color conversion in the compositor
is the correct place. Some of it has to be there to support vcgt,
but for more general conversions, it gets complicated quickly.
   
Color correction is most important for artists doing work in
something like the GIMP. Programs like this (as of GIMP 3.0, at
least) generally work in higher bit depths - 16 bit, half float,
or sometimes 32 bit float. It's much better to do conversion in
the higher bit depth, then dither to the final display depth.
Doing this with the compositor involves supporting all of these
texture formats for subsurfaces - also, the toolkit has to
support subsurfaces, because generally the UI will be some lower
bit depth, or not need correction.
   
Converting bit depths in the compositor would also require an
extra buffer allocation and copy in the compositor.
   
RGB images are also not the only thing that needs to be color
corrected - for example, 10bit YUV video streams might want to be
color corrected too. If we were to go as far as converting bit
depths in the compositor, it wouldn't be much to add this.
   
I think that providing color space regions to the client,
relative to the client's buffer, including vcgt settings, would
shove a lot of complexity away to clients that are already used
to dealing with it.
  
   I'm not sure we can do that. The regions can be arbitrarily complex
   and non-linear shapes.
 
  It's not too different from how we handle opaque regions.  We split up
  the surfaces into opaque and transparent parts and render them using
  different shaders already.

 Sorry, I read providing color space regions to the client as the
 server determines the areas of each output overlapping with the
 surface, and communicated that to the clients.

 If the regions are communicated from clients to the server direction
 only, then nevermind my comment.


 Thanks,
 pq


Re: [PATCH] Parse the color management parts of the EDID

2013-05-02 Thread John Kåre Alsaker
EDID parsing should probably be moved out of compositor-drm since other
backends can provide EDIDs too.


On Thu, May 2, 2013 at 10:38 PM, Richard Hughes hughsi...@gmail.com wrote:

 This code was originally written by Soren Sandmann sandm...@redhat.com
 and was
 found here: http://people.gnome.org/~ssp/randr/edid-parse.c

 The code has been in use in gnome-settings-daemon for a couple of years
 now and
 is used to generate an Auto-EDID ICC profile if the screen has not been
 profiled
 with a calibration device.

 This will be used by the CMS plugins in the future, and so it makes sense
 being
 common code in compositor-drm.
 ---
  src/compositor-drm.c | 56
 
  1 file changed, 56 insertions(+)

 diff --git a/src/compositor-drm.c b/src/compositor-drm.c
 index f39096e..e4a1919 100644
 --- a/src/compositor-drm.c
 +++ b/src/compositor-drm.c
 @@ -136,11 +136,22 @@ struct drm_fb {
 void *map;
  };

 +struct drm_color_Yxy {
 +   double Y;
 +   double x;
 +   double y;
 +};
 +
  struct drm_edid {
 char eisa_id[13];
 char monitor_name[13];
 char pnp_id[5];
 char serial_number[13];
 +   struct drm_color_Yxy whitepoint;
 +   struct drm_color_Yxy primary_red;
 +   struct drm_color_Yxy primary_green;
 +   struct drm_color_Yxy primary_blue;
 +   double gamma;
  };

  struct drm_output {
 @@ -1558,6 +1569,31 @@ edid_parse_string(const uint8_t *data, char text[])
  #define EDID_OFFSET_SERIAL 0x0c

  static int
 +edid_get_bit(int in, int bit)
 +{
 +   return (in & (1 << bit)) >> bit;
 +}
 +
 +static int
 +edid_get_bits(int in, int begin, int end)
 +{
 +   int mask = (1 << (end - begin + 1)) - 1;
 +   return (in >> begin) & mask;
 +}
 +
 +static double
 +edid_decode_fraction(int high, int low)
 +{
 +   double result = 0.0;
 +   int i;
 +
 +   high = (high << 2) | low;
 +   for (i = 0; i < 10; ++i)
 +   result += edid_get_bit(high, i) * pow(2, i - 10);
 +   return result;
 +}
 +
 +static int
  edid_parse(struct drm_edid *edid, const uint8_t *data, size_t length)
  {
 int i;
 @@ -1579,6 +1615,26 @@ edid_parse(struct drm_edid *edid, const uint8_t
 *data, size_t length)
 edid->pnp_id[2] = 'A' + (data[EDID_OFFSET_PNPID + 1] & 0x1f) - 1;
 edid->pnp_id[3] = '\0';

 +   /* get native gamma */
 +   if (data[0x17] == 0xff)
 +   edid->gamma = -1.0;
 +   else
 +   edid->gamma = (data[0x17] + 100.0) / 100.0;
 +
 +   /* get primaries and whitepoint */
 +   edid->primary_red.Y = 1.0f;
 +   edid->primary_red.x = edid_decode_fraction(data[0x1b], edid_get_bits(data[0x19], 6, 7));
 +   edid->primary_red.y = edid_decode_fraction(data[0x1c], edid_get_bits(data[0x19], 5, 4));
 +   edid->primary_green.Y = 1.0f;
 +   edid->primary_green.x = edid_decode_fraction(data[0x1d], edid_get_bits(data[0x19], 2, 3));
 +   edid->primary_green.y = edid_decode_fraction(data[0x1e], edid_get_bits(data[0x19], 0, 1));
 +   edid->primary_blue.Y = 1.0f;
 +   edid->primary_blue.x = edid_decode_fraction(data[0x1f], edid_get_bits(data[0x1a], 6, 7));
 +   edid->primary_blue.y = edid_decode_fraction(data[0x20], edid_get_bits(data[0x1a], 4, 5));
 +   edid->whitepoint.Y = 1.0f;
 +   edid->whitepoint.x = edid_decode_fraction(data[0x21], edid_get_bits(data[0x1a], 2, 3));
 +   edid->whitepoint.y = edid_decode_fraction(data[0x22], edid_get_bits(data[0x1a], 0, 1));
 +
 /* maybe there isn't a ASCII serial number descriptor, so use this
 instead */
 serial_number = (uint32_t) data[EDID_OFFSET_SERIAL + 0];
 serial_number += (uint32_t) data[EDID_OFFSET_SERIAL + 1] * 0x100;
 --
 1.8.2.1



Re: [PATCH] Parse the color management parts of the EDID

2013-05-02 Thread John Kåre Alsaker
That will probably work.


On Thu, May 2, 2013 at 11:07 PM, Richard Hughes hughsi...@gmail.com wrote:

 edid.[c|h] is actually what I had initially, want me to do something
 like that? Something like a:

 struct edid_info;
 int edid_parse(struct edid_info *info, const uint8_t *data, size_t length);

 Richard

 On 2 May 2013 21:57, John Kåre Alsaker john.kare.alsa...@gmail.com
 wrote:
  EDID parsing should probably be moved out of compositor-drm since other
  backends can provide EDIDs too.
 
 
  On Thu, May 2, 2013 at 10:38 PM, Richard Hughes hughsi...@gmail.com
 wrote:
 
  This code was originally written by Soren Sandmann sandm...@redhat.com
 
  and was
  found here: http://people.gnome.org/~ssp/randr/edid-parse.c
 
  The code has been in use in gnome-settings-daemon for a couple of years
  now and
  is used to generate an Auto-EDID ICC profile if the screen has not been
  profiled
  with a calibration device.
 
  This will be used by the CMS plugins in the future, and so it makes
 sense
  being
  common code in compositor-drm.
  ---
   src/compositor-drm.c | 56
  
   1 file changed, 56 insertions(+)
 
  diff --git a/src/compositor-drm.c b/src/compositor-drm.c
  index f39096e..e4a1919 100644
  --- a/src/compositor-drm.c
  +++ b/src/compositor-drm.c
  @@ -136,11 +136,22 @@ struct drm_fb {
  void *map;
   };
 
  +struct drm_color_Yxy {
  +   double Y;
  +   double x;
  +   double y;
  +};
  +
   struct drm_edid {
  char eisa_id[13];
  char monitor_name[13];
  char pnp_id[5];
  char serial_number[13];
  +   struct drm_color_Yxy whitepoint;
  +   struct drm_color_Yxy primary_red;
  +   struct drm_color_Yxy primary_green;
  +   struct drm_color_Yxy primary_blue;
  +   double gamma;
   };
 
   struct drm_output {
  @@ -1558,6 +1569,31 @@ edid_parse_string(const uint8_t *data, char
 text[])
   #define EDID_OFFSET_SERIAL 0x0c
 
   static int
  +edid_get_bit(int in, int bit)
  +{
  +   return (in & (1 << bit)) >> bit;
  +}
  +
  +static int
  +edid_get_bits(int in, int begin, int end)
  +{
  +   int mask = (1 << (end - begin + 1)) - 1;
  +   return (in >> begin) & mask;
  +}
  +
  +static double
  +edid_decode_fraction(int high, int low)
  +{
  +   double result = 0.0;
  +   int i;
  +
  +   high = (high << 2) | low;
  +   for (i = 0; i < 10; ++i)
  +   result += edid_get_bit(high, i) * pow(2, i - 10);
  +   return result;
  +}
  +
  +static int
   edid_parse(struct drm_edid *edid, const uint8_t *data, size_t length)
   {
  int i;
  @@ -1579,6 +1615,26 @@ edid_parse(struct drm_edid *edid, const uint8_t
  *data, size_t length)
  edid->pnp_id[2] = 'A' + (data[EDID_OFFSET_PNPID + 1] & 0x1f) - 1;
  edid->pnp_id[3] = '\0';
 
  +   /* get native gamma */
  +   if (data[0x17] == 0xff)
  +   edid->gamma = -1.0;
  +   else
  +   edid->gamma = (data[0x17] + 100.0) / 100.0;
  +
  +   /* get primaries and whitepoint */
  +   edid->primary_red.Y = 1.0f;
  +   edid->primary_red.x = edid_decode_fraction(data[0x1b], edid_get_bits(data[0x19], 6, 7));
  +   edid->primary_red.y = edid_decode_fraction(data[0x1c], edid_get_bits(data[0x19], 5, 4));
  +   edid->primary_green.Y = 1.0f;
  +   edid->primary_green.x = edid_decode_fraction(data[0x1d], edid_get_bits(data[0x19], 2, 3));
  +   edid->primary_green.y = edid_decode_fraction(data[0x1e], edid_get_bits(data[0x19], 0, 1));
  +   edid->primary_blue.Y = 1.0f;
  +   edid->primary_blue.x = edid_decode_fraction(data[0x1f], edid_get_bits(data[0x1a], 6, 7));
  +   edid->primary_blue.y = edid_decode_fraction(data[0x20], edid_get_bits(data[0x1a], 4, 5));
  +   edid->whitepoint.Y = 1.0f;
  +   edid->whitepoint.x = edid_decode_fraction(data[0x21], edid_get_bits(data[0x1a], 2, 3));
  +   edid->whitepoint.y = edid_decode_fraction(data[0x22], edid_get_bits(data[0x1a], 0, 1));
  +
  /* maybe there isn't a ASCII serial number descriptor, so use
 this
  instead */
  serial_number = (uint32_t) data[EDID_OFFSET_SERIAL + 0];
  serial_number += (uint32_t) data[EDID_OFFSET_SERIAL + 1] *
 0x100;
  --
  1.8.2.1
 


Re: Setting the display calibration ramps using weston

2013-04-02 Thread John Kåre Alsaker
My planned approach was to make colord and Oyranos plugins for Weston.
The plugins would provide the CMSes with information about the outputs
and Weston in return gets ICC profiles for them (which contain gamma
curves). Weston needs the full ICC profile in order to do proper color
correction and the CMSes need information about the outputs, so
everyone's happy.

On Tue, Apr 2, 2013 at 11:24 AM, Richard Hughes hughsi...@gmail.com wrote:
 On 2 April 2013 06:45, Graeme Gill grae...@argyllcms.com wrote:
 The display's ICC profile will also be stored in the _ICC_PROFILE atom,
 to make it available to the display client, so that color can be adjusted
 appropriately for that display. Using XRandR it will/should
 also be set in an output property.

 There's not going to be _ICC_PROFILE atoms in weston, nor XRandR
 output properties, so clients will need to query the CMM directly
 (which is a good thing IMO). The grand idea is to use a sub-surface
 with a tagged color profile, which means applications don't have to
 care about CM unless they want to opt out a region and handle
 everything themselves.

 Richard


Re: Setting the display calibration ramps using weston

2013-04-02 Thread John Kåre Alsaker
I'd say we make it a ./configure option which builds a shared library
which talks to colord over D-Bus. It can then be packaged separately
by distros. A weston.ini option should be added to pick the CMS plugin
to load. lcms2 can be added as a dependency for the weston-side code
(it will probably be required to generate 3D LUTs later on anyway). We
probably want a configure option to disable the CMS plugin loader,
which also removes the dependency on lcms2. The CMS plugin loader and
a CMS API don't exist yet, so we'll have to create them. We probably
want to pass EDID data over to the CMS plugin and have it notify weston
about output profile changes in the form of an ICC profile, which lcms2
handles on the weston side.
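Roughly, the plugin interface could look something like this; none of these
names exist in weston, they only sketch the flow (weston hands the plugin
per-output EDID data, the plugin hands back an ICC profile that weston feeds
to lcms2):

#include <stddef.h>
#include <stdint.h>

struct weston_output; /* opaque to the plugin here */

struct cms_plugin {
    /* weston -> plugin: an output appeared/disappeared, with its EDID blob */
    void (*output_created)(struct cms_plugin *cms, struct weston_output *output,
                           const uint8_t *edid, size_t edid_len);
    void (*output_destroyed)(struct cms_plugin *cms, struct weston_output *output);
    void (*destroy)(struct cms_plugin *cms);
};

/* plugin -> weston: hand over a (possibly updated) ICC profile for an output;
 * weston would parse it with lcms2 and program gamma ramps or a 3D LUT */
void weston_cms_set_output_profile(struct weston_output *output,
                                   const void *icc_data, size_t icc_len);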

On Tue, Apr 2, 2013 at 5:40 PM, Richard Hughes hughsi...@gmail.com wrote:
 On 2 April 2013 02:42, Kristian Høgsberg hoegsb...@gmail.com wrote:
 So for something like weston, where we store our configuration in
 weston.ini, I'd expect that we add a config key to set the gamma (is
 this just a number like what xgamma takes? or one for red, green and
 blue?) per output in weston.ini.  So we need to parse that in
 output_section_done() and store in struct drm_configured_output, copy
 the value into struct drm_output in create_output_for_connector() and
 then apply that whenever we call drmModeSetCrtc(), generally.

 I've attached a patch that works for me, I'd appreciate any comments
 or a formal review. If you've got colord and lcms2 installed and you
 uncomment the icc_profile item in weston.ini then the Bluish test
 profile will be applied and LVDS will go a deep shade of blue. lcms2
 is a totally sane dep IMO, it's used by tons of embedded and desktoppy
 things already.

 I'm totally up for abstracting this like John suggested, I just need
 some more guidance on how you'd like me to do this. For instance, are
 the plugins going to be installed by the CMF (e.g. colord installs
 /usr//lib/weston/plugins/colord.so), ./configure compile arguments
 (e.g. --enable-colord) or can I just link to D-Bus and make it a
 runtime dep?

 Richard



Re: [PATCH weston] gl-renderer: Allow compilation when EGL_BUFFER_AGE_EXT is not present

2013-03-20 Thread John Kåre Alsaker
The extension should be added to weston-egl-ext.h instead.
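That header exists so the tree can carry fallback definitions for EGL tokens
that older system headers lack; something along these lines, with the value
taken from the EGL_EXT_buffer_age specification:

/* fallback for EGL stacks whose eglext.h predates the extension */
#ifndef EGL_BUFFER_AGE_EXT
#define EGL_BUFFER_AGE_EXT 0x313D
#endif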

On Wed, Mar 20, 2013 at 6:05 PM, Rob Bradford robert.bradf...@intel.com wrote:
 From: Rob Bradford r...@linux.intel.com

 In 1c169ff support is added for using the EGL_BUFFER_AGE_EXT extension
 including runtime detection of whether the extension is available. This change
 extends that to also check that the extension is known about at compile time.

 This allows weston to continue to compile against EGL stacks that don't yet
 have that extension.
 ---
  src/gl-renderer.c | 7 +--
  1 file changed, 5 insertions(+), 2 deletions(-)

 diff --git a/src/gl-renderer.c b/src/gl-renderer.c
 index ea6631f..11a97f7 100644
 --- a/src/gl-renderer.c
 +++ b/src/gl-renderer.c
 @@ -950,11 +950,13 @@ output_get_buffer_damage(struct weston_output *output,
  pixman_region32_t *buffer_damage)
  {
 struct gl_output_state *go = get_output_state(output);
 -   struct gl_renderer *gr = get_renderer(output->compositor);
 EGLint buffer_age = 0;
 -   EGLBoolean ret;
 int i;

 +#ifdef EGL_BUFFER_AGE_EXT
 +   struct gl_renderer *gr = get_renderer(output->compositor);
 +   EGLBoolean ret;
 +
 if (gr->has_egl_buffer_age) {
 ret = eglQuerySurface(gr->egl_display, go->egl_surface,
   EGL_BUFFER_AGE_EXT, &buffer_age);
 @@ -963,6 +965,7 @@ output_get_buffer_damage(struct weston_output *output,
 gl_renderer_print_egl_error_state();
 }
 }
 +#endif

 if (buffer_age == 0 || buffer_age - 1 > BUFFER_DAMAGE_COUNT)
 pixman_region32_copy(buffer_damage, &output->region);
 --
 1.8.1.2



Re: [RFC 0/4] New wayland-egl functions for swapping

2013-03-19 Thread John Kåre Alsaker
wl_egl_window_take_buffer might also facilitate sharing wl_buffers
between processes.

On Mon, Mar 4, 2013 at 1:51 PM, John Kåre Alsaker
john.kare.alsa...@gmail.com wrote:
 On Mon, Mar 4, 2013 at 11:56 AM, Pekka Paalanen ppaala...@gmail.com wrote:
 On Mon, 4 Mar 2013 11:12:23 +0100
 John Kåre Alsaker john.kare.alsa...@gmail.com wrote:

 On Mon, Mar 4, 2013 at 9:48 AM, Pekka Paalanen ppaala...@gmail.com wrote:
  On Sun,  3 Mar 2013 02:26:10 +0100
  John Kåre Alsaker john.kare.alsa...@gmail.com wrote:
 
  This patchset introduces wl_egl_window_take_buffer and 
  wl_egl_window_commit to the native wayland-egl platform.
 
  wl_egl_window_take_buffer gives a way to get a wl_buffer from a
  wl_egl_window.
  The application is expected to attach this to one or multiple 
  wl_surfaces and the EGL implementation
  will listen to its release event before it reuses the buffer.
 
  This has a couple of advantages over eglSwapBuffers:
   - It's always non-blocking; clients don't have to ensure the swap
  interval is 0
 
  Where do you synchronize the GPU?
  Does wl_egl_window_take_buffer imply a glFinish(), with a real wait for
  GPU to finish rendering?
 
  An advantage of eglSwapBuffers, if I understand right, is that it can
  schedule a swap, and return early. The actual wait for GPU to finish
  rendering happens in the compositor, when it tries to use the buffer
  provided by a client as a texture. Or rather, it might not wait;
  instead just queue more rendering commands, and the GPU's pipeline and
  hw synchronization primitives will synchronice automatically, without
  stalling any process on a CPU.
 
  Can you preserve that with this proposal?
 Yes, that will work exactly the same way.

 
  To me it looks like your patch 2/4 explicitly breaks this by waiting
  for the frame event right after sending a buffer. The client is forced
  to stall, until rendering of *current* frame hits the screen, as
  opposed to the rendering of the previous frame.
 That patch waits for the frame callback after it has committed the
 frame instead of before, which reduces input latency.

 This, and your reply above are contradictory.
 The changes in 2/4 is not required for the proposal.


 The point is, can the client do any useful work between sending a
 frame, and waiting for it to show. Before your changes, eglSwapBuffers
 returned, and will wait on the next call, leaving the chance to do work
 between sending a buffer and waiting for it to be processed. Therefore,
 part of the time that must pass between frames, can be spent on
 computing in the client.

 If I read your patch 2/4 right, after your changes, you first send a
 buffer, commit, and then wait for the frame callback of that commit to
 arrive, before returning from eglSwapBuffers. Hence the client is
 completely synchronized to and blocked by the server, guaranteeing the
 maximal waste of time. If we had only uniprocessor systems, this
 would probably not be too bad, since the two processes couldn't run at
 the same time anyway.

 Also, being blocked inside EGL does not allow any Wayland events, except
 those for the EGL itself, to be processed, so there is no escape.
 When an EGL client renders faster than the framerate it will still
 be blocked just as much inside EGL, only with 1 frame of delay.
 However when it renders slower than the framerate we want to use the
 current behavior where we block on the previous frame.
 This allows the client to render a frame ahead, which in practice
 means that it will usually never block and it will stay busy drawing
 frames.
 We should probably preserve this behavior until we get something better.

 That something better would be to decide to wait for the frame event
 based on the time spent drawing the previous frame.
 Given that with DRM part of this time is spent at the compositor that
 may require some cooperation with the compositor or the compositor's
 EGL side to be effective.
 We also need a way to provide the desired framerate to EGL, which
 sounds like a decent vendor neutral EGL extension.

 Anyway, that only matter for games, for regular clients we want to use
 wl_egl_window_take_buffer or
 eglSwapBuffers (with swap interval at 0) so we never block in EGL and
 can stay in the event loop ready to react.


  I'm not familiar enough with the Mesa code to see everything from the
  patches.
 
  I think this will also be effected by when does the compositor actually
  send frame callback events, on starting repaint as it is now, or when
  the frame has actually hit the screen. At least for me that is still an
  open question.
 I don't think that affects this proposal versus eglSwapBuffers.

 It affects what happens: does sending the frame callbacks wait for the
 compositor's rendering to finish and hit vblank for the flip, or not.
 We should keep the whole stack in mind, otherwise we probably overlook
 some interactions.

 If your tests currently show no bad effects, that may change, if the
 frame callback handing

Re: [PATCH wayland 0/6] Add wl_object based custom dispatchers support

2013-03-17 Thread John Kåre Alsaker
I suggest we apply all of them except the last 4 which add dispatcher
support. That allows us to slowly fix the other projects with the new
API.

On Sat, Mar 16, 2013 at 9:25 PM, Scott Moreau ore...@gmail.com wrote:
 Hi Jason,

 I appreciate your work here. These patches can potentially lower the
 entry barrier to writing other language bindings. Unfortunately, I
 don't feel that this is something suitable for gh next. It adds a
 dependency on a mesa patch, which makes it more difficult to build the
 next stack, without many immediate benefits. I wanted to make it clear
 upfront to avoid wasted effort. I am glad that we were able to get
 many of the bugs worked out before it is reviewed, it is nice to see
 the community working together here.

 - Scott

 On Sat, Mar 16, 2013 at 1:15 PM, Jason Ekstrand ja...@jlekstrand.net wrote:
 Kristian,
 Working with Scott Moreau to get this applied to his next branch
 demonstrated that this series is a bit tricky to apply.  In order to
 do everything correctly, it needs to be applied in three stages:

 First, apply the following 4 patches (in order):
 http://lists.freedesktop.org/archives/wayland-devel/2013-March/007830.html
 http://lists.freedesktop.org/archives/wayland-devel/2013-March/007964.html
 http://lists.freedesktop.org/archives/wayland-devel/2013-March/007844.html
 http://lists.freedesktop.org/archives/wayland-devel/2013-March/007846.html

 The last of the above patches adds a wl_resource_init function for
 initializing the wl_resource structure.  After this, Weston, Mesa, and
 (possibly) xwayland need to be patched to use wl_resource_init.  The
 following patch for weston applies against git master, but will
 probably need to be re-done due to changes made in weston:
 http://lists.freedesktop.org/archives/wayland-devel/2013-March/007845.html

 Finally, once Weston, xwayland, and Mesa have been patched to use
 wl_resource_init, we can apply the last 4 patches in the series (these
 actually add dispatcher support):

 http://lists.freedesktop.org/archives/wayland-devel/2013-March/007850.html
 http://lists.freedesktop.org/archives/wayland-devel/2013-March/007848.html
 http://lists.freedesktop.org/archives/wayland-devel/2013-March/007849.html
 http://lists.freedesktop.org/archives/wayland-devel/2013-March/007847.html

 Hopefully, this will keep the transition from being too rocky.
 --Jason Ekstrand
___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


Re: [RFC] protocol: Introduce logical surface protocol

2013-03-17 Thread John Kåre Alsaker
I believe this is the wrong approach. I'd rather see some generic way
to share any wl_object between clients securely. I don't see a
transfer of ownership of your handle, which means any client can get
access to it.

I'd rather have something like this:
<request name="share_object">
  <arg name="object" type="object"/>
  <arg name="target" type="int"/>
  <arg name="data" type="array"/>
</request>

<event name="receive_object">
  <arg name="object" type="object"/>
  <arg name="sender" type="int"/>
  <arg name="interface" type="string"/>
  <arg name="data" type="array"/>
</event>

One connection uses share_object to send a wl_object along with some
associated data so the receiver knows what to do with it. The target can
be some unique identifier of a connection to the compositor. The other
connection receives the object with receive_object. It gets the unique
identifier of the sender and the interface name, so it knows which type
the object is. It can look at the data argument to find out what the
sender wants it to do with the object. If the client doesn't want to
use the object, it should just delete it.

There are probably some details regarding interfaces and versions that
need to be worked out for this to work.
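
To make the intended flow concrete, here is a rough client-side sketch. It
assumes hypothetical bindings generated from the interface sketched above
(the my_sharing_* names and exact signatures are made up for illustration,
not an existing API):

/* Hypothetical bindings generated from the interface above; the
 * my_sharing_* names and signatures are illustrative only. */
#include <string.h>
#include <wayland-client.h>

struct my_sharing;
extern void my_sharing_share_object(struct my_sharing *sharing, void *object,
				    int32_t target, struct wl_array *data);

/* Sender: hand a wl_buffer to the connection identified by target_id,
 * together with a small data blob telling the receiver what it is for. */
static void
send_buffer(struct my_sharing *sharing, struct wl_buffer *buffer,
	    int32_t target_id)
{
	struct wl_array data;
	const char tag[] = "decoration-buffer";

	wl_array_init(&data);
	memcpy(wl_array_add(&data, sizeof tag), tag, sizeof tag);

	/* share_object(object, target, data) as sketched above. */
	my_sharing_share_object(sharing, buffer, target_id, &data);
	wl_array_release(&data);
}

/* Receiver: the compositor delivers the object with the sender id, the
 * interface name and the associated data. */
static void
handle_receive_object(void *user_data, struct my_sharing *sharing,
		      void *object, int32_t sender,
		      const char *interface, struct wl_array *data)
{
	if (strcmp(interface, "wl_buffer") != 0) {
		/* Not interested: as described above, just delete it. */
		wl_proxy_destroy((struct wl_proxy *) object);
		return;
	}
	/* Otherwise attach the wl_buffer to one of our surfaces, guided
	 * by the contents of data. */
}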
___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


Re: surface buffer cardinality and outputs

2013-03-14 Thread John Kåre Alsaker
I don't see why we'd want to remove properties from the output object.
They are still useful for the very common case where a surface is on a
single output only. For example, if a surface is on multiple outputs
with mismatching subpixel layouts, it could turn off subpixel
rendering.

Having clients render in different color spaces for different outputs
is another reason to use a buffer per output. I have some elaborate
plot to avoid this requirement in my Wayland CMS protocol suggestion,
but that won't work well for other properties.

Clients will want to align drawing to pixels, so a simple multiple
wl_buffer per wl_surface scheme won't work well for subsurfaces. That
would require a wl_surface per output.

We can question how important the case is of dragging a window across
outputs and letting it display properly until it's completely on the
other screen.
- for subpixel rendering, clients can disable it. Subpixel rendering
is becoming less important with high-DPI displays, and turning it off
isn't very intrusive.
- for the DPI differences, clients can render at the highest DPI and
let the compositor scale down on the other outputs it's on. This means
it will always look good on at least one output. You'd see a jump in
quality as the client renders for the new output (see the sketch below).
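
A rough sketch of that workaround with the existing protocol: track which
outputs the surface is currently on via wl_surface.enter/leave and render
for the densest one. The surface_outputs_* helpers and output_dpi() are
assumptions, not a real API; the DPI can be derived from the wl_output
geometry and mode events.

#include <wayland-client.h>

struct app_surface;   /* application bookkeeping, assumed */

extern void surface_outputs_add(struct app_surface *s, struct wl_output *o);
extern void surface_outputs_remove(struct app_surface *s, struct wl_output *o);
extern double output_dpi(struct wl_output *o);
extern double surface_max_dpi(struct app_surface *s);

static void
surface_enter(void *data, struct wl_surface *surface, struct wl_output *output)
{
	surface_outputs_add(data, output);
	/* If output_dpi(output) is higher than the DPI we last rendered for,
	 * resize the buffer and redraw; the compositor scales it down on the
	 * other outputs. */
}

static void
surface_leave(void *data, struct wl_surface *surface, struct wl_output *output)
{
	surface_outputs_remove(data, output);
	/* Optionally drop back to surface_max_dpi(data) once the surface has
	 * fully left the denser output. */
}

static const struct wl_surface_listener surface_listener = {
	surface_enter,
	surface_leave,
};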

I'd say these workarounds work for most applications and are
preferable to the complexity and performance loss of using multiple
buffers. We may want a solution for applications with a single window
spanning multiple outputs, but I'm not sure those exist.

Anyway I do prefer the one workspace per output approach where you
can't see these race issues at all :)

On Thu, Mar 14, 2013 at 4:36 PM, Jason Ekstrand ja...@jlekstrand.net wrote:
 Your proposal is middle ground: a set of buffers, and some
 heuristics applied on their properties to combine those in the
 compositor space. For simplicity, I dare to think one buffer for
 one output is quite clear cut and easier to deal with.

 As a word of clarification, I said "keyed to output properties" in
 an attempt to be general.  The heuristic used to select the exact
 buffer could be any number of things.  One possibility is to have it
 keyed to the exact output (what you are suggesting).  For a multi-gpu
 setup, you might have it keyed to the GPU.

 The real point of my example was the following idea:  A buffer
 represents the back-end data to be drawn to the screen.  Why does that
 back-end data have to mean a single grid of pixels?  Why couldn't it
 be an SVG image? Why can't it be a grid of pixels that is
 smaller/larger than it will actually display on-screen and should be
 scaled? Why can't it be multiple grids of pixels, one for each display
 that may or may not be the same size depending on screen densities?

 This different way of thinking about buffers is just one possible way
 to handle it.  Perhaps it should be handled at the surface level
 instead (although I don't like that as much).

 Working with multiple screens, subpixel formats, densities, etc is a
 huge issue.  We are no longer in the era where every screen is a 96DPI
 CRT and we need to find a way to handle that.  When it comes to
 densities: what if I have a window on my 2880x1800 laptop display and
 drag it onto my 1280x1024 external monitor?  I really don't want that
 window to more-than-fill the external monitor and have huge text.
 Instead, the toolkit should rescale for the lower density and it needs
 to be able to do that while the window is still half on one monitor
 and half on the other.  Like the subpixel issue, I don't think this
 can be properly fixed without multiple images per buffer or multiple
 images per surface.  The question is how best to handle it at the
 protocol level.

 Part of the problem is that we're breaking new ground here.  As far as
 I know, no one currently handles this well (particularly screen
 densities).  Android handles things fairly well but they don't really
 support multiple monitors and I don't think they care about subpixel
 stuff on the second (cloned) monitor.  Apple handles this by cheating
 and making their retina displays basically identical except that they
 have exactly 4 times as many pixels.  Microsoft handles this by
 simply scaling all of the non-compliant apps by 1.8 on denser screens
 (yes, really, 1.8) and doesn't handle multiple-monitors well either.
___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


Re: [RFC 0/4] New wayland-egl functions for swapping

2013-03-04 Thread John Kåre Alsaker
On Mon, Mar 4, 2013 at 9:48 AM, Pekka Paalanen ppaala...@gmail.com wrote:
 On Sun,  3 Mar 2013 02:26:10 +0100
 John Kåre Alsaker john.kare.alsa...@gmail.com wrote:

 This patchset introduces wl_egl_window_take_buffer and wl_egl_window_commit 
 to the native wayland-egl platform.

 wl_egl_window_take_buffer gives a way to get a wl_buffer from a
 wl_egl_window.
 The application is expected to attach this to one or multiple wl_surfaces
 and the EGL implementation
 will listen to its release event before it reuses the buffer.

 This has a couple of advantages over eglSwapBuffers:
  - It's always non-blocking, clients don't have to ensure the swap interval is 0

 Where do you synchronize the GPU?
 Does wl_egl_window_take_buffer imply a glFinish(), with a real wait for
 GPU to finish rendering?

 An advantage of eglSwapBuffers, if I understand right, is that it can
 schedule a swap, and return early. The actual wait for GPU to finish
 rendering happens in the compositor, when it tries to use the buffer
 provided by a client as a texture. Or rather, it might not wait;
 instead just queue more rendering commands, and the GPU's pipeline and
 hw synchronization primitives will synchronize automatically, without
 stalling any process on a CPU.

 Can you preserve that with this proposal?
Yes, that will work exactly the same way.


 To me it looks like your patch 2/4 explicitly breaks this by waiting
 for the frame event right after sending a buffer. The client is forced
 to stall, until rendering of *current* frame hits the screen, as
 opposed to the rendering of the previous frame.
That patch waits for the frame callback after it has committed the
frame instead of before, which reduces input latency.


 I'm not familiar enough with the Mesa code to see everything from the
 patches.

 I think this will also be effected by when does the compositor actually
 send frame callback events, on starting repaint as it is now, or when
 the frame has actually hit the screen. At least for me that is still an
 open question.
I don't think that affects this proposal versus eglSwapBuffers.


  - Clients are in control of the damage, attach and commit requests.
    I find that to be cleaner, and it wouldn't require an extension to
    specify damage.

 With the buffer age extension, that makes sense, especially for nested
 compositors and other 2D-ish apps.

  - You can attach a wl_buffer to multiple wl_surfaces.
For example you could use a single buffer for 4 window decorations 
 subsurfaces and
perhaps even draw decorations with a single draw call.

 How do you intend the example would work? Are you assuming something
 from scaling  clipping extension?
Yeah, you'd need clipping to do that.


  - We can remove the ugly commit mode setting in the subsurfaces proposal.
It's no longer required since EGL won't do commit on subsurfaces.

 Can you explain why it could be removed? What kind of commit mode would
 be the only commit mode then, and how can it a) offer a way for
 guaranteed atomic update of several sub-surfaces, and b) allow a
 sub-surface to be updated without a parent surface commit when wanted?
My suggestion there is to have wl_surface.commit commit all child surfaces too.
Then you can update all surfaces in a group or just sub-surfaces.
I don't see a need for something more elaborate, but nested
sub-surfaces could offer more control.


 wl_egl_window_commit will commit the surface and use EGL's policy regarding 
 waiting on frames.

 If wl_egl_window_commit is going to commit and block until the frame
 has been displayed, I think the function name is misleading as commit
 is usually just a kick here, not a roundtrip. Perhaps a different verb
 would be in place.
I don't think it's more misleading than SwapBuffers. The only thing
it's required to do is call wl_surface.commit
and wait n frames depending on EGL's configuration. We may also drop
this function since it's only intended for games,
which won't do much damage tracking anyway. Games may still use
wl_egl_window_take_buffer for subsurface decorations
followed by eglSwapBuffers which commits all of the surfaces with my
suggested commit scheme.


 wl_egl_window_take_buffer followed by wl_egl_window_commit is equivalent to
 eglSwapBuffers,
 except that you can specify damage without an extension. It's more of a
 natural complement to
 wl_egl_window_take_buffer than a required function.



 Thanks,
 pq
___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


Re: [RFC 0/4] New wayland-egl functions for swapping

2013-03-04 Thread John Kåre Alsaker
On Mon, Mar 4, 2013 at 11:56 AM, Pekka Paalanen ppaala...@gmail.com wrote:
 On Mon, 4 Mar 2013 11:12:23 +0100
 John Kåre Alsaker john.kare.alsa...@gmail.com wrote:

 On Mon, Mar 4, 2013 at 9:48 AM, Pekka Paalanen ppaala...@gmail.com wrote:
  On Sun,  3 Mar 2013 02:26:10 +0100
  John Kåre Alsaker john.kare.alsa...@gmail.com wrote:
 
  This patchset introduces wl_egl_window_take_buffer and 
  wl_egl_window_commit to the native wayland-egl platform.
 
  wl_egl_window_take_buffer gives a way to get a wl_buffer from an 
  wl_egl_window.
  The application is expected to attach this to one or multiple wl_surfaces 
  and the EGL implementation
  will listen to it's release event before it reuses the buffer.
 
  This has a couple of advantages over eglSwapBuffers:
   - It's always non-blocking, clients doesn't have to ensure swap interval 
  is 0
 
  Where do you synchronize the GPU?
  Does wl_egl_window_take_buffer imply a glFinish(), with a real wait for
  GPU to finish rendering?
 
  An advantage of eglSwapBuffers, if I understand right, is that it can
  schedule a swap, and return early. The actual wait for GPU to finish
  rendering happens in the compositor, when it tries to use the buffer
  provided by a client as a texture. Or rather, it might not wait;
  instead just queue more rendering commands, and the GPU's pipeline and
  hw synchronization primitives will synchronice automatically, without
  stalling any process on a CPU.
 
  Can you preserve that with this proposal?
 Yes, that will work exactly the same way.

 
  To me it looks like your patch 2/4 explicitly breaks this by waiting
  for the frame event right after sending a buffer. The client is forced
  to stall, until rendering of *current* frame hits the screen, as
  opposed to the rendering of the previous frame.
 That patch waits for the frame callback after it has committed the
 frame instead of before, which reduces input latency.

 This, and your reply above are contradictory.
The changes in 2/4 are not required for the proposal.


 The point is, can the client do any useful work between sending a
 frame, and waiting for it to show. Before your changes, eglSwapBuffers
 returned, and will wait on the next call, leaving the chance to do work
 between sending a buffer and waiting for it to be processed. Therefore,
 part of the time that must pass between frames, can be spent on
 computing in the client.

 If I read your patch 2/4 right, after your changes, you first send a
 buffer, commit, and then wait for the frame callback of that commit to
 arrive, before returning from eglSwapBuffers. Hence the client is
 completely synchronized to and blocked by the server, guaranteeing the
 maximal waste of time. If we had only uniprocessor systems, this
 would probably not be too bad, since the two processes couldn't run at
 the same time anyway.

 Also, being blocked inside EGL does not allow any Wayland events, except
 those for the EGL itself, to be processed, so there is no escape.
When an EGL client renders faster than the framerate, it will still
be blocked just as much inside EGL, only with one frame of delay.
However, when it renders slower than the framerate, we want to keep the
current behavior where we block on the previous frame.
This allows the client to render a frame ahead, which in practice
means that it will usually never block and it will stay busy drawing
frames.
We should probably preserve this behavior until we get something better.

That something better would be to decide to wait for the frame event
based on the time spent drawing the previous frame.
Given that with DRM part of this time is spent at the compositor that
may require some cooperation with the compositor or the compositor's
EGL side to be effective.
We also need a way to provide the desired framerate to EGL, which
sounds like a decent vendor neutral EGL extension.
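
To make that concrete, a minimal sketch of the decision (the target frame
period and the wait_for_frame_callback()/draw_and_commit_frame() helpers
are assumptions, not existing EGL or Wayland API):

#include <stdbool.h>
#include <stdint.h>
#include <time.h>

static uint64_t
now_ns(void)
{
	struct timespec ts;
	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t) ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

extern void wait_for_frame_callback(void);   /* blocks until wl_callback fires */
extern void draw_and_commit_frame(void);

void
frame_loop_iteration(uint64_t target_period_ns, uint64_t *last_draw_ns)
{
	/* If the previous frame was cheap, we can afford to wait for the
	 * compositor's frame event before starting the next one (lower
	 * latency).  If it was expensive, start drawing immediately and
	 * queue behind the previous frame (keeps the pipeline full). */
	bool wait_first = *last_draw_ns < target_period_ns / 2;

	if (wait_first)
		wait_for_frame_callback();

	uint64_t start = now_ns();
	draw_and_commit_frame();
	*last_draw_ns = now_ns() - start;
}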

Anyway, that only matters for games; for regular clients we want to use
wl_egl_window_take_buffer or
eglSwapBuffers (with swap interval at 0) so we never block in EGL and
can stay in the event loop ready to react.
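
For reference, the non-blocking pattern looks roughly like this (a sketch
using the standard wl_surface.frame callback; error handling and the GL
drawing itself are omitted):

#include <stdbool.h>
#include <EGL/egl.h>
#include <wayland-client.h>
#include <wayland-egl.h>

struct state {
	struct wl_display *display;
	struct wl_surface *surface;
	EGLDisplay egl_display;
	EGLSurface egl_surface;
	bool can_redraw;
};

static void
frame_done(void *data, struct wl_callback *cb, uint32_t time)
{
	struct state *s = data;
	wl_callback_destroy(cb);
	s->can_redraw = true;
}

static const struct wl_callback_listener frame_listener = { frame_done };

static void
redraw(struct state *s)
{
	/* ... GL rendering for this frame goes here ... */
	struct wl_callback *cb = wl_surface_frame(s->surface);
	wl_callback_add_listener(cb, &frame_listener, s);
	s->can_redraw = false;
	eglSwapBuffers(s->egl_display, s->egl_surface);  /* returns immediately */
}

void
run(struct state *s)
{
	eglSwapInterval(s->egl_display, 0);   /* never block inside EGL */
	s->can_redraw = true;
	while (wl_display_dispatch(s->display) != -1)
		if (s->can_redraw)
			redraw(s);
}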


  I'm not familiar enough with the Mesa code to see everything from the
  patches.
 
  I think this will also be affected by when the compositor actually
  sends frame callback events: on starting repaint as it is now, or when
  the frame has actually hit the screen. At least for me that is still an
  open question.
 I don't think that affects this proposal versus eglSwapBuffers.

 It affects what happens: does sending the frame callbacks wait for the
 compositor's rendering to finish and hit vblank for the flip, or not.
 We should keep the whole stack in mind, otherwise we probably overlook
 some interactions.

 If your tests currently show no bad effects, that may change, if the
 frame callback handing in the server is changed. Therefore I would be
 very careful before doing any changes to EGL APIs until the server
 behaviour has been decided.
This has more to do with when to wait for the frame

[PATCH] compositor-drm: Inspect result of gbm_create_device.

2013-03-02 Thread John Kåre Alsaker
---
 src/compositor-drm.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/src/compositor-drm.c b/src/compositor-drm.c
index 3c44f7a..f60fce9 100644
--- a/src/compositor-drm.c
+++ b/src/compositor-drm.c
@@ -1174,6 +1174,9 @@ init_egl(struct drm_compositor *ec)
 {
 	ec->gbm = gbm_create_device(ec->drm.fd);
 
+	if (!ec->gbm)
+		return -1;
+
 	if (gl_renderer_create(&ec->base, ec->gbm, gl_renderer_opaque_attribs,
 			       NULL) < 0) {
 		gbm_device_destroy(ec->gbm);
-- 
1.8.1.4

___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


[RFC 0/4] New wayland-egl functions for swapping

2013-03-02 Thread John Kåre Alsaker
This patchset introduces wl_egl_window_take_buffer and wl_egl_window_commit to 
the native wayland-egl platform.

wl_egl_window_take_buffer gives a way to get a wl_buffer from a wl_egl_window.
The application is expected to attach this to one or multiple wl_surfaces and
the EGL implementation will listen to its release event before it reuses the
buffer.

This has a couple of advantages over eglSwapBuffers:
 - It's always non-blocking; clients don't have to ensure the swap interval is 0.
 - Clients are in control of the damage, attach and commit requests.
   I find that to be cleaner, and it wouldn't require an extension to specify
   damage.
 - You can attach a wl_buffer to multiple wl_surfaces.
   For example you could use a single buffer for 4 window decoration
   subsurfaces and perhaps even draw the decorations with a single draw call.
 - We can remove the ugly commit mode setting in the subsurfaces proposal.
   It's no longer required since EGL won't do commit on subsurfaces.

wl_egl_window_commit will commit the surface and use EGL's policy regarding
waiting on frames.
wl_egl_window_take_buffer followed by wl_egl_window_commit is equivalent to
eglSwapBuffers, except that you can specify damage without an extension. It's
more of a natural complement to wl_egl_window_take_buffer than a required
function.
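
The intended usage is roughly the following (essentially what patch 4/4 does
to weston's simple-egl; wl_egl_window_take_buffer is the function proposed
here, not an existing API, so this assumes the patched wayland-egl.h):

#include <wayland-client.h>
#include <wayland-egl.h>

static void
swap_by_hand(struct wl_egl_window *native, struct wl_surface *surface,
	     int32_t width, int32_t height)
{
	struct wl_buffer *buffer;

	/* ... GL drawing has already been issued for this frame ... */

	buffer = wl_egl_window_take_buffer(native);

	/* The client does the attach/damage/commit itself. */
	wl_surface_attach(surface, buffer, 0, 0);
	wl_surface_damage(surface, 0, 0, width, height);
	wl_surface_commit(surface);
}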

-- 
1.8.1.4

___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


[RFC wayland 1/4] wayland-egl: Add functions to replace eglSwapBuffers.

2013-03-02 Thread John Kåre Alsaker
---
 src/wayland-egl.h | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/src/wayland-egl.h b/src/wayland-egl.h
index 8255151..49bd9f7 100644
--- a/src/wayland-egl.h
+++ b/src/wayland-egl.h
@@ -30,7 +30,7 @@ extern "C" {
 
 #include <wayland-client.h>
 
-#define WL_EGL_PLATFORM 1
+#define WL_EGL_PLATFORM 2
 
 struct wl_egl_window;
 
@@ -50,6 +50,12 @@ void
 wl_egl_window_get_attached_size(struct wl_egl_window *egl_window,
int *width, int *height);
 
+struct wl_buffer *
+wl_egl_window_take_buffer(struct wl_egl_window *egl_window);
+
+int
+wl_egl_window_commit(struct wl_egl_window *egl_window);
+
 #ifdef  __cplusplus
 }
 #endif
-- 
1.8.1.4

___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


[RFC mesa 2/4] wayland: Wait for frame callback after submitting frame

2013-03-02 Thread John Kåre Alsaker
---
 src/egl/drivers/dri2/platform_wayland.c | 26 ++
 1 file changed, 14 insertions(+), 12 deletions(-)

diff --git a/src/egl/drivers/dri2/platform_wayland.c b/src/egl/drivers/dri2/platform_wayland.c
index 6e702ab..56f5d3c 100644
--- a/src/egl/drivers/dri2/platform_wayland.c
+++ b/src/egl/drivers/dri2/platform_wayland.c
@@ -447,18 +447,7 @@ dri2_swap_buffers(_EGLDriver *drv, _EGLDisplay *disp, _EGLSurface *draw)
    struct dri2_egl_display *dri2_dpy = dri2_egl_display(disp);
    struct dri2_egl_surface *dri2_surf = dri2_egl_surface(draw);
    __DRIbuffer buffer;
-   int i, ret = 0;
-
-   while (dri2_surf->frame_callback && ret != -1)
-      ret = wl_display_dispatch_queue(dri2_dpy->wl_dpy, dri2_dpy->wl_queue);
-   if (ret < 0)
-      return EGL_FALSE;
-
-   dri2_surf->frame_callback = wl_surface_frame(dri2_surf->wl_win->surface);
-   wl_callback_add_listener(dri2_surf->frame_callback,
-                            &frame_listener, dri2_surf);
-   wl_proxy_set_queue((struct wl_proxy *) dri2_surf->frame_callback,
-                      dri2_dpy->wl_queue);
+   int i, ret;
 
    for (i = 0; i < ARRAY_SIZE(dri2_surf->color_buffers); i++)
       if (dri2_surf->color_buffers[i].age > 0)
@@ -502,11 +491,24 @@ dri2_swap_buffers(_EGLDriver *drv, _EGLDisplay *disp, _EGLSurface *draw)
    wl_surface_damage(dri2_surf->wl_win->surface, 0, 0,
                      dri2_surf->base.Width, dri2_surf->base.Height);
 
+   dri2_surf->frame_callback = wl_surface_frame(dri2_surf->wl_win->surface);
+   wl_callback_add_listener(dri2_surf->frame_callback,
+                            &frame_listener, dri2_surf);
+   wl_proxy_set_queue((struct wl_proxy *) dri2_surf->frame_callback,
+                      dri2_dpy->wl_queue);
+
    wl_surface_commit(dri2_surf->wl_win->surface);
 
    (*dri2_dpy->flush->flush)(dri2_surf->dri_drawable);
    (*dri2_dpy->flush->invalidate)(dri2_surf->dri_drawable);
 
+   ret = 0;
+
+   while (dri2_surf->frame_callback && ret != -1)
+      ret = wl_display_dispatch_queue(dri2_dpy->wl_dpy, dri2_dpy->wl_queue);
+   if (ret < 0)
+      return EGL_FALSE;
+
    return EGL_TRUE;
 }
 
-- 
1.8.1.4

___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


[RFC mesa 3/4] wayland: Add new take_buffer and commit for wayland-egl

2013-03-02 Thread John Kåre Alsaker
---
 src/egl/drivers/dri2/platform_wayland.c| 106 ++---
 src/egl/wayland/wayland-egl/wayland-egl-priv.h |   2 +
 src/egl/wayland/wayland-egl/wayland-egl.c  |  22 +
 3 files changed, 102 insertions(+), 28 deletions(-)

diff --git a/src/egl/drivers/dri2/platform_wayland.c 
b/src/egl/drivers/dri2/platform_wayland.c
index 56f5d3c..8797491 100644
--- a/src/egl/drivers/dri2/platform_wayland.c
+++ b/src/egl/drivers/dri2/platform_wayland.c
@@ -98,6 +98,12 @@ static struct wl_buffer_listener wl_buffer_listener = {
wl_buffer_release
 };
 
+static struct wl_buffer *
+take_buffer(struct dri2_egl_surface *dri2_surf);
+
+static EGLBoolean
+commit_surface(struct dri2_egl_display *, struct dri2_egl_surface *dri2_surf);
+
 static void
 resize_callback(struct wl_egl_window *wl_win, void *data)
 {
@@ -108,6 +114,22 @@ resize_callback(struct wl_egl_window *wl_win, void *data)
(*dri2_dpy-flush-invalidate)(dri2_surf-dri_drawable);
 }
 
+static struct wl_buffer *
+take_buffer_callback(struct wl_egl_window *wl_win, void *data)
+{
+   return take_buffer(data);
+}
+
+static int
+commit_callback(struct wl_egl_window *wl_win, void *data)
+{
+   struct dri2_egl_surface *dri2_surf = data;
+   struct dri2_egl_display *dri2_dpy =
+  dri2_egl_display(dri2_surf-base.Resource.Display);
+
+   return commit_surface(dri2_dpy, dri2_surf) ? 0 : -1;
+}
+
 /**
  * Called via eglCreateWindowSurface(), drv-API.CreateWindowSurface().
  */
@@ -143,6 +165,8 @@ dri2_create_surface(_EGLDriver *drv, _EGLDisplay *disp, 
EGLint type,
 
   dri2_surf-wl_win-private = dri2_surf;
   dri2_surf-wl_win-resize_callback = resize_callback;
+  dri2_surf-wl_win-take_buffer_callback = take_buffer_callback;
+  dri2_surf-wl_win-commit_callback = commit_callback;
 
   dri2_surf-base.Width =  -1;
   dri2_surf-base.Height = -1;
@@ -221,6 +245,8 @@ dri2_destroy_surface(_EGLDriver *drv, _EGLDisplay *disp, 
_EGLSurface *surf)
if (dri2_surf-base.Type == EGL_WINDOW_BIT) {
   dri2_surf-wl_win-private = NULL;
   dri2_surf-wl_win-resize_callback = NULL;
+  dri2_surf-wl_win-take_buffer_callback = NULL;
+  dri2_surf-wl_win-commit_callback = NULL;
}
 
free(surf);
@@ -438,16 +464,13 @@ static const struct wl_callback_listener frame_listener = 
{
wayland_frame_callback
 };
 
-/**
- * Called via eglSwapBuffers(), drv-API.SwapBuffers().
- */
-static EGLBoolean
-dri2_swap_buffers(_EGLDriver *drv, _EGLDisplay *disp, _EGLSurface *draw)
+static struct wl_buffer *
+take_buffer(struct dri2_egl_surface *dri2_surf)
 {
-   struct dri2_egl_display *dri2_dpy = dri2_egl_display(disp);
-   struct dri2_egl_surface *dri2_surf = dri2_egl_surface(draw);
__DRIbuffer buffer;
-   int i, ret;
+   struct dri2_egl_display *dri2_dpy =
+  dri2_egl_display(dri2_surf-base.Resource.Display);
+   int i;
 
for (i = 0; i  ARRAY_SIZE(dri2_surf-color_buffers); i++)
   if (dri2_surf-color_buffers[i].age  0)
@@ -456,8 +479,7 @@ dri2_swap_buffers(_EGLDriver *drv, _EGLDisplay *disp, 
_EGLSurface *draw)
/* Make sure we have a back buffer in case we're swapping without ever
 * rendering. */
if (get_back_bo(dri2_surf, buffer)  0) {
-  _eglError(EGL_BAD_ALLOC, dri2_swap_buffers);
-  return EGL_FALSE;
+  return NULL;
}
 
dri2_surf-back-age = 1;
@@ -465,31 +487,30 @@ dri2_swap_buffers(_EGLDriver *drv, _EGLDisplay *disp, 
_EGLSurface *draw)
dri2_surf-back = NULL;
 
if (dri2_surf-current-wl_buffer == NULL) {
-  dri2_surf-current-wl_buffer =
+  struct wl_buffer *result =
  wl_drm_create_buffer(dri2_dpy-wl_drm,
   dri2_surf-current-dri_buffer-name,
   dri2_surf-base.Width,
   dri2_surf-base.Height,
   dri2_surf-current-dri_buffer-pitch,
   dri2_surf-format);
-  wl_proxy_set_queue((struct wl_proxy *) dri2_surf-current-wl_buffer,
- dri2_dpy-wl_queue);
-  wl_buffer_add_listener(dri2_surf-current-wl_buffer,
- wl_buffer_listener, dri2_surf);
+  /* TODO: Handle the case where dri2_surf-current-wl_buffer is NULL */
+  wl_proxy_set_queue((struct wl_proxy *) result, dri2_dpy-wl_queue);
+  wl_buffer_add_listener(result, wl_buffer_listener, dri2_surf);
+
+  dri2_surf-current-wl_buffer = result;
}
 
-   wl_surface_attach(dri2_surf-wl_win-surface,
- dri2_surf-current-wl_buffer,
- dri2_surf-dx, dri2_surf-dy);
+   (*dri2_dpy-flush-flush)(dri2_surf-dri_drawable);
+   (*dri2_dpy-flush-invalidate)(dri2_surf-dri_drawable);
 
-   dri2_surf-wl_win-attached_width  = dri2_surf-base.Width;
-   dri2_surf-wl_win-attached_height = dri2_surf-base.Height;
-   /* reset resize growing parameters */
-   dri2_surf-dx = 0;
-   dri2_surf-dy = 0;
+   return dri2_surf-current-wl_buffer;
+}
 
-   wl_surface_damage(dri2_surf-wl_win-surface, 

[RFC weston 4/4] simple-egl: Use wl_egl_window_take_buffer

2013-03-02 Thread John Kåre Alsaker
---
 clients/simple-egl.c | 10 +-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/clients/simple-egl.c b/clients/simple-egl.c
index 26ebe5c..8138a02 100644
--- a/clients/simple-egl.c
+++ b/clients/simple-egl.c
@@ -365,6 +365,7 @@ redraw(void *data, struct wl_callback *callback, uint32_t time)
 	static const int32_t speed_div = 5;
 	static uint32_t start_time = 0;
 	struct wl_region *region;
+	struct wl_buffer *buffer;
 
 	assert(window->callback == callback);
 	window->callback = NULL;
@@ -416,7 +417,14 @@ redraw(void *data, struct wl_callback *callback, uint32_t time)
 	window->callback = wl_surface_frame(window->surface);
 	wl_callback_add_listener(window->callback, &frame_listener, window);
 
-	eglSwapBuffers(window->display->egl.dpy, window->egl_surface);
+	buffer = wl_egl_window_take_buffer(window->native);
+
+	wl_surface_attach(window->surface, buffer, 0, 0);
+
+	wl_surface_damage(window->surface, 0, 0,
+			  window->geometry.width, window->geometry.height);
+
+	wl_surface_commit(window->surface);
 }
 
 static const struct wl_callback_listener frame_listener = {
-- 
1.8.1.4

___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


[PATCH weston 3/8 fixed] gl-renderer: Add optional support for desktop OpenGL.

2013-03-02 Thread John Kåre Alsaker
This adds support for desktop OpenGL, which can be enabled with
./configure --enable-opengl.

Most of the differences in API between OpenGL and OpenGL ES are hidden by
the new gl_renderer fields.

It also accesses GLES2 extensions by including GLES2/gl2platform.h directly.
---
 configure.ac | 11 -
 src/compositor-rpi.c |  2 +-
 src/compositor.h |  1 +
 src/gl-internal.h|  6 ++---
 src/gl-renderer.c| 65 +++-
 src/gl-renderer.h| 19 +++
 src/gl-shaders.c |  5 ++--
 7 files changed, 81 insertions(+), 28 deletions(-)

diff --git a/configure.ac b/configure.ac
index 682e7a3..20faaa3 100644
--- a/configure.ac
+++ b/configure.ac
@@ -48,9 +48,18 @@ COMPOSITOR_MODULES=wayland-server xkbcommon pixman-1
 AC_ARG_ENABLE(egl, [  --disable-egl],,
   enable_egl=yes)
 AM_CONDITIONAL(ENABLE_EGL, test x$enable_egl = xyes)
+AC_ARG_ENABLE(opengl, [  --enable-opengl],,
+ enable_opengl=no)
+AM_CONDITIONAL(ENABLE_OPENGL, test x$enable_opengl = xyes)
 if test x$enable_egl = xyes; then
AC_DEFINE([ENABLE_EGL], [1], [Build Weston with EGL support])
-   COMPOSITOR_MODULES=$COMPOSITOR_MODULES egl = 7.10 glesv2
+   COMPOSITOR_MODULES=$COMPOSITOR_MODULES egl = 7.10
+   if test x$enable_opengl = xyes; then
+   AC_DEFINE([BUILD_DESKTOP_OPENGL], [1], [Build using desktop 
OpenGL])
+   COMPOSITOR_MODULES=$COMPOSITOR_MODULES gl
+   else
+   COMPOSITOR_MODULES=$COMPOSITOR_MODULES glesv2
+   fi
 fi
 
 PKG_CHECK_MODULES(COMPOSITOR, [$COMPOSITOR_MODULES])
diff --git a/src/compositor-rpi.c b/src/compositor-rpi.c
index d3767b9..1de85ec 100644
--- a/src/compositor-rpi.c
+++ b/src/compositor-rpi.c
@@ -1436,7 +1436,7 @@ rpi_compositor_create(struct wl_display *display, int 
*argc, char *argv[],
EGL_GREEN_SIZE, 1,
EGL_BLUE_SIZE, 1,
EGL_ALPHA_SIZE, 0,
-   EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
+   EGL_RENDERABLE_TYPE, GL_RENDERER_EGL_OPENGL_BIT,
EGL_NONE
};
 
diff --git a/src/compositor.h b/src/compositor.h
index 00d3b22..d0aca32 100644
--- a/src/compositor.h
+++ b/src/compositor.h
@@ -24,6 +24,7 @@
 #ifndef _WAYLAND_SYSTEM_COMPOSITOR_H_
 #define _WAYLAND_SYSTEM_COMPOSITOR_H_
 
+#include config.h
 #include pixman.h
 #include xkbcommon/xkbcommon.h
 #include wayland-server.h
diff --git a/src/gl-internal.h b/src/gl-internal.h
index 2f61299..f27b221 100644
--- a/src/gl-internal.h
+++ b/src/gl-internal.h
@@ -24,9 +24,6 @@
 #ifndef _GL_INTERNAL_H_
 #define _GL_INTERNAL_H_
 
-#include GLES2/gl2.h
-#include GLES2/gl2ext.h
-
 #include stdlib.h
 #include string.h
 #include ctype.h
@@ -112,6 +109,9 @@ struct gl_renderer {
int32_t width, height;
} border;
 
+   GLenum bgra_internal_format, bgra_format;
+   GLenum short_type;
+
PFNGLEGLIMAGETARGETTEXTURE2DOESPROC image_target_texture_2d;
PFNEGLCREATEIMAGEKHRPROC create_image;
PFNEGLDESTROYIMAGEKHRPROC destroy_image;
diff --git a/src/gl-renderer.c b/src/gl-renderer.c
index 6e0bfa5..a4fa5da 100644
--- a/src/gl-renderer.c
+++ b/src/gl-renderer.c
@@ -893,11 +893,12 @@ gl_renderer_read_pixels(struct weston_output *output,
   uint32_t x, uint32_t y,
   uint32_t width, uint32_t height)
 {
+   struct gl_renderer *gr = get_renderer(output-compositor);
GLenum gl_format;
 
switch (format) {
case PIXMAN_a8r8g8b8:
-   gl_format = GL_BGRA_EXT;
+   gl_format = gr-bgra_format;
break;
case PIXMAN_a8b8g8r8:
gl_format = GL_RGBA;
@@ -949,9 +950,9 @@ gl_renderer_flush_damage(struct weston_surface *surface)
glBindTexture(GL_TEXTURE_2D, gs-textures[0]);
 
if (!gr-has_unpack_subimage) {
-   glTexImage2D(GL_TEXTURE_2D, 0, GL_BGRA_EXT,
+   glTexImage2D(GL_TEXTURE_2D, 0, gr-bgra_internal_format,
 gs-pitch, buffer-height, 0,
-GL_BGRA_EXT, GL_UNSIGNED_BYTE,
+gr-bgra_format, GL_UNSIGNED_BYTE,
 wl_shm_buffer_get_data(buffer));
 
goto done;
@@ -971,7 +972,7 @@ gl_renderer_flush_damage(struct weston_surface *surface)
glPixelStorei(GL_UNPACK_SKIP_ROWS, r.y1);
glTexSubImage2D(GL_TEXTURE_2D, 0, r.x1, r.y1,
r.x2 - r.x1, r.y2 - r.y1,
-   GL_BGRA_EXT, GL_UNSIGNED_BYTE, data);
+   gr-bgra_format, GL_UNSIGNED_BYTE, data);
}
 #endif
 
@@ -1030,9 +1031,9 @@ gl_renderer_attach(struct weston_surface *es, struct 
wl_buffer *buffer)
 
ensure_textures(gs, 1);
glBindTexture(GL_TEXTURE_2D, gs-textures[0]);
-   glTexImage2D(GL_TEXTURE_2D, 0, 

[PATCH weston 1/8] gl-renderer: Move shader functions into their own file.

2013-02-21 Thread John Kåre Alsaker
This moves all shader functions into their own file in preparation for more
flexible shader generation. It adds gl-internal.h for shared definitions
between the files.
---
 src/Makefile.am   |   2 +
 src/gl-internal.h | 144 
 src/gl-renderer.c | 382 ++
 src/gl-shaders.c  | 284 
 4 files changed, 443 insertions(+), 369 deletions(-)
 create mode 100644 src/gl-internal.h
 create mode 100644 src/gl-shaders.c

diff --git a/src/Makefile.am b/src/Makefile.am
index 7da2c01..978a21d 100644
--- a/src/Makefile.am
+++ b/src/Makefile.am
@@ -46,6 +46,8 @@ weston_SOURCES =  \
 
 if ENABLE_EGL
 weston_SOURCES +=  \
+   gl-shaders.c\
+   gl-internal.h   \
gl-renderer.c
 endif
 
diff --git a/src/gl-internal.h b/src/gl-internal.h
new file mode 100644
index 000..2e55313
--- /dev/null
+++ b/src/gl-internal.h
@@ -0,0 +1,144 @@
+/*
+ * Copyright © 2012 Intel Corporation
+ * Copyright © 2012 John Kåre Alsaker
+ *
+ * Permission to use, copy, modify, distribute, and sell this software and
+ * its documentation for any purpose is hereby granted without fee, provided
+ * that the above copyright notice appear in all copies and that both that
+ * copyright notice and this permission notice appear in supporting
+ * documentation, and that the name of the copyright holders not be used in
+ * advertising or publicity pertaining to distribution of the software
+ * without specific, written prior permission.  The copyright holders make
+ * no representations about the suitability of this software for any
+ * purpose.  It is provided "as is" without express or implied warranty.
+ *
+ * THE COPYRIGHT HOLDERS DISCLAIM ALL WARRANTIES WITH REGARD TO THIS
+ * SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
+ * FITNESS, IN NO EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY
+ * SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER
+ * RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF
+ * CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
+ * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+#ifndef _GL_INTERNAL_H_
+#define _GL_INTERNAL_H_
+
+#include <GLES2/gl2.h>
+#include <GLES2/gl2ext.h>
+
+#include <stdlib.h>
+#include <string.h>
+#include <ctype.h>
+#include <float.h>
+#include <assert.h>
+#include <linux/input.h>
+
+#include "gl-renderer.h"
+
+#include <EGL/eglext.h>
+#include "weston-egl-ext.h"
+
+struct gl_shader {
+   GLuint program;
+   GLuint vertex_shader, fragment_shader;
+   GLint proj_uniform;
+   GLint tex_uniforms[3];
+   GLint alpha_uniform;
+   GLint color_uniform;
+};
+
+struct gl_output_state {
+   EGLSurface egl_surface;
+   int current_buffer;
+   pixman_region32_t buffer_damage[2];
+};
+
+struct gl_surface_state {
+   GLfloat color[4];
+   struct gl_shader *shader;
+
+   GLuint textures[3];
+   int num_textures;
+   pixman_region32_t texture_damage;
+
+   EGLImageKHR images[3];
+   GLenum target;
+   int num_images;
+
+   struct weston_buffer_reference buffer_ref;
+   int pitch; /* in pixels */
+};
+
+struct gl_renderer {
+   struct weston_renderer base;
+   int fragment_shader_debug;
+
+   EGLDisplay egl_display;
+   EGLContext egl_context;
+   EGLConfig egl_config;
+
+   struct {
+   int32_t top, bottom, left, right;
+   GLuint texture;
+   int32_t width, height;
+   } border;
+
+   PFNGLEGLIMAGETARGETTEXTURE2DOESPROC image_target_texture_2d;
+   PFNEGLCREATEIMAGEKHRPROC create_image;
+   PFNEGLDESTROYIMAGEKHRPROC destroy_image;
+
+   int has_unpack_subimage;
+
+   PFNEGLBINDWAYLANDDISPLAYWL bind_display;
+   PFNEGLUNBINDWAYLANDDISPLAYWL unbind_display;
+   PFNEGLQUERYWAYLANDBUFFERWL query_buffer;
+   int has_bind_display;
+
+   int has_egl_image_external;
+
+   struct gl_shader texture_shader_rgba;
+   struct gl_shader texture_shader_rgbx;
+   struct gl_shader texture_shader_egl_external;
+   struct gl_shader texture_shader_y_uv;
+   struct gl_shader texture_shader_y_u_v;
+   struct gl_shader texture_shader_y_xuxv;
+   struct gl_shader invert_color_shader;
+   struct gl_shader solid_shader;
+   struct gl_shader *current_shader;
+};
+
+static inline struct gl_output_state *
+get_output_state(struct weston_output *output)
+{
+   return (struct gl_output_state *)output->renderer_state;
+}
+
+static inline struct gl_surface_state *
+get_surface_state(struct weston_surface *surface)
+{
+   return (struct gl_surface_state *)surface->renderer_state;
+}
+
+static inline struct gl_renderer *
+get_renderer(struct weston_compositor *ec)
+{
+   return (struct gl_renderer *)ec->renderer;
+}
+
+int
+gl_init_shaders

[PATCH weston 2/8] gl-renderer: Add flexible shader generation.

2013-02-21 Thread John Kåre Alsaker
This adds a more flexible way of generating shaders. It generates all valid
combinations of different input, conversion and output pipelines, which
can easily be extended with more if desired.
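
For illustration, one way such a permutation table could be indexed, using
the enums this patch introduces (it assumes the gl-internal.h from this
series; the actual layout and lookup in gl-shaders.c may differ):

/* Sketch only: a flat array holding one gl_shader per permutation. */
#include <stddef.h>
#include "gl-internal.h"

static size_t
shader_index(enum gl_input_attribute input,
	     enum gl_output_attribute output,
	     enum gl_conversion_attribute conversion)
{
	/* shader_count == INPUT_COUNT * OUTPUT_COUNT * CONVERSION_COUNT */
	return (size_t) input +
	       INPUT_COUNT * ((size_t) output +
			      OUTPUT_COUNT * (size_t) conversion);
}

struct gl_shader *
gl_select_shader_sketch(struct gl_renderer *gr,
			enum gl_input_attribute input,
			enum gl_output_attribute output)
{
	/* With only CONVERSION_NONE so far, the conversion term is zero. */
	return gr->shaders[shader_index(input, output, CONVERSION_NONE)];
}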
---
 src/gl-internal.h |  65 +++--
 src/gl-renderer.c |  71 +++---
 src/gl-shaders.c  | 700 +++---
 3 files changed, 594 insertions(+), 242 deletions(-)

diff --git a/src/gl-internal.h b/src/gl-internal.h
index 2e55313..2f61299 100644
--- a/src/gl-internal.h
+++ b/src/gl-internal.h
@@ -39,13 +39,41 @@
 #include EGL/eglext.h
 #include weston-egl-ext.h
 
+#define MAX_PLANES 3
+
+enum gl_shader_attribute {
+   ATTRIBUTE_INPUT,
+   ATTRIBUTE_OUTPUT,
+   ATTRIBUTE_CONVERSION,
+   ATTRIBUTE_COUNT
+};
+
+enum gl_conversion_attribute {
+   CONVERSION_NONE,
+   CONVERSION_COUNT
+};
+
+enum gl_output_attribute {
+   OUTPUT_BLEND,
+   OUTPUT_COUNT
+};
+
+enum gl_input_attribute {
+   INPUT_RGBX,
+   INPUT_RGBA,
+   INPUT_EGL_EXTERNAL,
+   INPUT_Y_UV,
+   INPUT_Y_U_V,
+   INPUT_Y_XUXV,
+   INPUT_SOLID,
+   INPUT_COUNT
+};
+
 struct gl_shader {
GLuint program;
-   GLuint vertex_shader, fragment_shader;
-   GLint proj_uniform;
-   GLint tex_uniforms[3];
-   GLint alpha_uniform;
+   GLint projection_uniform;
GLint color_uniform;
+   GLint alpha_uniform;
 };
 
 struct gl_output_state {
@@ -56,13 +84,13 @@ struct gl_output_state {
 
 struct gl_surface_state {
GLfloat color[4];
-   struct gl_shader *shader;
+   enum gl_input_attribute input;
 
-   GLuint textures[3];
+   GLuint textures[MAX_PLANES];
int num_textures;
pixman_region32_t texture_damage;
 
-   EGLImageKHR images[3];
+   EGLImageKHR images[MAX_PLANES];
GLenum target;
int num_images;
 
@@ -97,15 +125,11 @@ struct gl_renderer {
 
int has_egl_image_external;
 
-   struct gl_shader texture_shader_rgba;
-   struct gl_shader texture_shader_rgbx;
-   struct gl_shader texture_shader_egl_external;
-   struct gl_shader texture_shader_y_uv;
-   struct gl_shader texture_shader_y_u_v;
-   struct gl_shader texture_shader_y_xuxv;
-   struct gl_shader invert_color_shader;
-   struct gl_shader solid_shader;
+   struct gl_shader *solid_shader;
struct gl_shader *current_shader;
+
+   struct gl_shader **shaders;
+   size_t shader_count;
 };
 
 static inline struct gl_output_state *
@@ -133,11 +157,20 @@ void
 gl_destroy_shaders(struct gl_renderer *gr);
 
 void
+gl_shader_set_output(struct gl_shader *shader,
+struct weston_output *output);
+
+void
 gl_use_shader(struct gl_renderer *gr,
 struct gl_shader *shader);
 
+struct gl_shader *
+gl_select_shader(struct gl_renderer *gr,
+   enum gl_input_attribute input,
+   enum gl_output_attribute output);
+
 void
-gl_shader_uniforms(struct gl_shader *shader,
+gl_shader_setup(struct gl_shader *shader,
   struct weston_surface *surface,
   struct weston_output *output);
 
diff --git a/src/gl-renderer.c b/src/gl-renderer.c
index 495d5db..6e0bfa5 100644
--- a/src/gl-renderer.c
+++ b/src/gl-renderer.c
@@ -533,8 +533,8 @@ triangle_fan_debug(struct weston_surface *surface, int 
first, int count)
*index++ = first + i;
}
 
-   glUseProgram(gr-solid_shader.program);
-   glUniform4fv(gr-solid_shader.color_uniform, 1,
+   glUseProgram(gr-solid_shader-program);
+   glUniform4fv(gr-solid_shader-color_uniform, 1,
color[color_idx++ % ARRAY_LENGTH(color)]);
glDrawElements(GL_LINES, nelems, GL_UNSIGNED_SHORT, buffer);
glUseProgram(gr-current_shader-program);
@@ -616,6 +616,7 @@ draw_surface(struct weston_surface *es, struct 
weston_output *output,
struct gl_renderer *gr = get_renderer(ec);
struct gl_surface_state *gs = get_surface_state(es);
struct gl_output_state *go = get_output_state(output);
+   struct gl_shader *shader;
/* repaint bounding region in global coordinates: */
pixman_region32_t repaint;
/* non-opaque region in surface coordinates: */
@@ -638,12 +639,14 @@ draw_surface(struct weston_surface *es, struct 
weston_output *output,
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
 
if (ec-fan_debug) {
-   gl_use_shader(gr, gr-solid_shader);
-   gl_shader_uniforms(gr-solid_shader, es, output);
+   gl_use_shader(gr, gr-solid_shader);
+   gl_shader_setup(gr-solid_shader, es, output);
}
 
-   gl_use_shader(gr, gs-shader);
-   gl_shader_uniforms(gs-shader, es, output);
+   shader = gl_select_shader(gr, gs-input, OUTPUT_BLEND);
+
+   gl_use_shader(gr, shader);
+   gl_shader_setup(shader, es, output);
 
if (es-transform.enabled || 

[PATCH weston 3/8] gl-renderer: Add optional support for desktop OpenGL.

2013-02-21 Thread John Kåre Alsaker
This adds support for desktop OpenGL, which can be enabled with
./configure --enable-opengl.

Most of the differences in API between OpenGL and OpenGL ES are hidden by
the new gl_renderer fields.

It also accesses GLES2 extensions by including GLES2/gl2platform.h directly.
---
 configure.ac | 11 -
 src/compositor-rpi.c |  2 +-
 src/compositor.h |  1 +
 src/gl-internal.h|  6 ++---
 src/gl-renderer.c| 65 +++-
 src/gl-renderer.h| 19 +++
 src/gl-shaders.c |  5 ++--
 7 files changed, 81 insertions(+), 28 deletions(-)

diff --git a/configure.ac b/configure.ac
index 74eb892..6351f8e 100644
--- a/configure.ac
+++ b/configure.ac
@@ -48,9 +48,18 @@ COMPOSITOR_MODULES=wayland-server xkbcommon pixman-1
 AC_ARG_ENABLE(egl, [  --disable-egl],,
   enable_egl=yes)
 AM_CONDITIONAL(ENABLE_EGL, test x$enable_egl = xyes)
+AC_ARG_ENABLE(opengl, [  --enable-opengl],,
+ enable_opengl=no)
+AM_CONDITIONAL(ENABLE_OPENGL, test x$enable_opengl = xyes)
 if test x$enable_egl = xyes; then
AC_DEFINE([ENABLE_EGL], [1], [Build Weston with EGL support])
-   COMPOSITOR_MODULES=$COMPOSITOR_MODULES egl = 7.10 glesv2
+   COMPOSITOR_MODULES=$COMPOSITOR_MODULES egl = 7.10
+   if test x$enable_opengl = xyes; then
+   AC_DEFINE([BUILD_DESKTOP_OPENGL], [1], [Build using desktop 
OpenGL])
+   COMPOSITOR_MODULES=$COMPOSITOR_MODULES gl
+   else
+   COMPOSITOR_MODULES=$COMPOSITOR_MODULES glesv2
+   fi
 fi
 
 PKG_CHECK_MODULES(COMPOSITOR, [$COMPOSITOR_MODULES])
diff --git a/src/compositor-rpi.c b/src/compositor-rpi.c
index d3767b9..1de85ec 100644
--- a/src/compositor-rpi.c
+++ b/src/compositor-rpi.c
@@ -1436,7 +1436,7 @@ rpi_compositor_create(struct wl_display *display, int 
*argc, char *argv[],
EGL_GREEN_SIZE, 1,
EGL_BLUE_SIZE, 1,
EGL_ALPHA_SIZE, 0,
-   EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
+   EGL_RENDERABLE_TYPE, GL_RENDERER_EGL_OPENGL_BIT,
EGL_NONE
};
 
diff --git a/src/compositor.h b/src/compositor.h
index a45fdf6..8842372 100644
--- a/src/compositor.h
+++ b/src/compositor.h
@@ -24,6 +24,7 @@
 #ifndef _WAYLAND_SYSTEM_COMPOSITOR_H_
 #define _WAYLAND_SYSTEM_COMPOSITOR_H_
 
+#include config.h
 #include pixman.h
 #include xkbcommon/xkbcommon.h
 #include wayland-server.h
diff --git a/src/gl-internal.h b/src/gl-internal.h
index 2f61299..f27b221 100644
--- a/src/gl-internal.h
+++ b/src/gl-internal.h
@@ -24,9 +24,6 @@
 #ifndef _GL_INTERNAL_H_
 #define _GL_INTERNAL_H_
 
-#include GLES2/gl2.h
-#include GLES2/gl2ext.h
-
 #include stdlib.h
 #include string.h
 #include ctype.h
@@ -112,6 +109,9 @@ struct gl_renderer {
int32_t width, height;
} border;
 
+   GLenum bgra_internal_format, bgra_format;
+   GLenum short_type;
+
PFNGLEGLIMAGETARGETTEXTURE2DOESPROC image_target_texture_2d;
PFNEGLCREATEIMAGEKHRPROC create_image;
PFNEGLDESTROYIMAGEKHRPROC destroy_image;
diff --git a/src/gl-renderer.c b/src/gl-renderer.c
index 6e0bfa5..97bcfc8 100644
--- a/src/gl-renderer.c
+++ b/src/gl-renderer.c
@@ -893,11 +893,12 @@ gl_renderer_read_pixels(struct weston_output *output,
   uint32_t x, uint32_t y,
   uint32_t width, uint32_t height)
 {
+   struct gl_renderer *gr = get_renderer(output-compositor);
GLenum gl_format;
 
switch (format) {
case PIXMAN_a8r8g8b8:
-   gl_format = GL_BGRA_EXT;
+   gl_format = gr-bgra_format;
break;
case PIXMAN_a8b8g8r8:
gl_format = GL_RGBA;
@@ -949,9 +950,9 @@ gl_renderer_flush_damage(struct weston_surface *surface)
glBindTexture(GL_TEXTURE_2D, gs-textures[0]);
 
if (!gr-has_unpack_subimage) {
-   glTexImage2D(GL_TEXTURE_2D, 0, GL_BGRA_EXT,
+   glTexImage2D(GL_TEXTURE_2D, 0, gr-bgra_internal_format,
 gs-pitch, buffer-height, 0,
-GL_BGRA_EXT, GL_UNSIGNED_BYTE,
+gr-bgra_format, GL_UNSIGNED_BYTE,
 wl_shm_buffer_get_data(buffer));
 
goto done;
@@ -971,7 +972,7 @@ gl_renderer_flush_damage(struct weston_surface *surface)
glPixelStorei(GL_UNPACK_SKIP_ROWS, r.y1);
glTexSubImage2D(GL_TEXTURE_2D, 0, r.x1, r.y1,
r.x2 - r.x1, r.y2 - r.y1,
-   GL_BGRA_EXT, GL_UNSIGNED_BYTE, data);
+   gr-bgra_format, GL_UNSIGNED_BYTE, data);
}
 #endif
 
@@ -1030,9 +1031,9 @@ gl_renderer_attach(struct weston_surface *es, struct 
wl_buffer *buffer)
 
ensure_textures(gs, 1);
glBindTexture(GL_TEXTURE_2D, gs-textures[0]);
-   glTexImage2D(GL_TEXTURE_2D, 0, 

[PATCH weston 5/8] gl-renderer: Add support for blending in linear space.

2013-02-21 Thread John Kåre Alsaker
This makes compositing gamma correct by assuming all input surfaces are in
the sRGB color space. It can be enabled by setting color-managed to true
in the compositor section of weston.ini.

It's implemented by converting from sRGB gamma using the new 
CONVERSION_FROM_SRGB
shader attribute and drawing the surfaces into a temporary buffer (the
indirect rendering introduced in the previous patch). That buffer is then
drawn to the framebuffer using the OUTPUT_TO_SRGB shader attribute, which
converts back into sRGB gamma.

Both the temporary buffer and the sRGB decode LUT need at least 12 bits per
channel for flawless results with 8-bit input/output. This is not provided
by OpenGL ES 2 by default so desktop OpenGL is required for usable results.

It also adds a check to ensure we have enough texture units for the planes
and the LUT.
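
For reference, the decode curve such a LUT encodes is the standard sRGB EOTF;
a sketch of how a 16-bit table could be filled (how the patch actually builds
and uploads its LUT may differ):

#include <math.h>
#include <stdint.h>

static void
fill_srgb_decode_lut(uint16_t *lut, int size)
{
	for (int i = 0; i < size; i++) {
		double v = (double) i / (size - 1);
		/* sRGB electro-optical transfer function. */
		double linear = v <= 0.04045 ?
			v / 12.92 : pow((v + 0.055) / 1.055, 2.4);
		/* 16-bit normalized entries give more than the 12 bits of
		 * precision the commit message asks for. */
		lut[i] = (uint16_t) (linear * 65535.0 + 0.5);
	}
}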
---
 src/compositor.c  |   7 ++-
 src/compositor.h  |   2 +
 src/gl-internal.h |  14 -
 src/gl-renderer.c |  47 ---
 src/gl-shaders.c  | 169 +++---
 5 files changed, 222 insertions(+), 17 deletions(-)

diff --git a/src/compositor.c b/src/compositor.c
index 9198b3b..22bd3ea 100644
--- a/src/compositor.c
+++ b/src/compositor.c
@@ -2963,10 +2963,15 @@ weston_compositor_init(struct weston_compositor *ec,
{ keymap_variant, CONFIG_KEY_STRING, xkb_names.variant },
{ keymap_options, CONFIG_KEY_STRING, xkb_names.options },
 };
+const struct config_key compositor_config_keys[] = {
+{ color-managed, CONFIG_KEY_BOOLEAN, ec-color_managed },
+};
const struct config_section cs[] = {
 { keyboard,
   keyboard_config_keys, ARRAY_LENGTH(keyboard_config_keys) },
-   };
+{ compositor,
+  compositor_config_keys, ARRAY_LENGTH(compositor_config_keys) 
},
+};
 
memset(xkb_names, 0, sizeof(xkb_names));
parse_config_file(config_file, cs, ARRAY_LENGTH(cs), ec);
diff --git a/src/compositor.h b/src/compositor.h
index 8842372..88a3cdd 100644
--- a/src/compositor.h
+++ b/src/compositor.h
@@ -338,6 +338,8 @@ struct weston_compositor {
struct weston_plane primary_plane;
int fan_debug;
 
+   int color_managed;
+
uint32_t focus;
 
struct weston_renderer *renderer;
diff --git a/src/gl-internal.h b/src/gl-internal.h
index 205c2d9..44b1c75 100644
--- a/src/gl-internal.h
+++ b/src/gl-internal.h
@@ -47,11 +47,13 @@ enum gl_shader_attribute {
 
 enum gl_conversion_attribute {
CONVERSION_NONE,
+   CONVERSION_FROM_SRGB,
CONVERSION_COUNT
 };
 
 enum gl_output_attribute {
OUTPUT_BLEND,
+   OUTPUT_TO_SRGB,
OUTPUT_COUNT
 };
 
@@ -87,6 +89,7 @@ struct gl_output_state {
 struct gl_surface_state {
GLfloat color[4];
enum gl_input_attribute input;
+   enum gl_conversion_attribute conversion;
 
GLuint textures[MAX_PLANES];
int num_textures;
@@ -114,7 +117,12 @@ struct gl_renderer {
int32_t width, height;
} border;
 
+   GLuint srgb_decode_lut;
+   GLuint srgb_encode_lut;
+
GLenum bgra_internal_format, bgra_format;
+   GLenum rgba16_internal_format;
+   GLenum l16_internal_format;
GLenum short_type;
 
PFNGLEGLIMAGETARGETTEXTURE2DOESPROC image_target_texture_2d;
@@ -158,6 +166,9 @@ get_renderer(struct weston_compositor *ec)
 int
 gl_init_shaders(struct gl_renderer *gr);
 
+int
+gl_compile_shaders(struct gl_renderer *gr);
+
 void
 gl_destroy_shaders(struct gl_renderer *gr);
 
@@ -172,7 +183,8 @@ gl_use_shader(struct gl_renderer *gr,
 struct gl_shader *
 gl_select_shader(struct gl_renderer *gr,
enum gl_input_attribute input,
-   enum gl_output_attribute output);
+   enum gl_output_attribute output,
+   enum gl_conversion_attribute conversion);
 
 void
 gl_shader_setup(struct gl_shader *shader,
diff --git a/src/gl-renderer.c b/src/gl-renderer.c
index ac8696f..25f5f84 100644
--- a/src/gl-renderer.c
+++ b/src/gl-renderer.c
@@ -700,7 +700,10 @@ repaint_surfaces_start(struct weston_output *output, 
pixman_region32_t *damage)
 {
struct gl_output_state *go = get_output_state(output);
 
-   go-indirect_drawing = 0  !go-indirect_disable;
+   go-indirect_drawing = output-compositor-color_managed;
+
+   if (go-indirect_disable)
+   go-indirect_drawing = 0;
 
if (go-indirect_drawing) {
glBindFramebuffer(GL_FRAMEBUFFER, go-indirect_fbo);
@@ -720,11 +723,16 @@ repaint_surfaces_finish(struct weston_output *output, 
pixman_region32_t *damage)
if (go-indirect_drawing) {
glBindFramebuffer(GL_FRAMEBUFFER, 0);
 
-   shader = gl_select_shader(gr, INPUT_RGBX, OUTPUT_BLEND);
+   shader = gl_select_shader(gr,
+   INPUT_RGBX,
+   

[PATCH weston 6/8] gl-renderer: Don't multiply with alpha uniform when it's 1.0.

2013-02-21 Thread John Kåre Alsaker
This eliminates the multiplication of the alpha uniform for the common case
of surfaces with 1.0 as alpha.
---
 src/gl-internal.h |  1 +
 src/gl-renderer.c | 14 --
 src/gl-shaders.c  |  6 +++---
 3 files changed, 12 insertions(+), 9 deletions(-)

diff --git a/src/gl-internal.h b/src/gl-internal.h
index 44b1c75..6a46865 100644
--- a/src/gl-internal.h
+++ b/src/gl-internal.h
@@ -53,6 +53,7 @@ enum gl_conversion_attribute {
 
 enum gl_output_attribute {
OUTPUT_BLEND,
+   OUTPUT_TRANSPARENT,
OUTPUT_TO_SRGB,
OUTPUT_COUNT
 };
diff --git a/src/gl-renderer.c b/src/gl-renderer.c
index 25f5f84..7a19b26 100644
--- a/src/gl-renderer.c
+++ b/src/gl-renderer.c
@@ -758,7 +758,8 @@ draw_surface(struct weston_surface *es, struct 
weston_output *output,
pixman_region32_t surface_blend;
pixman_region32_t *buffer_damage;
GLint filter;
-   int i;
+   int i, transparent;
+   enum gl_output_attribute output_attribute;
 
pixman_region32_init(repaint);
pixman_region32_intersect(repaint,
@@ -778,7 +779,10 @@ draw_surface(struct weston_surface *es, struct 
weston_output *output,
gl_shader_setup(gr-solid_shader, es, output);
}
 
-   shader = gl_select_shader(gr, gs-input, OUTPUT_BLEND, gs-conversion);
+   transparent = es-alpha  1.0;
+   output_attribute = transparent ? OUTPUT_TRANSPARENT : OUTPUT_BLEND;
+
+   shader = gl_select_shader(gr, gs-input, output_attribute, 
gs-conversion);
 
gl_use_shader(gr, shader);
gl_shader_setup(shader, es, output);
@@ -810,13 +814,13 @@ draw_surface(struct weston_surface *es, struct 
weston_output *output,
 
struct gl_shader *rgbx_shader = gl_select_shader(gr,
INPUT_RGBX,
-   OUTPUT_BLEND,
+   output_attribute,
gs-conversion);
gl_use_shader(gr, rgbx_shader);
gl_shader_setup(rgbx_shader, es, output);
}
 
-   if (es-alpha  1.0)
+   if (transparent)
glEnable(GL_BLEND);
else
glDisable(GL_BLEND);
@@ -945,8 +949,6 @@ draw_border(struct weston_output *output)
 
gl_shader_set_output(shader, output);
 
-   glUniform1f(shader-alpha_uniform, 1);
-
n = texture_border(output);
 
glActiveTexture(GL_TEXTURE0);
diff --git a/src/gl-shaders.c b/src/gl-shaders.c
index b3c81cd..8987329 100644
--- a/src/gl-shaders.c
+++ b/src/gl-shaders.c
@@ -490,8 +490,7 @@ create_shader_permutation(struct gl_renderer *renderer,
 
if (OPENGL_ES_VER)
append(sb.global, precision mediump float;\n);
-   append(sb.global, varying vec2 texture_coord;\n \
-   uniform float alpha;\n);
+   append(sb.global, varying vec2 texture_coord;\n);
 
append(sb.body, void main()\n{\n);
 
@@ -503,7 +502,8 @@ create_shader_permutation(struct gl_renderer *renderer,
add_conversion(sb);
 
switch (sb.attributes[ATTRIBUTE_OUTPUT]) {
-   case OUTPUT_BLEND:
+   case OUTPUT_TRANSPARENT:
+   append(sb.global, uniform float alpha;\n);
append(sb.body, gl_FragColor *= alpha;\n);
break;
case OUTPUT_TO_SRGB:
-- 
1.8.1.3

___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


[PATCH weston 7/8] gl-renderer: Remove unneeded glActiveTexture call.

2013-02-21 Thread John Kåre Alsaker
---
 src/gl-renderer.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/src/gl-renderer.c b/src/gl-renderer.c
index 7a19b26..778fedd 100644
--- a/src/gl-renderer.c
+++ b/src/gl-renderer.c
@@ -1772,8 +1772,6 @@ gl_renderer_setup(struct weston_compositor *ec, EGLSurface egl_surface)
 		gr->has_bind_display = 0;
 	}
 
-	glActiveTexture(GL_TEXTURE0);
-
 	if (gl_init_shaders(gr) < 0)
 		return -1;
 
-- 
1.8.1.3

___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


Re: [PATCH weston 4/7] compositor-drm: Delay egl initialization until all outputs are created

2013-01-22 Thread John Kåre Alsaker
What's the reason for this change and where is EGL initialized for
outputs created on the fly?

On Tue, Jan 22, 2013 at 5:07 PM, Ander Conselvan de Oliveira
ander.conselvan.de.olive...@intel.com wrote:
 ---
  src/compositor-drm.c |   30 ++
  1 file changed, 18 insertions(+), 12 deletions(-)

 diff --git a/src/compositor-drm.c b/src/compositor-drm.c
 index 449106e..0f455ba 100644
 --- a/src/compositor-drm.c
 +++ b/src/compositor-drm.c
 @@ -1047,8 +1047,13 @@ init_drm(struct drm_compositor *ec, struct udev_device 
 *device)
  }

  static int
 +drm_output_init_egl(struct drm_output *output, struct drm_compositor *ec);
 +
 +static int
  init_egl(struct drm_compositor *ec)
  {
 +   struct drm_output *output;
 +
 ec-gbm = gbm_create_device(ec-drm.fd);

 if (gl_renderer_create(ec-base, ec-gbm, gl_renderer_opaque_attribs,
 @@ -1057,6 +1062,14 @@ init_egl(struct drm_compositor *ec)
 return -1;
 }

 +   wl_list_for_each(output, ec-base.output_list, base.link)
 +   if (drm_output_init_egl(output, ec)  0) {
 +   weston_log(Failed to init egl state for output %s\n,
 +  output-name);
 +   weston_output_destroy(output-base);
 +   wl_list_remove(output-base.link);
 +   }
 +
 return 0;
  }

 @@ -1421,11 +1434,6 @@ create_output_for_connector(struct drm_compositor *ec,
connector-mmWidth, connector-mmHeight,
o ? o-transform : WL_OUTPUT_TRANSFORM_NORMAL);

 -   if (drm_output_init_egl(output, ec)  0) {
 -   weston_log(Failed to init output gl state\n);
 -   goto err_output;
 -   }
 -
 output-backlight = backlight_init(drm_device,
connector-connector_type);
 if (output-backlight) {
 @@ -1459,8 +1467,6 @@ create_output_for_connector(struct drm_compositor *ec,

 return 0;

 -err_output:
 -   weston_output_destroy(output-base);
  err_free:
 wl_list_for_each_safe(drm_mode, next, output-base.mode_list,
 base.link) {
 @@ -2249,11 +2255,6 @@ drm_compositor_create(struct wl_display *display,
 goto err_udev_dev;
 }

 -   if (init_egl(ec)  0) {
 -   weston_log(failed to initialize egl\n);
 -   goto err_udev_dev;
 -   }
 -
 ec-base.destroy = drm_destroy;
 ec-base.restore = drm_restore;

 @@ -2274,6 +2275,11 @@ drm_compositor_create(struct wl_display *display,
 goto err_sprite;
 }

 +   if (init_egl(ec)  0) {
 +   weston_log(failed to initialize egl\n);
 +   goto err_udev_dev;
 +   }
 +
 path = NULL;

 evdev_input_create(ec-base, ec-udev, seat);
 --
 1.7.10.4



Re: [RFC] Sub-surface protocol and implementation v1

2013-01-10 Thread John Kåre Alsaker
On Thu, Jan 10, 2013 at 9:49 AM, Pekka Paalanen ppaala...@gmail.com wrote:
 On Wed, 9 Jan 2013 18:14:12 +0100
 John Kåre Alsaker john.kare.alsa...@gmail.com wrote:

 On Wed, Jan 9, 2013 at 10:53 AM, Pekka Paalanen ppaala...@gmail.com wrote:
  On Tue, 8 Jan 2013 21:50:20 +0100
  John Kåre Alsaker john.kare.alsa...@gmail.com wrote:
 
  My goals for a subsurface implementation are these:
  - Allow nesting to ease interoperability for client side code.
  - Allow a surface without any content to have an input region and let
  the content be presented in a number of adjacent subsurfaces. This
  would simplify input handling by a lot.
  - Allow clients to commit a set of surfaces.
 
  On Tue, Jan 8, 2013 at 8:50 AM, Pekka Paalanen ppaala...@gmail.com 
  wrote:
  
   On Mon, 7 Jan 2013 16:56:47 +0100
   John Kåre Alsaker john.kare.alsa...@gmail.com wrote:
  
   On Fri, Dec 21, 2012 at 12:56 PM, Pekka Paalanen ppaala...@gmail.com 
   wrote:
- how should commits work in parent vs. sub-surfaces?
   Commit should work the same way. It should commit itself and all it's
   children. Furthermore it should commit all reordering of it's
   immediate children.
  
   Could you give some rationale why this is preferred to any other way,
   for instance sub-surface commit needing a main surface commit to apply
   all the pending state?
  We don't want to keep another copy of the surface state around and
  using dummy surfaces and nesting we can commit a set of surfaces as we
  please.
 
  Not having to keep another copy of state, yes indeed. Committing a set
  of surfaces however has some corner cases. How do we avoid committing a
  sub-surface that is just in the middle of updating its state, when
  committing the parent? Is it easy enough to avoid in applications?
 We use a dummy root surface, with a number of children surfaces which
 the client can choose commit.

 Sorry, I don't understand how this is a solution; this is the
 original problem. Continuing on speculation, since we don't have real
 examples:

 Say, an application gives a sub-surface to a library saying use this
 for your overlay stuff. Assuming we can nest sub-surfaces, the library
 can go and create sub-surfaces for the given surface. Now, if the
 application commits its main surface for the window, or the sub-surface
 it gave to the library, it will commit the whole tree of surfaces down
 to the sub-surfaces the library created itself. How can we have any
 consistency in all these surface's states?

 I guess a problem here is that the application should not commit the
 library's surfaces to begin with, which is what you suggested the dummy
 root surface for, right?
Yes, the application only commits its subsurface tree and the library
only commits its subsurface tree, both of which are under a dummy
surface. Depending on how input is handled, the dummy surface may or
may not have an input region.


 However, the dummy surface as the root surface (i.e. the window main
 surface) will not work, because it is the surface the shell will be
 managing. Sub-surfaces cannot be assigned a shell surface role. Are you
 proposing to change this?

 If you are, then the protocol will allow a new class of semantic
 errors: assigning shell roles to more than one sub-surface in a window
 surface set. I think this change would be a net loss, especially if we
 can avoid this altogether with commit semantics.

 If instead you would still assing the shell role to the dummy root
 surface, we will have problems with mapping, since by definition, a
 dummy surface does not have content. We cannot use the first attach as
 a map operation, and need more protocol to fix that.
The problem with a surface with no content is that you want to stop
traversing the surface tree when you spot one, so that all subsurfaces
would be hidden? I prefer explicit show/hide requests if you want to
do that.
The problem with a surface with an input region and no content is that
an infinite input region is set by default, so it needs to be clipped
to something to count as a real surface.


   How would we implement the dummy parent? There is no concept of a dummy
   surface in the protocol yet, and currently mapping a sub-surface
   depends on mapping the immediate parent.
  A dummy parent would simply be a surface without content. It would be
  mapped (by the shell, it should be left out when rendering). It would
  have the size of it's children, or we could add a way to specify the
  size of a surface without buffers, which could be shared with a
  scaling implementation. I'm not very clear on what sizes are used for
  though.
 
  Yeah, this would be a major change from the current behaviour in the
  protocol.
 
  Currently, a surface becomes mapped, when a) it has content, and b) the
  shell agrees, i.e. a proper window type has been set via wl_shell. For
  sub-surfaces, the condition b) instead requires, that the
  immediate parent surface is mapped. Therefore if parent gets unmapped,
  all its direct

Re: [RFC] Sub-surface protocol and implementation v1

2013-01-09 Thread John Kåre Alsaker
On Wed, Jan 9, 2013 at 10:53 AM, Pekka Paalanen ppaala...@gmail.com wrote:
 On Tue, 8 Jan 2013 21:50:20 +0100
 John Kåre Alsaker john.kare.alsa...@gmail.com wrote:

 My goals for a subsurface implementation are these:
 - Allow nesting to ease interoperability for client side code.
 - Allow a surface without any content to have an input region and let
 the content be presented in a number of adjacent subsurfaces. This
 would simplify input handling by a lot.
 - Allow clients to commit a set of surfaces.

 On Tue, Jan 8, 2013 at 8:50 AM, Pekka Paalanen ppaala...@gmail.com wrote:
  Hi John,
 
  thanks for the comments, I have some more questions about your
  suggestions.
 
 
  On Mon, 7 Jan 2013 16:56:47 +0100
  John Kåre Alsaker john.kare.alsa...@gmail.com wrote:
 
  On Fri, Dec 21, 2012 at 12:56 PM, Pekka Paalanen ppaala...@gmail.com 
  wrote:
   Hi all,
  
   we started a discussion about sub-surfaces in Wayland in [1].
  ...
   - how should commits work in parent vs. sub-surfaces?
  Commit should work the same way. It should commit itself and all it's
  children. Furthermore it should commit all reordering of it's
  immediate children.
 
  Could you give some rationale why this is preferred to any other way,
  for instance sub-surface commit needing a main surface commit to apply
  all the pending state?
 We don't want to keep another copy of the surface state around and
 using dummy surfaces and nesting we can commit a set of surfaces as we
 please.

 Not having to keep another copy of state, yes indeed. Committing a set
 of surfaces however has some corner cases. How do we avoid committing a
 sub-surface that is just in the middle of updating its state, when
 committing the parent? Is it easy enough to avoid in applications?
We use a dummy root surface, with a number of child surfaces which
the client can choose to commit.


  How would we implement the dummy parent? There is no concept of a dummy
  surface in the protocol yet, and currently mapping a sub-surface
  depends on mapping the immediate parent.
 A dummy parent would simply be a surface without content. It would be
 mapped (by the shell, it should be left out when rendering). It would
 have the size of it's children, or we could add a way to specify the
 size of a surface without buffers, which could be shared with a
 scaling implementation. I'm not very clear on what sizes are used for
 though.

 Yeah, this would be a major change from the current behaviour in the
 protocol.

 Currently, a surface becomes mapped, when a) it has content, and b) the
 shell agrees, i.e. a proper window type has been set via wl_shell. For
 sub-surfaces, the condition b) instead requires, that the
 immediate parent surface is mapped. Therefore if parent gets unmapped,
 all its direct and indirect sub-surfaces are unmapped, too, so there is
 an easy way for a client to hide a whole window.
I thought it was required to interact with the shell in order to get a
visible surface.


 If we allow surfaces to be mapped without content, we need some protocol
 to map it, either in addition or replacing the trigger attached a
 wl_buffer and committed. That might be a new request. Logically, it
 should also have a counterpart, an unmap request, that works regardless
 of surface content.

 Hmm, on another thought, maybe we should just ignore mapped or not for
 sub-surfaces, and simply go with the main surface, which is managed
 directly by the shell. Maybe this is what you were after all along?
Yes, the entire surface group should be mapped/unmapped at once, and
the shell should only interact with the root surface.


 That leaves only the problem of size of a contentless sub-surface.
 Input region outside of the wl_surface is ignored, so some size is
 needed to be able to have an input region.

 Sure, a contentless surface over a compound window would be a handy
 trick to normalize all input to the compound window into the same
 coordinate space, but I don't think its convenient enough to warrant
 the protocol complications. I'm going to implement input event
 coalescing from sub-surfaces in the toytoolkit, and it doesn't look
 too hard so far.
Handling enter/leave mouse requests doesn't look very fun. Also, it
wouldn't complicate the protocol very much. The surface's size could
be inferred from its children or set explicitly. That probably has to
be done for surfaces without content and input region too. I'm not
sure what size is used for besides clipping the input region of surfaces.


 Actually, since I'm only aiming for the GL or video overlay widget case
 for starters in toytoolkit, I could simply set the input region to
 empty on the sub-surface, and so the main surface beneath would get all
 input.
That is quite a simple case with one sub-surface :)


 Sans input region, what else would a contentless surface need a size
 for?
Given that I don't know what size is used for besides clipping the
input region, I can't answer this.


  Do you have a use case for this?
 Dummy

Re: [PATCH 2/3] renderer: introduce destroy callback

2013-01-08 Thread John Kåre Alsaker
I'd like to see this callback in a buffer manager abstraction which
could encapsulate EGL and X11/fb/kms buffer management and get rid of
duplicated code for the renderers. However if it's added to
weston_renderer, it should be destroyed by the compositor and not by
the backends.

On Tue, Jan 8, 2013 at 5:09 PM, Vasily Khoruzhick anars...@gmail.com wrote:
 Signed-off-by: Vasily Khoruzhick anars...@gmail.com
 ---
  src/compositor-android.c  |  6 +++---
  src/compositor-drm.c  |  4 ++--
  src/compositor-headless.c |  2 +-
  src/compositor-rpi.c  |  4 ++--
  src/compositor-wayland.c  |  4 ++--
  src/compositor-x11.c  | 10 ++
  src/compositor.h  |  3 +--
  src/gl-renderer.c |  3 ++-
  src/gl-renderer.h |  2 --
  src/noop-renderer.c   |  3 ++-
  src/pixman-renderer.c |  3 ++-
  src/pixman-renderer.h |  3 ---
  12 files changed, 19 insertions(+), 28 deletions(-)

 diff --git a/src/compositor-android.c b/src/compositor-android.c
 index 5e27071..4e4ff08 100644
 --- a/src/compositor-android.c
 +++ b/src/compositor-android.c
 @@ -299,7 +299,7 @@ android_init_egl(struct android_compositor *compositor,

 if (gl_renderer_output_create(output-base,
 output-fb-native_window)  0) {
 -   gl_renderer_destroy(compositor-base);
 +   compositor-base.renderer-destroy(compositor-base);
 return -1;
 }

 @@ -313,7 +313,7 @@ android_compositor_destroy(struct weston_compositor *base)

 android_seat_destroy(compositor-seat);

 -   gl_renderer_destroy(base);
 +   base-renderer-destroy(base);

 /* destroys outputs, too */
 weston_compositor_shutdown(compositor-base);
 @@ -358,7 +358,7 @@ android_compositor_create(struct wl_display *display, int 
 argc, char *argv[],
 return compositor-base;

  err_gl:
 -   gl_renderer_destroy(compositor-base);
 +   compositor-base.renderer-destroy(compositor-base);
  err_output:
 android_output_destroy(output-base);
  err_compositor:
 diff --git a/src/compositor-drm.c b/src/compositor-drm.c
 index cc72c74..1c17bfd 100644
 --- a/src/compositor-drm.c
 +++ b/src/compositor-drm.c
 @@ -2007,7 +2007,7 @@ drm_destroy(struct weston_compositor *ec)

 weston_compositor_shutdown(ec);

 -   gl_renderer_destroy(ec);
 +   ec-renderer-destroy(ec);

 destroy_sprites(d);
 gbm_device_destroy(d-gbm);
 @@ -2307,7 +2307,7 @@ err_drm_source:
 wl_list_for_each_safe(weston_seat, next, ec-base.seat_list, link)
 evdev_input_destroy(weston_seat);
  err_sprite:
 -   gl_renderer_destroy(ec-base);
 +   ec-base.renderer-destroy(ec-base);
 gbm_device_destroy(ec-gbm);
 destroy_sprites(ec);
  err_udev_dev:
 diff --git a/src/compositor-headless.c b/src/compositor-headless.c
 index d23ee0a..03bd0a4 100644
 --- a/src/compositor-headless.c
 +++ b/src/compositor-headless.c
 @@ -141,7 +141,7 @@ headless_destroy(struct weston_compositor *ec)
  {
 struct headless_compositor *c = (struct headless_compositor *) ec;

 -   noop_renderer_destroy(ec);
 +   ec-renderer-destroy(ec);

 weston_seat_release(c-fake_seat);
 weston_compositor_shutdown(ec);
 diff --git a/src/compositor-rpi.c b/src/compositor-rpi.c
 index f169d3b..208271f 100644
 --- a/src/compositor-rpi.c
 +++ b/src/compositor-rpi.c
 @@ -1347,7 +1347,7 @@ rpi_compositor_destroy(struct weston_compositor *base)
 /* destroys outputs, too */
 weston_compositor_shutdown(compositor-base);

 -   gl_renderer_destroy(compositor-base);
 +   compositor-base.renderer-destroy(compositor-base);
 tty_destroy(compositor-tty);

 bcm_host_deinit();
 @@ -1501,7 +1501,7 @@ rpi_compositor_create(struct wl_display *display, int 
 argc, char *argv[],
 return compositor-base;

  out_gl:
 -   gl_renderer_destroy(compositor-base);
 +   compositor-base.renderer-destroy(compositor-base);

  out_tty:
 tty_destroy(compositor-tty);
 diff --git a/src/compositor-wayland.c b/src/compositor-wayland.c
 index 167fce6..0a3df03 100644
 --- a/src/compositor-wayland.c
 +++ b/src/compositor-wayland.c
 @@ -612,7 +612,7 @@ wayland_restore(struct weston_compositor *ec)
  static void
  wayland_destroy(struct weston_compositor *ec)
  {
 -   gl_renderer_destroy(ec);
 +   ec-renderer-destroy(ec);

 weston_compositor_shutdown(ec);

 @@ -686,7 +686,7 @@ wayland_compositor_create(struct wl_display *display,
 return c-base;

  err_gl:
 -   gl_renderer_destroy(c-base);
 +   c-base.renderer-destroy(c-base);
  err_display:
 wl_display_disconnect(c-parent.wl_display);
  err_compositor:
 diff --git a/src/compositor-x11.c b/src/compositor-x11.c
 index a5fc9ef..c5c4248 100644
 --- a/src/compositor-x11.c
 +++ b/src/compositor-x11.c
 @@ -1419,10 +1419,7 @@ x11_destroy(struct weston_compositor *ec)

 weston_compositor_shutdown(ec); /* destroys 

Re: [PATCH 2/3] renderer: introduce destroy callback

2013-01-08 Thread John Kåre Alsaker
I'd like for output_create, output_destroy, renderer_destroy,
set_border and repaint_output to be virtual functions as part of a new
abstraction which is responsible for handling buffers and renderers
for the backends. These may simply be new fields in weston_renderer
that the backends override as necessary, or a new structure private to
the backends which the renderers may partially initialize.

This way we can replace this nice piece of code

	if (c->use_pixman) {
		if (x11_output_init_shm(c, output, width, height) < 0)
			return NULL;
		if (pixman_renderer_output_create(&output->base) < 0) {
			x11_output_deinit_shm(c, output);
			return NULL;
		}
	} else {
		if (gl_renderer_output_create(&output->base, output->window) < 0)
			return NULL;
	}

with this

	if (buff_manager->output_create(&output->base, output->window) < 0)
		return NULL;

and generally reduce the number of branches, which will help make
EGL/gl_renderer switchable.
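
To make the comparison concrete, the kind of abstraction being proposed could
be sketched roughly as the struct below. The name weston_buffer_manager and
the exact signatures are hypothetical illustrations of this suggestion, not
existing Weston API.

	/* Hypothetical sketch of the proposed abstraction; the struct name and
	 * the exact signatures are illustrative, not actual Weston API. */
	#include <pixman.h>

	struct weston_compositor;
	struct weston_output;

	struct weston_buffer_manager {
		/* Create/destroy the per-output buffers (EGL surface, X SHM
		 * segment, KMS dumb buffer, ...) and hook the output up to
		 * the active renderer. */
		int (*output_create)(struct weston_output *output,
				     void *native_window);
		void (*output_destroy)(struct weston_output *output);

		/* Called by the compositor core on shutdown, not by the
		 * backend. */
		void (*destroy)(struct weston_compositor *compositor);

		void (*set_border)(struct weston_output *output);
		void (*repaint_output)(struct weston_output *output,
				       pixman_region32_t *damage);
	};

With something like that in place, the backend only needs the single
branch-free call shown above.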

On Tue, Jan 8, 2013 at 8:22 PM, Kristian Høgsberg k...@bitplanet.net wrote:
 On Tue, Jan 8, 2013 at 1:36 PM, John Kåre Alsaker
 john.kare.alsa...@gmail.com wrote:
 I'd like to see this callback in a buffer manager abstraction which
 could encapsulate EGL and X11/fb/kms buffer management and get rid of
 duplicated code for the renderers. However if it's added to
 weston_renderer, it should be destroyed by the compositor and not by
 the backends.

 What code duplication do you see?  There's very little left; only
 compositor-drm creates egl on kms buffers, only compositor-x11 creates
 X windows... for the pixman renderer, we'll need X shm buffers for X
 and kms dumb buffers for compsitor-drm so there's not a lot of
 commonality there either.  Also, since the backend creates the
 renderer it only makes sense that it destroys it.  The reason for the
 function pointer is that we can now have different renderers in play
 in one backend, and instead of doing #ifdef EGL if (using gl renderer)
 destroy_gl_renderer else #else blah blah, we can just call the
 renderer destroy vfunc.  It's just a small pragmatic cleanup, not a
 brand new abstraction.

 Kristian

 On Tue, Jan 8, 2013 at 5:09 PM, Vasily Khoruzhick anars...@gmail.com wrote:
 Signed-off-by: Vasily Khoruzhick anars...@gmail.com
 ---
  src/compositor-android.c  |  6 +++---
  src/compositor-drm.c  |  4 ++--
  src/compositor-headless.c |  2 +-
  src/compositor-rpi.c  |  4 ++--
  src/compositor-wayland.c  |  4 ++--
  src/compositor-x11.c  | 10 ++
  src/compositor.h  |  3 +--
  src/gl-renderer.c |  3 ++-
  src/gl-renderer.h |  2 --
  src/noop-renderer.c   |  3 ++-
  src/pixman-renderer.c |  3 ++-
  src/pixman-renderer.h |  3 ---
  12 files changed, 19 insertions(+), 28 deletions(-)

 diff --git a/src/compositor-android.c b/src/compositor-android.c
 index 5e27071..4e4ff08 100644
 --- a/src/compositor-android.c
 +++ b/src/compositor-android.c
 @@ -299,7 +299,7 @@ android_init_egl(struct android_compositor *compositor,

 if (gl_renderer_output_create(output-base,
 output-fb-native_window)  0) {
 -   gl_renderer_destroy(compositor-base);
 +   compositor-base.renderer-destroy(compositor-base);
 return -1;
 }

 @@ -313,7 +313,7 @@ android_compositor_destroy(struct weston_compositor 
 *base)

 android_seat_destroy(compositor-seat);

 -   gl_renderer_destroy(base);
 +   base-renderer-destroy(base);

 /* destroys outputs, too */
 weston_compositor_shutdown(compositor-base);
 @@ -358,7 +358,7 @@ android_compositor_create(struct wl_display *display, 
 int argc, char *argv[],
 return compositor-base;

  err_gl:
 -   gl_renderer_destroy(compositor-base);
 +   compositor-base.renderer-destroy(compositor-base);
  err_output:
 android_output_destroy(output-base);
  err_compositor:
 diff --git a/src/compositor-drm.c b/src/compositor-drm.c
 index cc72c74..1c17bfd 100644
 --- a/src/compositor-drm.c
 +++ b/src/compositor-drm.c
 @@ -2007,7 +2007,7 @@ drm_destroy(struct weston_compositor *ec)

 weston_compositor_shutdown(ec);

 -   gl_renderer_destroy(ec);
 +   ec-renderer-destroy(ec);

 destroy_sprites(d);
 gbm_device_destroy(d-gbm);
 @@ -2307,7 +2307,7 @@ err_drm_source:
 wl_list_for_each_safe(weston_seat, next, ec-base.seat_list, link)
 evdev_input_destroy(weston_seat);
  err_sprite:
 -   gl_renderer_destroy(ec-base);
 +   ec-base.renderer-destroy(ec-base);
 gbm_device_destroy(ec-gbm);
 destroy_sprites(ec);
  err_udev_dev:
 diff --git a/src/compositor-headless.c b/src/compositor-headless.c
 index d23ee0a..03bd0a4 100644
 --- a/src/compositor-headless.c
 +++ b/src/compositor-headless.c
 @@ -141,7 +141,7

Re: [RFC] Sub-surface protocol and implementation v1

2013-01-08 Thread John Kåre Alsaker
My goals for a subsurface implementation are these:
- Allow nesting to ease interoperability for client side code.
- Allow a surface without any content to have an input region and let
the content be presented in a number of adjacent subsurfaces. This
would simplify input handling by a lot.
- Allow clients to commit a set of surfaces.

On Tue, Jan 8, 2013 at 8:50 AM, Pekka Paalanen ppaala...@gmail.com wrote:
 Hi John,

 thanks for the comments, I have some more questions about your
 suggestions.


 On Mon, 7 Jan 2013 16:56:47 +0100
 John Kåre Alsaker john.kare.alsa...@gmail.com wrote:

 On Fri, Dec 21, 2012 at 12:56 PM, Pekka Paalanen ppaala...@gmail.com wrote:
  Hi all,
 
  we started a discussion about sub-surfaces in Wayland in [1].
 ...
 
  There are still many open questions, ranging from implementation
  details I noted in code comments and commit messages, to heavy protocol
  additions like clipping. Some of the open questions are:
  - how should commits work in parent vs. sub-surfaces?
 Commit should work the same way. It should commit itself and all it's
 children. Furthermore it should commit all reordering of it's
 immediate children.

 Could you give some rationale why this is preferred to any other way,
 for instance sub-surface commit needing a main surface commit to apply
 all the pending state?
We don't want to keep another copy of the surface state around, and by
using dummy surfaces and nesting we can commit a set of surfaces as we
please.


 Collecting the reasons why some method should be preferred is
 important, even if they are somehow obvious, so we can choose.

  - do we allow nested sub-surfaces?
 Yes, this makes it easier for client code to cooperate. If a library
 wants to use subsurfaces it shouldn't need any special handling with
 the application to achieve that. It also allows clients to commit a
 group of sub-surfaces by using a dummy parent.

 Do you mean a library creates a dummy sub-surface as a parent for other
 sub-surfaces it creates, so it can commit then all in one go?
Yes, although it may also use a real sub-surface and set the others as
children if that's convenient.


 How would we implement the dummy parent? There is no concept of a dummy
 surface in the protocol yet, and currently mapping a sub-surface
 depends on mapping the immediate parent.
A dummy parent would simply be a surface without content. It would be
mapped (by the shell; it should be left out when rendering). It would
have the size of its children, or we could add a way to specify the
size of a surface without buffers, which could be shared with a
scaling implementation. I'm not very clear on what sizes are used for,
though.


 I believe breaking this mapping requirement chain would require a lot
 more complex rules on what is mapped and when. OTOH, creating a dummy
 surface with e.g. attaching a 1-pixel 100% transparent wl_buffer is in
 my opinion a hack to be reserved for cases where we simply cannot change
 the protocol anymore. We are now designing the protocol for the first
 time, so let's try to avoid such workarounds to begin with.
I don't see any reason why a dummy surface would add complexity.
Also, using a 1-pixel transparent buffer would result in horrible
performance at the moment and would require odd optimizations.


 Do you have a use case for this?
Dummy surfaces are a useful way to group surfaces. Without them, when
deleting the first surface in a group (which would be the parent
instead of the dummy surface), you'd have to delete all the surfaces
in the group and recreate all but the first. With re-parenting this is
less crazy, but it still requires quite a few operations.


 As for nesting in itself, it would be simple to just allow it, I think,
 and I have not found any good reason to forbid it yet.

  - do we want a completely orthogonal interface for clipping and scaling,
or should it be limited to sub-surfaces?
 Clipping and/or scaling should work for all kinds of surfaces. I don't
 see any reason to limit it to sub-surfaces.

 Yeah, I have a feeling it would make sense as an orthogonal interface.
 It will be a huge change in any case, since surface size cannot be
 determined by a wl_buffer alone anymore.
Surface size not being determinable from the wl_buffer's size is also a
requirement of dummy surfaces with an input region.


 Regarding sub-surfaces, there is the question of whether clipping
 should be controlled within the sub-surface update-commit cycle or
 decoupled from it, for instance a video overlay driven by an autonomous
 decoding library where the application needs to clip it; does it need
 to communicate clipping to the library, set clipping on its own, or
 always commit every video frame. Positioning the sub-surface wrt. to
 the window is up for the application, so clipping might be as well.
 Hence clipping may have some interactions with sub-surfaces, and I
 don't know how any existing software does it.
I think existing software use window systems with trees where

Re: [PATCH 3/3] Make EGL/GLESv2 dependencies optional

2013-01-08 Thread John Kåre Alsaker
EGLDisplay, EGLSurface, EGLNativeDisplayType and EGLNativeWindowType
should all be void *. With that change you can get rid of the ugly
typecasts.

On Tue, Jan 8, 2013 at 5:09 PM, Vasily Khoruzhick anars...@gmail.com wrote:

 +typedef int EGLDisplay;
 +typedef int EGLSurface;
 +typedef long int EGLNativeDisplayType;
 +typedef long int EGLNativeWindowType;
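
In other words, the suggestion is for the no-EGL stubs to be plain pointers,
along these lines (a sketch of the proposed change, not the committed patch):

	/* Pointer-sized stubs for builds without EGL/GLESv2, as suggested
	 * above, so no casts are needed at the call sites. */
	typedef void *EGLDisplay;
	typedef void *EGLSurface;
	typedef void *EGLNativeDisplayType;
	typedef void *EGLNativeWindowType;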


Re: [RFC] Sub-surface protocol and implementation v1

2013-01-07 Thread John Kåre Alsaker
On Fri, Dec 21, 2012 at 12:56 PM, Pekka Paalanen ppaala...@gmail.com wrote:
 Hi all,

 we started a discussion about sub-surfaces in Wayland in [1].
 Based on that, and with heavy limiting of scope, I present my first
 sub-surface draft:

 http://cgit.collabora.com/git/user/pq/wayland.git/log/?h=subsurface-v1
 http://cgit.collabora.com/git/user/pq/weston.git/log/?h=subsurface-v1

 The following changes since commit 812bd4dd0fb20161aaf07029fbd6146d530b9932:

   client: remove two unused function pointer typedefs (2012-12-12 11:04:53 
 -0500)

 are available in the git repository at:
   git://git.collabora.co.uk/git/user/pq/wayland.git subsurface-v1

 Pekka Paalanen (1):
   protocol: add sub-surfaces

  protocol/wayland.xml |  122 
 ++
  1 files changed, 122 insertions(+), 0 deletions(-)


 The following changes since commit e565b405695d2dbcfbf0fd228fe8885ff522ae84:

   tests: Pass --backend so the test suite runs with the right modules 
 (2012-12-14 16:34:00 -0500)

 are available in the git repository at:
   git://git.collabora.co.uk/git/user/pq/weston.git subsurface-v1

 Pekka Paalanen (11):
   misc:
   tests: make signal other than ABRT a hard failure
   shell: remove remnants of screensaver surface list
   surface transform inheritance:
   compositor: update weston_surface:transform.matrix always
   compositor: introduce weston_surface_geometry_dirty()
   compositor: put weston_matrix_pointer into transformation_list
   compositor: popup inherits surface transformation
   sub-surface work:
   compositor: introduce sub-surfaces
   tests: add sub-surface protocol tests
   compositor: add sub-surfaces to repaint list
   shell: keyboard focus with sub-surfaces
   clients: add a demo for sub-surfaces

  clients/.gitignore|1 +
  clients/Makefile.am   |4 +
  clients/subsurfaces.c |  239 
  src/compositor.c  |  489 
 ++---
  src/compositor.h  |   85 ++--
  src/shell.c   |  121 ++-
  src/util.c|9 +-
  src/xwayland/window-manager.c |4 +-
  tests/.gitignore  |2 +
  tests/Makefile.am |4 +
  tests/subsurface-test.c   |  306 ++
  tests/weston-test-runner.c|6 +-
  tests/weston-test.c   |2 +-
  13 files changed, 1165 insertions(+), 107 deletions(-)
  create mode 100644 clients/subsurfaces.c
  create mode 100644 tests/subsurface-test.c


 I will reply to this email with the sub-surface patches, but leave out
 the preparatory work that I have already posted earlier.

 I chose to leave all clipping and scaling outside the scope of this
 work for now, we should discuss those later.

 This proposal allows sub-surfaces to form a single window, that may be
 updated in unison, honouring the every frame is perfect -principle. The
 implementation does not yet fully achieve that, though. More details are
 in the commit messages and the protocol definition.

 I'll quote my earlier email [2], explaining the base for the design:
 Conceptually I see it like this: zero or more sub-surfaces with one main
 surface forms a window. A window is the entity managed at window
 manager level, and therefore a window is what can be a top-level
 window, tooltip, menu, etc.
 ...

 On the protocol level, these concepts are built on the wl_surface
 object, which can form a part of a window (by being a sub-surface), or
 the base object of a window (the main surface, or parent). The base
 object can then be associated with meanings like top-level and tooltip
 in the shell level of the protocol.

 Even though the implementation is a little lacking, it is already
 usable. The 'subsurfaces' client just shows a red rectangle [3], but
 could be extended to draw something more interesting. Sub-surfaces are
 drawn and positioned properly, so people could already start
 experimenting with them. Committing is subject to change, but otherwise
 I don't expect changes to be required in clients written against the v1
 proposal.

 This would be a nice moment to try and write a video player with
 decorations _and_ YUV video content, or a GL application with software
 rendered decorations. You might even try whether stitching decorations
 from 4 sub-surfaces would work. These should offer us more insight on
 how sub-surfaces should be defined. :-)

 There are still many open questions, ranging from implementation
 details I noted in code comments and commit messages, to heavy protocol
 additions like clipping. Some of the open questions are:
 - how should commits work in parent vs. sub-surfaces?
Commit should work the same way. It should commit itself and all its
children. Furthermore it should commit all reordering of its
immediate children.

 - do we allow nested sub-surfaces?
Yes, this makes it easier for client code to 

Re: [PATCH v2 2/2] x11 backend: add option to use pixman renderer

2013-01-07 Thread John Kåre Alsaker
The use-shm variable (which I assume stands for use-x11-shm) and
configuration option should probably be renamed to use-pixman so it
can be used with multiple backends and not be easily confused with
wl_shm.

On Mon, Jan 7, 2013 at 7:49 PM, Kristian Høgsberg hoegsb...@gmail.com wrote:
 On Mon, Jan 07, 2013 at 08:39:50PM +0300, Vasily Khoruzhick wrote:
 When --use-shm is passed to weston and x11 backend is active,
 it will use SHM surfaces with pixman renderer instead of EGL

 Signed-off-by: Vasily Khoruzhick anars...@gmail.com
 ---
 v2: - handle missing MIT-SHM extension gracefully (tested with Xnest)
 - follow libc convention of error handling
 - get rid of xcb-aux dependency

 Looks good, committed.  Is there a reason you don't just use
 visual_type-depth instead of get_depth_of_visual()?  Maybe I'm
 missing something.  Also, to test the pixman renderer with desktop
 shell, I have to comment out

 #define HAVE_CAIRO_EGL 1

 in config.h, since we don't have a configure option to disable that.
 If you feel like making a patch to add --disable-egl or such, that
 would be great.  And of course, it would be nice to make the
 pixman-renderer work with the kms backend.

 Anyway, thanks a lot, I really appreciate this work.  It's a great
 validation of the weston_renderer infrastructure and of course it's
 very cool that we can now run weston without hw acceleration without
 having to rely on sw mesa.

 Kristian

  src/compositor-x11.c | 355 
 +--
  1 file changed, 342 insertions(+), 13 deletions(-)

 diff --git a/src/compositor-x11.c b/src/compositor-x11.c
 index 04a1803..a994a97 100644
 --- a/src/compositor-x11.c
 +++ b/src/compositor-x11.c
 @@ -1,6 +1,7 @@
  /*
   * Copyright © 2008-2011 Kristian Høgsberg
   * Copyright © 2010-2011 Intel Corporation
 + * Copyright © 2013 Vasily Khoruzhick anars...@gmail.com
   *
   * Permission to use, copy, modify, distribute, and sell this software and
   * its documentation for any purpose is hereby granted without fee, provided
 @@ -33,9 +34,11 @@
  #include unistd.h
  #include errno.h
  #include sys/time.h
 +#include sys/shm.h
  #include linux/input.h

  #include xcb/xcb.h
 +#include xcb/shm.h
  #ifdef HAVE_XCB_XKB
  #include xcb/xkb.h
  #endif
 @@ -47,6 +50,7 @@

  #include compositor.h
  #include gl-renderer.h
 +#include pixman-renderer.h
  #include ../shared/config-parser.h
  #include ../shared/cairo-util.h

 @@ -79,6 +83,7 @@ struct x11_compositor {
   struct xkb_keymap   *xkb_keymap;
   unsigned int has_xkb;
   uint8_t  xkb_event_base;
 + int  use_shm;

   /* We could map multi-pointer X to multiple wayland seats, but
* for now we only support core X input. */
 @@ -107,6 +112,15 @@ struct x11_output {
   xcb_window_twindow;
   struct weston_mode  mode;
   struct wl_event_source *finish_frame_timer;
 +
 + xcb_gc_tgc;
 + xcb_shm_seg_t   segment;
 + pixman_image_t *hw_surface;
 + pixman_image_t *shadow_surface;
 + int shm_id;
 + void   *buf;
 + void   *shadow_buf;
 + uint8_t depth;
  };

  static struct xkb_keymap *
 @@ -307,16 +321,95 @@ x11_input_destroy(struct x11_compositor *compositor)
  }

  static void
 -x11_output_repaint(struct weston_output *output_base,
 -pixman_region32_t *damage)
 +x11_output_repaint_gl(struct weston_output *output_base,
 +   pixman_region32_t *damage)
 +{
 + struct x11_output *output = (struct x11_output *)output_base;
 + struct weston_compositor *ec = output-base.compositor;
 +
 + ec-renderer-repaint_output(output_base, damage);
 +
 + pixman_region32_subtract(ec-primary_plane.damage,
 +  ec-primary_plane.damage, damage);
 +
 + wl_event_source_timer_update(output-finish_frame_timer, 10);
 +}
 +
 +static void
 +x11_output_repaint_shm(struct weston_output *output_base,
 +pixman_region32_t *damage)
  {
   struct x11_output *output = (struct x11_output *)output_base;
   struct weston_compositor *ec = output-base.compositor;
 + struct x11_compositor *c = (struct x11_compositor *)ec;
 + pixman_box32_t *rects;
 + int nrects, i, src_x, src_y, x1, y1, x2, y2, width, height;
 + xcb_void_cookie_t cookie;
 + xcb_generic_error_t *err;

 + pixman_renderer_output_set_buffer(output_base, output-shadow_surface);
   ec-renderer-repaint_output(output_base, damage);

 + width = pixman_image_get_width(output-shadow_surface);
 + height = pixman_image_get_height(output-shadow_surface);
 + rects = pixman_region32_rectangles(damage, nrects);
 + for (i = 0; i  nrects; i++) {
 + switch (output_base-transform) {
 + default:
 + case WL_OUTPUT_TRANSFORM_NORMAL:
 + 

Re: Sub-surface protocol

2012-12-18 Thread John Kåre Alsaker
On Tue, Dec 18, 2012 at 6:40 AM, Bill Spitzak spit...@gmail.com wrote:

 On Dec 17, 2012, at 5:01 PM, John Kåre Alsaker wrote:


 Then a client such as gimp could draw all it's display into a single buffer.

 To get the different color correction of the center display, it would

 declare a subsurface containing this and set it's color correction

 differently using the wayland api. This would only allocate one buffer,

 which would save memory, but more importantly it should make internal code

 in gimp easier as you could work around a toolkit that assumes only one

 output buffer.

 My approach to color correct rendering in Wayland is to let the
 compositor handle it and have the clients simply specify which color
 space they are using. Only the compositor can render clients correctly
 when they are displayed on multiple monitors (unless we introduce a
 way to set a buffer per-output).


 Yes this is what I am assuming.

 What I am asking is if it makes sense for a client to draw two different
 color spaces into the same buffer. The larger area is color space A, and a
 rectangular subarea is color space B. It then tells wayland to make the main
 window color space A, and then makes a subwindow with the clipped subarea
 and tells wayland that it is color space B.

 The reason is to save buffer space, and also to work around toolkits and
 drawing api's where it is a lot more practical to draw everything into the
 same buffer.
I don't see any problem with doing that unless you want different bit-depths.





Re: Sub-surface protocol

2012-12-18 Thread John Kåre Alsaker
On Tue, Dec 18, 2012 at 2:59 PM, John Kåre Alsaker
john.kare.alsa...@gmail.com wrote:
 On Tue, Dec 18, 2012 at 6:40 AM, Bill Spitzak spit...@gmail.com wrote:

 On Dec 17, 2012, at 5:01 PM, John Kåre Alsaker wrote:


 Then a client such as gimp could draw all it's display into a single buffer.

 To get the different color correction of the center display, it would

 declare a subsurface containing this and set it's color correction

 differently using the wayland api. This would only allocate one buffer,

 which would save memory, but more importantly it should make internal code

 in gimp easier as you could work around a toolkit that assumes only one

 output buffer.

 My approach to color correct rendering in Wayland is to let the
 compositor handle it and have the clients simply specify which color
 space they are using. Only the compositor can render clients correctly
 when they are displayed on multiple monitors (unless we introduce a
 way to set a buffer per-output).


 Yes this is what I am assuming.

 What I am asking is if it makes sense for a client to draw two different
 color spaces into the same buffer. The larger area is color space A, and a
 rectangular subarea is color space B. It then tells wayland to make the main
 window color space A, and then makes a subwindow with the clipped subarea
 and tells wayland that it is color space B.

 The reason is to save buffer space, and also to work around toolkits and
 drawing api's where it is a lot more practical to draw everything into the
 same buffer.
 I don't see any problem with doing that unless you want different bit-depths.




You could also construct the window decorations out of 4 subsurfaces
and get alpha working on the center subsurface, but I doubt that's
particularly useful.


Re: Sub-surface protocol

2012-12-18 Thread John Kåre Alsaker
On Tue, Dec 18, 2012 at 10:29 AM, Pekka Paalanen ppaala...@gmail.com wrote:
 On Mon, 17 Dec 2012 15:47:43 +
 Richard Hughes hughsi...@gmail.com wrote:

 On 5 December 2012 14:32, Pekka Paalanen ppaala...@gmail.com wrote:
  One of the most important use cases is a video player in a window. It
  has XRGB or ARGB window decorations, usually the video content in YUV,
  and possibly an ARGB overlay for subtitles etc. Currently, the client
  has to color-convert the video, and merge it with the decorations and
  subtitles, before sending the grand ARGB buffer to the compositor.

 Sorry to be late to the party. A subsurface idea was what Øyvind and I
 were discussing at the OpenICC hackfest when we were talking about
 tagging a surface with a known ICC profile. This would allow a toolkit
 to declare a window area to be AdobeRGB or some home-created camera
 profile from a jpeg that has been tagged with an embedded color
 profile.

 The use case are image viewers like EOG or editors like GIMP. Gimp
 wants to color correct the main preview window converting from source
 ICC profile to monitor ICC profile, but we really don't want to turn
 off the sRGB-monitor profile for the selector widgets. By using a
 subsurface we get the behaviour we want without adding tons of code.
 At the moment Oyranos has some kind of xatom protocol to let the
 compositor know an opt-out-region and this isn't ideal or suited at
 all to wayland.

 Anyway, I just wanted to say rock on and maybe to keep ICC color
 correction at the back of your mind for the sub-surface. :)

 Cool, yeah, I think color management of individual sub-surfaces will
 not be a problem, as long as the color management interfaces deal with
 wl_surface and wl_buffer objects from the core protocol. In my current
 plan, sub-surfaces are wl_surface objects with a particular role, so
 color management should interact in obvious ways, and work with minimal
 additional changes.


 Thanks,
 pq
My plan has always been to attach color space profiles to wl_surfaces,
which will just end up working with subsurfaces. When combined with
clipping, you can attach color profiles to any part of your surface.

On Tue, Dec 18, 2012 at 10:45 AM, Kai-Uwe Behrmann k...@gmx.de wrote:
 Am 17.12.2012 16:47, schrieb Richard Hughes:

 On 5 December 2012 14:32, Pekka Paalanen ppaala...@gmail.com wrote:

 One of the most important use cases is a video player in a window. It
 has XRGB or ARGB window decorations, usually the video content in YUV,
 and possibly an ARGB overlay for subtitles etc. Currently, the client
 has to color-convert the video, and merge it with the decorations and
 subtitles, before sending the grand ARGB buffer to the compositor.


 A subsurface idea was what Øyvind and I
 were discussing at the OpenICC hackfest when we were talking about
 tagging a surface with a known ICC profile. This would allow a toolkit
 to declare a window area to be AdobeRGB or some home-created camera
 profile from a jpeg that has been tagged with an embedded color
 profile.


 Sounds natural to help with CM compositing needs and was therefore discussed
 [1] for OpenICC's Color Management Near X project in 2008.

 However, we had seen quite some objections around subwindows at that time.
 Did something substancial change on that matter?
I don't see anything there that applies to Wayland. The link to the
original proposal is also dead.


 To put the question in other words, do Qt and Gtk developers now or soon
 play with the idea to use wayland as the internal compositing core?
I believe both have paths to use subwindows/subsurfaces for video and
OpenGL. These can probably be used for clients with special needs,
such as image viewers and editors which would simply attach an ICC
profile to the subsurface. For compositing they could use subsurfaces
to optimize scrolling performance. I expect that to happen with
browsers first and I believe Android already does that.


 kind regards
 Kai-Uwe Behrmann

 [1] https://mail.gnome.org/archives/gtk-devel-list/2008-June/msg00150.html





Re: Sub-surface protocol

2012-12-17 Thread John Kåre Alsaker
On Mon, Dec 17, 2012 at 11:40 PM, Bill Spitzak spit...@gmail.com wrote:
 That brings up an interesting possibility: can a subsurface (or any surface)
 image be a subrectangle of an image used for another surface? I'm thinking
 this would be possible, provided the client can tell the wayland server to
 use an arbitrary stride between rows.
You should be able to attach the same wl_buffer to multiple
wl_surfaces and set clipping (when that is added) accordingly.
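
For what it's worth, the buffer-sharing half of this already works with plain
wl_shm: two wl_buffers created from the same pool can reference overlapping
pixels, one of them a subrectangle, by choosing the offset and keeping the
full-image stride. Only the clipping part needs new protocol. A minimal
sketch, assuming the surfaces and the mapped pool fd already exist:

	#include <wayland-client.h>

	/* Sketch: carve two wl_buffers out of one wl_shm pool so that
	 * 'sub_surface' shows a 256x256 subrectangle of the same pixels
	 * 'main_surface' uses. Assumes pool_fd backs a width x height
	 * ARGB8888 image. */
	static void
	attach_shared_pixels(struct wl_shm *shm, int pool_fd,
			     struct wl_surface *main_surface,
			     struct wl_surface *sub_surface,
			     int width, int height)
	{
		int stride = width * 4;
		struct wl_shm_pool *pool =
			wl_shm_create_pool(shm, pool_fd, stride * height);

		/* The whole image for the main surface. */
		struct wl_buffer *whole = wl_shm_pool_create_buffer(pool, 0,
			width, height, stride, WL_SHM_FORMAT_ARGB8888);

		/* A subrectangle starting at (64, 64): offset into the same
		 * storage, but keep the full-image stride so rows line up. */
		struct wl_buffer *part = wl_shm_pool_create_buffer(pool,
			64 * stride + 64 * 4, 256, 256, stride,
			WL_SHM_FORMAT_ARGB8888);

		wl_surface_attach(main_surface, whole, 0, 0);
		wl_surface_attach(sub_surface, part, 0, 0);
		wl_surface_commit(sub_surface);
		wl_surface_commit(main_surface);

		/* The pool object can go away; the buffers keep the storage
		 * mapped. */
		wl_shm_pool_destroy(pool);
	}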


 Then a client such as gimp could draw all it's display into a single buffer.
 To get the different color correction of the center display, it would
 declare a subsurface containing this and set it's color correction
 differently using the wayland api. This would only allocate one buffer,
 which would save memory, but more importantly it should make internal code
 in gimp easier as you could work around a toolkit that assumes only one
 output buffer.
My approach to color correct rendering in Wayland is to let the
compositor handle it and have the clients simply specify which color
space they are using. Only the compositor can render clients correctly
when they are displayed on multiple monitors (unless we introduce a
way to set a buffer per-output).


 The main problem I see is that the subsurface has to be opaque, because any
 transparent parts would just show the same data with the wrong color
 correction. But that would cover a lot of uses.


 Richard Hughes wrote:

 On 5 December 2012 14:32, Pekka Paalanen ppaala...@gmail.com wrote:

 One of the most important use cases is a video player in a window. It
 has XRGB or ARGB window decorations, usually the video content in YUV,
 and possibly an ARGB overlay for subtitles etc. Currently, the client
 has to color-convert the video, and merge it with the decorations and
 subtitles, before sending the grand ARGB buffer to the compositor.


 Sorry to be late to the party. A subsurface idea was what Øyvind and I
 were discussing at the OpenICC hackfest when we were talking about
 tagging a surface with a known ICC profile. This would allow a toolkit
 to declare a window area to be AdobeRGB or some home-created camera
 profile from a jpeg that has been tagged with an embedded color
 profile.

 The use case are image viewers like EOG or editors like GIMP. Gimp
 wants to color correct the main preview window converting from source
 ICC profile to monitor ICC profile, but we really don't want to turn
 off the sRGB-monitor profile for the selector widgets. By using a
 subsurface we get the behaviour we want without adding tons of code.
 At the moment Oyranos has some kind of xatom protocol to let the
 compositor know an opt-out-region and this isn't ideal or suited at
 all to wayland.

 Anyway, I just wanted to say rock on and maybe to keep ICC color
 correction at the back of your mind for the sub-surface. :)

 Thanks,

 Richard


Re: Sub-surface protocol

2012-12-13 Thread John Kåre Alsaker
Here is my subsurface proposal. I don't like video sinks (or other
things) running at an independent framerate. I don't want to maintain
more state on the compositor side for them or have increased
complexity in the protocol. They are inefficient and can be handled in
a number of other ways: you can change the API, clients can buffer the
video sink state themselves (which you should be able to do if you're
porting to Wayland), or, in the worst case, render them off-screen and
let the client draw them as it pleases.

My proposal uses a wl_surface_group, which is a group of wl_surfaces
with a defined Z-order and positions relative to the main surface. It
has a concept of a main surface, which is the surface that represents
the wl_surface_group for other APIs like wl_shell_surface. Member
surfaces cannot have a wl_shell_surface. main_surface is created in the
create_surface_group request so it can have a different implementation
(which probably won't happen in Weston).

<interface name="wl_surface_group_factory" version="1">
    <request name="create_surface_group">
        <description summary="create a surface group">
            This creates a new wl_surface_group with 'main_surface' as the
            main surface.
            The commit for the main surface commits all pending place_above,
            place_below, set_position, remove requests and pending state for
            all member surfaces. Doing commit on a member surface will be a
            no-op.
            Moving the main surface will move all member surfaces too.
            Only the main surface can have an associated wl_shell_surface or
            other shell interfaces.
        </description>
        <arg name="new_surface_group" type="new_id" interface="wl_surface_group"/>
        <arg name="main_surface" type="new_id" interface="wl_surface"/>
    </request>
</interface>

<interface name="wl_surface_group" version="1">
    <request name="destroy" type="destructor">
    </request>

    <request name="remove">
        <description summary="remove a surface from the group">
            This removes a surface from the group. The main surface cannot
            be removed.
        </description>
        <arg name="surface" type="object" interface="wl_surface"/>
    </request>

    <request name="set_position">
        <description summary="set the position of a member surface">
            This sets the position of a member surface relative to the main
            surface.
        </description>
        <arg name="surface" type="object" interface="wl_surface"/>
        <arg name="x" type="int"/>
        <arg name="y" type="int"/>
    </request>

    <request name="place_above">
        <description summary="place a surface above another">
            This places 'surface' right above 'relative_to_surface'.
            If 'surface' isn't a member of this group, it will become a member.
            It is an error to add a surface belonging to another
            wl_surface_group.
        </description>
        <arg name="surface" type="object" interface="wl_surface"/>
        <arg name="relative_to_surface" type="object" interface="wl_surface"/>
    </request>

    <request name="place_below">
        <description summary="place a surface below another">
            This places 'surface' right below 'relative_to_surface'.
            If 'surface' isn't a member of this group, it will become a member.
            It is an error to add a surface belonging to another
            wl_surface_group.
        </description>
        <arg name="surface" type="object" interface="wl_surface"/>
        <arg name="relative_to_surface" type="object" interface="wl_surface"/>
    </request>
</interface>

I see that in Weston the shell is very fond of inspecting geometry. We
also have the alpha property per-surface instead of per
wl_shell_surface. For fading windows we may want to compose the
wl_surface_group into a single image.

On Thu, Dec 13, 2012 at 12:54 PM, Pekka Paalanen ppaala...@gmail.com wrote:
 On Wed, 12 Dec 2012 14:22:49 +
 Daniel Stone dan...@fooishbar.org wrote:

 Hi,

 On 6 December 2012 01:32, Pekka Paalanen ppaala...@gmail.com wrote:

  Clipping
 
  The term sub-surface sounds like a sub-window, which may cause one to
  think, that it will be clipped to the parent surface area. I do not
  think we should force or even allow such clipping.
 
  Forcing the clipping would waste memory: for every pixel in
  sub-surfaces there would have to be a pixel in the parent surface.
  For instance, drawing window decorations in four sub-surfaces around
  the main content would not only waste memory, but also not solve the
  problem of GL applications, where one needs to reserve a margin for the
  decorations. (Hello, window decoration libraries? :-)
 

 Right, so we're coming at this from completely different angles 

Re: Sub-surface protocol

2012-12-13 Thread John Kåre Alsaker
I added a buffered commit_surface request to wl_surface_group, which
allows clients to atomically update the surfaces of their choice. This
way we don't have to change wl_surface.commit. We can also update the
main or parent surface and subsurfaces independently.

<interface name="wl_surface_group_factory" version="1">
    <request name="create_surface_group">
        <description summary="create a surface group">
            This creates a new wl_surface_group with 'main_surface' as the
            main surface.
            Moving the main surface will move all member surfaces too.
            Only the main surface can have an associated wl_shell_surface or
            other shell interfaces.
        </description>
        <arg name="new_surface_group" type="new_id" interface="wl_surface_group"/>
        <arg name="main_surface" type="object" interface="wl_surface"/>
    </request>
</interface>

<interface name="wl_surface_group" version="1">
    <request name="destroy" type="destructor">
    </request>

    <request name="remove">
        <description summary="remove a surface from the group">
            This removes a surface from the group. The main surface cannot
            be removed.

            This won't take effect until wl_surface_group.commit is called.
        </description>
        <arg name="surface" type="object" interface="wl_surface"/>
    </request>

    <request name="set_position">
        <description summary="set the position of a member surface">
            This sets the position of a member surface relative to the main
            surface.

            This won't take effect until wl_surface_group.commit is called.
        </description>
        <arg name="surface" type="object" interface="wl_surface"/>
        <arg name="x" type="int"/>
        <arg name="y" type="int"/>
    </request>

    <request name="place_above">
        <description summary="place a surface above another">
            This places 'surface' right above 'relative_to_surface'.
            If 'surface' isn't a member of this group, it will become a member.
            It is an error to add a surface belonging to another
            wl_surface_group.

            This won't take effect until wl_surface_group.commit is called.
        </description>
        <arg name="surface" type="object" interface="wl_surface"/>
        <arg name="relative_to_surface" type="object" interface="wl_surface"/>
    </request>

    <request name="place_below">
        <description summary="place a surface below another">
            This places 'surface' right below 'relative_to_surface'.
            If 'surface' isn't a member of this group, it will become a member.
            It is an error to add a surface belonging to another
            wl_surface_group.

            This won't take effect until wl_surface_group.commit is called.
        </description>
        <arg name="surface" type="object" interface="wl_surface"/>
        <arg name="relative_to_surface" type="object" interface="wl_surface"/>
    </request>

    <request name="commit_surface">
        <description summary="commit a surface">
            This calls wl_surface.commit on the surface.

            This won't take effect until wl_surface_group.commit is called.
        </description>
        <arg name="surface" type="object" interface="wl_surface"/>
    </request>

    <request name="commit">
        <description summary="execute all pending requests">
            This executes all pending remove, place_above, place_below,
            set_position and commit_surface requests atomically.
        </description>
    </request>
</interface>
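
A hypothetical client-side view of the above, to illustrate the intended
commit semantics. The C functions are what wayland-scanner would generate
from this XML if the protocol existed; none of them are real, shipped API.

	/* Hypothetical usage of the proposed wl_surface_group; the
	 * wl_surface_group_* functions below do not exist, they only
	 * illustrate the XML above. */
	static void
	update_window_atomically(struct wl_surface_group *group,
				 struct wl_surface *main_surface,
				 struct wl_surface *video)
	{
		/* Pending wl_surface state (attach/damage) is queued on each
		 * member surface as usual, then grouped here: */
		wl_surface_group_set_position(group, video, 32, 240);
		wl_surface_group_place_below(group, video, main_surface);
		wl_surface_group_commit_surface(group, main_surface);
		wl_surface_group_commit_surface(group, video);

		/* Nothing above takes effect until this single request, so
		 * the whole set of surfaces updates atomically. */
		wl_surface_group_commit(group);
	}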

On Thu, Dec 13, 2012 at 3:30 PM, Pekka Paalanen ppaala...@gmail.com wrote:
 Hi John,


 On Thu, 13 Dec 2012 14:51:17 +0100
 John Kåre Alsaker john.kare.alsa...@gmail.com wrote:

 Here is my subsurface proposal. I don't like video sinks (or other
 things) running on an independent framerate. I don't want to maintain
 more state in the compositor side for them or have increased
 complexity in the protocol. They are inefficient and can be solved by
 a number of other ways. In those cases you can change the API, clients
 can buffer the video sink state for themselves (which you should be
 able to do if you're porting to Wayland), or the worst case, render
 them off-screen and let the client draw them as they please.

 Ok, that is the reason you chose that sub-surface commits are no-op,
 right?

 My proposal uses a wl_surface_group which is a group of wl_surfaces
 with a defined Z-order and positions relative to the main surface. It
 has a concept of a main surface which

Re: Sub-surface protocol

2012-12-13 Thread John Kåre Alsaker
On Thu, Dec 13, 2012 at 11:47 PM, Bill Spitzak spit...@gmail.com wrote:
 I see no reason for extra objects. What I would do is add a parent to the
 normal surface. If it is NULL then it is a main surface. If it points at
 another surface then it is a subsurface or a floating window. The parent
 can be changed arbitrarily.
We could also go the other way and add a child pointer, but linked
lists require more operations to do things like raise a surface above
another.

 We must be able to change an existing surface
 from being a main to a subsurface to a floating window at any time, so
 making them different types is not acceptable.
I don't see why we have to be able to change the surface type.
Creating a new surface for each purpose is cheap. I'm not against
being able to do that, I just don't see it as a requirement.


 If the parent is another subsurface that controls the ordering, in an
 obvious manner without any additional api!

 FLOATING WINDOWS:

 PLEASE consider reusing this code for floating windows. THEY ARE THE SAME
 THING!!! The only difference is that the compositor can insert other
 surfaces between floating children and the parent.
Floating windows can have interactions with the shell (animations,
etc) while subsurfaces are a way to offload rendering work from the
clients and represent a single window.


 This would allow all work done to make subsurfaces not blink and to move in
 lock-step also be used for floating windows, which would address a
 deficiency in all existing window systems.
We should probably have a way to atomically update multiple wl_surfaces
anyway (which could be shared between subsurfaces and floating windows).
Too bad atomicity wasn't added generically to the Wayland protocol; then
we wouldn't have to add complexity to the code or protocol.

 It will also force the floating
 window api to allow the client to be in final control of the stacking order,
 a deficiency in all existing window systems.
Do you have an example of a UI which requires the client to be in
control of the stacking order? If that's really desired, it probably
won't hurt if it shares the stacking-order protocol with subsurfaces.


 Conversely floating windows have a lot of stuff figured out for focus and
 event handling, like grabs, and cooperation between tasks and threads. All
 of this is needed for subwindows so it makes sense to reuse it.
That's mostly wl_surface things which will just work for subsurfaces.


 CLIP:

 Main windows need clip as well. This is so they can draw edge decorations
 that are clipped off when the window is maximized, rather than having to
 alter their drawings depending on maximization.
Most existing toolkits already alter and redraw for the maximized state,
or even resize.

I think we all agree that clipping (and perhaps eventually scaling), which
the compositor can do more efficiently, has nothing to do with subsurfaces
and should be added to wl_surface.


 More important surfaces should be able to inverse-clip (the clip is bigger
 than the image) for cases where the shell api needs a particular size but
 the client knows that a portion is not visible. We need this to clip off any
 number of edges less than 4 (for maximize horizontal/vertical), and to allow
 subwindows to be used to add the borders to a main window that is actually
 smaller than it tells the shell.
I'm not sure what you mean by that.


Re: [RFCv2 6/9] gl-renderer: Add support for blending in linear space.

2012-11-19 Thread John Kåre Alsaker
On Mon, Nov 19, 2012 at 9:52 AM, Pekka Paalanen ppaala...@gmail.com wrote:
 On Sat, 17 Nov 2012 03:23:57 +0100
 John Kåre Alsaker john.kare.alsa...@gmail.com wrote:

 This makes compositing gamma correct by assuming all input surfaces are in
 the sRGB color space. It can be enabled by setting color-managed to true
 in the compositor section of weston.ini.

 It's implemented by converting from sRGB gamma using the new 
 CONVERSION_FROM_SRGB
 shader attribute and drawing the surfaces into a temporary buffer (the
 indirect rendering introduced in the previous patch). That buffer is then
 drawn to the framebuffer using the OUTPUT_TO_SRGB shader attribute, which
 converts back into sRGB gamma.

 Both the temporary buffer and the sRGB decode LUT need at least 12 bits per
 channel for flawless results with 8-bit input/output. This is not provided
 by OpenGL ES 2 by default so desktop OpenGL is required for usable results.

 It also adds a check to ensure we have enough texture units for the planes
 and the LUT.
 ---
  src/compositor.c  |   7 ++-
  src/compositor.h  |   2 +
  src/gl-internal.h |  13 -
  src/gl-renderer.c |  45 ---
  src/gl-shaders.c  | 169 
 +++---
  5 files changed, 219 insertions(+), 17 deletions(-)
 ...
 diff --git a/src/gl-renderer.c b/src/gl-renderer.c
 index b35c5a9..c5b3908 100644
 --- a/src/gl-renderer.c
 +++ b/src/gl-renderer.c
 ...
 @@ -1600,6 +1619,7 @@ gl_renderer_setup(struct weston_compositor *ec, 
 EGLSurface egl_surface)
   gr-bgra_format = GL_BGRA;
   gr-short_type = GL_UNSIGNED_SHORT;
   gr-rgba16_internal_format = GL_RGBA16;
 + gr-l16_internal_format = GL_LUMINANCE16;
  #else
   static const EGLint context_attribs[] = {
   EGL_CONTEXT_CLIENT_VERSION, 2,
 @@ -1610,6 +1630,7 @@ gl_renderer_setup(struct weston_compositor *ec, 
 EGLSurface egl_surface)
   gr-bgra_format = GL_BGRA_EXT;
   gr-short_type = GL_UNSIGNED_BYTE;
   gr-rgba16_internal_format = GL_RGBA;
 + gr-l16_internal_format = GL_LUMINANCE;
  #endif

 You are using a luminance format for a single-channel texture, that
 acts as the lookup table?
I am


 Did you consider GL_R formats? I.e. single-channel color formats.
 Luminance is sort of a thing of the past, AFAIU.
So is OpenGL 2.1, which I'm using.



 Thanks,
 pq
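
For reference, a sketch of the single-channel LUT upload being discussed.
GL_LUMINANCE16 is what the patch selects for desktop GL 2.1
(gr-l16_internal_format); the GL_R16 branch is an assumed alternative that
would additionally require OpenGL 3.0+ or ARB_texture_rg, which is why it is
not what the patch uses. The function name, the HAVE_TEXTURE_RG switch and the
size x 1 texture layout are illustrative only.

#include <stdint.h>
#include <GL/gl.h>

/* Sketch only: upload the sRGB-decode LUT as a single-channel texture. */
static void
upload_srgb_lut(GLuint tex, const uint16_t *lut, int size)
{
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

#ifdef HAVE_TEXTURE_RG  /* hypothetical: needs GL 3.0+ or ARB_texture_rg */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_R16, size, 1, 0,
                     GL_RED, GL_UNSIGNED_SHORT, lut);
#else                   /* the luminance path available on desktop GL 2.1 */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE16, size, 1, 0,
                     GL_LUMINANCE, GL_UNSIGNED_SHORT, lut);
#endif
}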


Re: [RFCv2 3/9] gl-renderer: Add optional support for desktop OpenGL.

2012-11-19 Thread John Kåre Alsaker
On Mon, Nov 19, 2012 at 9:46 AM, Pekka Paalanen ppaala...@gmail.com wrote:
 On Sat, 17 Nov 2012 03:23:54 +0100
 John Kåre Alsaker john.kare.alsa...@gmail.com wrote:

 This adds support for desktop OpenGL, which can be enabled with
 ./configure --enable-opengl.

 Most of the differences in API between OpenGL and OpenGL ES are hidden by
 the new gl_renderer fields.

 It also accesses GLES2 extensions by including GLES2/gl2platform.h directly.
 ---
  configure.ac | 13 +-
  src/Makefile.am  |  4 +--
  src/compositor-rpi.c |  2 +-
  src/compositor.h |  1 +
  src/gl-internal.h|  7 +++---
  src/gl-renderer.c| 69 
 +++-
  src/gl-renderer.h| 19 +++
  src/gl-shaders.c |  5 ++--
  8 files changed, 89 insertions(+), 31 deletions(-)

 diff --git a/src/gl-internal.h b/src/gl-internal.h
 index 994c139..83b351f 100644
 --- a/src/gl-internal.h
 +++ b/src/gl-internal.h
 @@ -24,9 +24,6 @@
  #ifndef _GL_INTERNAL_H_
  #define _GL_INTERNAL_H_

 -#include GLES2/gl2.h
 -#include GLES2/gl2ext.h
 -
  #include stdlib.h
  #include string.h
  #include ctype.h
 @@ -106,6 +103,10 @@ struct gl_renderer {
   int32_t width, height;
   } border;

 + GLenum bgra_internal_format, bgra_format;
 + GLenum rgba16_internal_format;

 Just noticed, rgba16 format is set but not used here, so it would
 belong to the linear blending patch.
Yeah, it's a refactoring artifact.


 Are you sure it doesn't depend on an extension?
It doesn't.



 Thanks,
 pq


Re: [RFCv2 3/9] gl-renderer: Add optional support for desktop OpenGL.

2012-11-19 Thread John Kåre Alsaker
On Mon, Nov 19, 2012 at 9:21 AM, Pekka Paalanen ppaala...@gmail.com wrote:
 On Sat, 17 Nov 2012 03:23:54 +0100
 John Kåre Alsaker john.kare.alsa...@gmail.com wrote:

 This adds support for desktop OpenGL, which can be enabled with
 ./configure --enable-opengl.

 Most of the differences in API between OpenGL and OpenGL ES are hidden by
 the new gl_renderer fields.

 It also accesses GLES2 extensions by including GLES2/gl2platform.h directly.

 Hi John,

 I'm only commenting on some details of this patch, I'm concerned about
 the differences between desktop GL and GLESv2. I'm assuming the
 general approach is fine, and not evaluating that.

 ---
  configure.ac | 13 +-
  src/Makefile.am  |  4 +--
  src/compositor-rpi.c |  2 +-
  src/compositor.h |  1 +
  src/gl-internal.h|  7 +++---
  src/gl-renderer.c| 69 
 +++-
  src/gl-renderer.h| 19 +++
  src/gl-shaders.c |  5 ++--
  8 files changed, 89 insertions(+), 31 deletions(-)

 diff --git a/configure.ac b/configure.ac
 index 7329c06..df43c75 100644
 --- a/configure.ac
 +++ b/configure.ac
 @@ -34,7 +34,18 @@ AC_CHECK_HEADERS([execinfo.h])
  AC_CHECK_FUNCS([mkostemp strchrnul])

  PKG_CHECK_MODULES(COMPOSITOR,
 -   [wayland-server egl = 7.10 glesv2 xkbcommon pixman-1])
 +   [wayland-server egl = 7.10 xkbcommon pixman-1])
 +
 +
 +AC_ARG_ENABLE(opengl, [  --enable-opengl],,
 +   enable_opengl=no)
 +AM_CONDITIONAL(ENABLE_OPENGL, test x$enable_opengl = xyes)
 +if test x$enable_opengl = xyes; then
 +  PKG_CHECK_MODULES([OPENGL], gl)
 +  AC_DEFINE([BUILD_OPENGL], [1], [Build using desktop OpenGL])
 +else
 +  PKG_CHECK_MODULES([OPENGL], glesv2)
 +fi

 This produces some strange looking results. If you are on GLESv2, then
 BUILD_OPENGL will not be defined, yet OPENGL_{LIBS,CFLAGS} will be used
 and needed. Perhaps rename BUILD_OPENGL to HAVE_DESKTOP_OPENGL or
 something, so it's less confusing.
I'll rename that to BUILD_DESKTOP_OPENGL.


  AC_ARG_ENABLE(setuid-install, [  --enable-setuid-install],,
 diff --git a/src/Makefile.am b/src/Makefile.am
 index 0c315cd..bea769c 100644
 --- a/src/Makefile.am
 +++ b/src/Makefile.am
 @@ -7,8 +7,8 @@ AM_CPPFLAGS = \
   -DLIBEXECDIR='$(libexecdir)'

  weston_LDFLAGS = -export-dynamic
 -weston_CFLAGS = $(GCC_CFLAGS) $(COMPOSITOR_CFLAGS)
 -weston_LDADD = $(COMPOSITOR_LIBS) $(DLOPEN_LIBS) -lm ../shared/libshared.la
 +weston_CFLAGS = $(GCC_CFLAGS) $(COMPOSITOR_CFLAGS) $(OPENGL_CFLAGS)
 +weston_LDADD = $(COMPOSITOR_LIBS) $(OPENGL_LIBS) $(DLOPEN_LIBS) -lm 
 ../shared/libshared.la

  weston_SOURCES = \
   git-version.h   \
 diff --git a/src/compositor-rpi.c b/src/compositor-rpi.c
 index 339032a..d3ae952 100644
 --- a/src/compositor-rpi.c
 +++ b/src/compositor-rpi.c
 @@ -1457,7 +1457,7 @@ rpi_compositor_create(struct wl_display *display, int 
 argc, char *argv[],
   EGL_GREEN_SIZE, 1,
   EGL_BLUE_SIZE, 1,
   EGL_ALPHA_SIZE, 0,
 - EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
 + EGL_RENDERABLE_TYPE, GL_EGL_OPENGL_BIT,

 This looks wrong at first: it appears that you are using a GL value in
 an EGL call. Even if you invented GL_EGL_OPENGL_BIT yourself (Google
 does not know about it), the name should be clearly distinguishable from
 official GL and EGL macros.

 Yeah, IIRC there was another similar thing already in window.c, and
 that's not good either.

 So, we'd need a name that is clearly our own. How about WESTON_EGL_*
 and WESTON_GL_* name spaces?
We should probably just use GL_RENDERER_*, although that could be confusing too.
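
A minimal sketch of that idea; the macro name is illustrative, and
BUILD_DESKTOP_OPENGL is the renamed config define mentioned earlier in this
thread.

/* Sketch: pick the EGL renderable-type bit at compile time under a
 * renderer-private name, so it cannot be mistaken for an official
 * EGL or GL macro. */
#ifdef BUILD_DESKTOP_OPENGL
#define GL_RENDERER_RENDERABLE_TYPE EGL_OPENGL_BIT
#else
#define GL_RENDERER_RENDERABLE_TYPE EGL_OPENGL_ES2_BIT
#endif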


   EGL_NONE
   };

 diff --git a/src/compositor.h b/src/compositor.h
 index e5c579b..a5dd3c6 100644
 --- a/src/compositor.h
 +++ b/src/compositor.h
 @@ -24,6 +24,7 @@
  #ifndef _WAYLAND_SYSTEM_COMPOSITOR_H_
  #define _WAYLAND_SYSTEM_COMPOSITOR_H_

 +#include config.h
  #include pixman.h
  #include xkbcommon/xkbcommon.h
  #include wayland-server.h
 diff --git a/src/gl-internal.h b/src/gl-internal.h
 index 994c139..83b351f 100644
 --- a/src/gl-internal.h
 +++ b/src/gl-internal.h
 @@ -24,9 +24,6 @@
  #ifndef _GL_INTERNAL_H_
  #define _GL_INTERNAL_H_

 -#include GLES2/gl2.h
 -#include GLES2/gl2ext.h
 -
  #include stdlib.h
  #include string.h
  #include ctype.h
 @@ -106,6 +103,10 @@ struct gl_renderer {
   int32_t width, height;
   } border;

 + GLenum bgra_internal_format, bgra_format;
 + GLenum rgba16_internal_format;
 + GLenum short_type;

 These are chosen only at compile time, right? Not runtime, so we don't
 really need these as variables. Maybe WESTON_GL_* macros?
I heard Kristian didn't like macros.


 +
   PFNGLEGLIMAGETARGETTEXTURE2DOESPROC image_target_texture_2d;
   PFNEGLCREATEIMAGEKHRPROC create_image;
   PFNEGLDESTROYIMAGEKHRPROC destroy_image;
 diff --git a/src/gl

Re: Problems with the newest wayland git

2012-11-19 Thread John Kåre Alsaker
On Mon, Nov 19, 2012 at 8:32 PM, Bill Spitzak spit...@gmail.com wrote:
 Can somebody explain why compiling weston with the switch
 --with-cairo-glesv2 would perhaps fix the weston clients (I am not at my
 machine to try this so I don't know if it does). Doesn't that switch *make*
 it use EGL rather than having cairo write to the image buffer?
It will try to use EGL and GLESv2 if that exists on the system and
fall back to wl_shm and cairo's image backend otherwise.



 Bill Spitzak wrote:

 Scott Moreau wrote:

 I believe all open drivers should work with drm at this point. There was
 a nouveau kernel module bug that was recently fixed. The report can be found
 here https://bugs.freedesktop.org/show_bug.cgi?id=54576


 Nouveau has a bug that makes x11-compositor very very slow (event-ready
 returns false after each event is read for Wayland). This has remained true
 through 4 versions of Ubuntu and every window manager I can think of. I
 posted several emails here about it trying to get hints as to how to fix
 this (I assume there is something x11-compositor can do, as I do not see
 equivalent bugs in other programs).

 Despite the obvious slowness, software rendering on the Nvidia driver made
 x11-compositor actually usable by avoiding this bug. Since I am interested
 in trying to design parts of the client/server api of Wayland I do not think
 I should be running it on a system with obvious synchronization error as
 that could hide errors in the design.



Re: Problems with the newest wayland git

2012-11-19 Thread John Kåre Alsaker
On Mon, Nov 19, 2012 at 9:05 PM, Bill Spitzak spit...@gmail.com wrote:


 John Kåre Alsaker wrote:

 On Mon, Nov 19, 2012 at 8:32 PM, Bill Spitzak spit...@gmail.com wrote:

 Can somebody explain why compiling weston with the switch
 --with-cairo-glesv2 would perhaps fix the weston clients (I am not at
 my
 machine to try this so I don't know if it does). Doesn't that switch
 *make*
 it use EGL rather than having cairo write to the image buffer?


 It will try to use EGL and GLESv2 if that exists on the system and
 fall back to wl_shm and cairo's image backend otherwise.


 I'm still confused. What does it do if you *don't* use that switch?
It will try to use EGL and desktop OpenGL and fall back to wl_shm and
cairo's image backend otherwise.


[RFCv2 0/9] Adding gamma correct compositing

2012-11-16 Thread John Kåre Alsaker
Previous RFC:
http://lists.freedesktop.org/archives/wayland-devel/2012-November/006244.html

This is the set of patches which adds more flexible shader generation, adds
support for desktop OpenGL and finally adds gamma-correct compositing, which
depends on the former two.

The most problematic commit is probably the desktop OpenGL one. It links to
prototypes directly and includes GLES2 headers. I'd rather use GLES3, which
doesn't have such problems, but it's not supported by Mesa yet and it will
probably take quite a while until it's supported by most Mesa drivers.

*** BLURB HERE ***

John Kåre Alsaker (9):
  gl-renderer: Move shader functions into its own file.
  gl-renderer: Add flexible shader generation.
  gl-renderer: Add optional support for desktop OpenGL.
  gl-renderer: Tweak YUV shader.
  gl-renderer: Adds the ability to draw damage to a texture and in turn
to the framebuffer.
  gl-renderer: Add support for blending in linear space.
  gl-renderer: Don't multiply with alpha uniform when it's 1.0.
  gl-renderer: Remove unneeded glActiveTexture call.
  gl-renderer: Clean up EGL image and GL texture destruction.

 configure.ac |  13 +-
 src/Makefile.am  |   6 +-
 src/compositor-rpi.c |   2 +-
 src/compositor.c |   7 +-
 src/compositor.h |   3 +
 src/gl-internal.h| 189 +
 src/gl-renderer.c| 731 --
 src/gl-renderer.h|  19 ++
 src/gl-shaders.c | 745 +++
 9 files changed, 1276 insertions(+), 439 deletions(-)
 create mode 100644 src/gl-internal.h
 create mode 100644 src/gl-shaders.c

-- 
1.8.0



[RFCv2 1/9] gl-renderer: Move shader functions into its own file.

2012-11-16 Thread John Kåre Alsaker
This moves all shader functions into their own file in preparation for more
flexible shader generation. It adds gl-internal.h for shared definitions
between the files.
---
 src/Makefile.am   |   2 +
 src/gl-internal.h | 138 
 src/gl-renderer.c | 376 ++
 src/gl-shaders.c  | 284 +
 4 files changed, 437 insertions(+), 363 deletions(-)
 create mode 100644 src/gl-internal.h
 create mode 100644 src/gl-shaders.c

diff --git a/src/Makefile.am b/src/Makefile.am
index ef71d00..0c315cd 100644
--- a/src/Makefile.am
+++ b/src/Makefile.am
@@ -34,6 +34,8 @@ weston_SOURCES =  \
util.c  \
matrix.c\
matrix.h\
+   gl-shaders.c\
+   gl-internal.h   \
gl-renderer.h   \
gl-renderer.c   \
noop-renderer.c \
diff --git a/src/gl-internal.h b/src/gl-internal.h
new file mode 100644
index 000..90912df
--- /dev/null
+++ b/src/gl-internal.h
@@ -0,0 +1,138 @@
+/*
+ * Copyright © 2012 Intel Corporation
+ * Copyright © 2012 John Kåre Alsaker
+ *
+ * Permission to use, copy, modify, distribute, and sell this software and
+ * its documentation for any purpose is hereby granted without fee, provided
+ * that the above copyright notice appear in all copies and that both that
+ * copyright notice and this permission notice appear in supporting
+ * documentation, and that the name of the copyright holders not be used in
+ * advertising or publicity pertaining to distribution of the software
+ * without specific, written prior permission.  The copyright holders make
+ * no representations about the suitability of this software for any
+ * purpose.  It is provided as is without express or implied warranty.
+ *
+ * THE COPYRIGHT HOLDERS DISCLAIM ALL WARRANTIES WITH REGARD TO THIS
+ * SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
+ * FITNESS, IN NO EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY
+ * SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER
+ * RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF
+ * CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
+ * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+#ifndef _GL_INTERNAL_H_
+#define _GL_INTERNAL_H_
+
+#include GLES2/gl2.h
+#include GLES2/gl2ext.h
+
+#include stdlib.h
+#include string.h
+#include ctype.h
+#include float.h
+#include assert.h
+#include linux/input.h
+
+#include gl-renderer.h
+
+#include EGL/eglext.h
+#include weston-egl-ext.h
+
+struct gl_shader {
+   GLuint program;
+   GLuint vertex_shader, fragment_shader;
+   GLint proj_uniform;
+   GLint tex_uniforms[3];
+   GLint alpha_uniform;
+   GLint color_uniform;
+};
+
+struct gl_output_state {
+   EGLSurface egl_surface;
+};
+
+struct gl_surface_state {
+   GLfloat color[4];
+   struct gl_shader *shader;
+
+   GLuint textures[3];
+   int num_textures;
+
+   EGLImageKHR images[3];
+   GLenum target;
+   int num_images;
+};
+
+struct gl_renderer {
+   struct weston_renderer base;
+   int fragment_shader_debug;
+
+   EGLDisplay egl_display;
+   EGLContext egl_context;
+   EGLConfig egl_config;
+
+   struct {
+   int32_t top, bottom, left, right;
+   GLuint texture;
+   int32_t width, height;
+   } border;
+
+   PFNGLEGLIMAGETARGETTEXTURE2DOESPROC image_target_texture_2d;
+   PFNEGLCREATEIMAGEKHRPROC create_image;
+   PFNEGLDESTROYIMAGEKHRPROC destroy_image;
+
+   int has_unpack_subimage;
+
+   PFNEGLBINDWAYLANDDISPLAYWL bind_display;
+   PFNEGLUNBINDWAYLANDDISPLAYWL unbind_display;
+   PFNEGLQUERYWAYLANDBUFFERWL query_buffer;
+   int has_bind_display;
+
+   int has_egl_image_external;
+
+   struct gl_shader texture_shader_rgba;
+   struct gl_shader texture_shader_rgbx;
+   struct gl_shader texture_shader_egl_external;
+   struct gl_shader texture_shader_y_uv;
+   struct gl_shader texture_shader_y_u_v;
+   struct gl_shader texture_shader_y_xuxv;
+   struct gl_shader invert_color_shader;
+   struct gl_shader solid_shader;
+   struct gl_shader *current_shader;
+};
+
+static inline struct gl_output_state *
+get_output_state(struct weston_output *output)
+{
+   return (struct gl_output_state *)output-renderer_state;
+}
+
+static inline struct gl_surface_state *
+get_surface_state(struct weston_surface *surface)
+{
+   return (struct gl_surface_state *)surface-renderer_state;
+}
+
+static inline struct gl_renderer *
+get_renderer(struct weston_compositor *ec)
+{
+   return (struct gl_renderer *)ec-renderer;
+}
+
+int
+gl_init_shaders

[RFCv2 3/9] gl-renderer: Add optional support for desktop OpenGL.

2012-11-16 Thread John Kåre Alsaker
This adds support for desktop OpenGL, which can be enabled with
./configure --enable-opengl.

Most of the differences in API between OpenGL and OpenGL ES are hidden by
the new gl_renderer fields.

It also accesses GLES2 extensions by including GLES2/gl2platform.h directly.
---
 configure.ac | 13 +-
 src/Makefile.am  |  4 +--
 src/compositor-rpi.c |  2 +-
 src/compositor.h |  1 +
 src/gl-internal.h|  7 +++---
 src/gl-renderer.c| 69 +++-
 src/gl-renderer.h| 19 +++
 src/gl-shaders.c |  5 ++--
 8 files changed, 89 insertions(+), 31 deletions(-)

diff --git a/configure.ac b/configure.ac
index 7329c06..df43c75 100644
--- a/configure.ac
+++ b/configure.ac
@@ -34,7 +34,18 @@ AC_CHECK_HEADERS([execinfo.h])
 AC_CHECK_FUNCS([mkostemp strchrnul])
 
 PKG_CHECK_MODULES(COMPOSITOR,
- [wayland-server egl = 7.10 glesv2 xkbcommon pixman-1])
+ [wayland-server egl = 7.10 xkbcommon pixman-1])
+
+
+AC_ARG_ENABLE(opengl, [  --enable-opengl],,
+ enable_opengl=no)
+AM_CONDITIONAL(ENABLE_OPENGL, test x$enable_opengl = xyes)
+if test x$enable_opengl = xyes; then
+  PKG_CHECK_MODULES([OPENGL], gl)
+  AC_DEFINE([BUILD_OPENGL], [1], [Build using desktop OpenGL])
+else
+  PKG_CHECK_MODULES([OPENGL], glesv2)
+fi
 
 
 AC_ARG_ENABLE(setuid-install, [  --enable-setuid-install],,
diff --git a/src/Makefile.am b/src/Makefile.am
index 0c315cd..bea769c 100644
--- a/src/Makefile.am
+++ b/src/Makefile.am
@@ -7,8 +7,8 @@ AM_CPPFLAGS =   \
-DLIBEXECDIR='$(libexecdir)'
 
 weston_LDFLAGS = -export-dynamic
-weston_CFLAGS = $(GCC_CFLAGS) $(COMPOSITOR_CFLAGS)
-weston_LDADD = $(COMPOSITOR_LIBS) $(DLOPEN_LIBS) -lm ../shared/libshared.la
+weston_CFLAGS = $(GCC_CFLAGS) $(COMPOSITOR_CFLAGS) $(OPENGL_CFLAGS)
+weston_LDADD = $(COMPOSITOR_LIBS) $(OPENGL_LIBS) $(DLOPEN_LIBS) -lm 
../shared/libshared.la
 
 weston_SOURCES =   \
git-version.h   \
diff --git a/src/compositor-rpi.c b/src/compositor-rpi.c
index 339032a..d3ae952 100644
--- a/src/compositor-rpi.c
+++ b/src/compositor-rpi.c
@@ -1457,7 +1457,7 @@ rpi_compositor_create(struct wl_display *display, int 
argc, char *argv[],
EGL_GREEN_SIZE, 1,
EGL_BLUE_SIZE, 1,
EGL_ALPHA_SIZE, 0,
-   EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
+   EGL_RENDERABLE_TYPE, GL_EGL_OPENGL_BIT,
EGL_NONE
};
 
diff --git a/src/compositor.h b/src/compositor.h
index e5c579b..a5dd3c6 100644
--- a/src/compositor.h
+++ b/src/compositor.h
@@ -24,6 +24,7 @@
 #ifndef _WAYLAND_SYSTEM_COMPOSITOR_H_
 #define _WAYLAND_SYSTEM_COMPOSITOR_H_
 
+#include config.h
 #include pixman.h
 #include xkbcommon/xkbcommon.h
 #include wayland-server.h
diff --git a/src/gl-internal.h b/src/gl-internal.h
index 994c139..83b351f 100644
--- a/src/gl-internal.h
+++ b/src/gl-internal.h
@@ -24,9 +24,6 @@
 #ifndef _GL_INTERNAL_H_
 #define _GL_INTERNAL_H_
 
-#include GLES2/gl2.h
-#include GLES2/gl2ext.h
-
 #include stdlib.h
 #include string.h
 #include ctype.h
@@ -106,6 +103,10 @@ struct gl_renderer {
int32_t width, height;
} border;
 
+   GLenum bgra_internal_format, bgra_format;
+   GLenum rgba16_internal_format;
+   GLenum short_type;
+
PFNGLEGLIMAGETARGETTEXTURE2DOESPROC image_target_texture_2d;
PFNEGLCREATEIMAGEKHRPROC create_image;
PFNEGLDESTROYIMAGEKHRPROC destroy_image;
diff --git a/src/gl-renderer.c b/src/gl-renderer.c
index 1a9f782..1956759 100644
--- a/src/gl-renderer.c
+++ b/src/gl-renderer.c
@@ -878,11 +878,12 @@ gl_renderer_read_pixels(struct weston_output *output,
   uint32_t x, uint32_t y,
   uint32_t width, uint32_t height)
 {
+   struct gl_renderer *gr = get_renderer(output-compositor);
GLenum gl_format;
 
switch (format) {
case PIXMAN_a8r8g8b8:
-   gl_format = GL_BGRA_EXT;
+   gl_format = gr-bgra_format;
break;
case PIXMAN_a8b8g8r8:
gl_format = GL_RGBA;
@@ -928,9 +929,9 @@ gl_renderer_flush_damage(struct weston_surface *surface)
glBindTexture(GL_TEXTURE_2D, gs-textures[0]);
 
if (!gr-has_unpack_subimage) {
-   glTexImage2D(GL_TEXTURE_2D, 0, GL_BGRA_EXT,
+   glTexImage2D(GL_TEXTURE_2D, 0, gr-bgra_internal_format,
 surface-pitch, surface-buffer-height, 0,
-GL_BGRA_EXT, GL_UNSIGNED_BYTE,
+gr-bgra_format, GL_UNSIGNED_BYTE,
 wl_shm_buffer_get_data(surface-buffer));
 
goto done;
@@ -948,7 +949,7 @@ gl_renderer_flush_damage(struct weston_surface *surface)
rectangles[i].x1, rectangles[i].y1,

[RFCv2 4/9] gl-renderer: Tweak YUV shader.

2012-11-16 Thread John Kåre Alsaker
A more vectorized version of the YUV shader; it may or may not be faster.

It's also untested...
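
For reference, writing Y' = 1.16438356 * (yuv.x - 0.0625), U = yuv.y - 0.5 and
V = yuv.z - 0.5, both the old shader and this vectorized one expand to the
same BT.601 limited-range conversion:

    R = Y' + 1.59602678 * V
    G = Y' - 0.39176229 * U - 0.81296764 * V
    B = Y' + 2.01723214 * U
    A = 1.0

The new version only folds the constant offsets into a single multiply-add and
rebuilds the result with swizzles, so it should be a pure reordering of the
arithmetic rather than a change in output.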
---
 src/gl-shaders.c | 13 ++---
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/src/gl-shaders.c b/src/gl-shaders.c
index 32bb70d..8595daf 100644
--- a/src/gl-shaders.c
+++ b/src/gl-shaders.c
@@ -225,13 +225,12 @@ shader_yuv_constructor(struct shader_builder *sb)
}
 
append(sb-body, sample);
-   append(sb-body,
-   yuv = vec3(1.16438356 * (yuv.x - 0.0625), yuv.yz - 0.5);\n \
-   gl_FragColor.r = yuv.x + 1.59602678 * yuv.z;\n \
-   gl_FragColor.g = yuv.x - 0.39176229 * yuv.y -  \
-   0.81296764 * yuv.z;\n \
-   gl_FragColor.b = yuv.x + 2.01723214 * yuv.y;\n \
-   gl_FragColor.a = 1.0;\n);
+   append(sb-body,  yuv = yuv * vec3(1.16438356, 1.0, 0.81296764) -  \
+   vec3(0.07277397, 0.5, 0.40648382);\n \
+   vec3 diff = vec3(yuv.x, yuv.x - yuv.z, 1.0);\n \
+   gl_FragColor = yuv.zyyy *  \
+   vec4(1.96321071, -0.39176229, 2.01723214, 0.0) +  \
+   diff.xyxz;\n);
 
return 1;
 }
-- 
1.8.0



[RFCv2 5/9] gl-renderer: Adds the ability to draw damage to a texture and in turn to the framebuffer.

2012-11-16 Thread John Kåre Alsaker
---
 src/gl-internal.h |   5 ++
 src/gl-renderer.c | 170 --
 2 files changed, 158 insertions(+), 17 deletions(-)

diff --git a/src/gl-internal.h b/src/gl-internal.h
index 83b351f..40f109b 100644
--- a/src/gl-internal.h
+++ b/src/gl-internal.h
@@ -75,6 +75,11 @@ struct gl_shader {
 
 struct gl_output_state {
EGLSurface egl_surface;
+
+   int indirect_disable;
+   int indirect_drawing;
+   GLuint indirect_texture;
+   GLuint indirect_fbo;
 };
 
 struct gl_surface_state {
diff --git a/src/gl-renderer.c b/src/gl-renderer.c
index 1956759..b35c5a9 100644
--- a/src/gl-renderer.c
+++ b/src/gl-renderer.c
@@ -529,23 +529,12 @@ triangle_fan_debug(struct weston_surface *surface, int 
first, int count)
 }
 
 static void
-repaint_region(struct weston_surface *es, pixman_region32_t *region,
-   pixman_region32_t *surf_region)
+repaint_region(struct weston_compositor *ec, struct weston_surface *es,
+   int nfans)
 {
-   struct weston_compositor *ec = es-compositor;
GLfloat *v;
unsigned int *vtxcnt;
-   int i, first, nfans;
-
-   /* The final region to be painted is the intersection of
-* 'region' and 'surf_region'. However, 'region' is in the global
-* coordinates, and 'surf_region' is in the surface-local
-* coordinates. texture_region() will iterate over all pairs of
-* rectangles from both regions, compute the intersection
-* polygon for each pair, and store it as a triangle fan if
-* it has a non-zero area (at least 3 vertices, actually).
-*/
-   nfans = texture_region(es, region, surf_region);
+   int i, first;
 
v = ec-vertices.data;
vtxcnt = ec-vtxcnt.data;
@@ -560,7 +549,7 @@ repaint_region(struct weston_surface *es, pixman_region32_t 
*region,
 
for (i = 0, first = 0; i  nfans; i++) {
glDrawArrays(GL_TRIANGLE_FAN, first, vtxcnt[i]);
-   if (ec-fan_debug)
+   if (es  ec-fan_debug)
triangle_fan_debug(es, first, vtxcnt[i]);
first += vtxcnt[i];
}
@@ -596,6 +585,144 @@ use_output(struct weston_output *output)
 }
 
 static void
+repaint_surface(struct weston_surface *es, pixman_region32_t *region,
+   pixman_region32_t *surf_region)
+{
+   /* The final region to be painted is the intersection of
+* 'region' and 'surf_region'. However, 'region' is in the global
+* coordinates, and 'surf_region' is in the surface-local
+* coordinates. texture_region() will iterate over all pairs of
+* rectangles from both regions, compute the intersection
+* polygon for each pair, and store it as a triangle fan if
+* it has a non-zero area (at least 3 vertices, actually).
+*/
+   int nfans = texture_region(es, region, surf_region);
+
+   repaint_region(es-compositor, es, nfans);
+}
+
+static void
+output_emit_vertex(struct weston_output *output, GLfloat **v, int32_t x, 
int32_t y)
+{
+   struct weston_vector vector;
+
+   /* position: */
+   *((*v)++) = x;
+   *((*v)++) = y;
+
+   /* texcoord: */
+
+   vector.f[0] = x;
+   vector.f[1] = y;
+   vector.f[2] = 0.0f;
+   vector.f[3] = 1.0f;
+
+   weston_matrix_transform(output-matrix, vector);
+
+   *((*v)++) = (vector.f[0] + 1.0f) * 0.5f;
+   *((*v)++) = (vector.f[1] + 1.0f) * 0.5f;
+}
+
+static void
+repaint_output(struct weston_output *output, pixman_region32_t *region)
+{
+   struct weston_compositor *ec = output-compositor;
+   GLfloat *v;
+   unsigned int *vtxcnt, nvtx = 0;
+   pixman_box32_t *rects;
+   int i, nrects;
+
+   rects = pixman_region32_rectangles(region, nrects);
+
+   v = wl_array_add(ec-vertices, nrects * 4 * 4 * sizeof *v);
+   vtxcnt = wl_array_add(ec-vtxcnt, nrects * sizeof *vtxcnt);
+
+   for (i = 0; i  nrects; i++) {
+   pixman_box32_t *rect = rects[i];
+
+   output_emit_vertex(output, v, rect-x1, rect-y1);
+   output_emit_vertex(output, v, rect-x2, rect-y1);
+   output_emit_vertex(output, v, rect-x2, rect-y2);
+   output_emit_vertex(output, v, rect-x1, rect-y2);
+
+   vtxcnt[nvtx++] = 4;
+   }
+
+   repaint_region(ec, NULL, nvtx);
+}
+
+static void
+create_indirect_texture(struct weston_output *output)
+{
+   struct gl_output_state *go = get_output_state(output);
+   GLenum status;
+
+   glGenTextures(1, go-indirect_texture);
+
+   glBindTexture(GL_TEXTURE_2D, go-indirect_texture);
+
+   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
+   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
+   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
+   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
+
+   

[RFCv2 7/9] gl-renderer: Don't multiply with alpha uniform when it's 1.0.

2012-11-16 Thread John Kåre Alsaker
This eliminates the multiplication of the alpha uniform for the common case
of surfaces with 1.0 as alpha.
---
 src/gl-internal.h |  1 +
 src/gl-renderer.c | 14 --
 src/gl-shaders.c  |  6 +++---
 3 files changed, 12 insertions(+), 9 deletions(-)

diff --git a/src/gl-internal.h b/src/gl-internal.h
index 2a1f0c6..b8140c8 100644
--- a/src/gl-internal.h
+++ b/src/gl-internal.h
@@ -53,6 +53,7 @@ enum gl_conversion_attribute {
 
 enum gl_output_attribute {
OUTPUT_BLEND,
+   OUTPUT_TRANSPARENT,
OUTPUT_TO_SRGB,
OUTPUT_COUNT
 };
diff --git a/src/gl-renderer.c b/src/gl-renderer.c
index c5b3908..5847e12 100644
--- a/src/gl-renderer.c
+++ b/src/gl-renderer.c
@@ -744,7 +744,8 @@ draw_surface(struct weston_surface *es, struct 
weston_output *output,
pixman_region32_t surface_blend;
pixman_region32_t *buffer_damage;
GLint filter;
-   int i;
+   int i, transparent;
+   enum gl_output_attribute output_attribute;
 
pixman_region32_init(repaint);
pixman_region32_intersect(repaint,
@@ -764,7 +765,10 @@ draw_surface(struct weston_surface *es, struct 
weston_output *output,
gl_shader_setup(gr-solid_shader, es, output);
}
 
-   shader = gl_select_shader(gr, gs-input, OUTPUT_BLEND, gs-conversion);
+   transparent = es-alpha  1.0;
+   output_attribute = transparent ? OUTPUT_TRANSPARENT : OUTPUT_BLEND;
+
+   shader = gl_select_shader(gr, gs-input, output_attribute, 
gs-conversion);
 
gl_use_shader(gr, shader);
gl_shader_setup(shader, es, output);
@@ -796,13 +800,13 @@ draw_surface(struct weston_surface *es, struct 
weston_output *output,
 
struct gl_shader *rgbx_shader = gl_select_shader(gr,
INPUT_RGBX,
-   OUTPUT_BLEND,
+   output_attribute,
gs-conversion);
gl_use_shader(gr, rgbx_shader);
gl_shader_setup(rgbx_shader, es, output);
}
 
-   if (es-alpha  1.0)
+   if (transparent)
glEnable(GL_BLEND);
else
glDisable(GL_BLEND);
@@ -931,8 +935,6 @@ draw_border(struct weston_output *output)
 
gl_shader_set_output(shader, output);
 
-   glUniform1f(shader-alpha_uniform, 1);
-
n = texture_border(output);
 
glActiveTexture(GL_TEXTURE0);
diff --git a/src/gl-shaders.c b/src/gl-shaders.c
index 0f49eb8..9a88ec8 100644
--- a/src/gl-shaders.c
+++ b/src/gl-shaders.c
@@ -489,8 +489,7 @@ create_shader_permutation(struct gl_renderer *renderer,
 
if (OPENGL_ES_VER)
append(sb.global, precision mediump float;\n);
-   append(sb.global, varying vec2 texture_coord;\n \
-   uniform float alpha;\n);
+   append(sb.global, varying vec2 texture_coord;\n);
 
append(sb.body, void main()\n{\n);
 
@@ -502,7 +501,8 @@ create_shader_permutation(struct gl_renderer *renderer,
add_conversion(sb);
 
switch (sb.attributes[ATTRIBUTE_OUTPUT]) {
-   case OUTPUT_BLEND:
+   case OUTPUT_TRANSPARENT:
+   append(sb.global, uniform float alpha;\n);
append(sb.body, gl_FragColor *= alpha;\n);
break;
case OUTPUT_TO_SRGB:
-- 
1.8.0



[RFCv2 8/9] gl-renderer: Remove unneeded glActiveTexture call.

2012-11-16 Thread John Kåre Alsaker
---
 src/gl-renderer.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/src/gl-renderer.c b/src/gl-renderer.c
index 5847e12..c82bed8 100644
--- a/src/gl-renderer.c
+++ b/src/gl-renderer.c
@@ -1725,8 +1725,6 @@ gl_renderer_setup(struct weston_compositor *ec, 
EGLSurface egl_surface)
gr-has_bind_display = 0;
}
 
-   glActiveTexture(GL_TEXTURE0);
-
if (gl_init_shaders(gr)  0)
return -1;
 
-- 
1.8.0



[RFCv2 9/9] gl-renderer: Clean up EGL image and GL texture destruction.

2012-11-16 Thread John Kåre Alsaker
This also ensures we destroy all EGL images if a plane fails to be created.
---
 src/gl-renderer.c | 43 +++
 1 file changed, 27 insertions(+), 16 deletions(-)

diff --git a/src/gl-renderer.c b/src/gl-renderer.c
index c82bed8..b022ba6 100644
--- a/src/gl-renderer.c
+++ b/src/gl-renderer.c
@@ -1124,6 +1124,24 @@ ensure_textures(struct gl_surface_state *gs, int 
num_textures)
 }
 
 static void
+destroy_images(struct gl_renderer *gr, struct gl_surface_state *gs)
+{
+   int i;
+
+   for (i = 0; i  gs-num_images; i++)
+   gr-destroy_image(gr-egl_display, gs-images[i]);
+
+   gs-num_images = 0;
+}
+
+static void
+destroy_textures(struct gl_surface_state *gs)
+{
+   glDeleteTextures(gs-num_textures, gs-textures);
+   gs-num_textures = 0;
+}
+
+static void
 gl_renderer_attach(struct weston_surface *es, struct wl_buffer *buffer)
 {
struct weston_compositor *ec = es-compositor;
@@ -1132,14 +1150,10 @@ gl_renderer_attach(struct weston_surface *es, struct 
wl_buffer *buffer)
EGLint attribs[3], format;
int i, num_planes;
 
+   destroy_images(gr, gs);
+
if (!buffer) {
-   for (i = 0; i  gs-num_images; i++) {
-   gr-destroy_image(gr-egl_display, gs-images[i]);
-   gs-images[i] = NULL;
-   }
-   gs-num_images = 0;
-   glDeleteTextures(gs-num_textures, gs-textures);
-   gs-num_textures = 0;
+   destroy_textures(gs);
return;
}
 
@@ -1158,8 +1172,6 @@ gl_renderer_attach(struct weston_surface *es, struct 
wl_buffer *buffer)
gs-input = INPUT_RGBA;
} else if (gr-query_buffer(gr-egl_display, buffer,
EGL_TEXTURE_FORMAT, format)) {
-   for (i = 0; i  gs-num_images; i++)
-   gr-destroy_image(gr-egl_display, gs-images[i]);
gs-num_images = 0;
gs-target = GL_TEXTURE_2D;
switch (format) {
@@ -1203,10 +1215,11 @@ gl_renderer_attach(struct weston_surface *es, struct 
wl_buffer *buffer)
 EGL_WAYLAND_BUFFER_WL,
 buffer, attribs);
if (!gs-images[i]) {
+   gs-num_images = i;
+   destroy_images(gr, gs);
weston_log(failed to create img for plane 
%d\n, i);
-   continue;
+   return;
}
-   gs-num_images++;
 
glActiveTexture(GL_TEXTURE0 + i);
glBindTexture(gs-target, gs-textures[i]);
@@ -1214,6 +1227,7 @@ gl_renderer_attach(struct weston_surface *es, struct 
wl_buffer *buffer)
gs-images[i]);
}
 
+   gs-num_images = num_planes;
es-pitch = buffer-width;
} else {
weston_log(unhandled buffer type!\n);
@@ -1260,12 +1274,9 @@ gl_renderer_destroy_surface(struct weston_surface 
*surface)
 {
struct gl_surface_state *gs = get_surface_state(surface);
struct gl_renderer *gr = get_renderer(surface-compositor);
-   int i;
-
-   glDeleteTextures(gs-num_textures, gs-textures);
 
-   for (i = 0; i  gs-num_images; i++)
-   gr-destroy_image(gr-egl_display, gs-images[i]);
+   destroy_textures(gs);
+   destroy_images(gr, gs);
 
free(gs);
 }
-- 
1.8.0



Re: [RFC weston 0/2] Adding a more flexible shader generation

2012-11-15 Thread John Kåre Alsaker
On Thu, Nov 15, 2012 at 11:01 AM, Ander Conselvan de Oliveira
conselv...@gmail.com wrote:
 On 11/14/2012 05:41 PM, John Kåre Alsaker wrote:

 These two patches add a more flexible shader generation scheme, based on
 concatenating strings instead of duplicating them into a lot of literals.
 This becomes more useful as I add shader variants for converting to/from
 sRGB gamma and it also allows me to easily get rid of the alpha uniform used
 for fading surfaces.

 I have not tested the YUV shaders since that doesn't appear to be too easy
 without an Intel GPU. I also have a more vectorized variant of the YUV-RGB
 conversion I'd like to test.

 I've split this into two patches to make the introduction of this a bit
 clearer. I seemed to rewrite all of gl-shaders.c anyway with only 2
 functions being recognisable. So I was wondering if I should just merge
 these patches.

 The first commit which really utilizes this is the one which adds gamma
 correct rendering. Should I wait until cleaning up that before submitting
 this?


 I think it would be a good idea to send it if that would show why you
 want to make this change.

 A few thoughts:

  - Are the shader constructor functions and the shader_builder really
 necessary? It seems the only thing the constructors do is call
 shader_build_add(). If you could use a struct with a few string fields
 (one for global, one for main, etc.) maybe the shader source wouldn't be
 buried so deep in the constructor calls.
The constructor functions deal only with generating the input and
would have to be replaced with a large switch in
create_shader_permutation.

I don't see how using string fields instead of a string array would
clear things up much. You'd still need a shader_builder_add utility
function to concatenate them (or the pointers to them if you like).


 Besides, if you only have a list of strings you can avoid the memcpy in
 shader_build_add() and let glShaderSource do the concatenation.
That would just be an insignificant performance improvement.
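
For what it's worth, glShaderSource() does accept an array of source strings
and concatenates them itself, so the builder could collect pointers rather
than copying bytes; a minimal sketch of that approach (illustrative, not from
the patch):

#include <GLES2/gl2.h>

/* Sketch: let the GL concatenate the generated snippets. */
static GLuint
compile_fragment_shader(const char **parts, int count)
{
        GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);

        /* count strings, NULL lengths: each part is nul-terminated. */
        glShaderSource(shader, count, parts, NULL);
        glCompileShader(shader);

        return shader;
}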


 Anyway, I just think this makes the shader sources much less readable.
That is a side-effect of splitting them up.


  - What about performance implications?

 The whole create-all-permutations approach is scary. If the number of
 shaders starts growing, it would be good to compile only the shaders that
 are being used.
We could change it to compile on demand if it grows too high, but that
might lead to stuttering.
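
A minimal sketch of the on-demand variant, assuming a flat permutation index
and a cache array on gl_renderer (both of which are assumptions here;
create_shader_permutation's real signature in the series may differ):

/* Sketch: compile a permutation the first time it is needed, cache it. */
static struct gl_shader *
get_shader(struct gl_renderer *gr, size_t permutation)
{
        struct gl_shader *shader = gr->shaders[permutation];

        if (!shader) {
                /* First use of this permutation; may cause a hitch. */
                shader = create_shader_permutation(gr, permutation);
                gr->shaders[permutation] = shader;
        }

        return shader;
}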


 Also, before, the shader would be selected for the surface on attach, but now
 you call gl_use_shader() when drawing the surface. Is this really necessary? I
 ask because this function is a for loop over the number of attributes, and
 this makes me wonder how scalable it is.
I think you mean gl_select_shader? This is because only the input
shader attribute (and later the conversion attribute) is known at
attach-time. The loop in permutation_from_attributes should be
unrolled by the compiler. I see the lookup into the array of shader
pointers as the biggest performance problem there.
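
Roughly, that unrolled lookup amounts to packing the attributes into one
array index; the attribute names, counts and ordering below are partly
assumptions made for illustration:

/* Illustrative packing of the three attributes into one array index. */
static size_t
permutation_index(const size_t *attributes)
{
        return attributes[ATTRIBUTE_INPUT] +
               INPUT_COUNT * (attributes[ATTRIBUTE_OUTPUT] +
                              OUTPUT_COUNT * attributes[ATTRIBUTE_CONVERSION]);
}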


 Cheers,
 Ander




 John Kåre Alsaker (2):
gl-renderer: Move shader functions into it's own file.
gl-renderer: Add flexible shader generation.

   src/Makefile.am   |   2 +
   src/gl-internal.h | 171 
   src/gl-renderer.c | 428 --
   src/gl-shaders.c  | 602
 ++
   4 files changed, 819 insertions(+), 384 deletions(-)
   create mode 100644 src/gl-internal.h
   create mode 100644 src/gl-shaders.c




[RFC weston 0/2] Adding a more flexible shader generation

2012-11-14 Thread John Kåre Alsaker
These two patches add a more flexible shader generation scheme, based on
concatenating strings instead of duplicating them into a lot of literals. This
becomes more useful as I add shader variants for converting to/from sRGB gamma 
and it also allows me to easily get rid of the alpha uniform used for fading 
surfaces.

I have not tested the YUV shaders since that doesn't appear to be too easy 
without an Intel GPU. I also have a more vectorized variant of the YUV-RGB 
conversion I'd like to test.

I've split this into two patches to make the introduction of this a bit 
clearer. I seemed to rewrite all of gl-shaders.c anyway with only 2 functions 
being recognisable. So I was wondering if I should just merge these patches.

The first commit which really utilizes this is the one which adds gamma correct 
rendering. Should I wait until cleaning up that before submitting this?

John Kåre Alsaker (2):
  gl-renderer: Move shader functions into its own file.
  gl-renderer: Add flexible shader generation.

 src/Makefile.am   |   2 +
 src/gl-internal.h | 171 
 src/gl-renderer.c | 428 --
 src/gl-shaders.c  | 602 ++
 4 files changed, 819 insertions(+), 384 deletions(-)
 create mode 100644 src/gl-internal.h
 create mode 100644 src/gl-shaders.c

-- 
1.8.0



[PATCH v6 01/12] Move weston_output EGL state into gles2-renderer.

2012-11-13 Thread John Kåre Alsaker
This introduces callbacks for output creation and destruction for the
gles2-renderer, which enables the gles2-renderer to have per-output state. EGL
surface creation is now done by the output_create callback and the EGL surface
is stored in the new per-output gles2-renderer state. On the first
output_create call, the gles2-renderer will set up its GL context. This is
because EGL requires an EGL surface in order to use the GL context.
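
In other words, the backends now go through an interface roughly like the one
below. The signatures are inferred from the call sites in the diff; in
particular the type of the native window argument is an assumption for this
sketch.

#include <EGL/egl.h>

/* Sketch of the per-output entry points introduced here; the EGL surface
 * moves from struct weston_output into per-output renderer state. */
struct gles2_output_state {
        EGLSurface egl_surface;
};

int
gles2_renderer_output_create(struct weston_output *output,
                             EGLNativeWindowType window);

void
gles2_renderer_output_destroy(struct weston_output *output);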
---
 src/compositor-android.c | 16 +++---
 src/compositor-drm.c | 50 +++---
 src/compositor-rpi.c | 52 +++
 src/compositor-wayland.c | 16 +++---
 src/compositor-x11.c | 25 +--
 src/compositor.h | 10 --
 src/gles2-renderer.c | 80 ++--
 7 files changed, 130 insertions(+), 119 deletions(-)

diff --git a/src/compositor-android.c b/src/compositor-android.c
index 3c0273a..5bf1df3 100644
--- a/src/compositor-android.c
+++ b/src/compositor-android.c
@@ -145,6 +145,8 @@ android_output_destroy(struct weston_output *base)
wl_list_remove(output-base.link);
weston_output_destroy(output-base);
 
+   gles2_renderer_output_destroy(base);
+
android_framebuffer_destroy(output-fb);
 
free(output);
@@ -406,16 +408,9 @@ android_init_egl(struct android_compositor *compositor,
return -1;
}
 
-   output-base.egl_surface =
-   eglCreateWindowSurface(compositor-base.egl_display,
-  compositor-base.egl_config,
-  output-fb-native_window,
-  NULL);
-   if (output-base.egl_surface == EGL_NO_SURFACE) {
-   weston_log(Failed to create FB EGLSurface.\n);
-   print_egl_error_state();
+   if (gles2_renderer_output_create(output-base,
+   output-fb-native_window)  0)
return -1;
-   }
 
return 0;
 }
@@ -478,9 +473,6 @@ android_compositor_create(struct wl_display *display, int 
argc, char *argv[],
 
android_compositor_add_output(compositor, output);
 
-   if (gles2_renderer_init(compositor-base)  0)
-   goto err_egl;
-
compositor-seat = android_seat_create(compositor);
if (!compositor-seat)
goto err_egl;
diff --git a/src/compositor-drm.c b/src/compositor-drm.c
index 5fe234d..f7b8d68 100644
--- a/src/compositor-drm.c
+++ b/src/compositor-drm.c
@@ -829,7 +829,8 @@ drm_output_destroy(struct weston_output *output_base)
c-crtc_allocator = ~(1  output-crtc_id);
c-connector_allocator = ~(1  output-connector_id);
 
-   eglDestroySurface(c-base.egl_display, output-base.egl_surface);
+   gles2_renderer_output_destroy(output_base);
+
gbm_surface_destroy(output-surface);
 
weston_plane_release(output-fb_plane);
@@ -875,7 +876,6 @@ drm_output_switch_mode(struct weston_output *output_base, 
struct weston_mode *mo
int ret;
struct drm_compositor *ec;
struct gbm_surface *surface;
-   EGLSurface egl_surface;
 
if (output_base == NULL) {
weston_log(output is NULL.\n);
@@ -935,14 +935,11 @@ drm_output_switch_mode(struct weston_output *output_base, 
struct weston_mode *mo
return -1;
}
 
-   egl_surface =
-   eglCreateWindowSurface(ec-base.egl_display,
-  ec-base.egl_config,
-  surface, NULL);
+   gles2_renderer_output_destroy(output-base);
 
-   if (egl_surface == EGL_NO_SURFACE) {
-   weston_log(failed to create egl surface\n);
-   goto err;
+   if (!gles2_renderer_output_create(output-base, surface)) {
+   weston_log(failed to create renderer output\n);
+   goto err_gbm;
}
 
ret = drmModeSetCrtc(ec-drm.fd,
@@ -951,7 +948,7 @@ drm_output_switch_mode(struct weston_output *output_base, 
struct weston_mode *mo
 output-connector_id, 1, drm_mode-mode_info);
if (ret) {
weston_log(failed to set mode\n);
-   goto err;
+   goto err_gles2;
}
 
/* reset rendering stuff. */
@@ -973,9 +970,7 @@ drm_output_switch_mode(struct weston_output *output_base, 
struct weston_mode *mo
}
output-next = NULL;
 
-   eglDestroySurface(ec-base.egl_display, output-base.egl_surface);
gbm_surface_destroy(output-surface);
-   output-base.egl_surface = egl_surface;
output-surface = surface;
 
/*update output*/
@@ -984,8 +979,9 @@ drm_output_switch_mode(struct weston_output *output_base, 
struct weston_mode *mo
weston_output_move(output-base, output-base.x, output-base.y);
return 0;
 
-err:
-   eglDestroySurface(ec-base.egl_display, egl_surface);
+err_gles2:
+   

[PATCH v6 02/12] Move EGLConfig, EGLContext and EGLDisplay fields into gles2-renderer.

2012-11-13 Thread John Kåre Alsaker
This moves the EGLConfig, EGLContext and EGLDisplay fields into gles2-renderer. 
It also moves EGLDisplay creation and EGLConfig selection into gles2-renderer.
---
 src/compositor-android.c | 150 +++
 src/compositor-drm.c |  39 ++---
 src/compositor-rpi.c |  80 ---
 src/compositor-wayland.c |  48 +++-
 src/compositor-x11.c |  58 ++
 src/compositor.h |  11 ++-
 src/gles2-renderer.c | 200 ++-
 7 files changed, 221 insertions(+), 365 deletions(-)

diff --git a/src/compositor-android.c b/src/compositor-android.c
index 5bf1df3..cced65d 100644
--- a/src/compositor-android.c
+++ b/src/compositor-android.c
@@ -78,42 +78,6 @@ to_android_compositor(struct weston_compositor *base)
return container_of(base, struct android_compositor, base);
 }
 
-static const char *
-egl_error_string(EGLint code)
-{
-#define MYERRCODE(x) case x: return #x;
-   switch (code) {
-   MYERRCODE(EGL_SUCCESS)
-   MYERRCODE(EGL_NOT_INITIALIZED)
-   MYERRCODE(EGL_BAD_ACCESS)
-   MYERRCODE(EGL_BAD_ALLOC)
-   MYERRCODE(EGL_BAD_ATTRIBUTE)
-   MYERRCODE(EGL_BAD_CONTEXT)
-   MYERRCODE(EGL_BAD_CONFIG)
-   MYERRCODE(EGL_BAD_CURRENT_SURFACE)
-   MYERRCODE(EGL_BAD_DISPLAY)
-   MYERRCODE(EGL_BAD_SURFACE)
-   MYERRCODE(EGL_BAD_MATCH)
-   MYERRCODE(EGL_BAD_PARAMETER)
-   MYERRCODE(EGL_BAD_NATIVE_PIXMAP)
-   MYERRCODE(EGL_BAD_NATIVE_WINDOW)
-   MYERRCODE(EGL_CONTEXT_LOST)
-   default:
-   return unknown;
-   }
-#undef MYERRCODE
-}
-
-static void
-print_egl_error_state(void)
-{
-   EGLint code;
-
-   code = eglGetError();
-   weston_log(EGL error state: %s (0x%04lx)\n,
-   egl_error_string(code), (long)code);
-}
-
 static void
 android_finish_frame(void *data)
 {
@@ -319,127 +283,37 @@ android_seat_create(struct android_compositor 
*compositor)
 }
 
 static int
-android_egl_choose_config(struct android_compositor *compositor,
- struct android_framebuffer *fb,
- const EGLint *attribs)
-{
-   EGLBoolean ret;
-   EGLint count = 0;
-   EGLint matched = 0;
-   EGLConfig *configs;
-   int i;
-
-   /*
-* The logic is copied from Android frameworks/base/services/
-* surfaceflinger/DisplayHardware/DisplayHardware.cpp
-*/
-
-   compositor-base.egl_config = NULL;
-
-   ret = eglGetConfigs(compositor-base.egl_display, NULL, 0, count);
-   if (ret == EGL_FALSE || count  1)
-   return -1;
-
-   configs = calloc(count, sizeof *configs);
-   if (!configs)
-   return -1;
-
-   ret = eglChooseConfig(compositor-base.egl_display, attribs, configs,
- count, matched);
-   if (ret == EGL_FALSE || matched  1)
-   goto out;
-
-   for (i = 0; i  matched; ++i) {
-   EGLint id;
-   ret = eglGetConfigAttrib(compositor-base.egl_display,
-configs[i], EGL_NATIVE_VISUAL_ID,
-id);
-   if (ret == EGL_FALSE)
-   continue;
-   if (id  0  fb-format == id) {
-   compositor-base.egl_config = configs[i];
-   break;
-   }
-   }
-
-out:
-   free(configs);
-   if (!compositor-base.egl_config)
-   return -1;
-
-   return 0;
-}
-
-static int
 android_init_egl(struct android_compositor *compositor,
 struct android_output *output)
 {
-   EGLint eglmajor, eglminor;
-   int ret;
+   EGLint visual_id = output-fb-format;
 
-   static const EGLint config_attrs[] = {
-   EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
-   EGL_RED_SIZE, 1,
-   EGL_GREEN_SIZE, 1,
-   EGL_BLUE_SIZE, 1,
-   EGL_ALPHA_SIZE, 0,
-   EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
-   EGL_NONE
-   };
-
-   compositor-base.egl_display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
-   if (compositor-base.egl_display == EGL_NO_DISPLAY) {
-   weston_log(Failed to create EGL display.\n);
-   print_egl_error_state();
+   if (gles2_renderer_create(compositor-base,
+   EGL_DEFAULT_DISPLAY, gles2_renderer_opaque_attribs,
+   visual_id)  0)
return -1;
-   }
-
-   ret = eglInitialize(compositor-base.egl_display, eglmajor, eglminor);
-   if (!ret) {
-   weston_log(Failed to initialise EGL.\n);
-   print_egl_error_state();
-   return -1;
-   }
-
-   ret = android_egl_choose_config(compositor, output-fb, config_attrs);
-   if (ret  0) {
-   weston_log(Failed to find an EGL config.\n);
-   print_egl_error_state();
- 

[PATCH v6 04/12] compositor: Add a renderer function to read out pixels

2012-11-13 Thread John Kåre Alsaker
---
 src/compositor.h |  4 
 src/gles2-renderer.c | 63 
 src/noop-renderer.c  | 10 +
 3 files changed, 68 insertions(+), 9 deletions(-)

diff --git a/src/compositor.h b/src/compositor.h
index e4ba19c..e2dbd0d 100644
--- a/src/compositor.h
+++ b/src/compositor.h
@@ -276,6 +276,10 @@ struct weston_plane {
 };
 
 struct weston_renderer {
+   int (*read_pixels)(struct weston_output *output,
+  pixman_format_code_t format, void *pixels,
+  uint32_t x, uint32_t y,
+  uint32_t width, uint32_t height);
void (*repaint_output)(struct weston_output *output,
   pixman_region32_t *output_damage);
void (*flush_damage)(struct weston_surface *surface);
diff --git a/src/gles2-renderer.c b/src/gles2-renderer.c
index 818a1a3..27c23eb 100644
--- a/src/gles2-renderer.c
+++ b/src/gles2-renderer.c
@@ -608,6 +608,29 @@ repaint_region(struct weston_surface *es, 
pixman_region32_t *region,
ec-vtxcnt.size = 0;
 }
 
+static int
+use_output(struct weston_output *output)
+{
+   static int errored;
+   struct gles2_output_state *go = get_output_state(output);
+   struct gles2_renderer *gr = get_renderer(output-compositor);
+   EGLBoolean ret;
+
+   ret = eglMakeCurrent(gr-egl_display, go-egl_surface,
+go-egl_surface, gr-egl_context);
+
+   if (ret == EGL_FALSE) {
+   if (errored)
+   return -1;
+   errored = 1;
+   weston_log(Failed to make EGL context current.\n);
+   print_egl_error_state();
+   return -1;
+   }
+
+   return 0;
+}
+
 static void
 weston_compositor_use_shader(struct weston_compositor *compositor,
 struct weston_shader *shader)
@@ -863,16 +886,8 @@ gles2_renderer_repaint_output(struct weston_output *output,
 
glViewport(0, 0, width, height);
 
-   ret = eglMakeCurrent(gr-egl_display, go-egl_surface,
-go-egl_surface, gr-egl_context);
-   if (ret == EGL_FALSE) {
-   if (errored)
-   return;
-   errored = 1;
-   weston_log(Failed to make EGL context current.\n);
-   print_egl_error_state();
+   if (use_output(output)  0)
return;
-   }
 
/* if debugging, redraw everything outside the damage to clean up
 * debug lines from the previous draw on this buffer:
@@ -914,6 +929,35 @@ gles2_renderer_repaint_output(struct weston_output *output,
 
 }
 
+static int
+gles2_renderer_read_pixels(struct weston_output *output,
+  pixman_format_code_t format, void *pixels,
+  uint32_t x, uint32_t y,
+  uint32_t width, uint32_t height)
+{
+   GLenum gl_format;
+
+   switch (format) {
+   case PIXMAN_a8r8g8b8:
+   gl_format = GL_BGRA_EXT;
+   break;
+   case PIXMAN_a8b8g8r8:
+   gl_format = GL_RGBA;
+   break;
+   default:
+   return -1;
+   }
+
+   if (use_output(output)  0)
+   return -1;
+
+   glPixelStorei(GL_PACK_ALIGNMENT, 1);
+   glReadPixels(x, y, width, height, gl_format,
+GL_UNSIGNED_BYTE, pixels);
+
+   return 0;
+}
+
 static void
 gles2_renderer_flush_damage(struct weston_surface *surface)
 {
@@ -1552,6 +1596,7 @@ gles2_renderer_create(struct weston_compositor *ec, 
EGLNativeDisplayType display
if (gr == NULL)
return -1;
 
+   gr-base.read_pixels = gles2_renderer_read_pixels;
gr-base.repaint_output = gles2_renderer_repaint_output;
gr-base.flush_damage = gles2_renderer_flush_damage;
gr-base.attach = gles2_renderer_attach;
diff --git a/src/noop-renderer.c b/src/noop-renderer.c
index 116fc00..76f1e8f 100644
--- a/src/noop-renderer.c
+++ b/src/noop-renderer.c
@@ -26,6 +26,15 @@
 
 #include compositor.h
 
+static int
+noop_renderer_read_pixels(struct weston_output *output,
+  pixman_format_code_t format, void *pixels,
+  uint32_t x, uint32_t y,
+  uint32_t width, uint32_t height)
+{
+   return 0;
+}
+
 static void
 noop_renderer_repaint_output(struct weston_output *output,
 pixman_region32_t *output_damage)
@@ -64,6 +73,7 @@ noop_renderer_init(struct weston_compositor *ec)
if (renderer == NULL)
return -1;
 
+   renderer-read_pixels = noop_renderer_read_pixels;
renderer-repaint_output = noop_renderer_repaint_output;
renderer-flush_damage = noop_renderer_flush_damage;
renderer-attach = noop_renderer_attach;
-- 
1.8.0


[PATCH v6 05/12] screenshooter: Use the renderer function for reading out pixels

2012-11-13 Thread John Kåre Alsaker
This also changes the compositor's read_format to a pixman format.
---
 src/compositor.h |  2 +-
 src/gles2-renderer.c |  4 ++--
 src/screenshooter.c  | 27 +++
 3 files changed, 18 insertions(+), 15 deletions(-)

diff --git a/src/compositor.h b/src/compositor.h
index e2dbd0d..57d363a 100644
--- a/src/compositor.h
+++ b/src/compositor.h
@@ -353,7 +353,7 @@ struct weston_compositor {
PFNEGLDESTROYIMAGEKHRPROC destroy_image;
 
int has_unpack_subimage;
-   GLenum read_format;
+   pixman_format_code_t read_format;
 
PFNEGLBINDWAYLANDDISPLAYWL bind_display;
PFNEGLUNBINDWAYLANDDISPLAYWL unbind_display;
diff --git a/src/gles2-renderer.c b/src/gles2-renderer.c
index 27c23eb..2a62da2 100644
--- a/src/gles2-renderer.c
+++ b/src/gles2-renderer.c
@@ -1751,9 +1751,9 @@ gles2_renderer_setup(struct weston_compositor *ec, 
EGLSurface egl_surface)
}
 
if (strstr(extensions, GL_EXT_read_format_bgra))
-   ec-read_format = GL_BGRA_EXT;
+   ec-read_format = PIXMAN_a8r8g8b8;
else
-   ec-read_format = GL_RGBA;
+   ec-read_format = PIXMAN_a8b8g8r8;
 
if (strstr(extensions, GL_EXT_unpack_subimage))
ec-has_unpack_subimage = 1;
diff --git a/src/screenshooter.c b/src/screenshooter.c
index ffcc970..23181a2 100644
--- a/src/screenshooter.c
+++ b/src/screenshooter.c
@@ -112,10 +112,10 @@ screenshooter_frame_notify(struct wl_listener *listener, 
void *data)
return;
}
 
-   glPixelStorei(GL_PACK_ALIGNMENT, 1);
-   glReadPixels(0, 0, output-current-width, output-current-height,
-output-compositor-read_format,
-GL_UNSIGNED_BYTE, pixels);
+   output-compositor-renderer-read_pixels(output,
+output-compositor-read_format, pixels,
+0, 0, output-current-width,
+output-current-height);
 
stride = wl_shm_buffer_get_stride(l-buffer);
 
@@ -123,10 +123,10 @@ screenshooter_frame_notify(struct wl_listener *listener, 
void *data)
s = pixels + stride * (l-buffer-height - 1);
 
switch (output-compositor-read_format) {
-   case GL_BGRA_EXT:
+   case PIXMAN_a8r8g8b8:
copy_bgra_yflip(d, s, output-current-height, stride);
break;
-   case GL_RGBA:
+   case PIXMAN_a8b8g8r8:
copy_rgba_yflip(d, s, output-current-height, stride);
break;
default:
@@ -299,10 +299,10 @@ weston_recorder_frame_notify(struct wl_listener 
*listener, void *data)
for (i = 0; i  n; i++) {
width = r[i].x2 - r[i].x1;
height = r[i].y2 - r[i].y1;
-   glReadPixels(r[i].x1, output-current-height - r[i].y2,
-width, height,
-output-compositor-read_format,
-GL_UNSIGNED_BYTE, recorder-rect);
+   output-compositor-renderer-read_pixels(output,
+output-compositor-read_format, recorder-rect,
+r[i].x1, output-current-height - r[i].y2,
+width, height);
 
s = recorder-rect;
p = recorder-rect;
@@ -367,12 +367,15 @@ weston_recorder_create(struct weston_output *output, 
const char *filename)
header.magic = WCAP_HEADER_MAGIC;
 
switch (output-compositor-read_format) {
-   case GL_BGRA_EXT:
+   case PIXMAN_a8r8g8b8:
header.format = WCAP_FORMAT_XRGB;
break;
-   case GL_RGBA:
+   case PIXMAN_a8b8g8r8:
header.format = WCAP_FORMAT_XBGR;
break;
+   default:
+   weston_log(unknown recorder format\n);
+   break;
}
 
header.width = output-current-width;
-- 
1.8.0



[PATCH v6 07/12] gles2-renderer: Renaming some functions.

2012-11-13 Thread John Kåre Alsaker
---
 src/gles2-renderer.c | 53 ++--
 1 file changed, 26 insertions(+), 27 deletions(-)

diff --git a/src/gles2-renderer.c b/src/gles2-renderer.c
index a02cdac..20a34b3 100644
--- a/src/gles2-renderer.c
+++ b/src/gles2-renderer.c
@@ -642,7 +642,7 @@ use_output(struct weston_output *output)
 }
 
 static void
-weston_compositor_use_shader(struct weston_compositor *compositor,
+use_shader(struct weston_compositor *compositor,
 struct weston_shader *shader)
 {
	if (compositor->current_shader == shader)
@@ -653,7 +653,7 @@ weston_compositor_use_shader(struct weston_compositor *compositor,
 }
 
 static void
-weston_shader_uniforms(struct weston_shader *shader,
+shader_uniforms(struct weston_shader *shader,
   struct weston_surface *surface,
   struct weston_output *output)
 {
@@ -696,12 +696,12 @@ draw_surface(struct weston_surface *es, struct weston_output *output,
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
 
	if (ec->fan_debug) {
-		weston_compositor_use_shader(ec, &ec->solid_shader);
-		weston_shader_uniforms(&ec->solid_shader, es, output);
+		use_shader(ec, &ec->solid_shader);
+		shader_uniforms(&ec->solid_shader, es, output);
}
 
-	weston_compositor_use_shader(ec, es->shader);
-	weston_shader_uniforms(es->shader, es, output);
+	use_shader(ec, es->shader);
+	shader_uniforms(es->shader, es, output);
 
	if (es->transform.enabled || output->zoom.active)
filter = GL_LINEAR;
@@ -727,8 +727,8 @@ draw_surface(struct weston_surface *es, struct weston_output *output,
 * that forces texture alpha = 1.0.
 * Xwayland surfaces need this.
 */
-		weston_compositor_use_shader(ec, &ec->texture_shader_rgbx);
-		weston_shader_uniforms(&ec->texture_shader_rgbx, es, output);
+		use_shader(ec, &ec->texture_shader_rgbx);
+		shader_uniforms(&ec->texture_shader_rgbx, es, output);
	}
 
}
 
	if (es->alpha < 1.0)
@@ -740,7 +740,7 @@ draw_surface(struct weston_surface *es, struct weston_output *output,
}
 
if (pixman_region32_not_empty(surface_blend)) {
-		weston_compositor_use_shader(ec, es->shader);
+		use_shader(ec, es->shader);
glEnable(GL_BLEND);
repaint_region(es, repaint, surface_blend);
}
@@ -849,8 +849,7 @@ draw_border(struct weston_output *output)
int n;
 
glDisable(GL_BLEND);
-	glUseProgram(shader->program);
-	ec->current_shader = shader;
+   use_shader(ec, shader);
 
	glUniformMatrix4fv(shader->proj_uniform,
			   1, GL_FALSE, output->matrix.d);
@@ -1307,7 +1306,7 @@ compile_shader(GLenum type, int count, const char **sources)
 }
 
 static int
-weston_shader_init(struct weston_shader *shader, struct weston_compositor *ec,
+shader_init(struct weston_shader *shader, struct weston_compositor *ec,
   const char *vertex_source, const char *fragment_source)
 {
char msg[512];
@@ -1359,7 +1358,7 @@ weston_shader_init(struct weston_shader *shader, struct weston_compositor *ec,
 }
 
 static void
-weston_shader_release(struct weston_shader *shader)
+shader_release(struct weston_shader *shader)
 {
	glDeleteShader(shader->vertex_shader);
	glDeleteShader(shader->fragment_shader);
@@ -1682,26 +1681,26 @@ gles2_renderer_display(struct weston_compositor *ec)
 static int
 compile_shaders(struct weston_compositor *ec)
 {
-	if (weston_shader_init(&ec->texture_shader_rgba, ec,
+	if (shader_init(&ec->texture_shader_rgba, ec,
			       vertex_shader, texture_fragment_shader_rgba) < 0)
		return -1;
-	if (weston_shader_init(&ec->texture_shader_rgbx, ec,
+	if (shader_init(&ec->texture_shader_rgbx, ec,
			       vertex_shader, texture_fragment_shader_rgbx) < 0)
		return -1;
	if (ec->has_egl_image_external &&
-	    weston_shader_init(&ec->texture_shader_egl_external, ec,
+	    shader_init(&ec->texture_shader_egl_external, ec,
			vertex_shader, texture_fragment_shader_egl_external) < 0)
		return -1;
-	if (weston_shader_init(&ec->texture_shader_y_uv, ec,
+	if (shader_init(&ec->texture_shader_y_uv, ec,
			       vertex_shader, texture_fragment_shader_y_uv) < 0)
		return -1;
-	if (weston_shader_init(&ec->texture_shader_y_u_v, ec,
+	if (shader_init(&ec->texture_shader_y_u_v, ec,
			       vertex_shader, texture_fragment_shader_y_u_v) < 0)
		return -1;
-	if (weston_shader_init(&ec->texture_shader_y_xuxv, ec,
+   if 

[PATCH v6 09/12] Move weston_surface GL and EGL state into gles2-renderer.

2012-11-13 Thread John Kåre Alsaker
---
 src/compositor.c |  2 --
 src/compositor.h |  6 
 src/gles2-renderer.c | 87 +---
 3 files changed, 48 insertions(+), 47 deletions(-)

diff --git a/src/compositor.c b/src/compositor.c
index bd95e8c..587fded 100644
--- a/src/compositor.c
+++ b/src/compositor.c
@@ -242,8 +242,6 @@ weston_surface_create(struct weston_compositor *compositor)
return NULL;
}
 
-	surface->num_textures = 0;
-	surface->num_images = 0;
	pixman_region32_init(surface->texture_damage);
 
	surface->buffer = NULL;
diff --git a/src/compositor.h b/src/compositor.h
index 32a78f5..a398a74 100644
--- a/src/compositor.h
+++ b/src/compositor.h
@@ -397,8 +397,6 @@ struct weston_region {
 struct weston_surface {
struct wl_surface surface;
struct weston_compositor *compositor;
-   GLuint textures[3];
-   int num_textures;
pixman_region32_t texture_damage;
pixman_region32_t clip;
pixman_region32_t damage;
@@ -458,10 +456,6 @@ struct weston_surface {
 
struct wl_list frame_callback_list;
 
-   EGLImageKHR images[3];
-   GLenum target;
-   int num_images;
-
struct wl_buffer *buffer;
struct wl_listener buffer_destroy_listener;
 
diff --git a/src/gles2-renderer.c b/src/gles2-renderer.c
index a256a57..f740155 100644
--- a/src/gles2-renderer.c
+++ b/src/gles2-renderer.c
@@ -47,6 +47,13 @@ struct gles2_output_state {
 struct gles2_surface_state {
GLfloat color[4];
struct gles2_shader *shader;
+
+   GLuint textures[3];
+   int num_textures;
+
+   EGLImageKHR images[3];
+   GLenum target;
+   int num_images;
 };
 
 struct gles2_renderer {
@@ -686,7 +693,7 @@ shader_uniforms(struct gles2_shader *shader,
	glUniform4fv(shader->color_uniform, 1, gs->color);
	glUniform1f(shader->alpha_uniform, surface->alpha);
 
-	for (i = 0; i < surface->num_textures; i++)
+	for (i = 0; i < gs->num_textures; i++)
		glUniform1i(shader->tex_uniforms[i], i);
 }
 
@@ -731,11 +738,11 @@ draw_surface(struct weston_surface *es, struct weston_output *output,
else
filter = GL_NEAREST;
 
-	for (i = 0; i < es->num_textures; i++) {
+	for (i = 0; i < gs->num_textures; i++) {
		glActiveTexture(GL_TEXTURE0 + i);
-		glBindTexture(es->target, es->textures[i]);
-		glTexParameteri(es->target, GL_TEXTURE_MIN_FILTER, filter);
-		glTexParameteri(es->target, GL_TEXTURE_MAG_FILTER, filter);
+		glBindTexture(gs->target, gs->textures[i]);
+		glTexParameteri(gs->target, GL_TEXTURE_MIN_FILTER, filter);
+		glTexParameteri(gs->target, GL_TEXTURE_MAG_FILTER, filter);
}
 
/* blended region is whole surface minus opaque region: */
@@ -994,6 +1001,8 @@ gles2_renderer_read_pixels(struct weston_output *output,
 static void
 gles2_renderer_flush_damage(struct weston_surface *surface)
 {
+   struct gles2_surface_state *gs = get_surface_state(surface);
+
 #ifdef GL_UNPACK_ROW_LENGTH
pixman_box32_t *rectangles;
void *data;
@@ -1012,7 +1021,7 @@ gles2_renderer_flush_damage(struct weston_surface *surface)
	if (!pixman_region32_not_empty(surface->texture_damage))
		return;
 
-	glBindTexture(GL_TEXTURE_2D, surface->textures[0]);
+	glBindTexture(GL_TEXTURE_2D, gs->textures[0]);
 
	if (!surface->compositor->has_unpack_subimage) {
glTexImage2D(GL_TEXTURE_2D, 0, GL_BGRA_EXT,
@@ -1045,23 +1054,23 @@ done:
 }
 
 static void
-ensure_textures(struct weston_surface *es, int num_textures)
+ensure_textures(struct gles2_surface_state *gs, int num_textures)
 {
int i;
 
-	if (num_textures <= es->num_textures)
+	if (num_textures <= gs->num_textures)
		return;
 
-	for (i = es->num_textures; i < num_textures; i++) {
-		glGenTextures(1, &es->textures[i]);
-		glBindTexture(es->target, es->textures[i]);
-		glTexParameteri(es->target,
+	for (i = gs->num_textures; i < num_textures; i++) {
+		glGenTextures(1, &gs->textures[i]);
+		glBindTexture(gs->target, gs->textures[i]);
+		glTexParameteri(gs->target,
				GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
-		glTexParameteri(es->target,
+		glTexParameteri(gs->target,
				GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
	}
-	es->num_textures = num_textures;
-	glBindTexture(es->target, 0);
+	gs->num_textures = num_textures;
+	glBindTexture(gs->target, 0);
 }
 
 static void
@@ -1074,22 +1083,22 @@ gles2_renderer_attach(struct weston_surface *es, struct wl_buffer *buffer)
int i, num_planes;
 
if (!buffer) {
-		for (i = 0; i < es->num_images; i++) {
-			ec->destroy_image(gr->egl_display, es->images[i]);
- 

[PATCH v6 10/12] Move weston_compositor GL and EGL state into gles2-renderer.

2012-11-13 Thread John Kåre Alsaker
---
 src/compositor.h | 14 ---
 src/gles2-renderer.c | 67 ++--
 2 files changed, 39 insertions(+), 42 deletions(-)

diff --git a/src/compositor.h b/src/compositor.h
index a398a74..2f701c5 100644
--- a/src/compositor.h
+++ b/src/compositor.h
@@ -333,22 +333,8 @@ struct weston_compositor {
 
struct weston_renderer *renderer;
 
-   PFNGLEGLIMAGETARGETRENDERBUFFERSTORAGEOESPROC
-   image_target_renderbuffer_storage;
-   PFNGLEGLIMAGETARGETTEXTURE2DOESPROC image_target_texture_2d;
-   PFNEGLCREATEIMAGEKHRPROC create_image;
-   PFNEGLDESTROYIMAGEKHRPROC destroy_image;
-
-   int has_unpack_subimage;
pixman_format_code_t read_format;
 
-   PFNEGLBINDWAYLANDDISPLAYWL bind_display;
-   PFNEGLUNBINDWAYLANDDISPLAYWL unbind_display;
-   PFNEGLQUERYWAYLANDBUFFERWL query_buffer;
-   int has_bind_display;
-
-   int has_egl_image_external;
-
void (*destroy)(struct weston_compositor *ec);
void (*restore)(struct weston_compositor *ec);
int (*authenticate)(struct weston_compositor *c, uint32_t id);
diff --git a/src/gles2-renderer.c b/src/gles2-renderer.c
index f740155..4f08a0c 100644
--- a/src/gles2-renderer.c
+++ b/src/gles2-renderer.c
@@ -70,6 +70,19 @@ struct gles2_renderer {
int32_t width, height;
} border;
 
+   PFNGLEGLIMAGETARGETTEXTURE2DOESPROC image_target_texture_2d;
+   PFNEGLCREATEIMAGEKHRPROC create_image;
+   PFNEGLDESTROYIMAGEKHRPROC destroy_image;
+
+   int has_unpack_subimage;
+
+   PFNEGLBINDWAYLANDDISPLAYWL bind_display;
+   PFNEGLUNBINDWAYLANDDISPLAYWL unbind_display;
+   PFNEGLQUERYWAYLANDBUFFERWL query_buffer;
+   int has_bind_display;
+
+   int has_egl_image_external;
+
struct gles2_shader texture_shader_rgba;
struct gles2_shader texture_shader_rgbx;
struct gles2_shader texture_shader_egl_external;
@@ -1001,6 +1014,7 @@ gles2_renderer_read_pixels(struct weston_output *output,
 static void
 gles2_renderer_flush_damage(struct weston_surface *surface)
 {
+	struct gles2_renderer *gr = get_renderer(surface->compositor);
struct gles2_surface_state *gs = get_surface_state(surface);
 
 #ifdef GL_UNPACK_ROW_LENGTH
@@ -1023,7 +1037,7 @@ gles2_renderer_flush_damage(struct weston_surface *surface)
 
	glBindTexture(GL_TEXTURE_2D, gs->textures[0]);
 
-	if (!surface->compositor->has_unpack_subimage) {
+	if (!gr->has_unpack_subimage) {
		glTexImage2D(GL_TEXTURE_2D, 0, GL_BGRA_EXT,
			     surface->pitch, surface->buffer->height, 0,
			     GL_BGRA_EXT, GL_UNSIGNED_BYTE,
@@ -1084,7 +1098,7 @@ gles2_renderer_attach(struct weston_surface *es, struct wl_buffer *buffer)
 
	if (!buffer) {
		for (i = 0; i < gs->num_images; i++) {
-			ec->destroy_image(gr->egl_display, gs->images[i]);
+			gr->destroy_image(gr->egl_display, gs->images[i]);
			gs->images[i] = NULL;
		}
		gs->num_images = 0;
@@ -1106,10 +1120,10 @@ gles2_renderer_attach(struct weston_surface *es, struct wl_buffer *buffer)
			gs->shader = &gr->texture_shader_rgbx;
		else
			gs->shader = &gr->texture_shader_rgba;
-	} else if (ec->query_buffer(gr->egl_display, buffer,
+	} else if (gr->query_buffer(gr->egl_display, buffer,
				    EGL_TEXTURE_FORMAT, &format)) {
		for (i = 0; i < gs->num_images; i++)
-			ec->destroy_image(gr->egl_display, gs->images[i]);
+			gr->destroy_image(gr->egl_display, gs->images[i]);
		gs->num_images = 0;
		gs->target = GL_TEXTURE_2D;
		switch (format) {
@@ -1143,7 +1157,7 @@ gles2_renderer_attach(struct weston_surface *es, struct wl_buffer *buffer)
			attribs[0] = EGL_WAYLAND_PLANE_WL;
			attribs[1] = i;
			attribs[2] = EGL_NONE;
-			gs->images[i] = ec->create_image(gr->egl_display,
+			gs->images[i] = gr->create_image(gr->egl_display,
							 NULL,
							 EGL_WAYLAND_BUFFER_WL,
							 buffer, attribs);
@@ -1155,7 +1169,7 @@ gles2_renderer_attach(struct weston_surface *es, struct wl_buffer *buffer)
 
			glActiveTexture(GL_TEXTURE0 + i);
			glBindTexture(gs->target, gs->textures[i]);
-			ec->image_target_texture_2d(gs->target,
+			gr->image_target_texture_2d(gs->target,
						    gs->images[i]);
}
 
@@ -1199,14 +1213,13 @@ static void
 gles2_renderer_destroy_surface(struct weston_surface 
