Re: [PATCH 2/2] protocol: Support scaled outputs and surfaces

2013-05-24 Thread Pekka Paalanen
On Thu, 23 May 2013 14:51:16 -0400 (EDT)
Alexander Larsson al...@redhat.com wrote:

  What if a client sets scale=0?
 
 I guess we should forbid that, as it risks things dividing by zero.
 
  Maybe the scale should also be signed here? I think all sizes are
  signed, too, even though a negative size does not make sense. We seem
  to have a convention that numbers you compute with are signed, and
  enums and flags and bitfields and handles and such are unsigned. And
  timestamps, since there we need the overflow behaviour. I
  believe it's due to the C promotion or implicit cast rules more than
  anything else.
 
 Yeah, we should change it to signed.
 
   @@ -1548,6 +1596,8 @@
  summary=indicates this is the current mode/
  entry name=preferred value=0x2
  summary=indicates this is the preferred mode/
   +  entry name=scaled value=0x4
   +  summary=indicates that this is a scaled mode/
  
  What do we need the scaled flag for? And what does this flag mean?
  How is it used? I mean, can we get duplicate native modes that differ
  only by the scaled flag?
  
  Unfortunately I didn't get to answer that thread before, but I had some
  disagreement or lack of understanding there.
 
 Yeah, this is the area of the scaling stuff that is least baked. 
 
 Right now what happens is that the modes get listed at the scaled resolution
 (i.e. divided by two, etc.), and such scaled modes get reported with a bit
 set so clients can tell they are not native size. However, this doesn't seem
 quite right for a few reasons:
 
 * We don't report rotated/flipped modes, nor do we swap the width/height for
   these, so this is inconsistent
 * The clients can tell what the scale is anyway, so what use is it?
 
 However, listing the unscaled resolution for the modes is also somewhat
 problematic. For instance, if we listed the raw modes and an app wanted
 to go fullscreen in a mode it would need to create a surface of the scaled
 width/height (with the right scale), as otherwise the buffer size would not
 match the scanout size.
 
 For instance, if the output scale is 2 and there is an 800x600 native mode
 then the app should use a 400x300 surface with an 800x600 buffer and a
 buffer_scale of 2.
 
 Hmmm, I guess if the app used an 800x600 surface with buffer scale 1 we
 could still scan out from it. Although we'd have to be very careful about
 how we treat input and pointer position then, as it's not quite the same.
 
 I'll have a look at changing this.

I agree with all that. There are some more considerations. One is
the wl_shell_surface.geometry event. If you look at the specification
of wl_shell_surface.set_fullscreen, it requires the compositor to reply
with a geometry event with the dimensions to make the surface
fullscreen in the current native video mode. Since that is in pels
like John pointed out, it would carry 400x300 for an 800x600 mode, if
output_scale=2. I haven't read enough of the patches to see how you
handled that.
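To make the size arithmetic concrete: the pel dimensions the geometry event would carry are just the native mode dimensions divided by the integer output scale. A minimal sketch (Python, purely illustrative; the function name is invented and is not Wayland API):

```python
# Illustrative sketch (not Wayland API): the fullscreen size in pels is
# the native video mode size divided by the integer output scale.

def fullscreen_size_in_pels(mode_width, mode_height, output_scale):
    if output_scale <= 0:
        raise ValueError("output scale must be a positive integer")
    # A native mode need not divide evenly by the scale; a compositor
    # would then have to round, or fake a matching mode.
    if mode_width % output_scale or mode_height % output_scale:
        raise ValueError("mode is not an integer multiple of the scale")
    return mode_width // output_scale, mode_height // output_scale

# An 800x600 native mode with output_scale=2 gives a 400x300 surface
# (in pels), to be backed by an 800x600 buffer with buffer_scale=2.
print(fullscreen_size_in_pels(800, 600, 2))  # (400, 300)
```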

An old application not knowing about buffer_scale would simply use
400x300, and get scaled up. All good. Might even be scanned out
directly, if an overlay allows hardware scaling.

An old application looking at the mode list could pick the 800x600
mode, and use that with the implicit scale 1. Because the fullscreen state
specifies that the compositor makes the surface fullscreen, and allows
e.g. scaling, we can just as well scan it out.

The difference between these two cases is the surface size in pels,
400x300 in the former, and 800x600 in the latter. No problem for the
client. In the server we indeed need to make sure the input coordinates
are right.

An issue I see here is that the 800x600 buffer_scale=1 fullscreen setup
will have the very problem the whole output scale is trying to solve:
the application will draw its GUI in single-density, and it ends up 1:1
on a double-density screen, unreadable. However, if the application is
using the mode list to begin with, it probably has a way for the user
to pick a mode. So with a magnifying glass, the user can fix the
situation in the application settings. Cue in Weston desktop zoom...

Now, should we require that applications that have a video mode menu
also have an entry called "default" or "native", which will come
from the geometry event?

If not, do we need to make sure there are output modes listed that match
exactly mode*output_scale == default native mode, and fake a new mode as
needed? Do we need that for all native modes, in case the default mode
changes?

Or maybe we don't need any of that, if we assume the user can configure
the application to use an arbitrary mode?

I'm thinking about an application (game), that renders in pixel units,
and only offers the user a choice between the server reported output
video modes. Is that an important use case?

If the application is output_scale/buffer_scale aware, it knows how to
do the right thing, even if it chooses a mode from the output mode
list. I guess one 

Re: [PATCH 2/2] protocol: Support scaled outputs and surfaces

2013-05-23 Thread Pekka Paalanen
On Wed, 22 May 2013 21:49:25 -0400
Kristian Høgsberg hoegsb...@gmail.com wrote:

 On Mon, May 20, 2013 at 10:49:27AM +0300, Pekka Paalanen wrote:
  Hi Alexander,
  
  nice to see this going forward, and sorry for replying so rarely and
  late.
  
  On Thu, 16 May 2013 15:49:36 +0200
  al...@redhat.com wrote:
  
   From: Alexander Larsson al...@redhat.com
   
   This adds the wl_surface.set_buffer_scale request, and a wl_output.scale
    event. These together let us support automatic upscaling of old
   clients on very high resolution monitors, while allowing new clients
   to take advantage of this to render at the higher resolution when the
   surface is displayed on the scaled output.
   
    It is similar to set_buffer_transform in that the buffer is stored in
    transformed pixels (in this case scaled). This means that if an output
    is scaled we can directly use the pre-scaled buffer with additional data,
    rather than having to scale it.
   
   Additionally this adds a scaled flag to the wl_output.mode flags
   so that clients know which resolutions are native and which are scaled.
   
   Also, in places where the documentation was previously not clear as to
   what coordinate system was used this was fleshed out.
   
   It also adds a scaling_factor event to wl_output that specifies the
   scaling of an output.
   
   This is meant to be used for outputs with a very high DPI to tell the
   client that this particular output has subpixel precision. Coordinates
   in other parts of the protocol, like input events, relative window
   positioning and output positioning are still in the compositor space
  
  We don't have a single compositor space.
  
  This needs some way of explaining that surface coordinates are always
  the same, regardless of the attached buffer's scale. That is, surface
  coordinates always correspond to the size of a buffer with scale 1.
  
   rather than the scaled space. However, input has subpixel precision
   so you can still get input at full resolution.
   
   This setup means global properties like mouse acceleration/speed,
   pointer size, monitor geometry, etc can be specified in a mostly
   similar resolution even on a multimonitor setup where some monitors
   are low dpi and some are e.g. retina-class outputs.
   ---
protocol/wayland.xml | 107 ++++++++++++++++++++++++++++++++++++++++++++---
1 file changed, 93 insertions(+), 14 deletions(-)
   
   diff --git a/protocol/wayland.xml b/protocol/wayland.xml
   index d3ae149..acfb140 100644
   --- a/protocol/wayland.xml
   +++ b/protocol/wayland.xml
...
   @@ -860,6 +864,9 @@

 The client is free to dismiss all but the last configure
 event it received.
   +
   + The width and height arguments specify the size of the window
   + in surface local coordinates.
  
  Yes, window is definitely the correct term here. Saying surface
  would be incorrect, due to sub-surfaces, if I recall my discussion with
  Giulio Camuffo right.
 
 I think we need to introduce the window geometry concept before this
 gets out of hand.  I'm already a little uncomfortable with the idea of
 including sub-surfaces in the surface bounding box - it seems like
 something the client should control.  So the suggestion is to add a
 new wl_shell_surface request:
 
 <request name="set_geometry">
   <description summary="set logical surface geometry">
     This request sets the logical geometry of the surface.

     The logical geometry is the rectangle that the compositor uses for
     window management placement decisions.  For example, a buffer
     often contains a drop shadow for a surface or other content that
     shouldn't be considered part of the surface geometry.  By
     setting the surface geometry, the client can communicate to
     the compositor the sub-rectangle of the surface that it
     considers the window geometry.  The compositor will then use
     this rectangle for initial placement, snapping and other
     placement decisions.  If the client has provided a geometry
     for a surface, the new size suggested by the configure event
     will refer to the window geometry.

     The geometry is a rectangle in surface local coordinates.

     Geometry is double-buffered state, see wl_surface.commit.

     wl_shell_surface.set_geometry changes the pending geometry.
     wl_surface.commit copies the pending geometry to the current
     geometry.  Otherwise, the pending and current geometry are
     never changed.

     Initially the geometry will track the bounding box of the
     surface, but once the geometry is set, it will only change when
     the geometry is set again.
   </description>

   <arg name="x" type="int"/>
   <arg name="y" type="int"/>
   <arg name="width" type="int"/>
   <arg name="height" type="int"/>
 </request>
 

Yeah, that should work. It nicely avoids using the input-region bounding
box as the surface extents for window management purposes.
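The double-buffering rules in the proposed request can be modelled in a few lines. This is an illustrative sketch only (the class and method names are invented; it is not the Weston implementation):

```python
# Sketch of double-buffered window geometry: set_geometry only changes
# the pending state; commit copies pending to current; until a geometry
# is set, the window geometry tracks the surface bounding box.

class WindowGeometry:
    def __init__(self):
        self.pending = None
        self.current = None

    def set_geometry(self, x, y, width, height):
        self.pending = (x, y, width, height)

    def commit(self):
        if self.pending is not None:
            self.current = self.pending

    def effective(self, bounding_box):
        return self.current if self.current is not None else bounding_box

g = WindowGeometry()
print(g.effective((0, 0, 520, 420)))  # (0, 0, 520, 420): tracks bounding box
g.set_geometry(10, 10, 500, 400)
print(g.effective((0, 0, 520, 420)))  # unchanged: not yet committed
g.commit()
print(g.effective((0, 0, 520, 420)))  # (10, 10, 500, 400)
```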


Re: [PATCH 2/2] protocol: Support scaled outputs and surfaces

2013-05-23 Thread Alexander Larsson
 What if a client sets scale=0?

I guess we should forbid that, as it risks things dividing by zero.

 Maybe the scale should also be signed here? I think all sizes are
 signed, too, even though a negative size does not make sense. We seem
 to have a convention that numbers you compute with are signed, and
 enums and flags and bitfields and handles and such are unsigned. And
 timestamps, since there we need the overflow behaviour. I
 believe it's due to the C promotion or implicit cast rules more than
 anything else.

Yeah, we should change it to signed.

  @@ -1548,6 +1596,8 @@
   summary=indicates this is the current mode/
 entry name=preferred value=0x2
   summary=indicates this is the preferred mode/
  +  entry name=scaled value=0x4
  +summary=indicates that this is a scaled mode/
 
 What do we need the scaled flag for? And what does this flag mean?
 How is it used? I mean, can we get duplicate native modes that differ
 only by the scaled flag?
 
 Unfortunately I didn't get to answer that thread before, but I had some
 disagreement or lack of understanding there.

Yeah, this is the area of the scaling stuff that is least baked. 

Right now what happens is that the modes get listed at the scaled resolution
(i.e. divided by two, etc.), and such scaled modes get reported with a bit
set so clients can tell they are not native size. However, this doesn't seem
quite right for a few reasons:

* We don't report rotated/flipped modes, nor do we swap the width/height for
  these, so this is inconsistent
* The clients can tell what the scale is anyway, so what use is it?

However, listing the unscaled resolution for the modes is also somewhat
problematic. For instance, if we listed the raw modes and an app wanted
to go fullscreen in a mode it would need to create a surface of the scaled
width/height (with the right scale), as otherwise the buffer size would not
match the scanout size.

For instance, if the output scale is 2 and there is an 800x600 native mode
then the app should use a 400x300 surface with an 800x600 buffer and a
buffer_scale of 2.

Hmmm, I guess if the app used an 800x600 surface with buffer scale 1 we
could still scan out from it. Although we'd have to be very careful about
how we treat input and pointer position then, as it's not quite the same.

I'll have a look at changing this.
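The invariant behind the example above is that, for direct scanout, the attached buffer must be exactly buffer_scale times the surface size in each dimension. A hedged sketch of that check (Python; the helper name is invented for illustration):

```python
# Illustrative check (not protocol code): a buffer fills an output mode
# exactly when each buffer dimension equals surface size * buffer_scale.

def buffer_matches_scanout(surface_w, surface_h, buffer_w, buffer_h, scale):
    return (buffer_w, buffer_h) == (surface_w * scale, surface_h * scale)

# The two fullscreen setups discussed for an 800x600 mode on a scale-2 output:
print(buffer_matches_scanout(400, 300, 800, 600, 2))  # True: scale-aware client
print(buffer_matches_scanout(800, 600, 800, 600, 1))  # True: old client, scale 1
print(buffer_matches_scanout(800, 600, 800, 600, 2))  # False: buffer too small
```

Both passing setups can be scanned out; the difference is only the surface size in pels, which is why input coordinates need care.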
___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


Re: [PATCH 2/2] protocol: Support scaled outputs and surfaces

2013-05-22 Thread Pekka Paalanen
On Mon, 20 May 2013 13:56:27 -0500
Jason Ekstrand ja...@jlekstrand.net wrote:

 On Mon, May 20, 2013 at 4:00 AM, Pekka Paalanen ppaala...@gmail.com wrote:
 
  On Thu, 16 May 2013 16:43:52 -0500
  Jason Ekstrand ja...@jlekstrand.net wrote:
 
   The point of this is to allow surfaces to render the same size on
   different density outputs.
 
  Are you serious? Really? Same size measured in meters?
 
 
 No, measured in inches. :-P
 
 Seriously though.  While we can't make it *exactly* the same on all your
 displays, we should be able to make it usably close.

I do not think that should be a goal here, on the core protocol level.
It's a can of worms, like you see from all fractional pixel problems
raised, which the current integer-only proposal does not have.

  I do not think that will ever work:
  http://blogs.gnome.org/danni/2011/12/15/more-on-dpi/
  and doing it via scaling is going to be worse.
 
 
 Yes, scaling looks bad.  I don't know that we can avoid it in all cases
 (see also the 200DPI and 300 DPI case).

Sorry, which email was this in?

  Going for the same size is a very different problem than just trying to
  get all apps readable by default. I'm not sure "same size" is a better
  goal than "same look".
 
  And on a side note:
  http://web.archive.org/web/20120102153021/http://www.fooishbar.org/blog
 
 
 What I would like in the end is a per-output slider bar (or something of
 that ilk) that lets the user select the interface size on that output.
 Sure, they probably won't be able to select *any* resolution (the
 compositor may limit it to multiples of 24 dpi or something).  And they can
 certainly make an ugly set-up for themselves.  However, I want them to be
 able to make something more-or-less reasonable and I see no reason why the
 compositor shouldn't coordinate this and why this scale factor can't be
 used for that.

I think that is an orthogonal issue. That would be a DE thing, just
like choosing font sizes. Buffer_scale OTOH is a Wayland core feature,
and is best kept as simple as possible.

The slider would control window and widget sizes, while buffer_scale
only controls the resolution they are rendered in. Or...

 My primary concern is that integer multiples of 96 DPI isn't going to be
 enough granularity.  I don't know whether we can really accomplish a higher
 granularity in a reasonable way.

For the cases where buffer_scale cannot offer a usable resolution, we
can still fall back to arbitrary scaling in the compositor by
private surface or output transformations. That does not allow
pixel-accurate/high-resolution presentation of windows like
buffer_scale, but I believe is an acceptable compromise. Didn't OS X or
something do similar for the 1.5 factor? I recall someone mentioning
about that, but couldn't find it.

...or the slider could control buffer_scale and output scaling in
tandem, using buffer_scale for integer factors (which the GUI would
recommend), and realize non-integer factors by some combination of the
two.

Naturally the units in the slider would be scaling factors, not DPI, since
DPI is meaningless to a user. I can imagine how hilarious it would be
to have "Please, try to use integer multiples of 96 DPI for the best
performance and look" in the GUI. ;-)
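The tandem idea sketched above can be made concrete: use buffer_scale for the integer part and an extra compositor-side transform for the remainder. A sketch under the assumption that the compositor renders at the next integer scale and downscales (illustrative only; not an actual protocol mechanism):

```python
import math

# Illustrative decomposition: a requested overall factor is realized as
# an integer buffer_scale plus a fractional compositor-side transform,
# so that buffer_scale * transform == requested factor.

def decompose_scale(factor):
    buffer_scale = max(1, math.ceil(factor))
    transform = factor / buffer_scale
    return buffer_scale, transform

print(decompose_scale(2.0))  # (2, 1.0): pure buffer_scale, pixel-exact
print(decompose_scale(1.5))  # (2, 0.75): render at 2x, downscale to 1.5x
```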


Thanks,
pq


Re: [PATCH 2/2] protocol: Support scaled outputs and surfaces

2013-05-22 Thread Alexander Larsson
On tis, 2013-05-21 at 20:57 +0300, Pekka Paalanen wrote:
 On Tue, 21 May 2013 08:35:53 -0700
 Bill Spitzak spit...@gmail.com wrote:

  This proposal does not actually restrict widget positions or line sizes, 
  since they are drawn by the client at buffer resolution. Although 
 
 No, but I expect the toolkits may.

Gtk very much will do this at least.

  annoying, the outside buffer size is not that limiting. The client can 
  just place a few transparent pixels along the edge to make it look like 
  it is any size.
  
  However it does restrict the positions of widgets that use subsurfaces.
  
  I see this as a serious problem and I'm not sure why you don't think it 
  is. It is an arbitrary artificial limit in the api that has nothing to 
  do with any hardware limits.
 
 It is a design decision with the least negative impact, and it is
 not serious. Sub-surfaces will not be that common, and they
 certainly will not be used for common widgets like buttons.

Yeah, this is a simple solution to an actual real-life problem that is
easy to implement (I've got weston and gtk+ mostly working). If you want
to do something very complicated then just don't use scaling and draw
however you want. We don't want to overcomplicate the normal case with
fractional complexity and extra coordinate spaces.

  The reason you want to position widgets at finer positions is so they 
  can be positioned evenly, and so they can be moved smoothly, and so they 
  can be perfectly aligned with hi-resolution graphics.
 
 But why? You have a real, compelling use case? Otherwise it just
 complicates things.

Exactly.

 A what? No way, buffer_scale is private to a surface, and does not
 affect any other surface, not even sub-surfaces. It is not
 inherited, that would be insane.

Yes, buffer_transform and buffer_scale only define how you map the
client supplied buffer pixels into the surface coordinates. It does not
really change the size of the surface or affect subsurfaces. (Well,
technically it does, since we don't separately specify the surface size
but derive it from the buffer and its transform, but after that the
surface is what it is in an abstract space.)
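That derivation — the surface size comes from the buffer size, its transform and its scale — can be written out as a small sketch (Python, illustrative only; the transform names loosely follow the wl_output.transform enum):

```python
# Illustrative sketch: derive the surface size (in surface units) from
# the attached buffer. 90/270-degree transforms swap width and height;
# the integer buffer scale then divides the result down.

SWAPPING = {"90", "270", "flipped-90", "flipped-270"}

def surface_size(buffer_w, buffer_h, buffer_transform, buffer_scale):
    if buffer_transform in SWAPPING:
        buffer_w, buffer_h = buffer_h, buffer_w
    return buffer_w // buffer_scale, buffer_h // buffer_scale

print(surface_size(800, 600, "normal", 2))  # (400, 300)
print(surface_size(800, 600, "90", 1))      # (600, 800)
```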

 There is no compositor coordinate space in the protocol. There
 are only surface coordinates, and now to a small extent we are
 getting buffer coordinates.

Very small extent. I think the only place in the protocol where they are
used is when specifying the size of the surface.

   The x,y do not
   describe how the surface moves, they describe how pixel rows and
   columns are added or removed on the edges.
  
   No, it is in the surface coordinate system, like written in the patch.
  
  Then I would not describe it as pixel rows and columns added or removed
  on the edges. If the scaler is set to 70/50 then a delta of -1,0 is
  adding 1.4 pixels to the left edge of the buffer. I agree that having it
  in the parent coordinates works otherwise.
 
 We use the units of pixels in the surface coordinate system, even
 if they do not correspond exactly to any real pixels like
 elements in a buffer or on screen.

Actually this is sort of a problem. Maybe the docs would be clearer if
we just used a different name for these? points?





Re: [PATCH 2/2] protocol: Support scaled outputs and surfaces

2013-05-22 Thread Alexander Larsson
On mån, 2013-05-20 at 13:56 -0500, Jason Ekstrand wrote:
 On Mon, May 20, 2013 at 4:00 AM, Pekka Paalanen ppaala...@gmail.com
 wrote:
 On Thu, 16 May 2013 16:43:52 -0500
 Jason Ekstrand ja...@jlekstrand.net wrote:
 
  The point of this is to allow surfaces to render the same size on
  different density outputs.
 
 
 Are you serious? Really? Same size measured in meters?
 
 
 No, measured in inches. :-P
 
 
 Seriously though.  While we can't make it *exactly* the same on all
 your displays, we should be able to make it usably close.

Having the exact physical size (be it in inches or steradians) of a
window on two different monitors is *not* a goal of my work. It's already
the case that windows are different sizes on different monitors, and that
has been ok for users since the first CRT was used with a computer.
People have been bikeshedding about solving this problem for ages, and
I'm not interested in that.

No, I have an *actual* problem, which is that it's super hard to see or
hit widgets on a high-dpi monitor, and a single DPI setting for the
whole desktop is a nonstarter since you may be using mixed monitors.
This is a practical problem I have on my Pixel laptop and which will
only get more common as hw moves on. The solution required is not exact
size matches, but rather to get in the same ballpark, which my proposal
solves in a very simple fashion.


 What I would like in the end is a per-output slider bar (or something
 of that ilk) that lets the user select the interface size on that
 output.  Sure, they probably won't be able to select *any* resolution
 (the compositor may limit it to multiples of 24 dpi or something).
 And they can certainly make an ugly set-up for themselves.  However, I
 want them to be able to make something more-or-less reasonable and I
 see no reason why the compositor shouldn't coordinate this and why
 this scale factor can't be used for that.

The compositor is free to scale the final result to any fraction it
wants, which will get you this kind of behavior. It will not be
perfect, but the other solution (adding the complexity of fractional
surface sizes, multiple coordinate spaces, etc) will also not look
very nice (i.e. widget clipping on fractional coordinates will look
ugly, non-integer-width lines will *never* look good, etc), and in fact,
internal details make it unlikely that toolkits like Gtk will ever be
able to support fractional scaling, so it has little practical use.





Re: [PATCH 2/2] protocol: Support scaled outputs and surfaces

2013-05-22 Thread Alexander Larsson
On ons, 2013-05-22 at 09:11 +0300, Pekka Paalanen wrote:
  
  What I would like in the end is a per-output slider bar (or something of
  that ilk) that lets the user select the interface size on that output.
  Sure, they probably won't be able to select *any* resolution (the
  compositor may limit it to multiples of 24 dpi or something).  And they can
  certainly make an ugly set-up for themselves.  However, I want them to be
  able to make something more-or-less reasonable and I see no reason why the
  compositor shouldn't coordinate this and why this scale factor can't be
  used for that.
 
 I think that is an orthogonal issue. That would be a DE thing, just
 like choosing font sizes. Buffer_scale OTOH is a Wayland core feature,
 and is best kept as simple as possible.

 The slider would control window and widget sizes, while buffer_scale
 only controls the resolution they are rendered in. Or...

This is doable, but slightly problematic as you would need a way to
coordinate between all clients how to combine these...

  My primary concern is that integer multiples of 96 DPI isn't going to be
  enough granularity.  I don't know whether we can really accomplish a higher
  granularity in a reasonable way.
 
 For the cases where buffer_scale cannot offer a usable resolution, we
 can still fall back to arbitrary scaling in the compositor by
 private surface or output transformations. That does not allow
 pixel-accurate/high-resolution presentation of windows like
 buffer_scale, but I believe is an acceptable compromise. Didn't OS X or
 something do similar for the 1.5 factor? I recall someone mentioning
 about that, but couldn't find it.

So, in practice I think this is the more reasonable way to do it, and it
is indeed how OSX does it. In some theoretical way, drawing at 2x and
downscaling to 1.5x looks worse, but neither is going to look really
good anyway, and the integer surface scaling makes the implementation
vastly simpler.

 Naturally the units in the slider would be scaling factors, not DPI, since
 DPI is meaningless to a user. I can imagine how hilarious it would be
 to have "Please, try to use integer multiples of 96 DPI for the best
 performance and look" in the GUI. ;-)

I don't think a slider for the scaling factor is the right UI at all. What
you want is to just list various resolutions and let the user pick
one. Some resolutions are native, some are scaled by an integer and some
are scaled by a fraction. You want to somehow mark out in the UI that
some resolutions are worse than others, but otherwise this is
generally how most end users would think of this anyway.
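The mode-list UI described above — native resolutions plus integer- and fraction-scaled variants, with the worse ones marked — might be generated roughly like this (purely illustrative; all names and labels are invented):

```python
from fractions import Fraction

# Illustrative sketch: build (width, height, label) entries for a mode
# picker. Native modes are unlabeled as best; integer-scaled modes are
# marked; fractionally scaled modes are marked as the worst quality.

def ui_mode_list(native_modes, factors):
    entries = []
    for w, h in native_modes:
        for f in factors:
            sw, sh = int(w / f), int(h / f)
            if f == 1:
                label = "native"
            elif Fraction(f).denominator == 1:
                label = "scaled (integer)"
            else:
                label = "scaled (fractional, blurrier)"
            entries.append((sw, sh, label))
    return entries

for entry in ui_mode_list([(2560, 1600)], [1, 2, Fraction(3, 2)]):
    print(entry)
```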




Wayland surface units (Re: [PATCH 2/2] protocol: Support scaled outputs and surfaces)

2013-05-22 Thread Pekka Paalanen
On Wed, 22 May 2013 10:12:00 +0200
Alexander Larsson al...@redhat.com wrote:

 On tis, 2013-05-21 at 20:57 +0300, Pekka Paalanen wrote:
 
  We use the units of pixels in the surface coordinate system, even
  if they do not correspond exactly to any real pixels like
  elements in a buffer or on screen.
 
 Actually this is sort of a problem. Maybe the docs would be clearer if
 we just used a different name for these? points?

That is a good idea, but choosing a name is hard. I think "points" will
get too easily confused with font sizes or the 1/72-inch unit and all
that mess.

Follow Android with dp?
http://developer.android.com/guide/topics/resources/more-resources.html#Dimension

I'm not sure that is exactly the definition we want to use, with just
160 dpi changed to 96 dpi. I feel uneasy mentioning dpi at all. So
maybe not dp, then.

We need a name for the surface length unit, since the surface local
coordinate system is such a central concept in Wayland.

I just can't seem to come up with anything serious.
- wayland units, wu
- wayland pixels, wp, wpx
- surface units, su
- surface unit pixels, sux
- surface pixels, sp, spx
- wayland surface units, wsu
- wayland atomic units, wau
- pips, pip
ehhh...


Cheers,
pq


Re: Wayland surface units (Re: [PATCH 2/2] protocol: Support scaled outputs and surfaces)

2013-05-22 Thread Alexander Larsson
On ons, 2013-05-22 at 12:11 +0300, Pekka Paalanen wrote:
 On Wed, 22 May 2013 10:12:00 +0200
 Alexander Larsson al...@redhat.com wrote:
 
  On tis, 2013-05-21 at 20:57 +0300, Pekka Paalanen wrote:
  
   We use the units of pixels in the surface coordinate system, even
   if they do not correspond exactly to any real pixels like
   elements in a buffer or on screen.
  
  Actually this is sort of a problem. Maybe the docs would be clearer if
  we just used a different name for these? points?
 
 That is a good idea, but choosing a name is hard. I think "points" will
 get too easily confused with font sizes or the 1/72-inch unit and all
 that mess.
 
 Follow Android with dp?
 http://developer.android.com/guide/topics/resources/more-resources.html#Dimension
 
 I'm not sure that is exactly the definition we want to use, with just
 160 dpi changed to 96 dpi. I feel uneasy mentioning dpi at all. So
 maybe not dp, then.

Yeah, I'd rather have it disconnected from some physical notion of size.

 We need a name for the surface length unit, since the surface local
 coordinate system is such a central concept in Wayland.
 
 I just can't seem to come up with anything serious.
 - wayland units, wu
 - wayland pixels, wp, wpx
 - surface units, su
 - surface unit pixels, sux
 - surface pixels, sp, spx
 - wayland surface units, wsu
 - wayland atomic units, wau
 - pips, pip
 ehhh...

Yeah, names are hard. But sux is awesome!

Other alternatives:
- compositor units, cu
- pels, pel
- dots, dot
- surface element, surfel, sel

none of these are great either...









Re: Wayland surface units (Re: [PATCH 2/2] protocol: Support scaled outputs and surfaces)

2013-05-22 Thread Alexander Larsson
On ons, 2013-05-22 at 11:42 +0200, Alexander Larsson wrote:
 On ons, 2013-05-22 at 12:11 +0300, Pekka Paalanen wrote:

  - wayland units, wu
  - wayland pixels, wp, wpx
  - surface units, su
  - surface unit pixels, sux
  - surface pixels, sp, spx
  - wayland surface units, wsu
  - wayland atomic units, wau
  - pips, pip
  ehhh...
 
 Yeah, names are hard. But sux is awesome!
 
 Other alternatives:
 - compositor units, cu
 - pels, pel
 - dots, dot
 - surface element, surfel, sel
 
 none of these are great either...

For the record, Microsoft uses DIP, for Device-independent-pixels, and
Apple uses Points for the non-hardware coordinates (in the app level
APIs).





Re: Wayland surface units (Re: [PATCH 2/2] protocol: Support scaled outputs and surfaces)

2013-05-22 Thread Pekka Paalanen
On Wed, 22 May 2013 11:50:31 +0200
Alexander Larsson al...@redhat.com wrote:

 On ons, 2013-05-22 at 11:42 +0200, Alexander Larsson wrote:
  On ons, 2013-05-22 at 12:11 +0300, Pekka Paalanen wrote:
 
   - wayland units, wu
   - wayland pixels, wp, wpx
   - surface units, su
   - surface unit pixels, sux
   - surface pixels, sp, spx
   - wayland surface units, wsu
   - wayland atomic units, wau
   - pips, pip
   ehhh...
  
  Yeah, names are hard. But sux is awesome!
  
  Other alternatives:
  - compositor units, cu
  - pels, pel
  - dots, dot
  - surface element, surfel, sel
  
  none of these are great either...
 
 For the record, Microsoft uses DIP, for Device-independent-pixels, and
 Apple uses Points for the non-hardware coordinates (in the app level
 APIs).

And we have no well-known equivalent in the FOSS or Linux world?

DIP would sound fine otherwise, except Microsoft seems to tie it to the
physical units via dpi again. Meh.

pel sounds nice...


- pq


Re: Wayland surface units (Re: [PATCH 2/2] protocol: Support scaled outputs and surfaces)

2013-05-22 Thread Alexander Larsson
On ons, 2013-05-22 at 13:01 +0300, Pekka Paalanen wrote:

  For the record, Microsoft uses DIP, for Device-independent-pixels, and
  Apple uses Points for the non-hardware coordinates (in the app level
  APIs).
 
 And we have no well-known equivalent in the FOSS or Linux world?

Not that I know of.

 DIP would sound fine otherwise, except Microsoft seems to tie it to the
 physical units via dpi again. Meh.
 
 pel sounds nice...

"pel" is from "picture element", and was historically used equivalently
to "pixel". But "pixel" won that fight and nobody remembers "pel". So, by
now we can probably use them as two not-quite-equivalent units without
risking confusion.





Re: [PATCH 2/2] protocol: Support scaled outputs and surfaces

2013-05-22 Thread Kristian Høgsberg
On Mon, May 20, 2013 at 10:49:27AM +0300, Pekka Paalanen wrote:
 Hi Alexander,
 
 nice to see this going forward, and sorry for replying so rarely and
 late.
 
 On Thu, 16 May 2013 15:49:36 +0200
 al...@redhat.com wrote:
 
  From: Alexander Larsson al...@redhat.com
  
  This adds the wl_surface.set_buffer_scale request, and a wl_output.scale
   event. These together let us support automatic upscaling of old
  clients on very high resolution monitors, while allowing new clients
  to take advantage of this to render at the higher resolution when the
  surface is displayed on the scaled output.
  
  It is similar to set_buffer_transform in that the buffer is stored in
  transformed pixels (in this case scaled). This means that if an output
  is scaled we can directly use the pre-scaled buffer with additional data,
  rather than having to scale it.
  
  Additionally this adds a scaled flag to the wl_output.mode flags
  so that clients know which resolutions are native and which are scaled.
  
  Also, in places where the documentation was previously not clear as to
  what coordinate system was used this was fleshed out.
  
  It also adds a scaling_factor event to wl_output that specifies the
  scaling of an output.
  
  This is meant to be used for outputs with a very high DPI to tell the
  client that this particular output has subpixel precision. Coordinates
  in other parts of the protocol, like input events, relative window
  positioning and output positioning are still in the compositor space
 
 We don't have a single compositor space.
 
 This needs some way of explaining that surface coordinates are always
 the same, regardless of the attached buffer's scale. That is, surface
 coordinates always correspond to the size of a buffer with scale 1.
 
  rather than the scaled space. However, input has subpixel precision
  so you can still get input at full resolution.
  
  This setup means global properties like mouse acceleration/speed,
  pointer size, monitor geometry, etc can be specified in a mostly
  similar resolution even on a multimonitor setup where some monitors
  are low dpi and some are e.g. retina-class outputs.
  ---
   protocol/wayland.xml | 107 ++++++++++++++++++++++++++++++++++++++++---
   1 file changed, 93 insertions(+), 14 deletions(-)
  
  diff --git a/protocol/wayland.xml b/protocol/wayland.xml
  index d3ae149..acfb140 100644
  --- a/protocol/wayland.xml
  +++ b/protocol/wayland.xml
  @@ -173,7 +173,7 @@
   /event
 /interface
   
  -  interface name=wl_compositor version=2
  +  interface name=wl_compositor version=3
 
 Ok.
 
   description summary=the compositor singleton
 A compositor.  This object is a singleton global.  The
 compositor is in charge of combining the contents of multiple
  @@ -709,7 +709,7 @@
   
  The x and y arguments specify the locations of the upper left
  corner of the surface relative to the upper left corner of the
  -   parent surface.
  +   parent surface, in surface local coordinates.
   
  The flags argument controls details of the transient behaviour.
 /description
  @@ -777,6 +777,10 @@
  in any of the clients surfaces is reported as normal, however,
  clicks in other clients surfaces will be discarded and trigger
  the callback.
  +
  +   The x and y arguments specify the locations of the upper left
  +   corner of the surface relative to the upper left corner of the
  +   parent surface, in surface local coordinates.
 
 Surface local coordinates are defined to have their origin in the
 surface top-left corner. If that is defined once and for all, you don't
 have to repeat relative to upper left... everywhere.
 
 Surface local coordinates relative to anything else do not exist.
 
 When these were originally written in the spec, the term surface
 coordinates had not settled yet.
 
 /description
   
 arg name=seat type=object interface=wl_seat summary=the 
  wl_seat whose pointer is used/
  @@ -860,6 +864,9 @@
   
  The client is free to dismiss all but the last configure
  event it received.
  +
  +   The width and height arguments specify the size of the window
  +   in surface local coordinates.
 
 Yes, window is definitely the correct term here. Saying surface
 would be incorrect, due to sub-surfaces, if I recall my discussion with
 Giulio Camuffo right.

I think we need to introduce the window geometry concept before this
gets out of hand.  I'm already a little uncomfortable with the idea of
including sub-surfaces in the surface bounding box - it seems like
something the client should control.  So the suggestion is to add a
new wl_shell_surface request:

request name=set_geometry
  description summary=set logical surface geometry
    This request sets the logical geometry of the surface.

    The logical geometry is the rectangle that the compositor uses for
window management placement decisions.  For example, a buffer
often 

Re: [PATCH 2/2] protocol: Support scaled outputs and surfaces

2013-05-21 Thread Pekka Paalanen
On Mon, 20 May 2013 17:58:30 -0700
Bill Spitzak spit...@gmail.com wrote:

 Pekka Paalanen wrote:
 
  This seems pretty limiting to me. What happens when *all* the outputs 
  are hi-res? You really think wayland clients should not be able to take 
  full advantage of this?
  
  Then the individual pixels are so small that it won't matter.
 
 It does not matter how tiny the pixels are. The step between possible 
 surface sizes and subsurface positions remains the size of a scale-1 
 unit. Or else I am seriously mis-understanding the proposal:
 
 Let's say the output is 10,000dpi and the compositor has set its scale 
 to 100. Can a client make a buffer that is 10,050 pixels wide appear 1:1 
 on the pixels of this output? It looks to me like only multiples of 100 
 are possible.

As far as I understand, that is correct.

But it does not matter. You cannot employ any widgets or widget parts
that would need a finer resolution than 100 px steps, because a) the
user cannot clearly see them, and b) the user cannot clearly poke them
with e.g. a pointer, since they are so small. So there is no need to
have window size in finer resolution either. Even a resize handle in a
window border would have to be at least 300 pixels thick to be usable.

The scale factor only allows to specify the image in finer resolution,
so it looks better, not jagged-edged for instance. There is no point in
having anything else in finer resolution, since everything else is
related to input.

To be precise, in that scenario a client should never even attempt to
make a buffer of 10050 px wide.
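(Editorial aside: the constraint being discussed can be sketched in a few lines. This is an illustration only, not any real protocol API.)

```python
# Sketch: with an integer buffer_scale, a buffer width maps to a
# whole number of surface units only when it is a multiple of the
# scale, so a 10,050 px buffer cannot appear 1:1 at scale 100.
def valid_buffer_width(buffer_width, scale):
    """True if this buffer width is a whole number of surface units."""
    return buffer_width % scale == 0

# At scale 100, a 10,000-pixel-wide buffer works...
assert valid_buffer_width(10_000, 100)
# ...but 10,050 pixels is not representable as a surface width.
assert not valid_buffer_width(10_050, 100)
```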

  If nothing else it makes it so that subsurfaces are
  always positioned on integer positions on non-scaled displays, which
   makes things easier when monitors of different scales are mixed.
  This is false if the subsurface is attached to a scaled parent surface.
  
  Huh?
 
 Parent surface uses the scaler api to change a buffer width of 100 to 
 150. The fullscreen and this hi-dpi interface can also produce similar 
 scales. The subsurface has a width of 51. Either the left or right edge 
 is going to land in the middle of an output pixel.

How can you say that? Where did you get the specification of how scaler
interacts with buffer_scale? We didn't write any yet.

And what is this talk about parent surfaces?

  The input rectangle to the scaler proposal is in the space between the 
  buffer transform and the scaling. Therefore there are *three* coordinate 
  spaces.
  
  Where did you get this? Where is this defined or proposed?
 
 The input rectangle is in the same direction as the output rectangle 
 even if the buffer is rotated 90 degrees by the buffer_transform.

Yeah. So how does that define anything about scaler and buffer_scale
interaction?

The only thing that that could imply, is that buffer_scale and
buffer_transform are applied simultaneously (they are orthogonal
operations), so I can't understand how you arrive at your conclusion.
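(Editorial aside: the orthogonality claim can be illustrated with a small sketch of the buffer-to-surface size mapping; the function is my own illustration, only the transform names follow wl_output.transform.)

```python
# buffer_transform may swap the buffer's width and height, and
# buffer_scale divides both axes; the two steps commute, which is
# what makes them orthogonal operations.
ROTATED = {"90", "270", "flipped-90", "flipped-270"}

def surface_size(buffer_w, buffer_h, transform, scale):
    # 90/270-degree transforms swap width and height...
    if transform in ROTATED:
        buffer_w, buffer_h = buffer_h, buffer_w
    # ...and the integer scale divides both axes.
    return buffer_w // scale, buffer_h // scale

# An 800x600 buffer, rotated 90 degrees, at scale 2:
assert surface_size(800, 600, "90", 2) == (300, 400)
# Scaling first and rotating afterwards gives the same answer.
assert (600 // 2, 800 // 2) == (300, 400)
```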

The scaler transformation was designed to change old surface
coordinates into new surface coordinates, anyway, except not in those
words, since it does not make sense in the spec.

  On a quick thought, that seems only a different way of doing it,
  without any benefits, and possibly having cons.
 
 Benefits: the buffer can be any integer number of pixels in size, 
 non-integer buffer sizes cannot be specified by the api, you can align 
 subsurfaces with pixels in the buffer (which means a precomposite of 
 subsurfaces into the main one before scaling is possible).

Any size for buffer, okay.

How could you ever arrive at non-integer buffer sizes in the earlier
proposal?

Aligning sub-surfaces is still possible if anyone cares about that, one
just has to take the scale into account. That's a drawing problem. If
you had a scale 1 output and buffers, you cannot align to fractional
pixels, anyway.

Why would pre-compositing not be possible in some cases?

  Actually, it means that the surface coordinate system can change
  dramatically when a client sends a new buffer with a different scale,
  which then raises a bucketful of races: is an incoming event using new
  or old surface coordinates? That includes at least all input events
  with a surface position,
 
 This is a good point and the only counter argument that makes sense.
 
 All solutions I can think of are equivalent to reporting events in the 
 output space, the same as your proposal. However I still feel that the 
 surface size, input area, and other communication from client to server 
 should be specified in input space.

Urgh, so you specify input region in one coordinate system, and then
get events in a different coordinate system? Utter madness.

Let's keep everything in the surface coordinates (including client
toolkit widget layout, AFAIU), except client rendering which needs to
happen in buffer coordinates, obviously. That is logical, consistent,
and easy to understand. That forces the clients to deal with two
coordinate systems at most, and 

Re: [PATCH 2/2] protocol: Support scaled outputs and surfaces

2013-05-21 Thread Bill Spitzak

On 05/20/2013 11:46 PM, Pekka Paalanen wrote:


Let's say the output is 10,000dpi and the compositor has set its scale 
to 100. Can a client make a buffer that is 10,050 pixels wide appear 1:1
on the pixels of this output? It looks to me like only multiples of 100
are possible.


As far as I understand, that is correct.

But it does not matter. You cannot employ any widgets or widget parts
that would need a finer resolution than 100 px steps, because a) the
user cannot clearly see them, and b) the user cannot clearly poke them
with e.g. a pointer, since they are so small. So there is no need to
have window size in finer resolution either. Even a resize handle in a
window border would have to be at least 300 pixels thick to be usable.


This proposal does not actually restrict widget positions or line sizes, 
since they are drawn by the client at buffer resolution. Although 
annoying, the outside buffer size is not that limiting. The client can 
just place a few transparent pixels along the edge to make it look like 
it is any size.


However it does restrict the positions of widgets that use subsurfaces.

I see this as a serious problem and I'm not sure why you don't think it 
is. It is an arbitrary artificial limit in the api that has nothing to 
do with any hardware limits.


The reason you want to position widgets at finer positions is so they 
can be positioned evenly, and so they can be moved smoothly, and so they 
can be perfectly aligned with hi-resolution graphics.



How can you say that? Where did you get the specification of how scaler
interacts with buffer_scale? We didn't write any yet.


It is pretty obvious that if the parent has a scale and the child has 
one, these scales are multiplied to get the transform from the child to 
the parent's parent.


It is true that the resulting scale if the hi-dpi and scaler are applied 
to the *SAME* surface is not yet written.



And what is this talk about parent surfaces?


The subsurfaces have a parent. For main surfaces the parent is the 
compositor coordinate space.



The input rectangle is in the same direction as the output rectangle
even if the buffer is rotated 90 degrees by the buffer_transform.


Yes exactly. Thus it is a different space than the buffer pixels, as 
there may be a 90 degree rotation / reflections, and translation to put 
the origin in different corners of the buffer.



How could you ever arrive at non-integer buffer sizes in the earlier
proposal?


If the scale is 3/2 then specifying the surface size as 33 means the 
buffer is 49.5 pixels wide. I guess this is a protocol error? Still 
seems really strange to design the api so this is possible at all.



Aligning sub-surfaces is still possible if anyone cares about that, one
just have to take the scale into account. That's a drawing problem. If
you had a scale 1 output and buffers, you cannot align to fractional
pixels, anyway.


If there is a scale of 2 you cannot align to the odd pixels. And  a 
scale of 3/2 means you *can* align to fractional pixels.



Why would pre-compositing not be possible in some cases?


Because it would require rendering a fractional-pixel aligned version of 
the subsurface and compositing that with the parent. This may make 
unwanted graphics leak through the anti-aliased edge. The most obvious 
example is if there are two subsurfaces and you try to make their edges 
touch.


However both proposals have this problem if pre-compositing is not done, 
and most practical shells I can figure out can't do pre-compositing 
because that requires another buffer for every parent, so maybe this is 
not a big deal.



Urgh, so you specify input region in one coordinate system, and then
get events in a different coordinate system? Utter madness.

Let's keep everything in the surface coordinates (including client
toolkit widget layout, AFAIU), except client rendering which needs to
happen in buffer coordinates, obviously.


Sounds like you have no problem with two coordinate spaces. I don't see 
any reason the size of windows and the positions of graphics should not 
be done in the same coordinates drawings are done in.



The x,y do not
describe how the surface moves, they describe how pixel rows and
columns are added or removed on the edges.


No, it is in the surface coordinate system, like written in the patch.


Then I would not describe it as pixel rows and columns added or removed 
on the edges. If the scaler is set to 70/50 than a delta of -1,0 is 
adding 1.4 pixels to the left edge of the buffer. I agree that having it 
in the parent coordinates works otherwise.




Re: [PATCH 2/2] protocol: Support scaled outputs and surfaces

2013-05-21 Thread John Kåre Alsaker
On Tue, May 21, 2013 at 5:35 PM, Bill Spitzak spit...@gmail.com wrote:
 However both proposals have this problem if pre-compositing is not done,
 and most practical shells I can figure out can't do pre-compositing because
 that requires another buffer for every parent, so maybe this is not a big
 deal.
Pre-compositing or compositing of individual windows into buffers will be
required for transparent subsurfaces which overlap another
subsurface if the compositor wants to change the opacity of the window (a
common effect).

On Mon, May 20, 2013 at 11:23 AM, Pekka Paalanen ppaala...@gmail.com wrote:

 Actually, it means that the surface coordinate system can change
 dramatically when a client sends a new buffer with a different scale,
 which then raises a bucketful of races: is an incoming event using new
 or old surface coordinates? That includes at least all input events
 with a surface position, and the shell geometry event.

This is not a new race. Resizing and surface content changing have the same
problem. Changing the scaling factor would be a relatively rare event too.
I believe I was told that the frame callback was usable as a separator of
events for frames. That could allow clients which are changing scaling
factors to translate old input correctly or simply ignore it.
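(Editorial aside: the suggested client-side bookkeeping might look like the following sketch; the class and its method names are hypothetical, not part of any proposal.)

```python
# Sketch: remember which buffer_scale was current for each
# committed frame, and interpret incoming events against the
# scale of the frame they belong to, so stale input after a
# scale change can be translated or dropped.
class ScaleTracker:
    def __init__(self):
        self.frames = {}          # frame serial -> scale
        self.current_scale = 1

    def commit(self, serial, scale):
        self.current_scale = scale
        self.frames[serial] = scale

    def scale_for_event(self, frame_serial):
        # Fall back to the current scale for unknown frames.
        return self.frames.get(frame_serial, self.current_scale)

t = ScaleTracker()
t.commit(1, 1)   # old frame committed at scale 1
t.commit(2, 2)   # client switched to a scale-2 buffer
assert t.scale_for_event(1) == 1   # stale input: old scale
assert t.scale_for_event(2) == 2   # new input: new scale
```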


Re: [PATCH 2/2] protocol: Support scaled outputs and surfaces

2013-05-21 Thread Pekka Paalanen
On Tue, 21 May 2013 08:35:53 -0700
Bill Spitzak spit...@gmail.com wrote:

 On 05/20/2013 11:46 PM, Pekka Paalanen wrote:
 
  Let's say the output is 10,000dpi and the compositor has set its scale
  to 100. Can a client make a buffer that is 10,050 pixels wide appear 1:1
  on the pixels of this output? It looks to me like only multiples of 100
  are possible.
 
  As far as I understand, that is correct.
 
  But it does not matter. You cannot employ any widgets or widget parts
  that would need a finer resolution than 100 px steps, because a) the
  user cannot clearly see them, and b) the user cannot clearly poke them
  with e.g. a pointer, since they are so small. So there is no need to
  have window size in finer resolution either. Even a resize handle in a
  window border would have to be at least 300 pixels thick to be usable.
 
 This proposal does not actually restrict widget positions or line sizes, 
 since they are drawn by the client at buffer resolution. Although 

No, but I expect the toolkits may.

 annoying, the outside buffer size is not that limiting. The client can 
 just place a few transparent pixels along the edge to make it look like 
 it is any size.
 
 However it does restrict the positions of widgets that use subsurfaces.
 
 I see this as a serious problem and I'm not sure why you don't think it 
 is. It is an arbitrary artificial limit in the api that has nothing to 
 do with any hardware limits.

It is a design decision with the least negative impact, and it is
not serious. Sub-surfaces will not be that common, and they
certainly will not be used for common widgets like buttons.

 The reason you want to position widgets at finer positions is so they 
 can be positioned evenly, and so they can be moved smoothly, and so they 
 can be perfectly aligned with hi-resolution graphics.

But why? You have a real, compelling use case? Otherwise it just
complicates things.

Remember, sub-surfaces are not supposed to be just any widgets.
They are video and openGL canvases, and such.

  How can you say that? Where did you get the specification of how scaler
  interacts with buffer_scale? We didn't write any yet.
 
 It is pretty obvious that if the parent has a scale and the child has 
 one, these scales are multiplied to get the transform from the child to 
 the parent's parent.

A what? No way, buffer_scale is private to a surface, and does not
affect any other surface, not even sub-surfaces. It is not
inherited, that would be insane.

The same goes with the scaler proposal, it is private to a surface,
and not inherited. They affect the contents, not the surface.

 It is true that the resulting scale if the hi-dpi and scaler are applied 
 to the *SAME* surface is not yet written.
 
  And what is this talk about parent surfaces?
 
 The subsurfaces have a parent. For main surfaces the parent is the 
 compositor coordinate space.

There is no compositor coordinate space in the protocol. There
are only surface coordinates, and now to a small extent we are
getting buffer coordinates.

Still, this parent reference made no sense in the context you used it.

  The input rectangle is in the same direction as the output rectangle
  even if the buffer is rotated 90 degrees by the buffer_transform.
 
 Yes exactly. Thus it is a different space than the buffer pixels, as 
 there may be a 90 degree rotation / reflections, and translation to put 
 the origin in different corners of the buffer.

Glad to see you agree with yourself.

  How could you ever arrive at non-integer buffer sizes in the earlier
  proposal?
 
 If the scale is 3/2 then specifying the surface size as 33 means the 
 buffer is 49.5 pixels wide. I guess this is a protocol error? Still 
 seems really strange to design the api so this is possible at all.

We have one scale factor which is integer. How can you come up with 3/2?

Even if you took the scaler extension into play, that will only
produce integers, no matter at which point of coordinate
transformations it is applied at.

  Aligning sub-surfaces is still possible if anyone cares about that, one
  just has to take the scale into account. That's a drawing problem. If
  you had a scale 1 output and buffers, you cannot align to fractional
  pixels, anyway.
 
 If there is a scale of 2 you cannot align to the odd pixels. And  a 
 scale of 3/2 means you *can* align to fractional pixels.
 
  Why would pre-compositing not be possible in some cases?
 
 Because it would require rendering a fractional-pixel aligned version of 
 the subsurface and compositing that with the parent. This may make 
 unwanted graphics leak through the anti-aliased edge. The most obvious 
 example is if there are two subsurfaces and you try to make their edges 
 touch.

Umm, but since sub-surface positions and sizes are always integers
in the surface coordinate system, the edges will always align
perfectly, regardless of the individual buffer_scales.
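(Editorial aside: a tiny sketch of why the integer surface grid keeps edges aligned; the layout below is an assumed example, not from the thread.)

```python
# Two side-by-side sub-surfaces positioned on the integer surface
# grid: the shared edge maps to the same output pixel column for
# any integer output scale, regardless of each buffer's own
# buffer_scale, so the edges always touch exactly.
def edge_in_output_pixels(surface_x, output_scale):
    return surface_x * output_scale

left_width = 51           # left sub-surface width in surface units
right_x = 0 + left_width  # right sub-surface placed flush against it

for output_scale in (1, 2, 100):
    left_edge_end = edge_in_output_pixels(left_width, output_scale)
    right_edge_start = edge_in_output_pixels(right_x, output_scale)
    # Integer surface coordinates: the edges coincide exactly.
    assert left_edge_end == right_edge_start
```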

 However both proposals have this problem if pre-compositing is not 

Re: [PATCH 2/2] protocol: Support scaled outputs and surfaces

2013-05-20 Thread Pekka Paalanen
Hi Alexander,

nice to see this going forward, and sorry for replying so rarely and
late.

On Thu, 16 May 2013 15:49:36 +0200
al...@redhat.com wrote:

 From: Alexander Larsson al...@redhat.com
 
 This adds the wl_surface.set_buffer_scale request, and a wl_output.scale
 event. These together let us support automatic upscaling of old
 clients on very high resolution monitors, while allowing new clients
 to take advantage of this to render at the higher resolution when the
 surface is displayed on the scaled output.
 
 It is similar to set_buffer_transform in that the buffer is stored in
 transformed pixels (in this case scaled). This means that if an output
 is scaled we can directly use the pre-scaled buffer with additional data,
 rather than having to scale it.
 
 Additionally this adds a scaled flag to the wl_output.mode flags
 so that clients know which resolutions are native and which are scaled.
 
 Also, in places where the documentation was previously not clear as to
 what coordinate system was used this was fleshed out.
 
 It also adds a scaling_factor event to wl_output that specifies the
 scaling of an output.
 
 This is meant to be used for outputs with a very high DPI to tell the
 client that this particular output has subpixel precision. Coordinates
 in other parts of the protocol, like input events, relative window
 positioning and output positioning are still in the compositor space

We don't have a single compositor space.

This needs some way of explaining that surface coordinates are always
the same, regardless of the attached buffer's scale. That is, surface
coordinates always correspond to the size of a buffer with scale 1.
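(Editorial aside: that invariant can be sketched in two lines; the helper name is mine, not protocol API.)

```python
# The surface size a client "sees" is the attached buffer's size
# divided by buffer_scale, so surface coordinates stay the same
# whether a scale-1 or a scale-2 buffer is attached.
def surface_size_from_buffer(buffer_w, buffer_h, buffer_scale):
    return buffer_w // buffer_scale, buffer_h // buffer_scale

# The same 400x300 window, rendered normally or for hi-dpi:
assert surface_size_from_buffer(400, 300, 1) == (400, 300)
assert surface_size_from_buffer(800, 600, 2) == (400, 300)
```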

 rather than the scaled space. However, input has subpixel precision
 so you can still get input at full resolution.
 
 This setup means global properties like mouse acceleration/speed,
 pointer size, monitor geometry, etc can be specified in a mostly
 similar resolution even on a multimonitor setup where some monitors
 are low dpi and some are e.g. retina-class outputs.
 ---
  protocol/wayland.xml | 107 ++++++++++++++++++++++++++++++++++++++++---
  1 file changed, 93 insertions(+), 14 deletions(-)
 
 diff --git a/protocol/wayland.xml b/protocol/wayland.xml
 index d3ae149..acfb140 100644
 --- a/protocol/wayland.xml
 +++ b/protocol/wayland.xml
 @@ -173,7 +173,7 @@
  /event
/interface
  
 -  interface name=wl_compositor version=2
 +  interface name=wl_compositor version=3

Ok.

  description summary=the compositor singleton
A compositor.  This object is a singleton global.  The
compositor is in charge of combining the contents of multiple
 @@ -709,7 +709,7 @@
  
   The x and y arguments specify the locations of the upper left
   corner of the surface relative to the upper left corner of the
 - parent surface.
 + parent surface, in surface local coordinates.
  
   The flags argument controls details of the transient behaviour.
/description
 @@ -777,6 +777,10 @@
   in any of the clients surfaces is reported as normal, however,
   clicks in other clients surfaces will be discarded and trigger
   the callback.
 +
 + The x and y arguments specify the locations of the upper left
 + corner of the surface relative to the upper left corner of the
 + parent surface, in surface local coordinates.

Surface local coordinates are defined to have their origin in the
surface top-left corner. If that is defined once and for all, you don't
have to repeat relative to upper left... everywhere.

Surface local coordinates relative to anything else do not exist.

When these were originally written in the spec, the term surface
coordinates had not settled yet.

/description
  
arg name=seat type=object interface=wl_seat summary=the 
 wl_seat whose pointer is used/
 @@ -860,6 +864,9 @@
  
   The client is free to dismiss all but the last configure
   event it received.
 +
 + The width and height arguments specify the size of the window
 + in surface local coordinates.

Yes, window is definitely the correct term here. Saying surface
would be incorrect, due to sub-surfaces, if I recall my discussion with
Giulio Camuffo right.

/description
  
arg name=edges type=uint/
 @@ -876,11 +883,16 @@
  /event
/interface
  
 -  interface name=wl_surface version=2
 +  interface name=wl_surface version=3
  description summary=an onscreen surface
A surface is a rectangular area that is displayed on the screen.
It has a location, size and pixel contents.
  
 +  The size of a surface (and relative positions on it) is described

The size of the surface and positions on the surface are described...?

 +  in surface local coordinates, which may differ from the buffer
 +  local coordinates of the pixel content, in case a buffer_transform
 +  or a buffer_scale is used.

I think we could additionally define here, that surface local

Re: [PATCH 2/2] protocol: Support scaled outputs and surfaces

2013-05-20 Thread Pekka Paalanen
On Thu, 16 May 2013 16:43:52 -0500
Jason Ekstrand ja...@jlekstrand.net wrote:

 The point of this is to allow surfaces to render the same size on
 different density outputs.

Are you serious? Really? Same size measured in meters?

I do not think that will ever work:
http://blogs.gnome.org/danni/2011/12/15/more-on-dpi/
and doing it via scaling is going to be worse.

Going for the same size is a very different problem than just trying to
get all apps readable by default. I'm not sure same size is a better
goal than same look.

And on a side note:
http://web.archive.org/web/20120102153021/http://www.fooishbar.org/blog

Which email was your detailed proposition?


Thanks,
pq


Re: [PATCH 2/2] protocol: Support scaled outputs and surfaces

2013-05-20 Thread Pekka Paalanen
On Fri, 17 May 2013 12:06:35 -0700
Bill Spitzak spit...@gmail.com wrote:

 Alexander Larsson wrote:
 
  You can make a surface of any integer size (and it has to be integer due
  to existing APIs on surface coordinates/sizes), however the *buffer* has
  to be an integer multiple of the surface size. In other words, surface
  sizes and positions are described in the global compositor space, with
  integer sizes.
 
 This seems pretty limiting to me. What happens when *all* the outputs 
 are hi-res? You really think wayland clients should not be able to take 
 full advantage of this?

Then the individual pixels are so small that it won't matter.

If the pixels are not small enough to not matter, then you just set
scale=1 on everything, and the image is still legible.

  If nothing else it makes it so that subsurfaces are
  always positioned on integer positions on non-scaled displays, which
  makes things easier when monitors of different scales are mixed.
 
 This is false if the subsurface is attached to a scaled parent surface.

Huh?

  I see it the other way. We currently have *two* coordinate spaces that
  the client has to think about. The buffer coordinates (it has to know
  this when rendering), and the surface coordinates (these are basically
  what all wayland APIs atm use, like in damage, positioning and input).
  The transform between two is currently the buffer_transform only. With
  the buffer_scale the transform is extended to also scale, but no
  additional coordinate space is added.
 
 The input rectangle to the scaler proposal is in the space between the 
 buffer transform and the scaling. Therefore there are *three* coordinate 
 spaces.

Where did you get this? Where is this defined or proposed?

 My proposal is that surface space be moved before the scaling. This 
 reduces the number of spaces back to two by using the same space for 
 input rectangle as for events and surface size, etc. It also means 
 integers always have a physical meaning for the client (ie buffer 
 pixels) and that odd-sized buffers are supported on the hi-res display.

On a quick thought, that seems only a different way of doing it,
without any benefits, and possibly having cons.

Actually, it means that the surface coordinate system can change
dramatically when a client sends a new buffer with a different scale,
which then raises a bucketful of races: is an incoming event using new
or old surface coordinates? That includes at least all input events
with a surface position, and the shell geometry event.

For the record, wl_surface.attach changes the surface coordinate system
by translating with x,y, but that is not a problem. The x,y do not
describe how the surface moves, they describe how pixel rows and
columns are added or removed on the edges. This means that the content
is presumed to stay put on screen. It's also hard to click a specific
point in a window whose size is changing, and the translation is not
dramatic. Even when one might claim that attach has the same problem as
your proposal, in practice it does not.
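(Editorial aside: the attach x,y convention described above can be sketched as follows; the global position used for the demonstration is an assumption of the example.)

```python
# wl_surface.attach's x,y say how many pixel rows/columns are
# added or removed on the edges: a negative dx grows the window
# leftwards while the existing content stays put on screen.
def apply_attach(global_x, dx):
    """New global position of the window's left edge after attach."""
    return global_x + dx

# A window at x=100, 200 units wide, grows 10 units on the left
# (dx = -10, new width 210):
new_x = apply_attach(100, -10)
assert new_x == 90                 # left edge moved left
assert new_x + 210 == 100 + 200    # right edge did not move
```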

- pq


Re: [PATCH 2/2] protocol: Support scaled outputs and surfaces

2013-05-20 Thread Jason Ekstrand
On Mon, May 20, 2013 at 4:00 AM, Pekka Paalanen ppaala...@gmail.com wrote:

 On Thu, 16 May 2013 16:43:52 -0500
 Jason Ekstrand ja...@jlekstrand.net wrote:

  The point of this is to allow surfaces to render the same size on
  different density outputs.

 Are you serious? Really? Same size measured in meters?


No, measured in inches. :-P

Seriously though.  While we can't make it *exactly* the same on all your
displays, we should be able to make it usably close.


 I do not think that will ever work:
 http://blogs.gnome.org/danni/2011/12/15/more-on-dpi/
 and doing it via scaling is going to be worse.


Yes, scaling looks bad.  I don't know that we can avoid it in all cases
(see also the 200DPI and 300 DPI case).


 Going for the same size is a very different problem than just trying to
 get all apps readable by default. I'm not sure same size is a better
 goal than same look.

 And on a side note:
 http://web.archive.org/web/20120102153021/http://www.fooishbar.org/blog


What I would like in the end is a per-output slider bar (or something of
that ilk) that lets the user select the interface size on that output.
Sure, they probably won't be able to select *any* resolution (the
compositor may limit it to multiples of 24 dpi or something).  And they can
certainly make an ugly set-up for themselves.  However, I want them to be
able to make something more-or-less reasonable and I see no reason why the
compositor shouldn't coordinate this and why this scale factor can't be
used for that.

My primary concern is that integer multiples of 96 DPI isn't going to be
enough granularity.  I don't know whether we can really accomplish a higher
granularity in a reasonable way.


 Which email was your detailed proposition?


Alexander already gave me a very good reason why my original idea won't
work (too restrictive on the protocol) and I now agree with him.  That
said, I'd like to find some way to accomplish the above.


Thanks,
--Jason Ekstrand


Re: [PATCH 2/2] protocol: Support scaled outputs and surfaces

2013-05-20 Thread Bill Spitzak

Pekka Paalanen wrote:

This seems pretty limiting to me. What happens when *all* the outputs 
are hi-res? You really think wayland clients should not be able to take 
full advantage of this?


Then the individual pixels are so small that it won't matter.


It does not matter how tiny the pixels are. The step between possible 
surface sizes and subsurface positions remains the size of a scale-1 
unit. Or else I am seriously mis-understanding the proposal:


Let's say the output is 10,000 dpi and the compositor has set its scale 
to 100. Can a client make a buffer that is 10,050 pixels wide appear 1:1 
on the pixels of this output? It looks to me like only multiples of 100 
are possible.



If nothing else it makes it so that subsurfaces are
always positioned on integer positions on non-scaled displays, which
makes things easier when monitors of different scales are mixed.

This is false if the subsurface is attached to a scaled parent surface.


Huh?


Parent surface uses the scaler api to change a buffer width of 100 to 
150. The fullscreen and this hi-dpi interface can also produce similar 
scales. The subsurface has a width of 51. Either the left or right edge 
is going to land in the middle of an output pixel.


The input rectangle to the scaler proposal is in the space between the 
buffer transform and the scaling. Therefore there are *three* coordinate 
spaces.


Where did you get this? Where is this defined or proposed?


The input rectangle is in the same direction as the output rectangle 
even if the buffer is rotated 90 degrees by the buffer_transform.



On a quick thought, that seems only a different way of doing it,
without any benefits, and possibly having cons.


Benefits: the buffer can be any integer number of pixels in size, 
non-integer buffer sizes cannot be specified by the api, you can align 
subsurfaces with pixels in the buffer (which means a precomposite of 
subsurfaces into the main one before scaling is possible).



Actually, it means that the surface coordinate system can change
dramatically when a client sends a new buffer with a different scale,
which then raises a bucketful of races: is an incoming event using new
or old surface coordinates? That includes at least all input events
with a surface position,


This is a good point and the only counter argument that makes sense.

All solutions I can think of are equivalent to reporting events in the 
output space, the same as your proposal. However I still feel that the 
surface size, input area, and other communication from client to server 
should be specified in input space.



and the shell geometry event.


Geometry is in the space of the parent surface, not this surface. This 
is true in both proposals. Both would get exactly the same geometry events.



For the record, wl_surface.attach changes the surface coordinate system
by translating with x,y, but that is not a problem. The x,y do not
describe how the surface moves, they describe how pixel rows and
columns are added or removed on the edges.


If x,y is in buffer pixels then it matches my proposal. It can change 
the results of the scaler to non-integers, so I was under the 
impression it would be ignored in this case. Assuming logical use of 
hi-dpi I don't see a problem with it being in buffer pixels.




Re: [PATCH 2/2] protocol: Support scaled outputs and surfaces

2013-05-17 Thread Alexander Larsson
On tor, 2013-05-16 at 10:57 -0700, Bill Spitzak wrote:
 al...@redhat.com wrote:
 
  Coordinates
  in other parts of the protocol, like input events, relative window
  positioning and output positioning are still in the compositor space
  rather than the scaled space. However, input has subpixel precision
  so you can still get input at full resolution.
 
 If I understand this correctly, this means that a client that is aware 
 of the high-dpi is still unable to make a surface with a size that is 
 not a multiple of the scale, or to move the x/y by an amount that is not 
 a multiple of the scale, or position subsurfaces at this level of accuracy.

You can make a surface of any integer size (and it has to be integer due
to existing APIs on surface coordinates/sizes), however the *buffer* size
has to be an integer multiple of the surface size. In other words, surface
sizes and positions are described in the global compositor space, with
integer sizes. (This is already true, although currently the
buffer-surface mapping is just reflections and rotations.)

It's true that this limits subsurface positioning, but I'm not sure that
is a huge issue. If nothing else it makes it so that subsurfaces are
always positioned on integer positions on non-scaled displays, which
makes things easier when monitors of different scales are mixed.

 The only way I can see to make it work is that all protocol must be in 
 buffer space (or perhaps in buffer space after the rotation/reflection 
 defined by buffer_transform). This also has the advantage (imho) of 
 getting rid of one of the coordinate spaces a client has to think about.

I see it the other way. We currently have *two* coordinate spaces that
the client has to think about. The buffer coordinates (it has to know
this when rendering), and the surface coordinates (these are basically
what all wayland APIs atm use, like in damage, positioning and input).
The transform between the two is currently the buffer_transform only. With
the buffer_scale the transform is extended to also scale, but no
additional coordinate space is added.

However, if we make the protocol work in post-translation but pre-scale
space we're adding a new coordinate space. And, we can't make the
protocol work in fully buffer coordinates, because that would break
existing clients since current APIs work in post-translation
coordinates.



Re: [PATCH 2/2] protocol: Support scaled outputs and surfaces

2013-05-17 Thread Alexander Larsson
On tor, 2013-05-16 at 10:05 -0500, Jason Ekstrand wrote:

 I still think we can solve this problem better if the clients, instead
 of providing some sort of pre-scaled buffer that matches the output's
 arbitrary scale factor, simply told the compositor which output they
 rendered for. Then everything will be in that output's coordinates.  If
 the surface ever lands on a different output, the compositor can scale
 everything relative to the selected output.  Surfaces which do not
 specify an output would just get scaled by the factor.  This has three
 advantages.

I don't like this. First of all, there is no way to know the output's
coordinates other than reading the wl_output transform and scale from
the events. In fact, the output coordinate space is intentionally
hidden from the client; as Pekka said, "The surface
transform, that is private to the compositor, could warp the surface
along a curve for all we care."

Furthermore, claiming that a client renders in output space means that
we lock down the specification of the output transform, as old clients
are claiming to be rendering for a given output. I.e. if we had used
this model before (say we rendered for an output, rather than giving a
specific buffer_transform) we wouldn't be able to extend the output
transform by adding a buffer_scale.




Re: [PATCH 2/2] protocol: Support scaled outputs and surfaces

2013-05-17 Thread Bill Spitzak

Alexander Larsson wrote:


You can make a surface of any integer size (and it has to be integer due
to existing APIs on surface coordinates/sizes), however the *buffer* size
has to be an integer multiple of the surface size. In other words, surface
sizes and positions are described in the global compositor space, with
integer sizes.


This seems pretty limiting to me. What happens when *all* the outputs 
are hi-res? You really think wayland clients should not be able to take 
full advantage of this?



If nothing else it makes it so that subsurfaces are
always positioned on integer positions on non-scaled displays, which
makes things easier when monitors of different scales are mixed.


This is false if the subsurface is attached to a scaled parent surface.


I see it the other way. We currently have *two* coordinate spaces that
the client has to think about. The buffer coordinates (it has to know
this when rendering), and the surface coordinates (these are basically
what all wayland APIs atm use, like in damage, positioning and input).
The transform between the two is currently the buffer_transform only. With
the buffer_scale the transform is extended to also scale, but no
additional coordinate space is added.


The input rectangle to the scaler proposal is in the space between the 
buffer transform and the scaling. Therefore there are *three* coordinate 
spaces.


My proposal is that surface space be moved before the scaling. This 
reduces the number of spaces back to two by using the same space for 
input rectangle as for events and surface size, etc. It also means 
integers always have a physical meaning for the client (i.e. buffer 
pixels) and that odd-sized buffers are supported on the hi-res display.


The number of spaces could be reduced by one more by moving everything to 
before the buffer_transform. However this would change existing wayland api.



[PATCH 2/2] protocol: Support scaled outputs and surfaces

2013-05-16 Thread alexl
From: Alexander Larsson al...@redhat.com

This adds the wl_surface.set_buffer_scale request, and a wl_output.scale
event. Together these let us support automatic upscaling of old
clients on very high resolution monitors, while allowing new clients
to take advantage of this to render at the higher resolution when the
surface is displayed on the scaled output.

It is similar to set_buffer_transform in that the buffer is stored in
transformed pixels (in this case scaled). This means that if an output
is scaled we can directly use the pre-scaled buffer, which carries the
additional detail, rather than having to scale it up.

Additionally this adds a scaled flag to the wl_output.mode flags
so that clients know which resolutions are native and which are scaled.

Also, in places where the documentation was previously not clear as to
what coordinate system was used, this has been fleshed out.

It also adds a scaling_factor event to wl_output that specifies the
scaling of an output.

This is meant to be used for outputs with a very high DPI to tell the
client that this particular output has subpixel precision. Coordinates
in other parts of the protocol, like input events, relative window
positioning and output positioning are still in the compositor space
rather than the scaled space. However, input has subpixel precision
so you can still get input at full resolution.

This setup means global properties like mouse acceleration/speed,
pointer size, monitor geometry, etc can be specified in a mostly
similar resolution even on a multimonitor setup where some monitors
are low dpi and some are e.g. retina-class outputs.
---
 protocol/wayland.xml | 107 ++++++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 93 insertions(+), 14 deletions(-)

diff --git a/protocol/wayland.xml b/protocol/wayland.xml
index d3ae149..acfb140 100644
--- a/protocol/wayland.xml
+++ b/protocol/wayland.xml
@@ -173,7 +173,7 @@
 /event
   /interface
 
-  interface name=wl_compositor version=2
+  interface name=wl_compositor version=3
 description summary=the compositor singleton
   A compositor.  This object is a singleton global.  The
   compositor is in charge of combining the contents of multiple
@@ -709,7 +709,7 @@
 
The x and y arguments specify the locations of the upper left
corner of the surface relative to the upper left corner of the
-   parent surface.
+   parent surface, in surface local coordinates.
 
The flags argument controls details of the transient behaviour.
   /description
@@ -777,6 +777,10 @@
in any of the clients surfaces is reported as normal, however,
clicks in other clients surfaces will be discarded and trigger
the callback.
+
+   The x and y arguments specify the locations of the upper left
+   corner of the surface relative to the upper left corner of the
+   parent surface, in surface local coordinates.
   /description
 
   arg name=seat type=object interface=wl_seat summary=the wl_seat 
whose pointer is used/
@@ -860,6 +864,9 @@
 
The client is free to dismiss all but the last configure
event it received.
+
+   The width and height arguments specify the size of the window
+   in surface local coordinates.
   /description
 
   arg name=edges type=uint/
@@ -876,11 +883,16 @@
 /event
   /interface
 
-  interface name=wl_surface version=2
+  interface name=wl_surface version=3
 description summary=an onscreen surface
   A surface is a rectangular area that is displayed on the screen.
   It has a location, size and pixel contents.
 
+  The size of a surface (and relative positions on it) is described
+  in surface local coordinates, which may differ from the buffer
+  local coordinates of the pixel content, in case a buffer_transform
+  or a buffer_scale is used.
+
   Surfaces are also used for some special purposes, e.g. as
   cursor images for pointers, drag icons, etc.
 /description
@@ -895,20 +907,25 @@
   description summary=set the surface contents
Set a buffer as the content of this surface.
 
+   The new size of the surface is calculated based on the buffer
+   size transformed by the inverse buffer_transform and the
+   inverse buffer_scale. This means that the supplied buffer
+   must be an integer multiple of the buffer_scale.
+
The x and y arguments specify the location of the new pending
-   buffer's upper left corner, relative to the current buffer's
-   upper left corner. In other words, the x and y, and the width
-   and height of the wl_buffer together define in which directions
-   the surface's size changes.
+   buffer's upper left corner, relative to the current buffer's upper
+   left corner, in surface local coordinates. In other words, the
+   x and y, combined with the new surface size define in which
+   directions the surface's size changes.
 
Surface contents are 

Re: [PATCH 2/2] protocol: Support scaled outputs and surfaces

2013-05-16 Thread Jason Ekstrand
On May 16, 2013 8:44 AM, al...@redhat.com wrote:

 From: Alexander Larsson al...@redhat.com

 This adds the wl_surface.set_buffer_scale request, and a wl_output.scale
 event. Together these let us support automatic upscaling of old
 clients on very high resolution monitors, while allowing new clients
 to take advantage of this to render at the higher resolution when the
 surface is displayed on the scaled output.

 It is similar to set_buffer_transform in that the buffer is stored in
 transformed pixels (in this case scaled). This means that if an output
 is scaled we can directly use the pre-scaled buffer, which carries the
 additional detail, rather than having to scale it up.

 Additionally this adds a scaled flag to the wl_output.mode flags
 so that clients know which resolutions are native and which are scaled.

 Also, in places where the documentation was previously not clear as to
 what coordinate system was used, this has been fleshed out.

 It also adds a scaling_factor event to wl_output that specifies the
 scaling of an output.

 This is meant to be used for outputs with a very high DPI to tell the
 client that this particular output has subpixel precision. Coordinates
 in other parts of the protocol, like input events, relative window
 positioning and output positioning are still in the compositor space
 rather than the scaled space. However, input has subpixel precision
 so you can still get input at full resolution.

 This setup means global properties like mouse acceleration/speed,
 pointer size, monitor geometry, etc can be specified in a mostly
 similar resolution even on a multimonitor setup where some monitors
 are low dpi and some are e.g. retina-class outputs.

This looks better.

I still think we can solve this problem better if the clients, instead of
providing some sort of pre-scaled buffer that matches the output's
arbitrary scale factor, simply told the compositor which output they
rendered for. Then everything will be in that output's coordinates.  If the
surface ever lands on a different output, the compositor can scale
everything relative to the selected output.  Surfaces which do not specify
an output would just get scaled by the factor.  This has three advantages.

1. Everything is still pixel-perfect and there are no input rounding errors.

2. There is no confusion about things like subsurface positioning and
clients can place subsurfaces at any pixel, not just multiples of the scale
factor. (Subsurfaces would have to inherit their rendered-for-output
setting from the parent to keep everything sane.)

3. Since surfaces are scaled relative to their preferred output, the user
can specify arbitrary scaling factors for each output and is not
restricted to integers.

I proposed this in more detail in a previous email but no one bothered to
respond to it.

Thanks,
--Jason Ekstrand

 ---
  protocol/wayland.xml | 107 ++++++++++++++++++++++++++++++++++++++++++++--------
  1 file changed, 93 insertions(+), 14 deletions(-)

 diff --git a/protocol/wayland.xml b/protocol/wayland.xml
 index d3ae149..acfb140 100644
 --- a/protocol/wayland.xml
 +++ b/protocol/wayland.xml
 @@ -173,7 +173,7 @@
  /event
/interface

 -  interface name=wl_compositor version=2
 +  interface name=wl_compositor version=3
  description summary=the compositor singleton
A compositor.  This object is a singleton global.  The
compositor is in charge of combining the contents of multiple
 @@ -709,7 +709,7 @@

 The x and y arguments specify the locations of the upper left
 corner of the surface relative to the upper left corner of the
 -   parent surface.
 +   parent surface, in surface local coordinates.

 The flags argument controls details of the transient behaviour.
/description
 @@ -777,6 +777,10 @@
 in any of the clients surfaces is reported as normal, however,
 clicks in other clients surfaces will be discarded and trigger
 the callback.
 +
 +   The x and y arguments specify the locations of the upper left
 +   corner of the surface relative to the upper left corner of the
 +   parent surface, in surface local coordinates.
/description

arg name=seat type=object interface=wl_seat summary=the
wl_seat whose pointer is used/
 @@ -860,6 +864,9 @@

 The client is free to dismiss all but the last configure
 event it received.
 +
 +   The width and height arguments specify the size of the window
 +   in surface local coordinates.
/description

arg name=edges type=uint/
 @@ -876,11 +883,16 @@
  /event
/interface

 -  interface name=wl_surface version=2
 +  interface name=wl_surface version=3
  description summary=an onscreen surface
A surface is a rectangular area that is displayed on the screen.
It has a location, size and pixel contents.

 +  The size of a surface (and relative positions on it) is described
 +  in surface local coordinates, which 

Re: [PATCH 2/2] protocol: Support scaled outputs and surfaces

2013-05-16 Thread Bill Spitzak

al...@redhat.com wrote:


Coordinates
in other parts of the protocol, like input events, relative window
positioning and output positioning are still in the compositor space
rather than the scaled space. However, input has subpixel precision
so you can still get input at full resolution.


If I understand this correctly, this means that a client that is aware 
of the high-dpi is still unable to make a surface with a size that is 
not a multiple of the scale, or to move the x/y by an amount that is not 
a multiple of the scale, or position subsurfaces at this level of accuracy.


The only way I can see to make it work is that all protocol must be in 
buffer space (or perhaps in buffer space after the rotation/reflection 
defined by buffer_transform). This also has the advantage (imho) of 
getting rid of one of the coordinate spaces a client has to think about.



Re: [PATCH 2/2] protocol: Support scaled outputs and surfaces

2013-05-16 Thread Bill Spitzak

Jason Ekstrand wrote:

I still think we can solve this problem better if the clients, instead 
of providing some sort of pre-scaled buffer that matches the output's 
arbitrary scale factor, simply told the compositor which output they 
rendered for.


That is equivalent to providing a scale factor, except that the scale 
factor has to match one of the outputs.


A client will not be able to make a low-dpi surface if there are only 
high-dpi outputs, which seems pretty limiting.


You could say that the scaler api would be used in that case, but this 
brings up the big question of why this api and the scaler are different 
when they serve the same purpose?



Re: [PATCH 2/2] protocol: Support scaled outputs and surfaces

2013-05-16 Thread Jason Ekstrand
On May 16, 2013 1:11 PM, Bill Spitzak spit...@gmail.com wrote:

 Jason Ekstrand wrote:

 I still think we can solve this problem better if the clients, instead
of providing some sort of pre-scaled buffer that matches the output's
arbitrary scale factor, simply told the compositor which output they
rendered for.


 That is equivalent to providing a scale factor, except that the scale
factor has to match one of the outputs.

What I didn't mention here but did before is that this could be combined
with an integer scale factor in case you want to render at a multiple.  If
you throw that in, I think it covers all of the interesting cases.


 A client will not be able to make a low-dpi surface if there are only
high-dpi outputs, which seems pretty limiting.

If you want a low DPI surface you can just not specify the scale/output at
all. Then it will just assume something like 100dpi and scale.


 You could say that the scaler api would be used in that case, but this
brings up the big question of why this api and the scaler are different
when they serve the same purpose?

The point of this is to allow surfaces to render at the same size on
different density outputs. The point of the scaler api is to allow a
surface to render at a different resolution than its specified size. The
two are orthogonal.

--Jason Ekstrand