Re: EFL/Wayland and xdg-shell

2015-04-16 Thread Jan Arne Petersen

Hi,

On 16.04.2015 00:51, Carsten Haitzler (The Rasterman) wrote:

actually the other way around... clients know where the vkbd region(s) are
so client can shuffle content to be visible. :)


In a VKB (rather than overlay-helper, as used for complex composition)
scenario, I would expect xdg-shell to send a configure event to resize
the window and allow for the VKB. If this isn't sufficient - I
honestly don't know what the behaviour is under X11 - then a potential
version bump of wl_text could provide for this.


no - resizing is a poorer solution. tried that in x11. first obvious port of
call. imagine vkbd is partly translucent... you want it to still be over window
content. imagine a kbd split onto left and right halves, one in the middle of
the left and right edges of the screen (because screen is bigger). :)


Yes on the Nokia N9 we also solved the problem with widget relocation 
instead of resizing (see 
http://www.jonnor.com/files/maliit/maliit-lmt-technical-overview-widget-reloc.pdf)


I will add an event to the text protocol to support that.
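For illustration, the client-side relocation could look roughly like this. This is a sketch only: it assumes a hypothetical event delivering keyboard rectangles in window coordinates, and none of the names are real protocol API.

```python
# Sketch: given the caret line and the keyboard rectangles a hypothetical
# text-protocol event might deliver as (top_y, height) in window coordinates,
# compute how far to scroll content up so the caret stays uncovered.

def visible_shift(cursor_y, cursor_h, kbd_rects):
    """Return the upward shift in px needed to keep the caret visible."""
    shift = 0
    for top, height in kbd_rects:
        # does the (already shifted) caret line overlap this rectangle?
        if cursor_y - shift < top + height and cursor_y + cursor_h - shift > top:
            # move content up just enough to clear the rectangle's top edge
            shift = cursor_y + cursor_h - top
    return shift
```

A toolkit would apply the shift to its scroll offset whenever the keyboard geometry changes, and undo it when the keyboard hides.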

Regards
Jan Arne

--
Jan Arne Petersen | jan.peter...@kdab.com | Senior Software Engineer
KDAB (Deutschland) GmbH & Co. KG, a KDAB Group company
Tel: +49-30-521325470
KDAB - The Qt Experts
___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


Re: EFL/Wayland and xdg-shell

2015-04-16 Thread Daniel Stone
Hi,

On 15 April 2015 at 23:51, Carsten Haitzler ras...@rasterman.com wrote:
 On Wed, 15 Apr 2015 20:29:32 +0100 Daniel Stone dan...@fooishbar.org said:
 On 15 April 2015 at 02:39, Carsten Haitzler ras...@rasterman.com wrote:
  not esoteric - an actual request from people making products.

 The reason I took that as 'esoteric' was that I assumed it was about
 free window rotation inside Weston: a feature which is absolutely
 pointless but as a proof-of-concept for a hidden global co-ordinate
 space. Makes a lot more sense for whole-display rotation. More below.

 not just whole display - but now imagine a table with a screen and touch and 4
 people around it one along each side and multiple windows floating about like
 scraps of paper... just an illustration where you'd want window-by-window
 rotation done by compositor as well.

Sure, but that's complex enough - and difficult enough to reason even
about the desired UI semantics - that it really wants a prototype
first, or even a mockup. How do you define orientation in a table
scenario? If you're doing gesture-based/constant rotation (rather than
quantised to 90°), how do you animate that, and where does the
threshold for relayout lie? Without knowing what to design for, it's
hard to come up with a protocol which makes sense.

Luckily, writing extensions is infinitely less difficult than under
X11, so the general approach has been to farm these out to separate
extensions and then bring them in later if they turn out to make sense
in a global context. The most relevant counter-example (and
anti-pattern) I can think of is XI2 multitouch, where changing things
was so difficult that we had to design for the moon from the get-go.
The result, well, was XI2 multitouch. Not my finest moment, but
luckily I stepped out before it was merged so can just blame Peter.

  actually the other way around... clients know where the vkbd region(s) are
  so client can shuffle content to be visible. :)

 In a VKB (rather than overlay-helper, as used for complex composition)
 scenario, I would expect xdg-shell to send a configure event to resize
 the window and allow for the VKB. If this isn't sufficient - I
 honestly don't know what the behaviour is under X11 - then a potential
 version bump of wl_text could provide for this.

 no - resizing is a poorer solution. tried that in x11. first obvious port of
 call. imagine vkbd is partly translucent... you want it to still be over
 window content. imagine a kbd split onto left and right halves, one in the
 middle of the left and right edges of the screen (because screen is
 bigger). :)

Yeah, gotcha; it does fall apart after more than about a minute's
thought. Seems like this has been picked up though, so happy days.

  (pretend a phone with 4 external monitors attached).

 Hell of a phone. More seriously, yes, a display-management API could
 expose this, however if the aim is for clients to communicate intent
 ('this is a presentation') rather than for compositors to communicate
 situation ('this is one of the external monitors'), then we probably
 don't need this. wl_output already provides the relative geometry, so
 all that is required for this is a way to communicate output type.

 i was thinking a simplified geometry. then again client toolkits can figure
 that out and present a simplified enum or what not to the app too. but yes -
 some enumerated type attached to the output would be very nice. smarter
 clients can decide their intent based on what is listed as available - adapt
 to the situation. dumber ones will just ask for a fixed type and deal with it
 if they don't get it.

I think exposing an output type would be relatively uncontroversial.
The fullscreen request already takes a target output; would that cover
your uses, or do you really need to request initial presentation of
non-fullscreen windows on particular outputs? (Actually, I can see
that: you'd want your PDF viewer's primary view to tack to your
internal output, and its presentation view aimed at the external
output. Jasper/Manuel - any thoughts?)

  surfaces should be
  able to hint at usage - eg i want to be on the biggest tv. i want to be
  wherever you have a small mobile touch screen etc. compositor deals with
  deciding where they would go based on the current state of the world
  screen-wise and app hints.

 Right. So if we do have this client-intent-led interface (which would
 definitely be the most Wayland-y approach), then we don't need to
 advertise output types and wl_output already deals with the rest, so
 no change required here?

 well the problem here is the client is not aware of the current situation. is
 that output on the right a tv on the other side of the room, or a projector,
 or perhaps an internal lcd panel? is it far from the user or touchable (touch
 surface). if it's touchable the app may alter ui (make buttons bigger - remove
 scrollbars to go into a touch ui mode as opposed to mouse driven...). maybe
 app is written for multitouch controls specifically and thus a display far
 from the user with a single mouse only will make the app useless? app should
 be able to know what TYPE of display it is on - what types are around and be
 able to ask for a type (may or may not get it). important thing is introducing
 the concept of a type and attaching it to outputs (and hints on surfaces).

Re: EFL/Wayland and xdg-shell

2015-04-16 Thread The Rasterman
On Thu, 16 Apr 2015 15:32:31 +0100 Daniel Stone dan...@fooishbar.org said:

 Hi,
 
 On 15 April 2015 at 23:51, Carsten Haitzler ras...@rasterman.com wrote:
  On Wed, 15 Apr 2015 20:29:32 +0100 Daniel Stone dan...@fooishbar.org said:
  On 15 April 2015 at 02:39, Carsten Haitzler ras...@rasterman.com wrote:
   not esoteric - an actual request from people making products.
 
  The reason I took that as 'esoteric' was that I assumed it was about
  free window rotation inside Weston: a feature which is absolutely
  pointless but as a proof-of-concept for a hidden global co-ordinate
  space. Makes a lot more sense for whole-display rotation. More below.
 
  not just whole display - but now imagine a table with a screen and touch
  and 4 people around it one along each side and multiple windows floating
  about like scraps of paper... just an illustration where you'd want
  window-by-window rotation done by compositor as well.
 
 Sure, but that's complex enough - and difficult enough to reason even
 about the desired UI semantics - that it really wants a prototype
 first, or even a mockup. How do you define orientation in a table
 scenario? If you're doing gesture-based/constant rotation (rather than
 quantised to 90°), how do you animate that, and where does the
 threshold for relayout lie? Without knowing what to design for, it's
 hard to come up with a protocol which makes sense.

sure - let's leave this until later.

 Luckily, writing extensions is infinitely less difficult than under
 X11, so the general approach has been to farm these out to separate
 extensions and then bring them in later if they turn out to make sense
 in a global context. The most relevant counter-example (and
 anti-pattern) I can think of is XI2 multitouch, where changing things
 was so difficult that we had to design for the moon from the get-go.
 The result, well, was XI2 multitouch. Not my finest moment, but
 luckily I stepped out before it was merged so can just blame Peter.

well, on 2 levels. extensions were basically a no-go area. but with x
client message events, properties you could extend x (well wm/comp/clients -
clients) fairly trivially. :) at this stage wayland is missing such a trivial
ad-hoc extending api. i actually might add one just for the purposes of
prototyping ideas - like just shovel some strings/ints etc. with some string
message name/id. anyway...
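such an ad-hoc channel might be no more than a named message carrying loosely typed arguments. a rough sketch of the idea - the framing and names here are invented purely to illustrate the prototyping shape, not any real wayland API:

```python
import json
import struct

# Sketch of an ad-hoc extension message: a string name plus loosely typed
# arguments, length-prefixed so it could ride over any byte stream.

def encode(name, *args):
    payload = json.dumps({"name": name, "args": list(args)}).encode()
    return struct.pack("=I", len(payload)) + payload

def decode(buf):
    (length,) = struct.unpack_from("=I", buf)
    msg = json.loads(buf[4:4 + length].decode())
    return msg["name"], msg["args"]
```

the point is only that a prototype needs one generic request/event pair, after which new messages cost nothing until they are worth formalising.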

   actually the other way around... clients know where the vkbd region(s)
   are so client can shuffle content to be visible. :)
 
  In a VKB (rather than overlay-helper, as used for complex composition)
  scenario, I would expect xdg-shell to send a configure event to resize
  the window and allow for the VKB. If this isn't sufficient - I
  honestly don't know what the behaviour is under X11 - then a potential
  version bump of wl_text could provide for this.
 
  no - resizing is a poorer solution. tried that in x11. first obvious port of
  call. imagine vkbd is partly translucent... you want it to still be over
  window content. imagine a kbd split onto left and right halves, one in the
  middle of the left and right edges of the screen (because screen is
  bigger). :)
 
 Yeah, gotcha; it does fall apart after more than about a minute's
 thought. Seems like this has been picked up though, so happy days.
 
   (pretend a phone with 4 external monitors attached).
 
  Hell of a phone. More seriously, yes, a display-management API could
  expose this, however if the aim is for clients to communicate intent
  ('this is a presentation') rather than for compositors to communicate
  situation ('this is one of the external monitors'), then we probably
  don't need this. wl_output already provides the relative geometry, so
  all that is required for this is a way to communicate output type.
 
  i was thinking a simplified geometry. then again client toolkits can figure
  that out and present a simplified enum or what not to the app too. but yes -
  some enumerated type attached to the output would be very nice. smarter
  clients can decide their intent based on what is listed as available -
  adapt to the situation. dumber ones will just ask for a fixed type and
  deal with it if they don't get it.
 
 I think exposing an output type would be relatively uncontroversial.
 The fullscreen request already takes a target output; would that cover
 your uses, or do you really need to request initial presentation of
 non-fullscreen windows on particular outputs? (Actually, I can see
 that: you'd want your PDF viewer's primary view to tack to your
 internal output, and its presentation view aimed at the external
 output. Jasper/Manuel - any thoughts?)

yes - any surface, anywhere, any time. :)

   surfaces should be
   able to hint at usage - eg i want to be on the biggest tv. i want to
   be wherever you have a small mobile touch screen etc. compositor deals
   with deciding where they would go based on the current state of the world
   screen-wise and app hints.
 
  Right. 

Re: EFL/Wayland and xdg-shell

2015-04-15 Thread Christian Stroetmann

On the 15th of April 2015 at 21:31, Daniel Stone wrote:

On 14 April 2015 at 04:19, Jasper St. Pierre jstpie...@mecheye.net wrote:

Boo hoo.

you're the only ones who want physically-based rendering raytraced
desktops.

Enlightenment is absolutely nothing like my desktop environment of
choice either, but this is staggeringly unnecessary. If you want
xdg_shell to actually just be gtk_shell_base, and be totally
unencumbered by the shackles of ever having to work with anyone else,
this is definitely the way to go about it. But core protocol
development is not that.

Be nice.



Who knows what the future brings? At least I can offer this information:

OSGLand [1] and Ontologics OntoLix officially started on the 9th of 
November 2014 [2].


Kind regards
C.S.

[1] OSGLand 
www.ontolinux.com/technology/ontographics/ontographics.htm#osgland

[2] Ontologics OntoLix officially


Re: EFL/Wayland and xdg-shell

2015-04-15 Thread Daniel Stone
On 14 April 2015 at 04:19, Jasper St. Pierre jstpie...@mecheye.net wrote:
 Boo hoo.

 you're the only ones who want physically-based rendering raytraced
 desktops.

Enlightenment is absolutely nothing like my desktop environment of
choice either, but this is staggeringly unnecessary. If you want
xdg_shell to actually just be gtk_shell_base, and be totally
unencumbered by the shackles of ever having to work with anyone else,
this is definitely the way to go about it. But core protocol
development is not that.

Be nice.


Re: EFL/Wayland and xdg-shell

2015-04-15 Thread Daniel Stone
Hi,
Replies to both here ...

On 15 April 2015 at 02:39, Carsten Haitzler ras...@rasterman.com wrote:
 On Tue, 14 Apr 2015 01:31:56 +0100 Daniel Stone dan...@fooishbar.org said:
 On 14 April 2015 at 01:02, Bryce Harrington br...@osg.samsung.com wrote:
  While window rotation was used more as an example of how built-in
  assumptions in the API could unintentionally constrain D-E's, than as a
  seriously needed feature, they did describe a number of ideas for rather
  elaborate window behaviors:
 
* Rotation animations with frame updates to allow widget re-layouts
  while the window is rotating.

 So not just animating the transition, but requesting the client
 animate the content as well? That's extremely esoteric, and seems like
 it belongs in a separate extension - which is possible.

 not esoteric - an actual request from people making products.

The reason I took that as 'esoteric' was that I assumed it was about
free window rotation inside Weston: a feature which is absolutely
pointless but as a proof-of-concept for a hidden global co-ordinate
space. Makes a lot more sense for whole-display rotation. More below.

* Non-linear surface movement/resizing animations and transition
  effects.

 This seems like a compositor thing?

 it is.

i.e. no bearing on actual xdg_shell

  There was lots of interest in hearing more about Wayland's plans for
  text-cursor-position and input-method, which are necessary for Asian
  languages.

 It's sadly been unmaintained for a while.

  A particular question was how clients could coordinate with
  the virtual keyboard input window so that it doesn't overlay where text
  is being inserted.

 See the text_cursor_position protocol.

 actually the other way around... clients know where the vkbd region(s) are so
 client can shuffle content to be visible. :)

In a VKB (rather than overlay-helper, as used for complex composition)
scenario, I would expect xdg-shell to send a configure event to resize
the window and allow for the VKB. If this isn't sufficient - I
honestly don't know what the behaviour is under X11 - then a potential
version bump of wl_text could provide for this.

 RandR is a disaster of an API to expose to clients. I would suggest
 that anything more than a list of monitors (not outputs/connectors)
 with their resolutions, relative monitor positioning, and the ability
 to change/disable the above, is asking for trouble.

 agreed - exposing randr is not sane. it's an internal compositor matter at
 this level of detail (if compositor chooses to have a protocol, do it all
 itself internally etc. is up to it, but any tool to configure screen output
 at this level would be compositor specific).

 what i do think is needed is a list of screens with some kind of types
 attached and rough metadata like one screen is left or right of another (so
 clients like flight simulators could ask to have special surface on the
 left/right screens showing views out the sides of the cockpit and middle
 screen is out the front). something like:

 0 desktop primary
 1 desktop left_of primary
 2 desktop right_of primary
 3 mobile detached
 4 tv above primary

 (pretend a phone with 4 external monitors attached).

Hell of a phone. More seriously, yes, a display-management API could
expose this, however if the aim is for clients to communicate intent
('this is a presentation') rather than for compositors to communicate
situation ('this is one of the external monitors'), then we probably
don't need this. wl_output already provides the relative geometry, so
all that is required for this is a way to communicate output type.
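Carsten's type-plus-position list could be consumed by clients along these lines. A sketch only: the dict fields and type strings are invented for the example, not any real protocol.

```python
# Sketch of a client picking an output from a type-plus-position list like
# the one quoted above. Field names and type values are made up.

OUTPUTS = [
    {"id": 0, "type": "desktop", "pos": "primary"},
    {"id": 1, "type": "desktop", "pos": "left_of primary"},
    {"id": 2, "type": "desktop", "pos": "right_of primary"},
    {"id": 3, "type": "mobile",  "pos": "detached"},
    {"id": 4, "type": "tv",      "pos": "above primary"},
]

def pick_output(wanted_type, fallback="desktop"):
    """Smarter clients adapt; dumber ones ask for a type and take what comes."""
    for out in OUTPUTS:
        if out["type"] == wanted_type:
            return out
    # didn't get the wanted type - deal with it
    return next(o for o in OUTPUTS if o["type"] == fallback)

# the flight-simulator case: put a side view on the screen left of primary
left_view = next(o for o in OUTPUTS if o["pos"] == "left_of primary")
```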

 perhaps listing
 resolution, rotation and dpi as well (pick the screen with the biggest
 physical size or highest dpi - adjust window contents based on screen
 rotation - eg so some controls are always facing the bottom edge of the
 screen where some button controls are - the screen shows the legend text).

 apps should not be configuring any of this. it's read-only.

This already exists today - wl_output's geometry (DPI, rotation,
subpixel information, position within global co-ordinate space) and
mode (mode) events. So, no problem.
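Since wl_output's geometry event carries physical size in millimetres and the mode event carries the pixel resolution, a client can derive DPI itself and, say, prefer the densest screen. The sample values below are made up for illustration:

```python
import math

def dpi(px_w, px_h, mm_w, mm_h):
    """Diagonal DPI from a pixel mode and a physical size in mm."""
    diag_px = math.hypot(px_w, px_h)
    diag_inch = math.hypot(mm_w, mm_h) / 25.4
    return diag_px / diag_inch

# (name, px_w, px_h, mm_w, mm_h) - illustrative numbers only
outputs = [
    ("internal", 1920, 1080, 310, 174),   # ~14" laptop panel
    ("tv",       1920, 1080, 1018, 573),  # ~46" television
]
densest = max(outputs, key=lambda o: dpi(*o[1:]))
```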

 surfaces should be
 able to hint at usage - eg i want to be on the biggest tv. i want to be
 wherever you have a small mobile touch screen etc. compositor deals with
 deciding where they would go based on the current state of the world
 screen-wise and app hints.

Right. So if we do have this client-intent-led interface (which would
definitely be the most Wayland-y approach), then we don't need to
advertise output types and wl_output already deals with the rest, so
no change required here?

  One area we could improve on X for output configuration is in how
  displays are selected for a given application's surface.  A suggestion
  was type descriptors for outputs, such as laptop display,
  television, projector, etc. so that surfaces could express an output
  type affinity.  Then a movie application could 

Re: EFL/Wayland and xdg-shell

2015-04-15 Thread Jasper St. Pierre
Yeah, that was extremely uncalled for. Was a difficult day at work,
and I was already cranky. I messed up, that was my fault, and I
apologize.

On Wed, Apr 15, 2015 at 12:31 PM, Daniel Stone dan...@fooishbar.org wrote:
 On 14 April 2015 at 04:19, Jasper St. Pierre jstpie...@mecheye.net wrote:
 Boo hoo.

 you're the only ones who want physically-based rendering raytraced
 desktops.

 Enlightenment is absolutely nothing like my desktop environment of
 choice either, but this is staggeringly unnecessary. If you want
 xdg_shell to actually just be gtk_shell_base, and be totally
 unencumbered by the shackles of ever having to work with anyone else,
 this is definitely the way to go about it. But core protocol
 development is not that.

 Be nice.



-- 
  Jasper


Re: EFL/Wayland and xdg-shell

2015-04-15 Thread The Rasterman
On Wed, 15 Apr 2015 20:29:32 +0100 Daniel Stone dan...@fooishbar.org said:

 Hi,
 Replies to both here ...
 
 On 15 April 2015 at 02:39, Carsten Haitzler ras...@rasterman.com wrote:
  On Tue, 14 Apr 2015 01:31:56 +0100 Daniel Stone dan...@fooishbar.org said:
  On 14 April 2015 at 01:02, Bryce Harrington br...@osg.samsung.com wrote:
   While window rotation was used more as an example of how built-in
   assumptions in the API could unintentionally constrain D-E's, than as a
   seriously needed feature, they did describe a number of ideas for rather
   elaborate window behaviors:
  
 * Rotation animations with frame updates to allow widget re-layouts
   while the window is rotating.
 
  So not just animating the transition, but requesting the client
  animate the content as well? That's extremely esoteric, and seems like
  it belongs in a separate extension - which is possible.
 
  not esoteric - an actual request from people making products.
 
 The reason I took that as 'esoteric' was that I assumed it was about
 free window rotation inside Weston: a feature which is absolutely
 pointless but as a proof-of-concept for a hidden global co-ordinate
 space. Makes a lot more sense for whole-display rotation. More below.

not just whole display - but now imagine a table with a screen and touch and 4
people around it one along each side and multiple windows floating about like
scraps of paper... just an illustration where you'd want window-by-window
rotation done by compositor as well.

   There was lots of interest in hearing more about Wayland's plans for
   text-cursor-position and input-method, which are necessary for Asian
   languages.
 
  It's sadly been unmaintained for a while.
 
   A particular question was how clients could coordinate with
   the virtual keyboard input window so that it doesn't overlay where text
   is being inserted.
 
  See the text_cursor_position protocol.
 
  actually the other way around... clients know where the vkbd region(s) are
  so client can shuffle content to be visible. :)
 
 In a VKB (rather than overlay-helper, as used for complex composition)
 scenario, I would expect xdg-shell to send a configure event to resize
 the window and allow for the VKB. If this isn't sufficient - I
 honestly don't know what the behaviour is under X11 - then a potential
 version bump of wl_text could provide for this.

no - resizing is a poorer solution. tried that in x11. first obvious port of
call. imagine vkbd is partly translucent... you want it to still be over window
content. imagine a kbd split onto left and right halves, one in the middle of
the left and right edges of the screen (because screen is bigger). :)

  RandR is a disaster of an API to expose to clients. I would suggest
  that anything more than a list of monitors (not outputs/connectors)
  with their resolutions, relative monitor positioning, and the ability
  to change/disable the above, is asking for trouble.
 
  agreed - exposing randr is not sane. it's an internal compositor matter at
  this level of detail (if compositor chooses to have a protocol, do it all
  itself internally etc. is up to it, but any tool to configure screen output
  at this level would be compositor specific).
 
  what i do think is needed is a list of screens with some kind of types
  attached and rough metadata like one screen is left or right of another (so
  clients like flight simulators could ask to have special surface on the
  left/right screens showing views out the sides of the cockpit and middle
  screen is out the front). something like:
 
  0 desktop primary
  1 desktop left_of primary
  2 desktop right_of primary
  3 mobile detached
  4 tv above primary
 
  (pretend a phone with 4 external monitors attached).
 
 Hell of a phone. More seriously, yes, a display-management API could
 expose this, however if the aim is for clients to communicate intent
 ('this is a presentation') rather than for compositors to communicate
 situation ('this is one of the external monitors'), then we probably
 don't need this. wl_output already provides the relative geometry, so
 all that is required for this is a way to communicate output type.

i was thinking a simplified geometry. then again client toolkits can figure
that out and present a simplified enum or what not to the app too. but yes -
some enumerated type attached to the output would be very nice. smarter
clients can decide their intent based on what is listed as available - adapt to
the situation. dumber ones will just ask for a fixed type and deal with it
if they don't get it.

  perhaps listing
  resolution, rotation and dpi as well (pick the screen with the biggest
  physical size or highest dpi - adjust window contents based on screen
  rotation - eg so some controls are always facing the bottom edge of the
  screen where some button controls are - the screen shows the legend text).
 
  apps should not be configuring any of this. it's read-only.
 
 This already exists today - 

Re: EFL/Wayland and xdg-shell

2015-04-15 Thread Bill Spitzak

On 04/15/2015 03:51 PM, Carsten Haitzler (The Rasterman) wrote:


i was thinking a simplified geometry. then again client toolkits can figure
that out and present a simplified enum or what not to the app too. but yes -
some enumerated type attached to the output would be very nice. smarter
clients can decide their intent based on what is listed as available - adapt to
the situation. dumber ones will just ask for a fixed type and deal with it
if they don't get it.

well the problem here is the client is not aware of the current situation. is
that output on the right a tv on the other side of the room, or a projector, or
perhaps an internal lcd panel? is it far from the user or touchable (touch
surface). if it's touchable the app may alter ui (make buttons bigger - remove
scrollbars to go into a touch ui mode as opposed to mouse driven...). maybe app
is written for multitouch controls specifically and thus a display far from the
user with a single mouse only will make the app useless? app should be able
to know what TYPE of display it is on - what types are around and be able to
ask for a type (may or may not get it). important thing is introducing the
concept of a type and attaching it to outputs (and hints on surfaces).


Another reason for the client to know the type is so it can remember 
it and use it to place a window later.


For instance the user may move the window to where she wants it and then 
do a remember where the window is command to the client. Then when the 
client is run next time, it puts the window on the same output as 
before. So the client must be able to query the type of the output the 
surface is on. For I hope obvious reasons it is not acceptable for the 
user to have to choose the type manually, thus there has to be a query 
to determine the type of the output a surface is on.


This may also mean that all outputs have to produce a different type 
(ie if the user has two projectors they are not going to be happy if 
software can't remember which, so they must be different type values), 
and there have to be rules for matching types (so if it is run on a 
system with only one projector then both types end up on that one 
projector).
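Those two rules - every output gets a distinct type value, and a remembered 
type degrades to the nearest available one - could be sketched as follows. 
The naming scheme is invented for the example:

```python
# Sketch of the matching rules above: two projectors get distinct type
# values, and a remembered type falls back to any output of the same kind.

def assign_types(outputs):
    """outputs: list of (id, kind); returns {id: unique type string}."""
    counts = {}
    typed = {}
    for out_id, kind in outputs:
        counts[kind] = counts.get(kind, 0) + 1
        typed[out_id] = f"{kind}-{counts[kind]}"
    return typed

def match(remembered, available):
    """Exact unique type first, else any output of the same base kind."""
    for out_id, t in available.items():
        if t == remembered:
            return out_id
    base = remembered.rsplit("-", 1)[0]
    for out_id, t in available.items():
        if t.rsplit("-", 1)[0] == base:
            return out_id
    return None
```

So a window remembered on "projector-2" returns there when both projectors 
are present, and lands on the lone projector when only one is attached.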



Re: EFL/Wayland and xdg-shell

2015-04-14 Thread Giulio Camuffo
2015-04-14 6:33 GMT+03:00 Derek Foreman der...@osg.samsung.com:
 On 13/04/15 07:31 PM, Daniel Stone wrote:
 Hi,

 On 14 April 2015 at 01:02, Bryce Harrington br...@osg.samsung.com wrote:
 For purposes of discussion, an example might be rotated windows.  The
 set geometry api takes x, y, height, and width.  How would you specify
 rotation angle?

 The window doesn't know it's rotated. The rotation occurs as part of
 the translation into global co-ordinate space (perhaps more than one,
 perhaps multiple instances, etc), which the client is unaware of. If
 it wasn't unaware of it, things like input would also be broken.

 It's intended to allow for the usual level of EFL/Enlightenment eye candy.

 I don't think having a window perpetually at a 35 degree angle is the
 intended use case - having the app know that the screen has been rotated
 so it can participate in a glorious animation sequence while its window
 is transitioning from 0 to 90 degree rotation is.

When the output is rotated you get a geometry event with the transform
so you can animate all you want, so that use case is already covered.



 While window rotation was used more as an example of how built-in
 assumptions in the API could unintentionally constrain D-E's, than as a
 seriously needed feature, they did describe a number of ideas for rather
 elaborate window behaviors:

   * Rotation animations with frame updates to allow widget re-layouts
 while the window is rotating.

 So not just animating the transition, but requesting the client
 animate the content as well? That's extremely esoteric, and seems like
 it belongs in a separate extension - which is possible.

 Nod, that may be the way we go.  Raster's already expressed an interest
 in having our own efl-shell protocol for all our needs.  Bryce and I are
 trying to sift through internal requirements and figure out what, if
 anything, can be shared.

   * Arbitrary shaped (non-rectangular) windows.  Dynamically shaped
 windows.

 Entirely possible: input/opaque regions can be of arbitrary shape.

   * Non-linear surface movement/resizing animations and transition
 effects.

 This seems like a compositor thing?

 There was lots of interest in hearing more about Wayland's plans for
 text-cursor-position and input-method, which are necessary for Asian
 languages.

 It's sadly been unmaintained for a while.

 I've been poking at it a bit lately and have a stack of bug fixes.  I
 also think the compositor should be involved in the hide/show decision
 instead of leaving it entirely up to the client (give the client
 hide/show/auto settings...).  I've got some work done in that direction
 but it's not ready for the light of day yet.

 I suspect this could end up in a thread of its own though.  Assuming
 text protocols are still something we want in core wayland - Jasper
 seemed to indicate gnome's going its own way in this regard...

 A particular question was how clients could coordinate with
 the virtual keyboard input window so that it doesn't overlay where text
 is being inserted.

 See the text_cursor_position protocol.

 More than that there was talk of wanting a way for a client such as a
 vkbd to get occlusion regions for the whole screen to help it lay things
 out.

 And for clients in general to be able to request their own occluded
 regions so they could attempt to layout things around occluded areas.

 Security is also a top concern here, to ensure
 unauthorized clients can't steal keyboard input if (when) the virtual
 keyboard client crashes.

 The compositor is responsible for starting the input method client
 (VKB) and directing input. Weston is already very good at isolating
 these clients from the rest; any other compositor should follow
 Weston's model here.

 I'm not sure this is a solved problem.  If I read the code correctly,
 only one VKB can exist, and it would have to provide both overlay and
 keyboard functionality.  It may be that we want different installable
 keyboards and overlays that can be configured at run time.

 I think the details of authenticating the correct keyboard should
 probably be left out of the protocol, but limiting to a single input
 provider that must do both overlay and keyboard may be worth changing at
 the protocol level?

 Similarly, it may be that each seat wants a different VKB provider?

 Regarding splitting out libweston, they suggested looking if a
 finer-grained split could be done.  For example, they would be
 interested in utilizing a common monitor configuration codebase rather
 than maintaining their own.  OTOH, I suspect we can do a better
 (simpler) API than RANDR, that doesn't expose quite so many moving parts
 to the D-E's, which may better address the crux of the problem here...

 RandR is a disaster of an API to expose to clients. I would suggest
 that anything more than a list of monitors (not outputs/connectors)
 with their resolutions, relative monitor positioning, and the ability
 to change/disable the 

Re: EFL/Wayland and xdg-shell

2015-04-14 Thread The Rasterman
On Tue, 14 Apr 2015 01:31:56 +0100 Daniel Stone dan...@fooishbar.org said:

 Hi,
 
 On 14 April 2015 at 01:02, Bryce Harrington br...@osg.samsung.com wrote:
  For purposes of discussion, an example might be rotated windows.  The
  set geometry api takes x, y, height, and width.  How would you specify
  rotation angle?
 
 The window doesn't know it's rotated. The rotation occurs as part of
 the translation into global co-ordinate space (perhaps more than one,
 perhaps multiple instances, etc), which the client is unaware of. If
 it wasn't unaware of it, things like input would also be broken.

see my reply to jasper. imagine the app has to be aware of rotation. :)

  While window rotation was used more as an example of how built-in
  assumptions in the API could unintentionally constrain D-E's, than as a
  seriously needed feature, they did describe a number of ideas for rather
  elaborate window behaviors:
 
* Rotation animations with frame updates to allow widget re-layouts
  while the window is rotating.
 
 So not just animating the transition, but requesting the client
 animate the content as well? That's extremely esoteric, and seems like
 it belongs in a separate extension - which is possible.

not esoteric - an actual request from people making products.

* Arbitrary shaped (non-rectangular) windows.  Dynamically shaped
  windows.
 
 Entirely possible: input/opaque regions can be of arbitrary shape.
 
* Non-linear surface movement/resizing animations and transition
  effects.
 
 This seems like a compositor thing?

it is.

  There was lots of interest in hearing more about Wayland's plans for
  text-cursor-position and input-method, which are necessary for Asian
  languages.
 
 It's sadly been unmaintained for a while.
 
  A particular question was how clients could coordinate with
  the virtual keyboard input window so that it doesn't overlay where text
  is being inserted.
 
 See the text_cursor_position protocol.

actually the other way around... clients know where the vkbd region(s) are so
client can shuffle content to be visible. :)

  Regarding splitting out libweston, they suggested looking if a
  finer-grained split could be done.  For example, they would be
  interested in utilizing a common monitor configuration codebase rather
  than maintaining their own.  OTOH, I suspect we can do a better
  (simpler) API than RANDR, that doesn't expose quite so many moving parts
  to the D-E's, which may better address the crux of the problem here...
 
 RandR is a disaster of an API to expose to clients. I would suggest
 that anything more than a list of monitors (not outputs/connectors)
 with their resolutions, relative monitor positioning, and the ability
 to change/disable the above, is asking for trouble.

agreed - exposing randr is not sane. it's an internal compositor matter at this
level of detail (if compositor chooses to have a protocol, do it all itself
internally etc. is up to it, but any tool to configure screen output at this
level would be compositor specific).

what i do think is needed is a list of screens with some kind of types attached
and rough metadata like one screen is left or right of another (so clients like
flight simulators could ask to have special surface on the left/right screens
showing views out the sides of the cockpit and middle screen is out the front).
something like:

0 desktop primary
1 desktop left_of primary
2 desktop right_of primary
3 mobile detached
4 tv above primary

(pretend a phone with 4 external monitors attached). perhaps listing
resolution, rotation and dpi as well (pick the screen with the biggest physical
size or highest dpi - adjust window contents based on screen rotation - eg so
some controls are always facing the bottom edge of the screen where some
button controls are - the screen shows the legend text).

apps should not be configuring any of this. it's read-only. surfaces should be
able to hint at usage - eg i want to be on the biggest tv. i want to be
wherever you have a small mobile touch screen etc. compositor deals with
deciding where they would go based on the current state of the world
screen-wise and app hints.

  One area we could improve on X for output configuration is in how
  displays are selected for a given application's surface.  A suggestion
  was type descriptors for outputs, such as laptop display,
  television, projector, etc. so that surfaces could express an output
  type affinity.  Then a movie application could request its full screen
  playback surface be preferentially placed on a TV-type output, while
  configuration tools would request being shown on a laptop-screen-type
  output.
 
 A neat idea, but a separate extension to be sure. Flipping things
 around a bit, you might say that the application declares its type,
 and the compositor applies smart placement depending on its type.

well at least which screen to go on imho would be a core hint/feature for
xdg-shell. listing screens like 

Re: EFL/Wayland and xdg-shell

2015-04-14 Thread The Rasterman
On Mon, 13 Apr 2015 17:24:26 -0700 Jasper St. Pierre jstpie...@mecheye.net
said:

 On Mon, Apr 13, 2015 at 5:02 PM, Bryce Harrington br...@osg.samsung.com
 
  While window rotation was used more as an example of how built-in
  assumptions in the API could unintentionally constrain D-E's, than as a
  seriously needed feature, they did describe a number of ideas for rather
  elaborate window behaviors:
 
* Rotation animations with frame updates to allow widget re-layouts
  while the window is rotating.
 
 Why does xdg-shell / Wayland impede this?

ok. hmmm. this is an actual business use case (requirement) not theory. i have
been asked for this. let us pretend now we have a desktop ui where some
windows are rotated, but some are not. when a window rotates 90 degrees it must
ALSO resize to a new size at the same time. imagine 800x480 -> 480x800 WHILE
rotating. but the widgets within the window have to re-layout AS the rotation
animation happens. let us also have the compositor doing the rotation (so it's
rotating the output buffer from the app whilst it resizes from size A to size
B). the client would need to be told the current rotation at any point during
this animation so it can render its layout differences accordingly as well.
either that or the rotate animation is totally client driven and it is
informing the compositor of the current rotation needed. imagine this. :)

  There was lots of interest in hearing more about Wayland's plans for
  text-cursor-position and input-method, which are necessary for Asian
  languages.  A particular question was how clients could coordinate with
  the virtual keyboard input window so that it doesn't overlay where text
  is being inserted.  Security is also a top concern here, to ensure
  unauthorized clients can't steal keyboard input if (when) the virtual
  keyboard client crashes.
 
 The solution GNOME takes, which is admittedly maybe too unrealistic,
 is that IBus is our input method framework, and thus our compositor
 has somewhat tight integration with IBus. I don't think input methods
 need to be part of the core Wayland protocol.

i would disagree. input methods should be a core part of the protocol as they
are input. vkbd's, or cjk or anything else is a complex case of input, but
still input. input should come from the compositor.

in x11 we use properties to advertise to clients which area(s) of the screen
have been obscured by the vkbd, so clients can scroll/move/re-layout content to
ensure the entry you are typing into isn't hidden under your on-screen keyboard.
this is needed in wayland eventually.

  For screensaver inhibition, it was suggested that this be tracked
  per-surface, so that when the surface terminates the inhibition is
  removed (this is essentially what xdg-screensaver tries to do, although
  it is specific to the client process rather than the window iirc).
 
 It's similar to
 http://standards.freedesktop.org/idle-inhibit-spec/latest/re01.html
 
 It would require careful semantics that I'm not so sure about. Why is
 it being tied to the surface rather than the process important?

because inhibition is about an app saying my window content is important -
keep it visible and don't blank/dim etc.. people brute-force use a global
control because that is all they have had, but it is really about content. that
surface is playing a movie - when doing so - tag that surface as please don't
blank me! (and likely another flag like please don't dim me!). if the
surface is destroyed, hidden, moved to another virtual desktop - the inhibition
can be lifted by the compositor as the content's desire to be always visible is
now irrelevant due to context. and no, fullscreen mode does not equate to
inhibit screensaver.

  They also
  suggested per-output blanking support so e.g. laptop LVDS could be
  blanked but the projector be inhibited, or when watching a movie on
  dual-head, just the non-movie head powersaves off.  They also suggested
  having a 'dim' functionality which would put the display to maximum
  dimness rather than blanking it completely; I'm not sure on the use case
  here or how easy it'd be to implement.
 
 This is stuff I would hope would be provided and implemented by the
 DE. As a multimonitor user who quite often watches things on one
 monitor while coding on another, I'd turn this feature off.

thus why it should be on a surface. if my movie playback is on my external tv
monitor and it asks to inhibit - the compositor will stop blanking the external
screen, but it may happily blank the internal screen on the laptop - which is
exactly what we really would want. if the information is per-surface, this kind
of smart behavior is possible. there is no need for apps to know about multiple
monitors and which to inhibit if the inhibition is tied to a surface.

  I had hoped to discuss collaboration on testing, but without specifics
  there didn't seem to be strong interest.  One question was about
  collecting protocol dumps for doing stability testing or performance
  comparison/optimization with.

Re: EFL/Wayland and xdg-shell

2015-04-13 Thread Derek Foreman
On 13/04/15 07:24 PM, Jasper St. Pierre wrote:
 On Mon, Apr 13, 2015 at 5:02 PM, Bryce Harrington br...@osg.samsung.com 
 wrote:
 A couple weeks ago I gave a talk on Wayland to the EFL folks at the
 Enlightenment Developer Day in San Jose.  They've already implemented a
 Wayland compositor backend, so my talk mainly speculated on Wayland's
 future and on collecting feature requests and feedback from the EFL
 developers.  Below is my summary of some of this feedback; hopefully if
 I've mischaracterized anything they can jump in and correct me.


 For the presentation, I identified Wayland-bound bits currently
 gestating in Weston, listed features currently under review in
 patchwork, itemized feature requests in our bug tracker, and summarized
 wishlists from other desktop environments.  Emphasis was given to
 xdg-shell and the need for EFL's feedback and input to help ensure
 Wayland's first revision of it truly is agreed-on cross-desktop.


 Considering KDE's feedback, a common theme was that xdg-shell should not
 get in the way of allowing for advanced configuration, since
 configurability is one of that D-E's key attributes.  The theme I picked
 up talking with EFL emphasized being able to provide extreme UI
 features.  For them, the important thing was that the protocol should
 not impose constraints or assumptions that would limit this.

 For purposes of discussion, an example might be rotated windows.  The
 set geometry api takes x, y, height, and width.  How would you specify
 rotation angle?
 
 I'm confused by this. set_window_geometry takes an x, y, width and
 height in surface-local coordinates, describing the parts of the
 surface that are the user-understood effective bounds of the surface.
 
 In a client-decorated world, I might have shadows that extend outside
 of my surface, or areas outside of the solid frame for resizing
 purposes, but these should not be counted as part of the window for
 edge resistance, maximization, tiling, etc.
 
 So, if I have a surface that is 120x120 big, and 20px of it are
 shadows or resize borders, then the client should call
 set_window_geometry(20, 20, 100, 100)
 
 Additionally, the size in the configure event is in window geometry
 coordinates, meaning that window borders are excluded. So, if I
 maximize the window and get configure(200, 200), then I am free to
 attach a 300x300 surface as long as I call set_window_geometry(50, 50,
 200, 200).
 
 If the compositor rotates the window, the window geometry remains the
 same, but the compositor has the responsibility of properly rotating
 the window geometry rectangle for the purposes of e.g. edge
 resistance.
 
 So, if I have a window with a window geometry of 0,0,100,100, and the
 user rotates it by 45 degrees, then the effective window geometry the
 compositor snaps to -21,-21,142,142. Correct me if my high school trig
 is wrong :)
 
 The window is never aware of its local rotation transformation.
 
 Does that make sense? Is this not explained correctly?

That all makes sense - set_window_geometry() was a bit of a red herring
here.

Some EFL developers want the application to have a way to know its
rotation so it can, for example, render drop shadows correctly.

Take a window in weston and rotate it 90 or 180 degrees and the drop
shadow is wrong.

Drop shadows are just the easy example, there's also a desire to have
the applications react to the rotation in some way so the usual EFL
bling can take place during a smooth rotation from 0-90 degrees, for
example.

I do wonder if drop shadows really should be the client's responsibility
at all.  If completely non-interactive eye-candy was left to the
compositor, would we still need set_window_geometry() at all?

 While window rotation was used more as an example of how built-in
 assumptions in the API could unintentionally constrain D-E's, than as a
 seriously needed feature, they did describe a number of ideas for rather
 elaborate window behaviors:

   * Rotation animations with frame updates to allow widget re-layouts
 while the window is rotating.
 
 Why does xdg-shell / Wayland impede this?

It's not directly impeded I suppose - it's just not currently possible?
 The client doesn't know it's being rotated (at least when talking about
arbitrary rotations, as opposed to the right angle transforms)

   * Arbitrary shaped (non-rectangular) windows.  Dynamically shaped
 windows.
 
 I don't understand this. Any shape always has an axis-aligned bounding
 box. Using ARGB, you can craft windows of any shape.

   * Non-linear surface movement/resizing animations and transition
 effects.
 
 Why does xdg-shell / Wayland impede this?
 
 There was lots of interest in hearing more about Wayland's plans for
 text-cursor-position and input-method, which are necessary for Asian
 languages.  A particular question was how clients could coordinate with
 the virtual keyboard input window so that it doesn't overlay where text
  is being inserted.  Security is also a top concern here, to ensure
  unauthorized clients can't steal keyboard input if (when) the virtual
  keyboard client crashes.

EFL/Wayland and xdg-shell

2015-04-13 Thread Bryce Harrington
A couple weeks ago I gave a talk on Wayland to the EFL folks at the
Enlightenment Developer Day in San Jose.  They've already implemented a
Wayland compositor backend, so my talk mainly speculated on Wayland's
future and on collecting feature requests and feedback from the EFL
developers.  Below is my summary of some of this feedback; hopefully if
I've mischaracterized anything they can jump in and correct me.


For the presentation, I identified Wayland-bound bits currently
gestating in Weston, listed features currently under review in
patchwork, itemized feature requests in our bug tracker, and summarized
wishlists from other desktop environments.  Emphasis was given to
xdg-shell and the need for EFL's feedback and input to help ensure
Wayland's first revision of it truly is agreed-on cross-desktop.


Considering KDE's feedback, a common theme was that xdg-shell should not
get in the way of allowing for advanced configuration, since
configurability is one of that D-E's key attributes.  The theme I picked
up talking with EFL emphasized being able to provide extreme UI
features.  For them, the important thing was that the protocol should
not impose constraints or assumptions that would limit this.

For purposes of discussion, an example might be rotated windows.  The
set geometry api takes x, y, height, and width.  How would you specify
rotation angle?

While window rotation was used more as an example of how built-in
assumptions in the API could unintentionally constrain D-E's, than as a
seriously needed feature, they did describe a number of ideas for rather
elaborate window behaviors:

  * Rotation animations with frame updates to allow widget re-layouts
while the window is rotating.
  * Arbitrary shaped (non-rectangular) windows.  Dynamically shaped
windows.
  * Non-linear surface movement/resizing animations and transition
effects.

There was lots of interest in hearing more about Wayland's plans for
text-cursor-position and input-method, which are necessary for Asian
languages.  A particular question was how clients could coordinate with
the virtual keyboard input window so that it doesn't overlay where text
is being inserted.  Security is also a top concern here, to ensure
unauthorized clients can't steal keyboard input if (when) the virtual
keyboard client crashes.

Regarding splitting out libweston, they suggested looking if a
finer-grained split could be done.  For example, they would be
interested in utilizing a common monitor configuration codebase rather
than maintaining their own.  OTOH, I suspect we can do a better
(simpler) API than RANDR, that doesn't expose quite so many moving parts
to the D-E's, which may better address the crux of the problem here...

One area we could improve on X for output configuration is in how
displays are selected for a given application's surface.  A suggestion
was type descriptors for outputs, such as laptop display,
television, projector, etc. so that surfaces could express an output
type affinity.  Then a movie application could request its full screen
playback surface be preferentially placed on a TV-type output, while
configuration tools would request being shown on a laptop-screen-type
output.

For screensaver inhibition, it was suggested that this be tracked 
per-surface, so that when the surface terminates the inhibition is
removed (this is essentially what xdg-screensaver tries to do, although
it is specific to the client process rather than the window iirc).  They also
suggested per-output blanking support so e.g. laptop LVDS could be
blanked but the projector be inhibited, or when watching a movie on
dual-head, just the non-movie head powersaves off.  They also suggested
having a 'dim' functionality which would put the display to maximum
dimness rather than blanking it completely; I'm not sure on the use case
here or how easy it'd be to implement.

I had hoped to discuss collaboration on testing, but without specifics
there didn't seem to be strong interest.  One question was about
collecting protocol dumps for doing stability testing or performance
comparison/optimization with; while we're not doing that currently, that
sounded straightforward and like it could be useful to investigate more.


There was some confusion over what the purpose of xdg-shell really is;
it was looked at as a reference implementation rather than as a
lowest-common denominator that should be built *onto*.  So it seems that
Wayland have some messaging to do to ensure xdg-shell really is
'cross-desktop'.

Bryce
___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


Re: EFL/Wayland and xdg-shell

2015-04-13 Thread Jasper St. Pierre
On Mon, Apr 13, 2015 at 5:02 PM, Bryce Harrington br...@osg.samsung.com wrote:
 A couple weeks ago I gave a talk on Wayland to the EFL folks at the
 Enlightenment Developer Day in San Jose.  They've already implemented a
 Wayland compositor backend, so my talk mainly speculated on Wayland's
 future and on collecting feature requests and feedback from the EFL
 developers.  Below is my summary of some of this feedback; hopefully if
 I've mischaracterized anything they can jump in and correct me.


 For the presentation, I identified Wayland-bound bits currently
 gestating in Weston, listed features currently under review in
 patchwork, itemized feature requests in our bug tracker, and summarized
 wishlists from other desktop environments.  Emphasis was given to
 xdg-shell and the need for EFL's feedback and input to help ensure
 Wayland's first revision of it truly is agreed-on cross-desktop.


 Considering KDE's feedback, a common theme was that xdg-shell should not
 get in the way of allowing for advanced configuration, since
 configurability is one of that D-E's key attributes.  The theme I picked
 up talking with EFL emphasized being able to provide extreme UI
 features.  For them, the important thing was that the protocol should
 not impose constraints or assumptions that would limit this.

 For purposes of discussion, an example might be rotated windows.  The
 set geometry api takes x, y, height, and width.  How would you specify
 rotation angle?

I'm confused by this. set_window_geometry takes an x, y, width and
height in surface-local coordinates, describing the parts of the
surface that are the user-understood effective bounds of the surface.

In a client-decorated world, I might have shadows that extend outside
of my surface, or areas outside of the solid frame for resizing
purposes, but these should not be counted as part of the window for
edge resistance, maximization, tiling, etc.

So, if I have a surface that is 120x120 big, and 20px of it are
shadows or resize borders, then the client should call
set_window_geometry(20, 20, 100, 100)

Additionally, the size in the configure event is in window geometry
coordinates, meaning that window borders are excluded. So, if I
maximize the window and get configure(200, 200), then I am free to
attach a 300x300 surface as long as I call set_window_geometry(50, 50,
200, 200).

If the compositor rotates the window, the window geometry remains the
same, but the compositor has the responsibility of properly rotating
the window geometry rectangle for the purposes of e.g. edge
resistance.

So, if I have a window with a window geometry of 0,0,100,100, and the
user rotates it by 45 degrees, then the effective window geometry the
compositor snaps to -21,-21,142,142. Correct me if my high school trig
is wrong :)

The window is never aware of its local rotation transformation.

Does that make sense? Is this not explained correctly?

 While window rotation was used more as an example of how built-in
 assumptions in the API could unintentionally constrain D-E's, than as a
 seriously needed feature, they did describe a number of ideas for rather
 elaborate window behaviors:

   * Rotation animations with frame updates to allow widget re-layouts
 while the window is rotating.

Why does xdg-shell / Wayland impede this?

   * Arbitrary shaped (non-rectangular) windows.  Dynamically shaped
 windows.

I don't understand this. Any shape always has an axis-aligned bounding
box. Using ARGB, you can craft windows of any shape.

   * Non-linear surface movement/resizing animations and transition
 effects.

Why does xdg-shell / Wayland impede this?

 There was lots of interest in hearing more about Wayland's plans for
 text-cursor-position and input-method, which are necessary for Asian
 languages.  A particular question was how clients could coordinate with
 the virtual keyboard input window so that it doesn't overlay where text
 is being inserted.  Security is also a top concern here, to ensure
 unauthorized clients can't steal keyboard input if (when) the virtual
 keyboard client crashes.

The solution GNOME takes, which is admittedly maybe too unrealistic,
is that IBus is our input method framework, and thus our compositor
has somewhat tight integration with IBus. I don't think input methods
need to be part of the core Wayland protocol.

 Regarding splitting out libweston, they suggested looking if a
 finer-grained split could be done.  For example, they would be
 interested in utilizing a common monitor configuration codebase rather
 than maintaining their own.  OTOH, I suspect we can do a better
 (simpler) API than RANDR, that doesn't expose quite so many moving parts
 to the D-E's, which may better address the crux of the problem here...

 One area we could improve on X for output configuration is in how
 displays are selected for a given application's surface.  A suggestion
 was type descriptors for outputs, such as laptop display,
  television, projector, etc. so that surfaces could express an output
  type affinity.

Re: EFL/Wayland and xdg-shell

2015-04-13 Thread Derek Foreman
On 13/04/15 07:31 PM, Daniel Stone wrote:
 Hi,
 
 On 14 April 2015 at 01:02, Bryce Harrington br...@osg.samsung.com wrote:
 For purposes of discussion, an example might be rotated windows.  The
 set geometry api takes x, y, height, and width.  How would you specify
 rotation angle?
 
 The window doesn't know it's rotated. The rotation occurs as part of
 the translation into global co-ordinate space (perhaps more than one,
 perhaps multiple instances, etc), which the client is unaware of. If
 it wasn't unaware of it, things like input would also be broken.

It's intended to allow for the usual level of EFL/Enlightenment eye candy.

I don't think having a window perpetually at a 35 degree angle is the
intended use case - having the app know that the screen has been rotated
so it can participate in a glorious animation sequence while its window
is transitioning from 0 to 90 degree rotation is.

 While window rotation was used more as an example of how built-in
 assumptions in the API could unintentionally constrain D-E's, than as a
 seriously needed feature, they did describe a number of ideas for rather
 elaborate window behaviors:

   * Rotation animations with frame updates to allow widget re-layouts
 while the window is rotating.
 
 So not just animating the transition, but requesting the client
 animate the content as well? That's extremely esoteric, and seems like
 it belongs in a separate extension - which is possible.

Nod, that may be the way we go.  Raster's already expressed an interest
in having our own efl-shell protocol for all our needs.  Bryce and I are
trying to sift through internal requirements and figure out what, if
anything, can be shared.

   * Arbitrary shaped (non-rectangular) windows.  Dynamically shaped
 windows.
 
 Entirely possible: input/opaque regions can be of arbitrary shape.
 
   * Non-linear surface movement/resizing animations and transition
 effects.
 
 This seems like a compositor thing?
 
 There was lots of interest in hearing more about Wayland's plans for
 text-cursor-position and input-method, which are necessary for Asian
 languages.
 
 It's sadly been unmaintained for a while.

I've been poking at it a bit lately and have a stack of bug fixes.  I
also think the compositor should be involved in the hide/show decision
instead of leaving it entirely up to the client (give the client
hide/show/auto settings...).  I've got some work done in that direction
but it's not ready for the light of day yet.

I suspect this could end up in a thread of its own though.  Assuming
text protocols are still something we want in core wayland - Jasper
seemed to indicate gnome's going its own way in this regard...

 A particular question was how clients could coordinate with
 the virtual keyboard input window so that it doesn't overlay where text
 is being inserted.
 
 See the text_cursor_position protocol.

More than that there was talk of wanting a way for a client such as a
vkbd to get occlusion regions for the whole screen to help it lay things
out.

And for clients in general to be able to request their own occluded
regions so they could attempt to layout things around occluded areas.

 Security is also a top concern here, to ensure
 unauthorized clients can't steal keyboard input if (when) the virtual
 keyboard client crashes.
 
 The compositor is responsible for starting the input method client
 (VKB) and directing input. Weston is already very good at isolating
 these clients from the rest; any other compositor should follow
 Weston's model here.

I'm not sure this is a solved problem.  If I read the code correctly,
only one VKB can exist, and it would have to provide both overlay and
keyboard functionality.  It may be that we want different installable
keyboards and overlays that can be configured at run time.

I think the details of authenticating the correct keyboard should
probably be left out of the protocol, but limiting to a single input
provider that must do both overlay and keyboard may be worth changing at
the protocol level?

Similarly, it may be that each seat wants a different VKB provider?

 Regarding splitting out libweston, they suggested looking if a
 finer-grained split could be done.  For example, they would be
 interested in utilizing a common monitor configuration codebase rather
 than maintaining their own.  OTOH, I suspect we can do a better
 (simpler) API than RANDR, that doesn't expose quite so many moving parts
 to the D-E's, which may better address the crux of the problem here...
 
 RandR is a disaster of an API to expose to clients. I would suggest
 that anything more than a list of monitors (not outputs/connectors)
 with their resolutions, relative monitor positioning, and the ability
 to change/disable the above, is asking for trouble.

I do like the idea of hiding the concept of outputs and connectors very
much.  :)

This still gets ugly in a hurry.  Someone's obviously going to want to
add a mode that doesn't show up in their EDID 

Re: EFL/Wayland and xdg-shell

2015-04-13 Thread Jasper St. Pierre
On Mon, Apr 13, 2015 at 7:59 PM, Derek Foreman der...@osg.samsung.com wrote:

... snip ...

 That all makes sense - set_window_geometry() was a bit of a red herring
 here.

 Some EFL developers want the application to have a way to know its
 rotation so it can, for example, render drop shadows correctly.

 Take a window in weston and rotate it 90 or 180 degrees and the drop
 shadow is wrong.

Boo hoo. While you're at it, why don't you also write a proper
implementation of shadows that takes the alpha channel into account,
so every single letter on your transparent terminal casts a shadow on
the underlying content, leaving it an unreadable mess?

 Drop shadows are just the easy example, there's also a desire to have
 the applications react to the rotation in some way so the usual EFL
 bling can take place during a smooth rotation from 0-90 degrees, for
 example.

Use a custom protocol for this. I don't think any other client has
ever cared. UI elements are caricatures for clarity, drop shadows are
used to establish focus and layering, and you're the only ones who
want physically-based rendering raytraced desktops.

 I do wonder if drop shadows really should be the client's responsibility
 at all.  If completely non-interactive eye-candy was left to the
 compositor, would we still need set_window_geometry() at all?

Yes. First of all, invisible borders are still a use case, unless you
think those should be provided by the compositor as well (which means
that the compositor must be more complex and handle fancy events
itself). And when using subsurfaces, you need an understanding of
whether it's the union of the surfaces or one specific surface that's
the main window.

 While window rotation was used more as an example of how built-in
 assumptions in the API could unintentionally constrain D-E's, than as a
 seriously needed feature, they did describe a number of ideas for rather
 elaborate window behaviors:

   * Rotation animations with frame updates to allow widget re-layouts
 while the window is rotating.

 Why does xdg-shell / Wayland impede this?

 It's not directly impeded, I suppose - it's just not currently possible?
  The client doesn't know it's being rotated (at least when talking about
 arbitrary rotations, as opposed to right-angle transforms).

Good. Window rotation is a gimmick feature designed to show off that
Wayland can transform input directly without having to send a
transformation mesh to the X server.

 The solution GNOME takes, which is admittedly maybe too unrealistic,
 is that IBus is our input method framework, and thus our compositor
 has somewhat tight integration with IBus. I don't think input methods
 need to be part of the core Wayland protocol.

 That may be in line with the current thinking in the EFL camp.

 Does that mean the input-method and text protocol files in weston are of
 no use at all to gnome?

These are currently unused by GNOME. They were written by Openismus,
the company that wrote Maliit, which has since shut down. In my opinion,
it's too complicated, and it mandates a split where the keyboard needs
to be replaceable.

I'm not sure the text protocol has any value at all, but I'll let Rui
Matos, our input method expert, answer.

 It's similar to
 http://standards.freedesktop.org/idle-inhibit-spec/latest/re01.html

 It would require careful semantics that I'm not so sure about. Why is
 it being tied to the surface rather than the process important?

 It's an idea that's being thrown around.  We would like to have the
 ability to put outputs to sleep independently... If the surface is
 dragged from one output to another the blanker inhibition would move
 with it without the client having to do anything.

Ah, that's an interesting idea. I'd be fine with that, but I'd much
prefer experimentation in a private protocol first. There are plenty of
usage issues I can imagine running into (e.g. I press Alt-Tab, and
since the surface appears on the other output for a split second, it
wakes back up; or my mouse is suddenly invisible; or I have my
presentation running on the inhibiting monitor, and my notes on the
other monitor suddenly fall asleep).

 Is gtk-shell intended to be a test bed for things that will eventually
 be in xdg-shell?  Or are divergent standards a guarantee at this point?

Either that, or, more hopefully, for stuff that is so GTK+-specific
that there's no value in standardization. The equivalent of a _GTK_*
property on an X11 window.

-- 
  Jasper
___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel