Re: Input and games.

2013-05-03 Thread Pekka Paalanen
On Thu, 2 May 2013 19:28:41 +0100
Daniel Stone  wrote:

> Hi,
> 
> On 2 May 2013 10:44, Pekka Paalanen  wrote:
> > On Tue, 30 Apr 2013 09:14:48 -0400
> > Todd Showalter  wrote:
> >> The question is, is a gamepad an object, or is a *set* of
> >> gamepads an object?
> >
> > Both, just like a wl_pointer can be one or more physical mice.
> > Clients have no way to know whether a wl_pointer is backed by
> > several mice, nor to separate events by the physical device.
> >
> > The interfaces are abstract in that sense.
> 
> There's one crucial difference though, and one that's going to come up
> when we address graphics tablets / digitisers too.  wl_pointer works
> as a single interface because no matter how many mice are present, you
> can aggregate them together and come up with a sensible result: they
> all move the sprite to one location.  wl_touch fudges around this by
> essentially asserting that not only will you generally only have one
> direct touchscreen, but it provides for multiple touches, so you can
> pretend that one touch each on multiple screens is multiple touches
> on a single screen.

Right. Could we just say that each such non-aggregatable device must be
put into a wl_seat that does not already have such a device?
Or make that an implementors' guideline rather than a hard requirement
in the protocol spec.

> The gamepad interaction doesn't have this luxury, and neither do
> tablets.  I don't think splitting them out to separate seats is the
> right idea though: what if (incoming stupid hypothetical alert) you
> had four people on a single system, each with their own keyboards and
> gamepads.  Kind of like consoles are today, really.  Ideally, you'd
> want an association between the keyboards and gamepads, which would be
> impossible if every gamepad had one separate wl_seat whose sole job
> was to nest it.

So... what's wrong in putting each keyboard into the wl_seat where it
belongs, along with the gamepad?

> I think it'd be better to, instead of wl_seat::get_gamepad returning a
> single new_id wl_gamepad, as wl_pointer/etc do it today, have
> wl_seat::get_gamepads, which would send one wl_seat::gamepad event
> with a new_id wl_gamepad, for every gamepad which was there or
> subsequently added.  That way we keep the seat association, but can
> still deal with every gamepad individually.

It would be left for the client to decide which gamepad it wants from
which wl_seat, right?

Do we want to force all clients to choose every non-aggregatable device
this way?

Essentially, that would mean that wl_seats are just for the traditional
keyboard & mouse (and touchscreen so far) association, and then
everything else would be left for each client to assign to different
wl_seats on their own. This seems strange. Why do we need a wl_seat
then, why not do the same with keyboards and mice?

Oh right, focus. You want to be able to control keyboard focus with a
pointer. Why is a gamepad focus different? Would all gamepads follow
the keyboard focus? If there are several wl_seats with kbd & ptr, which
keyboard focus do they follow? What if the same gamepad is left active
in more than one wl_seat? What if there is no keyboard or pointer, e.g.
you had only a touchscreen and two gamepads (say, IVI)?

And then replace every "gamepad" with "digitizer", and all other
non-aggregatable input devices, and also all raw input devices via
evdev fd passing. The fd passing I believe has similar problems: who
gets the events, which wl_seat do they follow.

This is a new situation, and so many open questions... I just continued
on the existing pattern.


Cheers,
pq


Re: Input and games.

2013-05-03 Thread Pekka Paalanen
On Thu, 2 May 2013 10:46:56 -0400
Todd Showalter  wrote:

> On Thu, May 2, 2013 at 5:44 AM, Pekka Paalanen 
> wrote:
> > On Tue, 30 Apr 2013 09:14:48 -0400
> > Todd Showalter  wrote:
> >
...
> >> The question is, is a gamepad an object, or is a *set* of
> >> gamepads an object?
> >
> > Both, just like a wl_pointer can be one or more physical mice.
> > Clients have no way to know whether a wl_pointer is backed by
> > several mice, nor to separate events by the physical device.
> >
> > The interfaces are abstract in that sense.
> 
> Right.  From a game point of view, we don't want to do the
> conflated-device thing; it makes some sense to have two mice
> controlling a single pointer on a single device (the thinkpad nub
> mouse + usb mouse case), but it never makes sense to have multiple
> gamepads generating events for a single virtual gamepad.  The game
> needs to be able to tell them apart.

Indeed, that's why I proposed to put them in separate wl_seats. It
doesn't make a difference on the protocol level.

...
> > If just a gamepad goes away and later comes back, the wl_seat could
> > even stay around in between. There can also be seats without a
> > gamepad, so it is still the game's responsibility to decide which
> > wl_seats it takes as players.
> 
> This is the icky problem for whoever handles it.  If a gamepad
> disappears and then appears again attached to a different usb port, or
> if a gamepad disappears and a different pad appears at the port where
> the old one was, is it the same wl_seat?

Yup. Whatever we do, we get it wrong for someone, so there needs to be
a GUI to fix it. But should that GUI be every game's burden, or the
server's burden...

Along with the GUI is the burden of implementing the default
heuristics, which may require platform specific information.

> > Which reminds me: maybe we should add a name string event to wl_seat
> > interface? This way a game, if need be, can list the seats by name
> > given by the user, and the user can then pick which ones are actual
> > players. (It is a standard procedure to send initial state of an
> > object right after binding/creating it.) I imagine it might be
> > useful for other apps, too.
> >
> > Unless it's enough to just pick the wl_seats that have a gamepad?
> >
> > Hmm, is this actually any better than just handing all gamepads
> > individually without any wl_seats, and letting the game sort them
> > out? How far can we assume that a wl_seat == a player, for *every*
> > existing wl_seat? And which player is which wl_seat?
> 
> That's why I was assuming originally that gamepads would all be
> attached to a single wl_seat and come in with pad_index values.
> However it winds up getting wrapped in protocol, what the game is
> interested in (if it cares about more than one gamepad, which it may
> not) is figuring out when those gamepads appear and disappear, how
> they map to players, and what input each player is generating.

Right.

I can summarize my question to this:

Which one is better for the end user: have the device-to-seat
assignment heuristics and GUI in the server, and the seat-to-player
mapping GUI in every game; or have it all in every game?

Or can or should we design the protocol to allow both ways? Even if
gamepads are in different wl_seats, the game is still free to mix and
match.

I have come to a point where I can only ask more questions, without
good suggestions.


Thanks,
pq


Re: Input and games.

2013-05-03 Thread Todd Showalter
On Fri, May 3, 2013 at 3:34 AM, Pekka Paalanen  wrote:

> Yup. Whatever we do, we get it wrong for someone, so there needs to be
> a GUI to fix it. But should that GUI be all games' burden, or servers'
> burden...
>
> Along with the GUI is the burden of implementing the default
> heuristics, which may require platform specific information.

I don't know that you need a GUI to fix it as long as you're
willing to lay down some policy.  We could go with basic heuristics:

- if a gamepad unplugs from a specific usb port and some other gamepad
re-plugs in the same port before any other gamepads appear, it's the
same player

- if a gamepad unplugs from a specific usb port and then appears in
another before any other gamepads appear, it's the same player

- otherwise, you get whatever mad order falls out of the code

I think that covers the common case; if people start swapping
multiple controllers around between ports, they might have to re-jack
things to get the gamepad->player mapping they like, but that's going
to be rare.
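
To make that concrete, here's a rough sketch of those three rules in
C; the structures and entry points (player_slot, gamepad_added,
gamepad_removed) are made up for illustration, not from any existing
compositor:

#include <stdio.h>
#include <string.h>
#include <stdbool.h>

#define MAX_PLAYERS 8

struct player_slot {
	bool connected;     /* is a gamepad currently in this slot? */
	char port[64];      /* usb port id of the last pad seen here */
};

static struct player_slot slots[MAX_PLAYERS];
static int pending_unplugs;

/* gamepad unplugged: remember its port, mark the slot vacant */
static void
gamepad_removed(const char *port)
{
	int i;

	for (i = 0; i < MAX_PLAYERS; i++) {
		if (slots[i].connected && !strcmp(slots[i].port, port)) {
			slots[i].connected = false;
			pending_unplugs++;
			return;
		}
	}
}

/* gamepad plugged in: returns the player index it was assigned */
static int
gamepad_added(const char *port)
{
	int i;

	/* rule 1: re-plugged into the port it vanished from */
	for (i = 0; i < MAX_PLAYERS; i++) {
		if (!slots[i].connected && !strcmp(slots[i].port, port)) {
			slots[i].connected = true;
			pending_unplugs--;
			return i;
		}
	}

	/* rule 2: exactly one pad is missing, so a pad appearing on
	 * any other port is taken to be that player */
	if (pending_unplugs == 1) {
		for (i = 0; i < MAX_PLAYERS; i++) {
			if (!slots[i].connected && slots[i].port[0]) {
				snprintf(slots[i].port,
					 sizeof slots[i].port, "%s", port);
				slots[i].connected = true;
				pending_unplugs--;
				return i;
			}
		}
	}

	/* rule 3: whatever order falls out of the code, i.e. the
	 * first slot that has never seen a pad */
	for (i = 0; i < MAX_PLAYERS; i++) {
		if (!slots[i].connected && !slots[i].port[0]) {
			snprintf(slots[i].port,
				 sizeof slots[i].port, "%s", port);
			slots[i].connected = true;
			return i;
		}
	}
	return -1;
}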

> I can summarize my question to this:
>
> Which one is better for the end user: have the device-to-seat
> assignment heuristics and GUI in the server, and the seat-to-player
> mapping GUI in every game; or have it all in every game?

Heuristics mean less work for the player and behaviour the player
can learn to anticipate.  I say go with that.  I think the moment you
present people with a gui plugboard and ask them to patch-cable
controllers to player IDs, you're in a bad place.

I could see it being an advanced option that a savvy player could
bring up to fix things without rejacking the hardware, but the less
technically savvy are going to have a far easier time just physically
unplugging and replugging gamepads than they are figuring out a GUI
they've never (or rarely) seen before.

   Todd.

--
 Todd Showalter, President,
 Electron Jump Games, Inc.


Re: [RFC] wl_surface scale and crop protocol extension

2013-05-03 Thread Pekka Paalanen
On Thu, 02 May 2013 12:16:23 -0700
Bill Spitzak  wrote:

> Jason Ekstrand wrote:
> 
> > Agreed.  Sending transform matrices or the like has HUGE rounding 
> > problems.  Particularly when we're using wl_fixed which is 24.8.
> > Other methods would require adding rounding conventions etc.
> 
> I was assuming IEEE 32-bit floating point would be in the api for at 
> least the ABCD of the matrix (the XY could be wl_fixed I suppose).

If you looked at the Wayland specification, you'd see that there are no
floats.

> However what you call "rounding conventions" are needed even for the 
> proposed integer api, and this is also why a source rectangle is
> needed. It has nothing to do with inaccuracy. It is because the
> filtering needs to know it, since the filter will extend outside the
> source region (it will for down-scales for any reasonable filter, and
> even for up-scaling for mitchell/sinc style filters). Samples outside this
> region must be treated specially and this "rounding convention" must
> be defined by wayland (almost certainly you want to define it as
> using the nearest pixel inside the region).

That's not a rounding convention. Rounding convention is about
converting real numbers into integers, and I really meant that. Not
filtering.

And I really do *not* want to specify a filtering method at *any* level,
because whatever we specify, there will always be hardware that does it
differently. I will not write down any guarantees of the resulting
quality, or the scaling method used, because I do not want to exclude
any scaling hardware.

> So yes a source rectangle must be provided in the api, but not for
> the reason you think. The destination rectangle is then a nice way to 
> produce rational fractions for scale and also provide a wl_surface
> size that is different than the buffer size.

My reason for using a source rectangle is because I want to be able to
crop.

> I would make one change to the proposal because I think arbitrary 
> transforms will need to be considered some day. The source rectangle
> has to remain orthogonal in source space (otherwise it is useless for 
> filtering), while the destination has to be orthogonal in "surface 
> space" (since it also controls a lot of other wl_surface details), 
> therefore any future arbitrary transform must be between these. I
> think the current "buffer transform" should be considered the start
> of any future arbitrary transform, so for that reason the source
> rectangle should be in actual buffer pixels, not in "buffer transform
> space".

What chip do you have in mind, that can do arbitrary
matrix-based transforms during an overlay scanout?

- pq


Re: [RFC] wl_surface scale and crop protocol extension

2013-05-03 Thread Pekka Paalanen
On Thu, 2 May 2013 09:42:42 -0500
Jason Ekstrand  wrote:

> On Thu, May 2, 2013 at 5:51 AM, Pekka Paalanen 
> wrote:
> 
> > On Tue, 30 Apr 2013 10:54:25 -0500
> > Jason Ekstrand  wrote:
...
> > > On Tue, Apr 30, 2013 at 2:33 PM, Pekka Paalanen
> > >  wrote:
> > >
...
> > > >   This interface allows to define the source rectangle
> > > > (src_x, src_y, src_width, src_height) from where to take the
> > > > wl_buffer contents, and scale that to destination size
> > > > (dst_width, dst_height). This state is double-buffered, and is
> > > > applied on the next wl_surface.commit.
> > > >
> > > >   Before the first set request, the wl_surface still
> > > > behaves as if there was no crop and scale state. That is, no
> > > > scaling is applied, and the surface size is the buffer size.
> > > >
> > > >   On compositing, source rectangle coordinates are evaluated
> > > > after wl_surface.set_buffer_transform is evaluated. This means
> > > > that changing the buffer transform and correspondingly the
> > > > client rendering does not require sending new source rectangle
> > > >   coordinates to keep the exact same image source
> > > > rectangle. In other words, the source rectangle is given in the
> > > >   not-scaled-and-cropped surface coordinates, not buffer
> > > > data coordinates.
> > > >
> > >
> > > I agree with Zhi, this needs to be re-worded
> >
> > Yeah, did you understand what I was trying to explain? Any
> > suggestions?
> >
> 
> If I understood correctly, you meant the sensible thing.  i.e.,
> buffers are in (possibly transformed) buffer coordinates while
> surfaces are in surface coordinates.  In other words, the crop/scale
> is applied after the buffer transform.  How about this (actually a
> paragraph higher):
> 
> The source rectangle is specified in (possibly transformed) buffer
> coordinates.  This means that changing the buffer transform and
> correspondingly the client rendering does not require sending new
> source rectangle coordinates.  In other words, the source rectangle
> is given in the coordinates that the surface would have without the
> scaled-and-cropped transformation.

This is a little better, avoids "evaluated", but still I wonder if we
could separate buffer coordinates and orientation-transformed buffer
coordinates better. We have:

A. Buffer pixel coordinates, which are essentially equivalent to byte
addresses of pixels in the buffer.

apply buffer transform to get:

B. oriented buffer coordinates (I wish I had suggested the term
"orientation" when the buffer transform was proposed; now we have
several "transformations".)

apply crop & scale to get:

C. surface coordinates, i.e. the local coordinates where all window
management and input happens

And the rest is private to a compositor.

Before crop & scale, C was equal to B, and now it is not. If we just
had a term to use for B consistently, describing everything would be a
lot easier.
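
To illustrate, a small sketch of the A -> B -> C chain in C; the
struct and the exact 90-degree inverse mapping below are illustrative
assumptions, not protocol text:

#include <stdint.h>

struct crop_scale {
	int32_t src_x, src_y, src_w, src_h; /* source rect, in B */
	int32_t dst_w, dst_h;               /* surface size, in C */
};

/* Map a surface coordinate (C) back to a buffer pixel (A), for a
 * buffer of buf_w x buf_h pixels with a 90-degree buffer transform;
 * the oriented buffer (B) is then buf_h wide and buf_w high. */
static void
surface_to_buffer(const struct crop_scale *cs,
		  int32_t buf_w, int32_t buf_h,
		  int32_t sx, int32_t sy,       /* C */
		  int32_t *bx, int32_t *by)     /* A */
{
	/* C -> B: undo crop & scale */
	int32_t ox = cs->src_x + sx * cs->src_w / cs->dst_w;
	int32_t oy = cs->src_y + sy * cs->src_h / cs->dst_h;

	/* B -> A: undo the orientation (one possible convention) */
	*bx = oy;               /* 0 <= oy < buf_w */
	*by = buf_h - 1 - ox;   /* 0 <= ox < buf_h */
}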


Thanks,
pq


Re: [RFC] wl_surface scale and crop protocol extension

2013-05-03 Thread Pekka Paalanen
On Thu, 2 May 2013 14:43:38 -0500
Jason Ekstrand  wrote:

> On Thu, May 2, 2013 at 1:22 PM, Daniel Stone 
> wrote:
> 
> > Hi,
> >
> > On 2 May 2013 15:42, Jason Ekstrand  wrote:
> > > Ok, I see it now. Sorry, but I missed it on my first
> > > read-through.  Yes, it fixes the problem, but in an extremely
> > > confusing way.  The reason I say it is confusing is because it
> > > inherently mixes buffer and surface coordinate systems.  I really
> > > think we need to isolate buffer coordinates from surface
> > > coordinates more.  Perhaps what we need is two requests:
> > > set_source_rect and set_dest_rect and completely ignore the x and
> > > y from attach.  This both provides clarity to the coordinate
> > > systems and provides a little separation between crop and scale.

Yes, this little rough corner bothered me, too. I tried to weasel out
of it by saying that wl_surface_scaler and wl_surface.attach must be
used together, so you can use the right values.

While we still use x,y from wl_surface.attach, nothing else really
makes sense. The x,y must be in the surface coordinate frame.

Since surface coordinates were equal to oriented buffer coordinates
(see my email sent just before this one), I think we could still change
the wording for wl_surface.attach specification to talk about surface
coordinates instead of buffer coordinates. Would that solve your
concern?

> > Ideally, when wl_surface::commit was added, wl_surface::attach
> > should've been broken out into wl_surface::attach and
> > wl_surface::set_position.  Oh well.
> >
> 
> Exactly.  That's what my suggestion was trying to fix (at least in the
> transformed surface case).

Would that really have made any difference? Clients would still be able
to use the x,y regardless, and we need to specify how they interact with
wl_surface_scaler.

> > That being said, we can't ignore the x and y from attach, because
> > that _moves the window_ on screen (think resizing from top left),
> > whereas this is all about how we map the contents of a buffer into
> > that window
> > - totally unrelated to moving.
> >
> 
> Yes, I realize that the x and y from attach are used for moving or
> scaling from top or left.  My suggestion was to replace the x and y
> in attach with an x and y in set_dest_rect.  We wouldn't be losing
> functionality, just moving it in a certain case.  This way surface
> coordinates are kept with surfaces and buffer coordinates are kept
> with buffers.
> 
> In this case, the defined behavior would be that if you create a
> scaler for a surface, the x and y in attach are disabled and
> set_dest_rect takes over.  This way older clients can just use attach
> like they used to and clients that use surface scalers use the
> destination rectangle.  It's not a perfect fix, but I think it moves
> in the right direction.  And, for what it's worth, it doesn't make
> things significantly more complicated because anything that's going
> to scale from the top-left will have to mess with both the surface
> and the scaler anyway.

Yes, that would be one option, and it would support the "I am forcing
that other component's surface size" better, since nothing in
wl_surface would affect what is set in wl_surface_scaler anymore.
Except the wl_buffer size, but if we *really* want to go that way, we
could say that areas outside of a buffer are transparent.
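
For what it's worth, here is how I picture that option from the
client side; wl_surface_scaler and both requests are of course
hypothetical at this point, declared here only so the sketch is
complete:

#include <wayland-client.h>

struct wl_surface_scaler;   /* hypothetical */
void wl_surface_scaler_set_source_rect(struct wl_surface_scaler *scaler,
				       int32_t x, int32_t y,
				       int32_t w, int32_t h);
void wl_surface_scaler_set_dest_rect(struct wl_surface_scaler *scaler,
				     int32_t x, int32_t y,
				     int32_t w, int32_t h);

/* Resize from the top-left: the window edge moves, so the x,y that
 * attach used to carry migrates to the destination rect. */
static void
resize_from_top_left(struct wl_surface *surface,
		     struct wl_surface_scaler *scaler,
		     struct wl_buffer *buffer,
		     int32_t buf_w, int32_t buf_h,
		     int32_t dx, int32_t dy)
{
	/* with a scaler attached, attach's x,y would be ignored */
	wl_surface_attach(surface, buffer, 0, 0);
	/* show the whole (possibly transformed) buffer... */
	wl_surface_scaler_set_source_rect(scaler, 0, 0, buf_w, buf_h);
	/* ...and move the contents by dx,dy in surface coordinates,
	 * as attach's x,y would have */
	wl_surface_scaler_set_dest_rect(scaler, dx, dy, buf_w, buf_h);
	/* all of the above is double-buffered state */
	wl_surface_commit(surface);
}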


Thanks,
pq


Re: Input and games.

2013-05-03 Thread Pekka Paalanen
On Fri, 3 May 2013 03:51:33 -0400
Todd Showalter  wrote:

> On Fri, May 3, 2013 at 3:34 AM, Pekka Paalanen 
> wrote:
> 
> > Yup. Whatever we do, we get it wrong for someone, so there needs to
> > be a GUI to fix it. But should that GUI be all games' burden, or
> > servers' burden...
> >
> > Along with the GUI is the burden of implementing the default
> > heuristics, which may require platform specific information.
> 
> I don't know that you need a GUI to fix it as long as you're
> willing to lay down some policy.  We could go with basic heuristics:
> 
> - if a gamepad unplugs from a specific usb port and some other gamepad
> re-plugs in the same port before any other gamepads appear, it's the
> same player
> 
> - if a gamepad unplugs from a specific usb port and then appears in
> another before any other gamepads appear, it's the same player
> 
> - otherwise, you get whatever mad order falls out of the code
> 
> I think that covers the common case; if people start swapping
> multiple controllers around between ports, they might have to re-jack
> things to get the gamepad->player mapping they like, but that's going
> to be rare.

Sure, the heuristics can cover a lot, but there is still the mad case,
and also the initial setup (system started with 3 new gamepads hooked
up), where one may want to configure manually. The GUI is just my
reminder that sometimes it is necessary to configure manually, and
there must be some way to do it when wanted.

Even if it's just "press the home button on one gamepad at a time, to
assign players 1 to N."

> > I can summarize my question to this:
> >
> > Which one is better for the end user: have the device-to-seat
> > assignment heuristics and GUI in the server, and the seat-to-player
> > mapping GUI in every game; or have it all in every game?
> 
> Heuristics mean less work for the player and behaviour the player
> can learn to anticipate.  I say go with that.  I think the moment you
> present people with a gui plugboard and ask them to patch-cable
> controllers to player IDs, you're in a bad place.
> 
> I could see it being an advanced option that a savvy player could
> bring up to fix things without rejacking the hardware, but the less
> technically savvy are going to have a far easier time just physically
> unplugging and replugging gamepads than they are figuring out a GUI
> they've never (or rarely) seen before.

Well, yes. But the question was not whether we should have heuristics
or a GUI. The question is, do we want the heuristics *and* the GUI in
the server or the games? The GUI is a fallback, indeed, for those who
want it, and so is also the wl_seat-player mapping setup in a game.

If we do the heuristics in the server, there is very little we have to
do in the protocol for it. Maybe just allow human-readable
names for wl_seats. The "press home button to assign players" would be
easy to implement. The drawback is that the server's player 1 might not
be the game's player 1, so we need some thought to make them match.

If we do the heuristics in the games, we have to think about what
meta data of the gamepads we need to transmit. You said something about
a hash of some things before. If we have just a single hash, we cannot
implement the heuristics you described above, so it will need some
thought. Also, if we want to drive things like player id lights in
gamepads, that needs to be considered in the protocol.

Maybe there could be some scheme, where we would not need to have the
wl_seat<->player mapping configurable in games after all, if one goes
with server side heuristics. There are also the things Daniel wrote
about, which link directly to what we can do.


Thanks,
pq


Re: [PATCH] Add initial color management framework code

2013-05-03 Thread Pekka Paalanen
On Thu,  2 May 2013 15:16:09 +0100
Richard Hughes  wrote:

> ICC profiles can now be specified in weston.ini for each output, or a CMS
> implementation can optionally loaded from a pluggable module.
> ---
>  configure.ac         |   7 ++
>  src/Makefile.am      |  13 +++-
>  src/cms-static.c     | 211 +++
>  src/cms.c            | 143 ++
>  src/cms.h            |  65
>  src/compositor-drm.c |   2 +
>  src/compositor.c     |   2 +
>  src/compositor.h     |  11 +++
>  weston.ini           |   3 +-
>  9 files changed, 454 insertions(+), 3 deletions(-)
>  create mode 100644 src/cms-static.c
>  create mode 100644 src/cms.c
>  create mode 100644 src/cms.h
> 

> diff --git a/src/cms.c b/src/cms.c
> new file mode 100644
> index 000..588fb7e
> --- /dev/null
> +++ b/src/cms.c
> @@ -0,0 +1,143 @@
> +/*
> + * Copyright © 2013 Richard Hughes
> + *
> + * Permission to use, copy, modify, distribute, and sell this software and
> + * its documentation for any purpose is hereby granted without fee, provided
> + * that the above copyright notice appear in all copies and that both that
> + * copyright notice and this permission notice appear in supporting
> + * documentation, and that the name of the copyright holders not be used in
> + * advertising or publicity pertaining to distribution of the software
> + * without specific, written prior permission.  The copyright holders make
> + * no representations about the suitability of this software for any
> + * purpose.  It is provided "as is" without express or implied warranty.
> + *
> + * THE COPYRIGHT HOLDERS DISCLAIM ALL WARRANTIES WITH REGARD TO THIS
> + * SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
> + * FITNESS, IN NO EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY
> + * SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER
> + * RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF
> + * CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
> + * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
> + */
> +
> +#ifdef HAVE_CONFIG_H
> +#include 
> +#endif
> +
> +#include 
> +#include 
> +#include 
> +
> +#ifdef HAVE_LCMS
> +#include 
> +#endif
> +
> +#include "compositor.h"
> +#include "cms.h"
> +
> +static void
> +weston_cms_gamma_clear(struct weston_output *o)
> +{
> + int i;
> + uint16_t *red;
> +
> + if (!o->set_gamma)
> + return;
> +
> + red = calloc(sizeof(uint16_t), o->gamma_size);
> + for (i = 0; i < o->gamma_size; i++)
> + red[i] = (uint32_t) 0xffff * (uint32_t) i / (uint32_t) (o->gamma_size - 1);
> + o->set_gamma(o, o->gamma_size, red, red, red);
> + free(red);
> +}
> +
> +static void
> +weston_cms_gamma_update(struct weston_output *o)
> +{
> +#ifdef HAVE_LCMS
> + cmsFloat32Number in;
> + const cmsToneCurve **vcgt;
> + int i;
> + int size;
> + uint16_t *red = NULL;
> + uint16_t *green = NULL;
> + uint16_t *blue = NULL;
> +
> + if (!o->set_gamma)
> + return;
> + if (!o->color_profile) {
> + weston_cms_gamma_clear(o);
> + return;
> + }
> +
> + weston_log("Using ICC profile %s\n", o->color_profile->filename);
> + vcgt = cmsReadTag (o->color_profile->lcms_handle, cmsSigVcgtTag);
> + if (vcgt == NULL || vcgt[0] == NULL) {
> + weston_cms_gamma_clear(o);
> + return;
> + }
> +
> + size = o->gamma_size;
> + red = calloc(sizeof(uint16_t), size);
> + green = calloc(sizeof(uint16_t), size);
> + blue = calloc(sizeof(uint16_t), size);
> + for (i = 0; i < size; i++) {
> + in = (cmsFloat32Number) i / (cmsFloat32Number) (size - 1);
> + red[i] = cmsEvalToneCurveFloat(vcgt[0], in) * (double) 0xffff;
> + green[i] = cmsEvalToneCurveFloat(vcgt[1], in) * (double) 0xffff;
> + blue[i] = cmsEvalToneCurveFloat(vcgt[2], in) * (double) 0xffff;
> + }
> + o->set_gamma(o, size, red, red, red);

Three times red? :-)


> + free(red);
> + free(green);
> + free(blue);
> +#endif
> +}
> +
> +WL_EXPORT void
> +weston_cms_set_color_profile(struct weston_output *o,
> +  struct weston_color_profile *p)
> +{
> + if (o->color_profile == p)
> + return;
> + if (o->color_profile)
> + weston_cms_destroy_profile(o->color_profile);
> + o->color_profile = p;
> + weston_cms_gamma_update(o);
> +}
> +
> +WL_EXPORT void
> +weston_cms_destroy_profile(struct weston_color_profile *p)
> +{
> + if (!p)
> + return;
> +#ifdef HAVE_LCMS
> + cmsCloseProfile(p->lcms_handle);
> +#endif
> + free(p->filename);
> + free(p);
> +}
> +
> +WL_EXPORT struct weston_color_profile *
> +weston_cms_create_profile(const char *filename,
> +   void *lcms_profile)
> +{
> + struct weston_color_profil

Re: [PATCH 0/5] Improve text protocol

2013-05-03 Thread Daniel Stone
Hi,

On 2 May 2013 20:56, Kristian Høgsberg  wrote:
> On Tue, Apr 16, 2013 at 06:19:47PM -0700, Bill Spitzak wrote:
>> I would remember the positions of styling as bytes. However the
>> renderer can render as though they are moved left to the first break
>> between glyphs (ie it will preedit-highlight the character the first
>> byte is in, and if the preedit region ends in the middle of a glyph
>> then that glyph will not be preedited). Again this problem needs to
>> be solved for combining characters anyway so this is not any more
>> difficult.
>>
>> In all cases the client can potentially detect that the input method
>> is screwing up, and perhaps report this as a warning message.
>
> I think consensus is that we leave the offsets as bytes.  I agree with
> that, considering that: 1) it shouldn't happen, 2) when it does, the
> toolkit will have to deal with it.

The other unintended consequence of character positions rather than
bytes is that you end up with the D-Bus approach of spending half your
life just validating that the strings you're passing around are valid
UTF-8.  No point doing that in the compositor really.
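
As a tiny standalone illustration (the string and offsets are made
up), byte offsets can be forwarded untouched, while character offsets
would force a UTF-8 walk at every hop:

#include <stdio.h>

int
main(void)
{
	/* "héllo": the é is two bytes in UTF-8 (0xc3 0xa9) */
	const char *preedit = "h\xc3\xa9llo";

	/* a styling range in *bytes*, as the protocol would carry it:
	 * highlight the é -> start 1, length 2; the compositor can
	 * pass these through without validating the string */
	int start = 1, length = 2;

	printf("highlight: %.*s\n", length, preedit + start);

	/* with character offsets instead, every hop would have to
	 * walk and validate the UTF-8 just to convert to bytes */
	return 0;
}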

Cheers,
Daniel


Re: [RFC] wl_surface scale and crop protocol extension

2013-05-03 Thread Daniel Stone
Hi,

On 2 May 2013 20:33, Bill Spitzak  wrote:
> Daniel Stone wrote:
>>> I also think all of wl_shell should be a core requirement.
>>
>> Not all compositors are user sessions.  Think about nested compositors
>> for browsers, or capture, or also very stripped-down usecases where
>> they really don't want to have to deal with this kind of thing.
>
> For simple displays, like one that is always a single full-screen window,
> the api "works" in that raise requests determine which window is visible,
> and attempts to resize or turn off full-screen are ignored.

Sure, it's a thing you can do, but it's a lot more typing and
intrinsically discourages people from making nested or even just
simple compositors.  We want to lower the barrier to entry here,
rather than essentially just forcing everyone to use Weston always.

> Though nested compositors are an interesting idea, it is clear from how the
> subsurfaces are being designed that nested compositors are not really felt
> to be useful. The biggest problem I see is that the nested
> compositor probably loses any high-speed optimizations that only the real
> compositor can do. I really suspect the only use of a nested compositor will
> be to test a wayland server inside a window on another one.

Subsurfaces are being designed for in-process cases, such as a media
player inside a browser.  Foreign surfaces are intended for
cross-process buffer/surface sharing, but I think nested compositors
is actually better for the usecase I had in mind, which is WebKit2.
So it would be nice if they weren't unnecessarily complex to
implement.

>> I don't think it's an unreasonable requirement, and really like the
>> design it has at the moment, where attaching the scaler object
>> suppresses the resize-on-attach behaviour, and destroying it reverts
>> to previous.  It's pretty elegant, and totally in the vein of
>> wl_shell_surface stacking on top of wl_surface.  I don't see how
>> inventing more elaborate extension mechanisms on top of our existing
>> extension mechanism helps anything.
>
> No, what I propose is to figure out how to document it so that the wayland
> api documentation is easier to follow. Make the rules for these sub-api
> objects match. Then I would expect to see the set-scaling request listed
> under the wl_surface api. There would be a small indication that this api
> comes through the wl_scaler sub-api, and the rest of the details would be
> defined in a single section of the wayland documentation.

I don't think having 'sub-extensions' makes anything more clear: in
fact, exactly the opposite.

Cheers,
Daniel


Re: [PATCH] Add initial color management framework code

2013-05-03 Thread Richard Hughes
On 3 May 2013 11:53, Pekka Paalanen  wrote:
>> + o->set_gamma(o, size, red, red, red);
> Three times red? :-)

Good catch, thanks.

>> + p = calloc(sizeof(struct weston_color_profile), 1);
> Calloc arguments are swapped.

All fixed.

>> + weston_cms_destroy_profile(output->color_profile);
> Was this call supposed to be replaced by an output destroy signal
> listener? IIRC there were some such comments before.

I think krh meant the cms plugin should listen for the devices being
created and destroyed. I think the color_profile structure is small
enough not to need the complexity of a callback, and I'm not sure we
want to teach profiles about devices if you see what I mean. It's
really not much different to calling any of the other functions that
deallocate per-output state. The CMS plugin implements all of the
signals we talked about.

New patch attached. Thanks for the review.

Richard


0001-Add-initial-color-management-framework-code.patch
Description: Binary data


[PATCH weston] compositor-drm: Differentiate between two similar error paths

2013-05-03 Thread Rob Bradford
From: Rob Bradford 

---
 src/compositor-drm.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/compositor-drm.c b/src/compositor-drm.c
index f39096e..1f55271 100644
--- a/src/compositor-drm.c
+++ b/src/compositor-drm.c
@@ -598,7 +598,7 @@ drm_output_repaint(struct weston_output *output_base,
if (drmModePageFlip(compositor->drm.fd, output->crtc_id,
output->next->fb_id,
DRM_MODE_PAGE_FLIP_EVENT, output) < 0) {
-   weston_log("queueing pageflip failed: %m\n");
+   weston_log("queueing pageflip failed (repaint): %m\n");
return;
}
 
@@ -669,7 +669,7 @@ drm_output_start_repaint_loop(struct weston_output *output_base)
 
if (drmModePageFlip(compositor->drm.fd, output->crtc_id, fb_id,
DRM_MODE_PAGE_FLIP_EVENT, output) < 0) {
-   weston_log("queueing pageflip failed: %m\n");
+   weston_log("queueing pageflip failed (start repaint loop): 
%m\n");
return;
}
 }
-- 
1.8.1.4



Re: Input and games.

2013-05-03 Thread Todd Showalter
On Fri, May 3, 2013 at 6:42 AM, Pekka Paalanen  wrote:

> Sure, the heuristics can cover a lot, but there is still the mad case,
> and also the initial setup (system started with 3 new gamepads hooked
> up), where one may want to configure manually. The GUI is just my
> reminder that sometimes it is necessary to configure manually, and
> there must be some way to do it when wanted.
>
> Even if it's just "press the home button on one gamepad at a time, to
> assign players 1 to N."

If there's going to be a gamepad setup gui, my preference would be
for it to be a system thing rather than a game thing.  Partly because
I'm lazy/cheap and don't want to have to do themed versions of it for
every game I do, but also partly because otherwise it's something else
that someone can half-ass or get wrong.

> Well, yes. But the question was not whether we should have heuristics
> or a GUI. The question is, do we want the heuristics *and* the GUI in
> the server or the games? The GUI is a fallback, indeed, for those who
> want it, and so is also the wl_seat-player mapping setup in a game.
>
> If we do the heuristics in the server, there is very little we have to
> do in the protocol for it. Maybe just allow to have human-readable
> names for wl_seats. The "press home button to assign players" would be
> easy to implement. The drawback is that the server's player 1 might not
> be the game's player 1, so we need some thought to make them match.
>
> If we do the heuristics in the games, we have to think about what
> meta data of the gamepads we need to transmit. You said something about
> a hash of some things before. If we have just a single hash, we cannot
> implement the heuristics you described above, so it will need some
> thought. Also, if we want to drive things like player id lights in
> gamepads, that needs to be considered in the protocol.
>
> Maybe there could be some scheme, where we would not need to have the
> wl_seat<->player mapping configurable in games after all, if one goes
> with server side heuristics. There are also the things Daniel wrote
> about, which link directly to what we can do.

I vote do it on the server, however it winds up being done.  It
means the client is isolated from a whole bunch of things it would
otherwise need to explicitly support, and it means that things happen
consistently between games.  It also means that any bugs in the
process will be addressable without shipping a new build of the game.

 Todd.

--
 Todd Showalter, President,
 Electron Jump Games, Inc.


Re: Input and games.

2013-05-03 Thread Pekka Paalanen
On Fri, 3 May 2013 09:12:20 -0400
Todd Showalter  wrote:

> On Fri, May 3, 2013 at 6:42 AM, Pekka Paalanen  wrote:
> 
> > Sure, the heuristics can cover a lot, but there is still the mad case,
> > and also the initial setup (system started with 3 new gamepads hooked
> > up), where one may want to configure manually. The GUI is just my
> > reminder that sometimes it is necessary to configure manually, and
> > there must be some way to do it when wanted.
> >
> > Even if it's just "press the home button on one gamepad at a time, to
> > assign players 1 to N."
> 
> If there's going to be a gamepad setup gui, my preference would be
> for it to be a system thing rather than a game thing.  Partly because
> I'm lazy/cheap and don't want to have to do themed versions of it for
> every game I do, but also partly because otherwise it's something else
> that someone can half-ass or get wrong.
> 
> > Well, yes. But the question was not whether we should have heuristics
> > or a GUI. The question is, do we want the heuristics *and* the GUI in
> > the server or the games? The GUI is a fallback, indeed, for those who
> > want it, and so is also the wl_seat-player mapping setup in a game.
> >
> > If we do the heuristics in the server, there is very little we have to
> > do in the protocol for it. Maybe just allow to have human-readable
> > names for wl_seats. The "press home button to assign players" would be
> > easy to implement. The drawback is that the server's player 1 might not
> > be the game's player 1, so we need some thought to make them match.
> >
> > If we do the heuristics in the games, we have to think about what
> > meta data of the gamepads we need to transmit. You said something about
> > a hash of some things before. If we have just a single hash, we cannot
> > implement the heuristics you described above, so it will need some
> > thought. Also, if we want to drive things like player id lights in
> > gamepads, that needs to be considered in the protocol.
> >
> > Maybe there could be some scheme, where we would not need to have the
> > wl_seat<->player mapping configurable in games after all, if one goes
> > with server side heuristics. There are also the things Daniel wrote
> > about, which link directly to what we can do.
> 
> I vote do it on the server, however it winds up being done.  It
> means the client is isolated from a whole bunch of things it would
> otherwise need to explicitly support, and it means that things happen
> consistently between games.  It also means that any bugs in the
> process will be addressable without shipping a new build of the game.

Cool, I agree with that. :-)


Thanks,
pq


Re: Input and games.

2013-05-03 Thread Daniel Stone
Hi,

On 20 April 2013 22:13, Nick Kisialiou  wrote:
> Generic device input may be too complicated to put into the Wayland protocol.
> For example, take Razer Hydra controller:
> http://www.engadget.com/2011/06/08/razer-totes-hydra-sticks-and-6400dpi-dual-sensor-mice-to-e3-2011/
>
> There are 2 USB connected controllers for each hand, each with 6 DOF
> information for 3D position and 3D rotation information. I programmed it for
> a 3D environment rather than games. Each controller sends you a quaternion
> to extract the data. On top of it, the output is noisy, so you'd want to add
> filters to integrate the noise out.
>
> The last thing I'd want is to have a middleman between the USB port and my
> processing code that messes around with rotation matrices and introduces
> delays. I think it is reasonable to limit the protocol to mice like devices
> only. As long as the protocol allows 2 mice simultaneously in the system
> (which they do), IMHO, the rest of the processing is better placed within
> your own code.

I think with 6DoF-type devices, we really shouldn't try to do anything
clever with them, and pretty much just pass evdev input through.  The
only reason we created wl_pointer and wl_keyboard as they are is that
the compositor needs to interpret and intercept them, and clients
would all be doing more or less the same interpretation too.  For
complex devices where it's of no benefit to have the compositor
rewrite the events, I think we just shouldn't even try.

If the gamepad proposal was any more complex than it is now, I'd lean
towards just shuttling the raw data to clients rather than having our
own protocol.  But the proposal I've seen is pretty nice and it
definitely helps our gaming story (which is really quite poor now), so
that helps.

The one thing I think it's missing so far is physical controller gyro
measurements, e.g. for new PS3/PS4 controllers and the Wiimote.

Cheers,
Daniel

> On Sat, Apr 20, 2013 at 9:38 AM, Todd Showalter 
> wrote:
>>
>> On Sat, Apr 20, 2013 at 12:20 PM, Daniel  wrote:
>>
>> > This is useful for desktop software too. I'm thinking of Stellarium or
>> > Google Earth, where moving the mouse is expected to move the
>> > environment, not the pointer itself.
>>
>> "Games" is really perhaps shorthand here; there are a lot of tools
>> and so forth that have similar behavior and operating requirements to
>> games, but aren't strictly games per se.  If you have an architectural
>> walkthrough program that lets you navigate a building and make
>> alterations, that's not really something you'd call a game, but it is
>> operating under many of the same constraints.  It's more obvious in
>> things using 3D, but even the 2D side can use it in places.
>>
>> I could easily see (for example) wanting to be able to do drag &
>> drop within a window on a canvas larger than the window can display;
>> say it's something like dia or visio or the like.  I drag an icon from
>> the sidebar into the canvas, and if it gets to the edge of the canvas
>> window the canvas scrolls and the dragged object (and the pointer)
>> parks at the window edge.
>>
>> It's useful behavior.  I can definitely see why adding it to the
>> protocol makes things more annoying, but I've a strong suspicion it's
>> one of those things that if you leave it out you'll find that down the
>> road there's a lot of pressure to find a way to hack it in.
>>
>> Todd.
>>
>> --
>>  Todd Showalter, President,
>>  Electron Jump Games, Inc.


Re: Input and games.

2013-05-03 Thread Daniel Stone
Hi,

On 19 April 2013 10:18, Pekka Paalanen  wrote:
> Keyboards already have extensive mapping capabilities. A Wayland server
> sends keycodes (I forget in which space exactly) and a keymap, and
> clients feed the keymap and keycodes into libxkbcommon, which
> translates them into something actually useful. Maybe something similar
> could be invented for game controllers? But yes, this is off-topic for
> Wayland, apart from the protocol of what event codes and other data to
> pass.

It's worth noting that the only reason libxkbcommon exists is because
there's just no way to express it generically.  People want to have
Cyrillic and US keymaps active where Ctrl + W triggers 'close window'
regardless of which keymap's active.  But if they have Cyrillic, US
and Icelandic Dvorak active, they want Ctrl + W to trigger for Ctrl +
(wherever W is in Icelandic Dvorak) when in Icelandic Dvorak, and Ctrl
+ W when it's in Cyrillic and US.  And so on, and so forth.

If it was possible to just use wl_text all the way, I would never have
written that bastard library.  But you can't win them all.

Cheers,
Daniel

>> Event Driven vs. Polling
>>
>> Modern gui applications tend to be event-driven, which makes
>> sense; most modern desktop applications spend most of their time doing
>> nothing and waiting for the user to generate input.  Games are
>> different, in that they tend to be simulation-based, and things are
>> happening regardless of whether the player is providing input.
>>
>> In most games, you have to poll input between simulation ticks.
>> If you accept and process an input event in the middle of a simulation
>> tick, your simulation will likely be internally inconsistent.  Input
>> in games typically moves or changes in-game objects, and if input
>> affects an object mid-update, part of the simulation tick will have
>> been calculated based on the old state of the object, and the rest
>> will be based on the new state.
>>
>> To deal with this on event-driven systems, games must either
>> directly poll the input system, or else accumulate events and process
>> them between simulation ticks.  Either works, but being able to poll
>> means the game needs to do less work.
>
> The Wayland protocol is event driven. Polling does not make sense, since it
> would mean a synchronous round-trip to the server, which for something
> like this is just far too expensive, and easily (IMHO) worked around.
>
> So, you have to maintain input state yourself, or by a library you use.
> It could even be off-loaded to another thread.
>
> There is also a huge advantage over polling: in an event driven design,
> it is impossible to miss very fast, transient actions, which polling
> would never notice. And whether you need to know if such a transient
> happened, or how many times it happened, or how long each
> transient took between two game ticks, is all up to you and available.
>
> I once heard about some hardcore gamer complaining that in some
> systems or under some conditions, probably related to the
> ridiculous framerates gamers usually demand, the button sequence he hits
> in a fraction of a second is not registered properly, and I was
> wondering how it is possible for it not to register properly. Now I
> realised a possible cause: polling.
>
> Event driven is a little more work for the "simple" games, but it gives
> you guarantees. Would you not agree?
>
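
For what it's worth, the accumulation pattern described above is only
a handful of lines; every name in this sketch (input_state, on_key,
poll_input) is made up for illustration, and it assumes a
single-threaded event loop (off-loading to a thread would need a
lock around the pending state):

#include <stdbool.h>
#include <string.h>

#define KEY_MAX 256

struct input_state {
	bool down[KEY_MAX];       /* current key state */
	int  presses[KEY_MAX];    /* edges since last tick; a fast
	                           * tap can never be lost */
	double dx, dy;            /* accumulated pointer delta */
};

static struct input_state pending;   /* written by event handlers */

/* called from the wl_keyboard key event handler */
static void
on_key(unsigned key, bool is_down)
{
	if (key >= KEY_MAX)
		return;
	pending.down[key] = is_down;
	if (is_down)
		pending.presses[key]++;
}

/* called once per simulation tick, between updates, so the state
 * cannot change mid-tick */
static void
poll_input(struct input_state *snapshot)
{
	*snapshot = pending;
	/* reset the per-tick accumulators, keep the held state */
	memset(pending.presses, 0, sizeof pending.presses);
	pending.dx = pending.dy = 0.0;
}
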
>> Input Sources & Use
>>
>> Sometimes games want desktop-style input (clicking buttons,
>> entering a name with the keyboard), but often games want to treat all
>> the available input data as either digital values (mouse buttons,
>> keyboard keys, gamepad buttons...), constrained-axis "analog" (gamepad
>> triggers, joysticks) or unconstrained axis "analog" (mouse/trackball).
>>  Touch input is a bit of a special case, since it's nearly without
>> context.
>
> Is this referring to the problem of "oops, my mouse left the Quake
> window when I tried to turn"? Or maybe more of "oops, the pointer hit
> the monitor edge and I cannot turn any more?" I.e. absolute vs.
> relative input events?
>
> There is a relative motion events proposal for mice:
> http://lists.freedesktop.org/archives/wayland-devel/2013-February/007635.html
>
> Clients cannot warp the pointer, so there is no way to hack around it.
> We need to explicitly support it.
>
>> Games usually care about all of:
>>
>> - the state of buttons/keys -- whether they are currently down or up
>> -- think WASD here
>> - edge detection of buttons/keys -- trigger, release and state change
>> - the value of each input axis -- joystick deflection, screen position
>> of the cursor, etc
>> - the delta of each input axis
>>
>> From what I've seen, SDL does not give us the button/key state
>> without building a layer on top of it; we only get edge detection.
>> Likewise, as far as I understand nothing does deltas.
>
> Ah yes, deltas are the relative motion events, see above.
>
>> Input Capture
>>
>> It would be

Re: Input and games.

2013-05-03 Thread Daniel Stone
Hi,

On 21 April 2013 06:28, Todd Showalter  wrote:
> On Fri, Apr 19, 2013 at 7:08 PM, Bill Spitzak  wrote:
>> I think this is going to require pointer warping. At first I thought it
>> could be done by hiding the pointer and faking its position, but that would
>> not stop the invisible pointer from moving out of the window and becoming
>> visible, or moving into a hot-spot and triggering an unexpected effect.
>
> I think edge resistance/edge snapping really wants pointer warping as 
> well.

It's really difficult to achieve a nicely responsive and fluid UI
(i.e. doing this without jumps) when you're just warping the pointer.
To be honest, I'd prefer to see an interface where, upon a click, you
could set an acceleration (deceleration) factor which was valid for
the duration of that click/drag only.  We already have drag & drop
working kind of like this, so it's totally possible to do for relative
(i.e. wl_pointer) devices.  The only two usecases I've seen come up
for pointer warping are this and pointer confinement, which I'd rather
do specifically than through warping - which is a massive minefield I
really, really want to avoid.

But it's also a totally orthogonal discussion. :)

Cheers,
Daniel


Re: Input and games.

2013-05-03 Thread Daniel Stone
Hi,

On 29 April 2013 18:44, Bill Spitzak  wrote:
> Has anybody thought about pens (ie wacom tablets)? These have 5 degrees of
> freedom (most cannot distinguish rotation about the long axis of the pen).
> There are also spaceballs with full 6 degrees of freedom.

As Todd said, these really need to be their own interface.  From a
purely abstract point of view, they kind of look the same, but if you
have one interface to represent everything, that interface ends up
looking a lot like XI2, which no-one uses because it's a monumental
pain in the behind.

The biggest blocker though, is that the compositor addresses gamepads
and tablets completely differently.  Tablets have particular focus,
and their co-ordinates need to be interpreted and scaled to
surface-local, whereas gamepads are going to have co-ordinates in a
totally different space, which is sometimes angular rather than
positional in a 2D space.

So, again, a tablet interface is a very useful thing to have - and
it's a discussion we absolutely need to have at some point - but it
has no bearing at all on this one.  A good starting point would be to
look at the X.Org Wacom driver and its capability set.

> Another idea was that buttons had the same api as analog controls, it's just
> that they only reported 0 or +1, never any fractions (and since it sounds
> like some controls have pressure-sensitive buttons this may make it easier
> to use the same code on different controls).

I think this falls into the same over-generalising trap, where you
look like a smart alec but don't produce anything like a usable
API.

Cheers,
Daniel


Re: Input and games.

2013-05-03 Thread Daniel Stone
Hi,

On 3 May 2013 08:17, Pekka Paalanen  wrote:
> On Thu, 2 May 2013 19:28:41 +0100
> Daniel Stone  wrote:
>> There's one crucial difference though, and one that's going to come up
>> when we address graphics tablets / digitisers too.  wl_pointer works
>> as a single interface because no matter how many mice are present, you
>> can aggregate them together and come up with a sensible result: they
>> all move the sprite to one location.  wl_touch fudges around this by
>> essentially asserting that not only will you generally only have one
>> direct touchscreen, but it provides for multiple touches, so you can
>> pretend that one touch each on multiple screens is multiple touches
>> on a single screen.
>
> Right. Could we just say that each such non-aggregatable device must be
> put into a wl_seat that does not already have such a device?
> Or make that an implementors' guideline rather than a hard requirement
> in the protocol spec.

*shrug*, it really depends.  If we're going to say that gamepads are
associated with focus, then they have to go in a _specific_ wl_seat:
the focus is per-seat, so if we're saying that clicking on a window
with your mouse to focus it (or Alt-Tabbing to it) also redirects
gamepad events there, then the gamepad needs to be part of _that_ seat
which changed the focus.  Remember that we can have multiple foci per
the protocol (though the UI for that gets very interesting very
quickly).

If they're unaffected by the focus - which they would be if they're
just going into random new wl_seats - then they shouldn't be in
wl_seat just because it's the container we have for input devices
right now, they should have their own interfaces.  Which really means
wl_gamepad_manager which, when bound to, advertises new_id
wl_gamepads.

tl;dr: wl_seat has a very specific meaning of a set of devices with
one focus, please don't abuse it.

>> The gamepad interaction doesn't have this luxury, and neither do
>> tablets.  I don't think splitting them out to separate seats is the
>> right idea though: what if (incoming stupid hypothetical alert) you
>> had four people on a single system, each with their own keyboards and
>> gamepads.  Kind of like consoles are today, really.  Ideally, you'd
>> want an association between the keyboards and gamepads, which would be
>> impossible if every gamepad had one separate wl_seat whose sole job
>> was to nest it.
>
> So... what's wrong in putting each keyboard into the wl_seat where it
> belongs, along with the gamepad?

In that case, yes, we would have wl_seats with one wl_keyboard and
multiple wl_gamepads.

>> I think it'd be better to, instead of wl_seat::get_gamepad returning a
>> single new_id wl_gamepad, as wl_pointer/etc do it today, have
>> wl_seat::get_gamepads, which would send one wl_seat::gamepad event
>> with a new_id wl_gamepad, for every gamepad which was there or
>> subsequently added.  That way we keep the seat association, but can
>> still deal with every gamepad individually.
>
> It would be left for the client to decide which gamepad it wants from
> which wl_seat, right?
>
> Do we want to force all clients to choose every non-aggregatable device
> this way?

No idea. :)

> Essentially, that would mean that wl_seats are just for the traditional
> keyboard & mouse (and touchscreen so far) association, and then
> everything else would be left for each client to assign to different
> wl_seats on their own. This seems strange. Why do we need a wl_seat
> then, why not do the same with keyboards and mice?
>
> Oh right, focus. You want to be able to control keyboard focus with a
> pointer. Why is a gamepad focus different? Would all gamepads follow
> the keyboard focus? If there are several wl_seats with kbd & ptr, which
> keyboard focus do they follow? What if the same gamepad is left active
> in more than one wl_seat? What if there is no keyboard or pointer, e.g.
> you had only a touchscreen and two gamepads (say, IVI)?
>
> And then replace every "gamepad" with "digitizer", and all other
> non-aggregatable input devices, and also all raw input devices via
> evdev fd passing. The fd passing I believe has similar problems: who
> gets the events, which wl_seat do they follow.
>
> This is a new situation, and so many open questions... I just continued
> on the existing pattern.

Yeah, it really all depends on these questions.  But intuitively, I'd
say that gamepads should follow a seat's focus, which means expanding
wl_seat to be able to advertise multiple gamepads.  Even on touch, we
still have wl_touch as part of wl_seat, driving the focus.  And I
don't think a gamepad could ever be a part of multiple seats; perhaps
it could be shifted between seats if necessary, but this is a problem
we already have with keyboard, pointer and touch today.  And you don't
need to deal with that in the protocol: just have the compositor
destroy the device and create a new one in the new seat.
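
In client terms, I'd picture it something like this -- every
wl_gamepad_* name below is hypothetical, there is no such interface
yet:

#include <wayland-client.h>

struct wl_gamepad;                      /* hypothetical */

/* Hypothetical: one event per gamepad present at bind time or
 * hotplugged later, each carrying its own new_id wl_gamepad, so the
 * seat association is kept but every pad stays individually
 * addressable. */
struct wl_seat_gamepad_listener {
	void (*gamepad)(void *data, struct wl_seat *seat,
			struct wl_gamepad *pad);
};

static void
handle_gamepad(void *data, struct wl_seat *seat, struct wl_gamepad *pad)
{
	/* the pad follows this seat's focus; on seat reassignment the
	 * compositor would destroy this object and create a new one
	 * in the new seat, as with keyboards today */
}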

Cheers,
Daniel

Re: Input and games.

2013-05-03 Thread Todd Showalter
On Fri, May 3, 2013 at 12:06 PM, Daniel Stone  wrote:

> I think with 6DoF-type devices, we really shouldn't try to do anything
> clever with them, and pretty much just pass evdev input through.  The
> only reason we created wl_pointer and wl_keyboard as they are is that
> the compositor needs to interpret and intercept them, and clients
> would all be doing more or less the same interpretation too.  For
> complex devices where it's of no benefit to have the compositor
> rewrite the events, I think we just shouldn't even try.
>
> If the gamepad proposal was any more complex than it is now, I'd lean
> towards just shuttling the raw data to clients rather than having our
> own protocol.  But the proposal I've seen is pretty nice and it
> definitely helps our gaming story (which is really quite poor now), so
> that helps.
>
> The one thing I think it's missing so far is physical controller gyro
> measurements, e.g. for new PS3/PS4 controllers and the Wiimote.

In my experience, the accelerometers on these devices are all over
the map in terms of precision and accuracy; if you have a MotionPlus
and a Nunchuck controller plugged into your Wiimote, IIRC you have
three different 3-axis accelerometers active, each with a different
resolution.  The weakest is in the Nunchuck, the Wiimote one is
better, and the MotionPlus one is better still.

Between manufacturers, I'm not even sure the accelerometer
coordinate system has the same *handedness*, let alone what the
reference orientation is.

In my experience, the 3vec coming out of the accelerometer is
pretty close to normal, at least in the cases I've seen.  Not always,
obviously; if the player is swinging the thing around, it may be
wildly out of normal, but if you put the thing on the table and leave
it, then (ignoring the wicked instability the accelerometers seem to
have) it gives you something approaching a unit vector.  Which means
(once again) that the 24.8 fixed format really isn't suitable.
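
To put a number on that, here's a quick check using the real
wl_fixed_t helpers from wayland-util.h:

#include <stdio.h>
#include <wayland-util.h>

int main(void)
{
        /* wl_fixed_t is signed 24.8: its resolution is 1/256 (~0.0039),
         * which is coarse for a component that lives in [-1, 1] */
        double g = 0.70710678;  /* unit vector component at 45 degrees */
        wl_fixed_t f = wl_fixed_from_double(g);

        printf("%.8f -> %.8f (error %.1e)\n",
               g, wl_fixed_to_double(f), g - wl_fixed_to_double(f));
        return 0;
}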

It also starts to lead to questions about things like the Move
controller, the Kinect, the Leap Motion, the sensors in the Oculus
Rift and that crazy thing from Razer that dumps a torrent of quats at
you.  Unless (as you say) you're going to get into self-describing
protocols, the axe has to come down somewhere.

So, tackling accelerometers as a protocol is a bit of an
interesting balancing act, much harder (or at least, with more
potentially annoying decisions about tradeoffs) than the gamepad one.

  Todd.

--
 Todd Showalter, President,
 Electron Jump Games, Inc.


Re: Input and games.

2013-05-03 Thread Todd Showalter
On Fri, May 3, 2013 at 12:12 PM, Daniel Stone  wrote:

>> I think edge resistance/edge snapping really wants pointer warping as 
>> well.
>
> It's really difficult to achieve a nicely responsive and fluid UI
> (i.e. doing this without jumps) when you're just warping the pointer.
> To be honest, I'd prefer to see an interface where, upon a click, you
> could set an acceleration (deceleration) factor which was valid for
> the duration of that click/drag only.  We already have drag & drop
> working kind of like this, so it's totally possible to do for relative
> (i.e. wl_pointer) devices.  The only two usecases I've seen come up
> for pointer warping are this and pointer confinement, which I'd rather
> do specifically than through warping - which is a massive minefield I
> really, really want to avoid.

Decelerate/accelerate would cover all the cases I can think of.
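
Something like this shape, perhaps; the set_motion_factor request is
purely hypothetical:

/* On button press over a scrollbar, ask the compositor to scale
 * pointer motion for the rest of this click/drag (the implicit grab
 * named by 'serial').  wl_pointer_set_motion_factor() is made up. */
static void
pointer_handle_button(void *data, struct wl_pointer *pointer,
                      uint32_t serial, uint32_t time,
                      uint32_t button, uint32_t state)
{
        struct app *app = data; /* app-side state, including hit test */

        if (state == WL_POINTER_BUTTON_STATE_PRESSED &&
            app->pointer_over_scrollbar)
                wl_pointer_set_motion_factor(pointer, serial,
                                             wl_fixed_from_double(0.25));
}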

> But it's also a totally orthogonal discussion. :)

True enough.  :)

  Todd.

--
 Todd Showalter, President,
 Electron Jump Games, Inc.


Re: [RFC] wl_surface scale and crop protocol extension

2013-05-03 Thread Bill Spitzak

Daniel Stone wrote:


> Subsurfaces are being designed for in-process cases, such as a media
> player inside a browser.  Foreign surfaces are intended for
> cross-process buffer/surface sharing, but I think nested compositors
> is actually better for the usecase I had in mind, which is WebKit2.
> So it would be nice if they weren't unnecessarily complex to
> implement.


The final compositor is able to do the fully optimized compositing of 
surfaces into the output frame buffer, including all the work various 
people have done to not composite obscured areas. I fail to see how a 
nested compositor can take advantage of this except by transmitting all 
the individual surfaces to the final compositor.


This means that buffers and buffer allocation have to be transparently 
passed through the intermediate compositors. I don't think this has been 
addressed, and it seems really difficult compared to just reusing the 
existing single connection to the server.


The proposal where the parent task creates subsurfaces and then does 
some setup so the child task can draw them seems far more like what 
WebKit is going to want and use.



> I don't think having 'sub-extensions' makes anything more clear: in
> fact, exactly the opposite.


I'm not describing this right. What I want is to be able to read the 
documentation, and see a list of "all the things you can do to a 
wl_surface".


Currently one of the things you can do to a wl_surface is that you can 
find the wl_shell object and ask it to create a wl_shell_surface object 
given a wl_surface id, and then you can do things to this 
wl_shell_surface object. However this is not listed under wl_surface, 
instead it is listed under two other sections with no links to them 
(there are backward links but that is pretty useless): wl_shell and 
wl_shell_surface. I think this is extremely confusing and was probably 
one of the biggest obstacles I had trying to figure wayland out.


What I would like to see is that under wl_surface you see a section that 
says something like "wl_scaler:". Listed under that are the methods that 
are currently listed under wl_surface_scaler. The title "wl_scaler:" 
implies all this information: the actual calling convention is to find 
the global wl_scaler object, call a method called get_scaler_surface 
with the wl_surface id, and that returns a new object called a 
wl_scaler_surface, and you call the listed methods on *this* object. The 
documentation is now where a user can find it, and a lot of boilerplate 
is elided.
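
As client code, assuming generated bindings for such a protocol (the
set_factor request is a made-up stand-in for whatever methods end up
listed under the heading):

/* The calling convention the "wl_scaler:" heading would document:
 * find the global wl_scaler, ask it for a per-surface object, then
 * call the listed methods on that object. */
static void
scale_surface(struct wl_scaler *scaler, struct wl_surface *surface)
{
        struct wl_scaler_surface *ss;

        ss = wl_scaler_get_scaler_surface(scaler, surface);
        wl_scaler_surface_set_factor(ss, wl_fixed_from_double(2.0));
}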


One thing that would help a lot is a standard for whether, when a 
wl_foo global object is asked to create an object for a wl_bar object, 
the new object is called wl_foo_bar or wl_bar_foo. Currently wl_shell 
and wl_scaler disagree about this. Since wl_shell_surface is in much 
wider use than wl_surface_scaler, I recommend standardizing on putting 
the global object's type first.




Re: Input and games.

2013-05-03 Thread Rick Yorgason
Pekka Paalanen  writes:
> > > Maybe there could be some scheme, where we would not need to have the
> > > wl_seat<->player mapping configurable in games after all, if one goes
> > > with server side heuristics. There are also the things Daniel wrote
> > > about, which link directly to what we can do.
> > 
> > I vote do it on the server, however it winds up being done.  It
> > means the client is isolated from a whole bunch of things it would
> > otherwise need to explicitly support, and it means that things happen
> > consistently between games.  It also means that any bugs in the
> > process will be addressable without shipping a new build of the game.
> 
> Cool, I agree with that. 

In console-land, all three major consoles allow you to forcibly change your
controller numbers by pressing or holding the home button and choosing some
option to reconfigure your controller, so there's certainly good precedent
for it being handled by the OS.

Also worth remembering is that all three major consoles have controller
number LEDs from 1 to 4. It would be nice if we could assume that the
controller indicator LED matched the pad_index whenever possible.

-Rick-



Re: [RFC] wl_surface scale and crop protocol extension

2013-05-03 Thread Daniel Stone
Hi,
Again I fear we're drifting massively off-topic, but here we go ...

On 3 May 2013 18:50, Bill Spitzak  wrote:
> Daniel Stone wrote:
>> Subsurfaces are being designed for in-process cases, such as a media
>> player inside a browser.  Foreign surfaces are intended for
>> cross-process buffer/surface sharing, but I think nested compositors
>> is actually better for the usecase I had in mind, which is WebKit2.
>> So it would be nice if they weren't unnecessarily complex to
>> implement.
>
> The final compositor is able to do the fully optimized compositing of
> surfaces into the output frame buffer, including all the work various people
> have done to not composite obscured areas. I fail to see how a nested
> compositor can take advantage of this except by transmitting all the
> individual surfaces to the final compositor.

If that's the goal, then yes. For some usecases however, doing the
compositing before and sending one large buffer to the compositor is
desirable.

> This means that buffers and buffer allocation have to be transparently
> passed through the intermediate compositors. I don't think this has been
> addressed, and it seems really difficult compared to just reusing the
> existing single connection to the server.

Again, yes, sometimes.  For most stacks, passing the allocation
through is fairly trivial, as the allocation is client-driven anyway:
the client creates the buffer and then just tells the compositor how
to access it.  EGL implementations can already proxy buffer allocation
with zero interference from us, however: they already have a
connection to the parent compositor's display, so they can just pass
all allocation requests they might get, up the chain.

We do need a little bit more API to support this, but nothing too
catastrophic, and nothing more implementationally onerous than the
stream-of-buffers foreign surface proposal demanded.

> The proposal where the parent task creates subsurfaces and then does some
> setup so the child task can draw them seems far more like what WebKit is
> going to want and use.

I really don't think it is.

>> I don't think having 'sub-extensions' makes anything more clear: in
>> fact, exactly the opposite.
>
> I'm not describing this right. What I want is to be able to read the
> documentation, and see a list of "all the things you can do to a
> wl_surface".
>
> Currently one of the things you can do to a wl_surface is that you can find
> the wl_shell object and ask it to create a wl_shell_surface object given a
> wl_surface id, and then you can do things to this wl_shell_surface object.
> However this is not listed under wl_surface, instead it is listed under two
> other sections with no links to them (there are backward links but that is
> pretty useless): wl_shell and wl_shell_surface. I think this is extremely
> confusing and was probably one of the biggest obstacles I had trying to
> figure wayland out.
>
> What I would like to see is that under wl_surface you see a section that
> says something like "wl_scaler:". Listed under that are the methods that are
> currently listed under wl_surface_scaler. The title "wl_scaler:" implies all
> this information: the actual calling convention is to find the global
> wl_scaler object, call a method called get_scaler_surface with the
> wl_surface id, and that returns a new object called a wl_scaler_surface, and
> you call the listed methods on *this* object. The documentation is now where
> a user can find it, and a lot of boilerplate is elided.
>
> One thing that would help a lot is a standard for whether, when a wl_foo
> global object is asked to create an object for a wl_bar object, the new
> object is called wl_foo_bar or wl_bar_foo. Currently wl_shell and
> wl_scaler disagree about this. Since wl_shell_surface is in much wider
> use than wl_surface_scaler, I recommend standardizing on putting the
> global object's type first.

OK, so it pretty much looks like just cosmetic/documentation changes -
none of which I'm against at all.  Obviously we can't break existing
protocol though (such as renaming wl_shell_surface to
wl_surface_shell), but we can improve the documentation.  I'm sure
patches for that would be gratefully accepted.

Cheers,
Daniel


Re: Input and games.

2013-05-03 Thread Rick Yorgason
Daniel Stone and Pekka Paalanen wrote:
> ...a bunch of stuff about per-player keyboards and wl_seats...

Okay, let's go over some typical situations:

* It's common for controllers to have keyboard and/or headset attachments,
and built-in touch screens are becoming more common. These are clearly
intended to be associated with the gamepad, rather than the computer.

* Advanced users may want to emulate this connection by plugging extra
devices straight into their computer instead of into their gamepad, and
would need some way to associate those devices with the gamepad.

* A gamepad keyboard may be used in a different way than the system
keyboard. For instance, you could have four local players playing
split-screen against a bunch of other players online. Each player should be
able to use their own keyboard attachment to send their own chat messages.

So perhaps all HIDs should have a pad_index (or player_index?). Anything
plugged directly into a controller will get the same pad_index as the
controller, but an advanced configuration screen could allow you to force a
certain pad_index for each device.

Any app which is not controller-aware can blissfully ignore the pad_index,
in which case it will treat the keyboards or touch screens as aggregate
devices.
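
In code, a controller-aware client might group devices like this; the
pad_index event and the player lookup are both hypothetical:

/* Hypothetical 'pad_index' event on wl_keyboard: route this keyboard
 * to the player whose gamepad carries the same index. */
static void
keyboard_handle_pad_index(void *data, struct wl_keyboard *keyboard,
                          uint32_t pad_index)
{
        struct game *game = data;
        struct player *player = game_find_player(game, pad_index);

        player->chat_keyboard = keyboard;  /* per-player chat input */
}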

-Rick-



Re: [RFC] wl_surface scale and crop protocol extension

2013-05-03 Thread Bill Spitzak

Pekka Paalanen wrote:


> What chip do you have in mind, that can do arbitrary
> matrix-based transforms during an overlay scanout?


That just means the surface cannot use the overlay, or the compositor 
has to use an intermediate image. There are lots of other reasons the 
surface does not use the overlay, such as not being the top one.



Re: Input and games.

2013-05-03 Thread Bill Spitzak

Todd Showalter wrote:


> Decelerate/accelerate would cover all the cases I can think of.


I thought you said the speed of mouse movement controlled whether it 
slowed down or not. I.e. if the user quickly dragged the slider to the 
bottom then the scrollbar was at the bottom, but if they moved slowly 
then it moved by lines. So it is not just "slow the mouse down" but 
"slow the mouse down only if it is being moved at this speed".


For the proposed scrollbar the amount of deceleration depends on the 
document size, so the client has to control it. And an extremely large 
document is going to give you problems by pushing the decelerated mouse 
movements down toward the resolution of the fixed-point fraction. If 
this is avoided by delivering the mouse movements in full-size pieces, 
as pointer-lock does, then they have to be marked as to whether they 
are decelerated or not.


If the threshold between fast/slow is going to be built into the server, 
I really recommend some other annoying things be built into the server 
such as decisions about whether something is double-click and whether 
the user holding the mouse down and moving it a bit is "clicking" or 
"dragging", and at what point the user not holding the mouse down but 
moving it tiny bits is "hovering". Inconsistency between applications in 
these is incredibly annoying to the user.


The server is also going to have to pointer-warp back the cursor when it 
receives the decelerate request, and serial numbers have to be used so 
the client can ignore the pre-decelerate mouse movements.


All in all I see this as being enormously more complex than pointer warp.


Re: Input and games.

2013-05-03 Thread Rick Yorgason
Pekka Paalanen  writes:
> Uh oh, yuk...
> 
> I wonder if one would have serious trouble achieving the same on
> Wayland. X is so much more liberal on what one can do wrt. protocol and
> the C API. For instance, in X I believe one can query a lot of stuff
> from the server, in Wayland nothing. In X a window reference is just an
> integer, and if you get something wrong, I think you get an error that
> you can choose to handle non-fatally. In Wayland, you have a pointer,
> that means you are susceptible to use-after-free and segfaults, and if
> you do something wrong, the server disconnects the whole client on the
> spot.

That would be a problem. Steam was designed so that the content creators
wouldn't have to recompile their games for the distribution platform, which
is why they always do the overlay with hooking.

We're obviously stepping outside of the original topic here, but it's
probably worth looking into whether embedded compositors are up to the task
here, or whether some new hooking extension will be required.

The requirements would be:

* The game should "feel" like any other game, in that task-switching, window
decorations, etc, should not be affected.

* If the game has its own launcher (in Linux it would typically be a Qt or
GTK app that spawns the main game) that should appear to be completely
unaffected.

* Any OpenGL window created by the launched app or any of its spawned apps
needs to be able to draw the overlay and intercept input.

* This should be done with minimal performance overhead.

There are other applications that use similar functionality. One common one
on Windows is called Fraps, which is for recording games. It overlays an FPS
counter, and intercepts the keyboard to turn recording on/off. There's a
Linux clone called "Faps", which I haven't used.

Another one in Windows-land is Afterburner, which is used by overclockers.
It has an option to show GPU temperature, clock speed, etc, in an overlay
over your game.

-Rick-



Re: [PATCH 2/2] Move the EDID parsing to its own file

2013-05-03 Thread Kai-Uwe Behrmann

On 02.05.2013 23:33, Richard Hughes wrote:

> ---
>  src/Makefile.am  |   2 +
>  src/compositor-drm.c | 180 +--
>  src/compositor.c |  21 ++
>  src/compositor.h |   5 ++
>  src/edid.c   | 175 +
>  src/edid.h   |  48 ++
>  6 files changed, 254 insertions(+), 177 deletions(-)
>  create mode 100644 src/edid.c
>  create mode 100644 src/edid.h
>
> +   /* get primaries and whitepoint */
> +   edid->primary_red.Y = 1.0f;
> +   edid->primary_red.x = edid_decode_fraction(data[0x1b], edid_get_bits(data[0x19], 6, 7));
> +   edid->primary_red.y = edid_decode_fraction(data[0x1c], edid_get_bits(data[0x19], 5, 4));
> +   edid->primary_green.Y = 1.0f;
> +   edid->primary_green.x = edid_decode_fraction(data[0x1d], edid_get_bits(data[0x19], 2, 3));
> +   edid->primary_green.y = edid_decode_fraction(data[0x1e], edid_get_bits(data[0x19], 0, 1));
> +   edid->primary_blue.Y = 1.0f;
> +   edid->primary_blue.x = edid_decode_fraction(data[0x1f], edid_get_bits(data[0x1a], 6, 7));
> +   edid->primary_blue.y = edid_decode_fraction(data[0x20], edid_get_bits(data[0x1a], 4, 5));
> +   edid->whitepoint.Y = 1.0f;
> +   edid->whitepoint.x = edid_decode_fraction(data[0x21], edid_get_bits(data[0x1a], 2, 3));
> +   edid->whitepoint.y = edid_decode_fraction(data[0x22], edid_get_bits(data[0x1a], 0, 1));
> +
>
> +#ifndef _WESTON_EDID_H_
> +#define _WESTON_EDID_H_
> +
> +struct weston_edid_color_Yxy {
> +   double Y;
> +   double x;
> +   double y;
> +};


Why is the Y value set when it is all about primaries? No one will ever 
use it other than for assuming 1.0.
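
For reference, my reading of the helpers used above, assuming the
standard EDID 10-bit chromaticity encoding (the actual patch may
differ in detail):

#include <stdint.h>

static int
edid_get_bits(uint8_t in, int begin, int end)
{
        /* extract bits begin..end (inclusive) of 'in' */
        int mask = (1 << (end - begin + 1)) - 1;

        return (in >> begin) & mask;
}

static double
edid_decode_fraction(int high, int low)
{
        /* each chromaticity coordinate is a 10-bit binary fraction:
         * eight high bits in their own byte, two low bits packed into
         * data[0x19] or data[0x1a] */
        return ((high << 2) | low) / 1024.0;
}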


kind regards
Kai-Uwe