Re: Enabling multitouch in input-evdev

2010-01-18 Thread Bradley T. Hughes
On 01/18/2010 04:33 AM, ext Peter Hutterer wrote:
> On Thu, Jan 14, 2010 at 11:23:23AM +0100, Bradley T. Hughes wrote:
>> On 01/12/2010 12:03 PM, ext Peter Hutterer wrote:
>>>> So, first question: is my behavior the right one? (not being
>>>> compliant with Windows or MacOS)
>>>
>>> Short answer - no. Long answer - sort-of.
>>>
>>> Multitouch in X is currently limited by the lack of multitouch events in the
>>> protocol. What you put into evdev is a way around it to get multitouch-like
>>> features through a multipointer system. As Bradley said, it is likely better
>>> for the client-side to include the lot in a single event.  Since X
>>> essentially exists to make GUI applications easier (this may come as a
>>> surprise to many), I'd go with his stance.
>>>
>>> However, this is the harder bit and would require changing the driver, parts
>>> of the X server's input system, the protocol and the libraries. It'd be
>>> about as wide-reaching as MPX though I hope that there is significantly less
>>> rework needed in the input subsystem now.
>>
>> Why do you think it would require protocol changes? For the new
>> event type? If I understand it correctly, events for devices can
>> contain any number of valuators... is it possible to have x1,y1
>> x2,y2 x3,y3 and so-on?
>
> correct, there's a valuator limit of 36 but even that should be fine for a
> single device. with axis labelling it's now even possible to simply claim:
> hey, here's 24 axes but they represent 12 different touchpoints.

> I hadn't really thought about this approach yet because IMO touch is more
> than just simply pairs of coordinates and that's what I'd eventually like to
> get in.

Understood.

> As an intermediate option your approach would definitely work, it'd
> be easy to integrate and should hold up with the current system.

That was my thinking as well.

> bonus point is also that the core emulation automatically works on the first
> touchpoint only, without any extra help.

This was my thinking as well, along with not having to wrap my head around 
how to deal with multiple implicit grabs in the presence of multiple events.
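
For illustration, here is a minimal sketch (plain C, not the actual evdev driver
code) of the packed-valuator layout being discussed: up to 12 touchpoints
flattened into one 24-valuator array, with the first x/y pair doubling as the
coordinates that core emulation would see. MAX_TOUCHES and pack_touches() are
made-up names for the sketch.

#include <stdio.h>

#define MAX_TOUCHES 12   /* 12 points x 2 axes = 24 valuators, under the limit of 36 */

struct touchpoint { int active; int x, y; };

/* Flatten the current touchpoints into a single valuator array:
 * valuators 0/1 are touch 0 (and double as core x/y for emulation),
 * valuators 2/3 are touch 1, and so on. */
static int pack_touches(const struct touchpoint *t, int ntouches, int *valuators)
{
    int i, n = 0;

    for (i = 0; i < ntouches && i < MAX_TOUCHES; i++) {
        valuators[n++] = t[i].active ? t[i].x : 0;
        valuators[n++] = t[i].active ? t[i].y : 0;
    }
    return n;   /* number of valuators to post in one event */
}

int main(void)
{
    struct touchpoint touches[MAX_TOUCHES] = {
        { 1, 100, 200 },    /* first finger: also drives the core pointer */
        { 1, 400, 250 },    /* second finger */
    };
    int valuators[MAX_TOUCHES * 2];
    int n = pack_touches(touches, MAX_TOUCHES, valuators);

    printf("posting %d valuators, core x/y = %d/%d\n", n, valuators[0], valuators[1]);
    return 0;
}

A driver would then hand the whole array to its event-posting helper in a single
call, so the server only ever sees one motion event per hardware report and core
emulation only ever looks at the first pair.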

> And whether more is needed (i.e. complex touch events) is something that can
> be decided later when we have a bit more experience on what apps may need.
> Stephane, do you have opinions on this?

I agree. I do know of people that have built their own multi-touch tables 
and are interested in multi-user interactions, so I suspect that we will see 
interest in this eventually.

>> From what I can tell, there are a number of challenges that would
>> require driver and server changes. In particular, being able to do
>> collaborative work on a large multi-touch surface requires the
>> driver (or server, not sure which makes most sense) to be able to
>> somehow split the touch points between multiple windows. This is
>> something that we had to do in Qt at least.
>
> packing all coordinates into one event essentially makes it a single
> point with auxiliary touchpoints. This is useful for a number of things
> (most gestures that we see today like pinch and rotate should work in this
> approach) but not useful once you have truly independent data from multiple
> hands or users. that's when you have to do more flexible picking in the
> server and that requires more work.
>
> given the easy case of a single user interacting with a surface:
> with Qt as it is now, if you get all the coordinates in a single XI2 event,
> would that work for you? or is there extra information you need?

That should work. Ideally I would also like to get some kind of indication 
from the device that it is a touch device, and what kind of touch device it 
is (is it a touchscreen or a touchpad, for example? we treat them slightly 
differently).
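
For reference, a rough sketch of what the receiving side could look like with
the XI2 client API in libXi, assuming the pairing convention above (valuators
2i/2i+1 belong to touch i) -- the pairing convention and the handler name are
assumptions, not anything the protocol defines:

#include <stdio.h>
#include <X11/extensions/XInput2.h>

/* Walk the valuators of an XI2 device event and pair them up as touchpoints,
 * following the assumed "valuator 2i/2i+1 is touch i" layout. */
static void handle_xi2_motion(XIDeviceEvent *ev)
{
    double point[2];
    int i, axis = 0, v = 0, touch = 0;

    for (i = 0; i < ev->valuators.mask_len * 8; i++) {
        if (!XIMaskIsSet(ev->valuators.mask, i))
            continue;
        point[axis++] = ev->valuators.values[v++];
        if (axis == 2) {
            printf("touch %d at %.0f/%.0f\n", touch++, point[0], point[1]);
            axis = 0;
        }
    }
}

The touchscreen-vs-touchpad question is exactly the bit that isn't expressed
anywhere yet: a client can inspect the device's classes and axis labels via
XIQueryDevice(), but nothing in the device description currently says "this is
a touchscreen" or "this is a touchpad".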

-- 
Bradley T. Hughes (Nokia-D-Qt/Oslo), bradley.hughes at nokia.com
Sandakervn. 116, P.O. Box 4332 Nydalen, 0402 Oslo, Norway


Re: Enabling multitouch in input-evdev

2010-01-18 Thread Peter Hutterer
On Mon, Jan 18, 2010 at 01:02:54PM +0100, Benjamin Tissoires wrote:
> Peter Hutterer wrote:
> >First of all, thanks for your efforts here. I really appreciate that you
> >pushed it that far and I'm sorry about the delay in my comments.
> >
> >On Fri, Jan 08, 2010 at 12:55:21PM +0100, Benjamin Tissoires wrote:
> >>First, I tried to dynamically create/destroy such subdevices. However,
> >>it very often caused freezes in the X server due to some mallocs. So
> >>the idea to solve this problem is to statically allocate the
> >>subdevices.
> >
> >Do you have any logs or other data on where these freezes occured? they'd be
> >useful for others as well since chances are someone is going to run into
> >this deadlock.
> >
> Sorry, when these freezes occurred I only had backtraces from the
> driver (an xmalloc) and not the other threads (I forgot to check
> them). I'll add this to my todo list.

Right, as Julien said you can't malloc in evdev. Any path that's hit during
ReadInput cannot malloc anything.
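
To spell the constraint out (this is a generic illustration, not the evdev
code -- the structure and function names are invented): anything the read path
touches has to be allocated up front, at init time, and only written to from
the SIGIO path.

#include <stdlib.h>

#define MAX_TOUCHES 12

/* Hypothetical per-device multitouch state, sized once at init time. */
struct mt_state {
    int valuators[MAX_TOUCHES * 2];   /* scratch buffer reused for every packet */
    int ntouches;
};

/* Safe: called at init time, outside the SIGIO path. */
static struct mt_state *mt_state_create(void)
{
    return calloc(1, sizeof(struct mt_state));   /* caller checks for NULL */
}

/* Must stay malloc-free: this is the kind of code that runs from ReadInput. */
static void mt_state_update(struct mt_state *s, int slot, int x, int y)
{
    if (slot < 0 || slot >= MAX_TOUCHES)
        return;
    s->valuators[slot * 2]     = x;   /* only writes into the buffer that was */
    s->valuators[slot * 2 + 1] = y;   /* allocated before events start flowing */
}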

> >The evdev driver is first and foremost a generic mouse driver. Which means
> >that devices that provide x and y are the most important to work, regardless
> >of relative or absolute. The need for x/y is also implied by the core
> >protocol. However, there's no reason that you couldn't fake up x/y
> >and simply leave it as a mute axis. IMO the kernel should not report axes
> >that don't exist.
> >The axis mapping in evdev is a convention, not a rule, so you could just
> >reshuffle the axis checking to cater for the multitouch situation.
> >That goes for the rest of the driver too: if something seems wrong to cater
> >for a new situation then the driver can be rewritten to suit this.
> >
> As I had to rewrite the patch to keep all the events in one device,
> I will have to play with axes... and tackle this issue.

right, you could have a 0/1 axis for this type of thing; that's about the
best approach for now. there's also the chance of using a button, but that's
more likely to be misinterpreted.
the really correct approach is to only send those axes that have a value, thus
explicitly specifying the on/off. This is possible with XI2 but the input
driver APIs aren't in place for that (no need so far).
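
As a sketch of that 0/1-axis convention (illustrative only -- the layout is an
assumption, and three axes per touch is just one possible arrangement):

#define AXES_PER_TOUCH 3    /* x, y, "touching" flag */

/* Fill one touch's triplet; the 0/1 axis stands in for "this axis currently
 * has a value" until the driver API can post partial valuator sets. */
static void fill_touch(int *valuators, int slot, int x, int y, int down)
{
    int base = slot * AXES_PER_TOUCH;

    valuators[base + 0] = x;
    valuators[base + 1] = y;
    valuators[base + 2] = down ? 1 : 0;
}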

> One last question I forgot to ask:
> Do we need to dynamically change the number of touches that can be detected?
> I.e. do we need to have only 10 touches (as in Windows, I think) or do we
> leave the choice to the user (the toolkit) if the device can handle more?

same again, you can change the number of axes in XI2, it's just that the
input driver APIs are missing that feature so far.
 
Cheers,
  Peter


Re: [PATCH:xev] Add -root option to specify monitoring root window events.

2010-01-18 Thread Peter Hutterer
On Sat, Jan 16, 2010 at 07:09:28PM +0100, Kim Woelders wrote:
> 
> Signed-off-by: Kim Woelders 
> 
> diff --git a/xev.c b/xev.c
> index 16d45aa..b422aa2 100644
> --- a/xev.c
> +++ b/xev.c
> @@ -690,6 +690,7 @@ usage (void)
>  "-bw pixels  border width in pixels",
>  "-bs {NotUseful,WhenMapped,Always}   backingstore attribute",
>  "-id windowiduse existing window",
> +"-root   use root window",
>  "-s  set save-unders attribute",
>  "-name stringwindow name",
>  "-rv reverse video",
> @@ -739,6 +740,7 @@ main (int argc, char **argv)
>  int done;
>  char *name = "Event Tester";
>  Bool reverse = False;
> +Bool use_root = False;
>  unsigned long back, fore;
>  XIM xim;
>  XIMStyles *xim_styles;
> @@ -793,8 +795,17 @@ main (int argc, char **argv)
>   if (++i >= argc) usage ();
>   name = argv[i];
>   continue;
> -   case 'r': /* -rv */
> - reverse = True;
> +   case 'r':
> + switch (arg[2]) {
> +   case 'o': /* -root */
> + use_root = True;
> + continue;
> +   case 'v': /* -rv */
> + reverse = True;
> + continue;
> +   default:
> + usage ();
> + }
>   continue;
> case 's': /* -s */
>   attr.save_under = True;
> @@ -865,6 +876,9 @@ main (int argc, char **argv)
>  FocusChangeMask | PropertyChangeMask |
>  ColormapChangeMask | OwnerGrabButtonMask;
>  
> +if (use_root)
> + w = RootWindow(dpy, screen);
> +
>  if (w) {
>   XGetWindowAttributes(dpy, w, &wattr);
>   if (wattr.all_event_masks & ButtonPressMask)
> diff --git a/xev.man b/xev.man
> index dcb1e3a..136a01a 100644
> --- a/xev.man
> +++ b/xev.man
> @@ -9,7 +9,7 @@ xev - print contents of X events
>  .B "xev"
>  [\-display \fIdisplayname\fP] [\-geometry \fIgeom\fP]
>  [\-bw \fIpixels\fP] [\-bs \fI{NotUseful,WhenMapped,Always}\fP]
> -[\-id \fIwindowid\fP] [\-s] [\-name \fIstring\fP] [\-rv]
> +[\-id \fIwindowid\fP] [\-root] [\-s] [\-name \fIstring\fP] [\-rv]
>  .SH DESCRIPTION
>  .PP
>  \fIXev\fP creates a window and then asks the X server to send it
> @@ -40,6 +40,10 @@ means that the xev process will redraw its contents 
> itself, as necessary.
>  This option specifies that the window with the given id should be
>  monitored, instead of creating a new window.
>  .TP 8
> +.B \-root
> +This option specifies that the root window should be
> +monitored, instead of creating a new window.
> +.TP 8
>  .B \-s
>  This option specifies that save-unders should be enabled on the window. Save
>  unders are similar to backing store, but they refer rather to the saving of
> -- 
> 1.6.6

pushed, thanks.
 
Cheers,
  Peter


[PULL] misc fixes and XKB cleanup

2010-01-18 Thread Peter Hutterer
Keith,

Please pull the changes below into master. Included are misc. bug fixes and
the first round of reviewed xkb cleanup patches.

The following changes since commit 6f6a99abc12ddee82898fdabfb50c17e90e094b9:
  Jeremy Huddleston (1):
XQuartz: Don't FatalError in x_hook_run if the list is empty

are available in the git repository at:

  git://people.freedesktop.org/~whot/xserver.git master

Dan Nicholson (1):
  Don't use AC_CHECK_FILE for fontpath checks when cross compiling

Gaetan Nadon (1):
  packaging: provide a default README file #24206

Horst Wente (1):
  xkb: make ctrl+alt+keypad + / ctrl+alt+keypad - work again (#25743)

Julien Cristau (1):
  main: move config_init() after InitInput()

Oldřich Jedlička (2):
  Fix typo in updateSlaveDeviceCoords
  Allow driver to call DeleteInputDeviceRequest during UnInit

Peter Hutterer (10):
  dix: EventToCore needs to copy the root window too.
  xfree86: replace True/False with TRUE/FALSE.
  xkb: Add XKM file format description.
  xkb: remove _XkbTyped*alloc
  xkb: remove _XkbClearElems, a memset will do.
  xkb: remove XkbAtomGetString, replace with NameForAtom.
  xkb: unexport xkbDevicePrivateKey and xkbUnwrapProc.
  xkb: remove IsKeypadKey define, only used in two places.
  xkb: remove unused _XkbIsPressEvent and _XkbIsReleaseEvent defines
  xkb: sed True -> TRUE and False -> FALSE

Tiago Vignatti (1):
  dix: move cursor realize code to its own function

 README |   37 +++
 configure.ac   |   22 +-
 dix/cursor.c   |  203 +---
 dix/devices.c  |   44 ++-
 dix/eventconvert.c |1 +
 dix/getevents.c|2 +-
 dix/main.c |2 +-
 hw/xfree86/common/xf86Xinput.c |   20 +-
 include/xkbsrv.h   |   23 +--
 test/input.c   |4 +-
 xkb/XKBAlloc.c |   58 ++--
 xkb/XKBGAlloc.c|   34 +-
 xkb/XKBMAlloc.c|   71 ++--
 xkb/XKBMisc.c  |   52 ++--
 xkb/XKM_file_format.txt|  684 
 xkb/ddxBeep.c  |2 +-
 xkb/ddxList.c  |8 +-
 xkb/ddxLoad.c  |   18 +-
 xkb/maprules.c |  102 +++---
 xkb/xkb.c  |  101 +++---
 xkb/xkbAccessX.c   |   30 +-
 xkb/xkbActions.c   |   45 ++--
 xkb/xkbEvents.c|8 +-
 xkb/xkbInit.c  |   30 +-
 xkb/xkbLEDs.c  |   50 ++--
 xkb/xkbUtils.c |   20 +-
 xkb/xkbfmisc.c |   48 ++--
 xkb/xkbout.c   |   70 ++--
 xkb/xkbtext.c  |   46 ++--
 xkb/xkmread.c  |   28 +-
 30 files changed, 1284 insertions(+), 579 deletions(-)
 create mode 100644 README
 create mode 100644 xkb/XKM_file_format.txt


multitouch

2010-01-18 Thread The Rasterman
hey guys (sorry for starting a new thread - i only just subscribed - lurking on
xorg as opposed to xorg-devel).

interesting that this topic comes up now... multitouch. i'm here @ samsung and
got multi-touch capable hardware - supports up to 10 touchpoints, so need
support.

now... i read the thread. i'm curious. brad - why do u think a single event (vs
multiple) means less context switches (and thus less power consumption, cpu
used etc.)?

as such your event is delivered along with possibly many others in a buffer - x
protocol is buffered and thus read will read as much data as it can into the
buffer and process it. this means your 2, 3, 4, 5 or more touch events should
get read (and written from the server side) pretty much all at once and get put
into a single buffer, then XNextEvent will just walk the buffer processing the
events. even if by some accident they dont end up in the same read and buffer
and you do context switch, you wont save battery as the cpu will have never
gone idle enough to go into any low power mode. but as such you should be
seeing all these events alongside other events (core mousepress/release/motion
etc. etc. etc.). so i think the argument for all of it in 1 event from a
power/cpu side i think is a bit specious. but... do you have actual data to
show that such events actually dont get buffered in x protocol as they should
be and dont end up getting read all at once? (i know that my main loop will
very often read several events from a single select wakeup before going back to
sleep, as long as the events come in faster than they can be acted on as they
also get processed and batched into yet another queue before any rendering
happens at the end of that queue processing).
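
For reference, the batching described here is the standard Xlib pattern: one
select() wakeup can drain many queued events, whether they arrived as one wire
event or five. A minimal sketch (root-window motion is selected only so the
loop has something to print; a real client would use its own window and mask):

#include <stdio.h>
#include <X11/Xlib.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    XEvent ev;

    if (!dpy)
        return 1;
    XSelectInput(dpy, DefaultRootWindow(dpy), PointerMotionMask);

    for (;;) {
        XNextEvent(dpy, &ev);              /* blocks: one select() wakeup     */
        printf("event type %d\n", ev.type);

        while (XPending(dpy)) {            /* drain whatever else arrived in  */
            XNextEvent(dpy, &ev);          /* the same read, without sleeping */
            printf("event type %d\n", ev.type);
        }
    }
}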

but - i do see that if osx and windows deliver events as a single blob for
multiple touches, then if we do something different, we are just creating work
for developers to adapt to something different. i also see the argument for
wanting multiple valuators to deliver the coords of multiple fingers for things
like pinch, zoom, etc. etc. BUT this doesnt work for other uses - eg virtual
keyboard where i am typing with 2 thumbs - my presses are actually independent
presses like 2 core pointers in mpx.

so... i think the multiple valuators vs multiple devices for mt events is moot
as you can argue it both ways and i dont think either side has specifically a
stronger case... except doing multiple events from multiple devices works
better with mpx-aware apps/toolkits, and it works better for the more complex
touch devices that deliver not just x,y but x, y, width, height, angle,
pressure, etc. etc. per point (so each point may have a dozen or more valuators
attached to it), and thus delivering a compact set of points in a single event
makes life harder for getting all the extra data for the separate touch events.

so i'd vote for how tissoires did it as it allows for more information per
touch point to be sanely delivered. as such thats how we have it working right
now. yes - the hw can deliver all points at once but we produce n events. but
what i'm wondering is.. should we

1. have 1, 2, 3, 4 or more (10) core devices, each one is a touch point.
2. have 1 core with 9 slave devices (core is first touch and core pointer)
3. have 1 core for first touch and 9 floating devices for the other touches.

they have their respective issues. right now we do #3, but #2 seems very
logical. #1 seems a bit extreme.

remember - need to keep compatibility with single touch (mouse only) events and
apps as well as expand to be able to get the multi-touch events if wanted.

-- 
- Codito, ergo sum - "I code, therefore I am" --
The Rasterman (Carsten Haitzler)ras...@rasterman.com


Re: [PATCH v2] xrandr: display gamma and brightness

2010-01-18 Thread Yang Zhao
A few small things:

2010/1/17 Éric Piel :

> +/* Returns the index of the last value in an array < 65535 */
> +static int
> +find_last_non_clamped(CARD16 array[], int size)...

This is purely stylistic, but I generally think the hexadecimal form for
values like this conveys the meaning much more clearly, i.e. 0xffff
instead of 65535.  YMMV.


> @@ -1225,25 +1311,25 @@ set_gamma(void)
>
>   for (i = 0; i < size; i++) {
>   if (output->gamma.red == 1.0 && output->brightness == 1.0)
> - gamma->red[i] = i << 8;
> + gamma->red[i] = (i << 8) + i;
>   else
>   gamma->red[i] = dmin(pow((double)i/(double)(size - 1),
> - (double)output->gamma.red) * (double)(size - 1)
> - * (double)output->brightness * 256, 65535.0);
> +  output->gamma.red) * output->brightness,
> +  1.0) * 65535.0;
> ...

These changes weren't part of the original patch, and they seem to subtly
change the semantics of setting the gamma correction curve.  Included
by accident?
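
To make the difference concrete, here is a standalone comparison of the two
formulas from the hunk above (the dmin() here is just a local stand-in for the
helper xrandr uses): the old code scales by 256 and clamps at 65535, the new
code clamps the normalised value at 1.0 and then scales to the full 16-bit
range, and the identity shortcut becomes (i << 8) + i so the ramp ends at 65535
rather than 65280.

#include <math.h>
#include <stdio.h>

static double dmin(double a, double b) { return a < b ? a : b; }

int main(void)
{
    int    size = 256, i = 255;          /* last entry of a 256-entry ramp  */
    double g = 1.0, brightness = 1.0;    /* identity gamma, full brightness */

    /* old formula: scale by 256, clamp at 65535 */
    double old_val = dmin(pow((double)i / (size - 1), g) * (size - 1)
                          * brightness * 256, 65535.0);

    /* new formula: clamp in [0,1], then scale to the 16-bit range */
    double new_val = dmin(pow((double)i / (size - 1), g) * brightness,
                          1.0) * 65535.0;

    printf("old: %.0f  new: %.0f  identity shortcut: %d vs %d\n",
           old_val, new_val, i << 8, (i << 8) + i);
    return 0;
}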

Otherwise, including what Matthias has already pointed out,

Reviewed-By: Yang Zhao 

-- 
Yang Zhao
http://yangman.ca


Re: [PATCH v2] xrandr: display gamma and brightness

2010-01-18 Thread Matthias Hopf
On Jan 18, 10 00:17:51 +0100, Éric Piel wrote:
> Even in verbose query mode, gamma and brightness were not displayed.
> That's because they are not stored in the server the same way they are
> specified on the command line: they are stored as 256 * 3 u16 while
> the command line is 3 + 1 floats.  Still, this is useful info for the
> users, and they don't care about how it's stored in the server.

Looks good, except that with RandR 1.1 drivers (like NVidia) xrandr -q
would fail (output default cannot get gamma size). Both error cases
should print warnings and continue without printing gamma and
brightness.

Also, these error messages are in pretty bad English (from my non-native
point of view :-] ).

With those two changes:

Reviewed-By: Matthias Hopf 


Disconnected outputs print out gamma 0.0:0.0:0.0...
Maybe disconnected outputs shouldn't print any gamma?

Thanks

Matthias

-- 
Matthias Hopf   ____   __
Maxfeldstr. 5 / 90409 Nuernberg   (_   | |  (_   |__  m...@mshopf.de
Phone +49-911-74053-715   __)  |_|  __)  |__  R & D   www.mshopf.de


Re: Enabling multitouch in input-evdev

2010-01-18 Thread Julien Cristau
On Mon, Jan 18, 2010 at 13:02:54 +0100, Benjamin Tissoires wrote:

> Peter Hutterer wrote:
> >First of all, thanks for your efforts here. I really appreciate that you
> >pushed it that far and I'm sorry about the delay in my comments.
> >
> >On Fri, Jan 08, 2010 at 12:55:21PM +0100, Benjamin Tissoires wrote:
> >>First, I tried to dynamically create/destroy such subdevices. However,
> >>it very often caused freezes in the X server due to some mallocs. So
> >>the idea to solve this problem is to statically allocate the
> >>subdevices.
> >
> >Do you have any logs or other data on where these freezes occured? they'd be
> >useful for others as well since chances are someone is going to run into
> >this deadlock.
> >
> Sorry, when these freezes occurred I only had backtraces from the
> driver (an xmalloc) and not the other threads (I forgot to check
> them). I'll add this to my todo list.

The server is single threaded.  But if you're doing malloc in the SIGIO
handler (anything called from EvdevReadInput), then yeah, you can't do
that.

Cheers,
Julien


[PATCH] xfree86: vgaarb: remove useless debug

2010-01-18 Thread Tiago Vignatti
This is a remnant of RAC. Any sane person would use a wiser method of
debugging instead.

X.Org Bug 26074 

Signed-off-by: Tiago Vignatti 
---
 hw/xfree86/common/xf86VGAarbiter.c |   65 
 1 files changed, 0 insertions(+), 65 deletions(-)

diff --git a/hw/xfree86/common/xf86VGAarbiter.c b/hw/xfree86/common/xf86VGAarbiter.c
index b240998..cd45cd1 100644
--- a/hw/xfree86/common/xf86VGAarbiter.c
+++ b/hw/xfree86/common/xf86VGAarbiter.c
@@ -38,14 +38,6 @@
 #include "xf86Priv.h"
 #include "pciaccess.h"
 
-#ifdef DEBUG
-#error "no, really, you dont want to do this"
-#define DPRINT_S(x,y) ErrorF(x ": %i\n",y);
-#define DPRINT(x) ErrorF(x "\n");
-#else
-#define DPRINT_S(x,y)
-#define DPRINT(x)
-#endif
 
 static GCFuncs VGAarbiterGCFuncs = {
 VGAarbiterValidateGC, VGAarbiterChangeGC, VGAarbiterCopyGC,
@@ -187,8 +179,6 @@ xf86VGAarbiterWrapFunctions(void)
 pScrn = xf86Screens[pScreen->myNum];
 PointPriv = dixLookupPrivate(&pScreen->devPrivates, miPointerScreenKey);
 
-DPRINT_S("VGAarbiterWrapFunctions",pScreen->myNum);
-
 if (!dixRequestPrivate(VGAarbiterGCKey, sizeof(VGAarbiterGCRec)))
 return FALSE;
 
@@ -244,7 +234,6 @@ VGAarbiterCloseScreen (int i, ScreenPtr pScreen)
 PictureScreenPtrps = GetPictureScreenIfSet(pScreen);
 #endif
 
-DPRINT_S("VGAarbiterCloseScreen",pScreen->myNum);
 UNWRAP_SCREEN(CreateGC);
 UNWRAP_SCREEN(CloseScreen);
 UNWRAP_SCREEN(GetImage);
@@ -311,7 +300,6 @@ VGAarbiterGetImage (
 )
 {
 ScreenPtr pScreen = pDrawable->pScreen;
-DPRINT_S("VGAarbiterGetImage",pScreen->myNum);
 SCREEN_PROLOG(GetImage);
 //if (xf86Screens[pScreen->myNum]->vtSema) {
 VGAGet();
@@ -334,7 +322,6 @@ VGAarbiterGetSpans (
 {
 ScreenPtr   pScreen = pDrawable->pScreen;
 
-DPRINT_S("VGAarbiterGetSpans",pScreen->myNum);
 SCREEN_PROLOG (GetSpans);
 VGAGet();
 (*pScreen->GetSpans) (pDrawable, wMax, ppt, pwidth, nspans, pdstStart);
@@ -348,7 +335,6 @@ VGAarbiterSourceValidate (
 int x, int y, int width, int height )
 {
 ScreenPtr   pScreen = pDrawable->pScreen;
-DPRINT_S("VGAarbiterSourceValidate",pScreen->myNum);
 SCREEN_PROLOG (SourceValidate);
 VGAGet();
 if (pScreen->SourceValidate)
@@ -365,7 +351,6 @@ VGAarbiterCopyWindow(
 {
 ScreenPtr pScreen = pWin->drawable.pScreen;
 
-DPRINT_S("VGAarbiterCopyWindow",pScreen->myNum);
 SCREEN_PROLOG (CopyWindow);
 VGAGet();
 (*pScreen->CopyWindow) (pWin, ptOldOrg, prgnSrc);
@@ -382,7 +367,6 @@ VGAarbiterClearToBackground (
 {
 ScreenPtr pScreen = pWin->drawable.pScreen;
 
-DPRINT_S("VGAarbiterClearToBackground",pScreen->myNum);
 SCREEN_PROLOG ( ClearToBackground);
 VGAGet();
 (*pScreen->ClearToBackground) (pWin, x, y, w, h, generateExposures);
@@ -395,7 +379,6 @@ VGAarbiterCreatePixmap(ScreenPtr pScreen, int w, int h, int depth, unsigned usage_hint)
 {
 PixmapPtr pPix;
 
-DPRINT_S("VGAarbiterCreatePixmap",pScreen->myNum);
 SCREEN_PROLOG ( CreatePixmap);
 VGAGet();
 pPix = (*pScreen->CreatePixmap) (pScreen, w, h, depth, usage_hint);
@@ -410,7 +393,6 @@ VGAarbiterSaveScreen(ScreenPtr pScreen, Bool unblank)
 {
 Bool val;
 
-DPRINT_S("VGAarbiterSaveScreen",pScreen->myNum);
 SCREEN_PROLOG (SaveScreen);
 VGAGet();
 val = (*pScreen->SaveScreen) (pScreen, unblank);
@@ -428,7 +410,6 @@ VGAarbiterStoreColors (
 {
 ScreenPtr pScreen = pmap->pScreen;
 
-DPRINT_S("VGAarbiterStoreColors",pScreen->myNum);
 SCREEN_PROLOG (StoreColors);
 VGAGet();
 (*pScreen->StoreColors) (pmap,ndef,pdefs);
@@ -444,7 +425,6 @@ VGAarbiterRecolorCursor (
 Bool displayed
 )
 {
-DPRINT_S("VGAarbiterRecolorCursor",pScreen->myNum);
 SCREEN_PROLOG (RecolorCursor);
 VGAGet();
 (*pScreen->RecolorCursor) (pDev, pScreen, pCurs, displayed);
@@ -461,7 +441,6 @@ VGAarbiterRealizeCursor (
 {
 Bool val;
 
-DPRINT_S("VGAarbiterRealizeCursor",pScreen->myNum);
 SCREEN_PROLOG (RealizeCursor);
 VGAGet();
 val = (*pScreen->RealizeCursor) (pDev, pScreen,pCursor);
@@ -479,7 +458,6 @@ VGAarbiterUnrealizeCursor (
 {
 Bool val;
 
-DPRINT_S("VGAarbiterUnrealizeCursor",pScreen->myNum);
 SCREEN_PROLOG (UnrealizeCursor);
 VGAGet();
 val = (*pScreen->UnrealizeCursor) (pDev, pScreen, pCursor);
@@ -497,7 +475,6 @@ VGAarbiterDisplayCursor (
 {
 Bool val;
 
-DPRINT_S("VGAarbiterDisplayCursor",pScreen->myNum);
 SCREEN_PROLOG (DisplayCursor);
 VGAGet();
 val = (*pScreen->DisplayCursor) (pDev, pScreen, pCursor);
@@ -515,7 +492,6 @@ VGAarbiterSetCursorPosition (
 {
 Bool val;
 
-DPRINT_S("VGAarbiterSetCursorPosition",pScreen->myNum);
 SCREEN_PROLOG (SetCursorPosition);
 VGAGet();
 val = (*pScreen->SetCursorPosition) (pDev, pScreen, x, y, generateEvent);
@@ -531,7 +507,6 @@ VGAarbiterAdjustFrame(int index, int x, int y, int flags)
 

Re: Enabling multitouch in input-evdev

2010-01-18 Thread Benjamin Tissoires

Peter Hutterer wrote:

> First of all, thanks for your efforts here. I really appreciate that you
> pushed it that far and I'm sorry about the delay in my comments.
>
> On Fri, Jan 08, 2010 at 12:55:21PM +0100, Benjamin Tissoires wrote:
>> First, I tried to dynamically create/destroy such subdevices. However,
>> it very often caused freezes in the X server due to some mallocs. So
>> the idea to solve this problem is to statically allocate the
>> subdevices.
>
> Do you have any logs or other data on where these freezes occured? they'd be
> useful for others as well since chances are someone is going to run into
> this deadlock.

Sorry, when these freezes occurred I only had backtraces from the driver
(an xmalloc) and not the other threads (I forgot to check them). I'll add
this to my todo list.

>> Finally, let's talk about the internal detection and behavior of multitouch.
>> The multitouch-enabled evdev driver (since kernel 2.6.30) has some
>> more events (SYN_MT_REPORT, ABS_MT_TOUCH_MAJOR, ABS_MT_TOUCH_MINOR,
>> ABS_MT_WIDTH_MAJOR, ABS_MT_WIDTH_MINOR, ABS_MT_ORIENTATION,
>> ABS_MT_POSITION_X, ABS_MT_POSITION_Y, ABS_MT_TOOL_TYPE,
>> ABS_MT_BLOB_ID and ABS_MT_TRACKING_ID). The only necessary event
>> given by a multitouch device (correct me if I'm wrong) is the
>> SYN_MT_REPORT. So I activate multitouch only when I receive this
>> event.
>
> Do you activate it when you receive the event or based on this bit? It seems
> checking for the bit set is the better approach. (I haven't yet looked at
> your patch)

I detect the multitouch capability at init time based on this bit.
However, I pack the multitouch event when I receive the SYN_MT_REPORT.
Sorry for not being clear.
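
For readers who don't have the patch in front of them, the init-time check
looks roughly like this against the plain Linux evdev interface (a sketch, not
the patched driver -- it needs a 2.6.30+ linux/input.h for the ABS_MT_*
definitions, and the device node path is just an example):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/input.h>

#define LONG_BITS        (sizeof(long) * 8)
#define NLONGS(x)        (((x) + LONG_BITS - 1) / LONG_BITS)
#define BIT_IS_SET(b, a) ((a)[(b) / LONG_BITS] & (1UL << ((b) % LONG_BITS)))

int main(void)
{
    unsigned long absbits[NLONGS(ABS_MAX + 1)];
    int fd = open("/dev/input/event0", O_RDONLY);

    if (fd < 0)
        return 1;
    memset(absbits, 0, sizeof(absbits));

    /* Ask the kernel which absolute axes the device advertises. */
    if (ioctl(fd, EVIOCGBIT(EV_ABS, sizeof(absbits)), absbits) < 0)
        return 1;

    if (BIT_IS_SET(ABS_MT_POSITION_X, absbits))
        printf("multitouch capable (ABS_MT_POSITION_X present)\n");
    else
        printf("single-touch / pointer device\n");
    return 0;
}

The per-packet work then happens at read time: each contact's coordinates are
collected until SYN_MT_REPORT marks the end of that contact, and the whole set
is posted once the frame is complete.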


> The evdev driver is first and foremost a generic mouse driver. Which means
> that devices that provide x and y are the most important to work, regardless
> of relative or absolute. The need for x/y is also implied by the core
> protocol. However, there's no reason that you couldn't fake up x/y
> and simply leave it as a mute axis. IMO the kernel should not report axes
> that don't exist.
> The axis mapping in evdev is a convention, not a rule, so you could just
> reshuffle the axis checking to cater for the multitouch situation.
> That goes for the rest of the driver too: if something seems wrong to cater
> for a new situation then the driver can be rewritten to suit this.

As I had to rewrite the patch to keep all the events in one device, I
will have to play with axes... and tackle this issue.

>> The second problem was concerning trackpads:
>>
>> How can we handle modern multitouch trackpads (Broadcom 5974,
>> DiamondTouch)? We excluded Synaptics trackpads as they don't send
>> ABS_MT events but special tool events (325, 330, 333, 334 for
>> ToolFinger, Touch, Tool Doubletap, Tool Tripletap).
>>
>> From Stéphane's point of view, they should be transparent to my
>> patched version of input-evdev as they give quite the same
>> events as multitouch screens.
>>
>> Finally, I was wondering what is your position concerning the use of
>> XI properties to notify the client of the starting/ending of touch
>> events.
>
> Property events are neither designed nor particularly well suited for this.
> Their data delivery is partially out-of-band with the input events and
> frequent property changes increase the load on both the server and the client
> significantly. As with the multitouch data, this should ideally be included
> in a new type of event.
>
> Cheers,
>   Peter

As I said in my previous mail, maybe we should use valuators to send
such events?


One last question I forgot to ask:
Do we need to dynamically change the number of touches that can be detected?
I.e. do we need to have only 10 touches (as in Windows, I think) or do we
leave the choice to the user (the toolkit) if the device can handle more?



Cheers,

Benjamin


Re: Enabling multitouch in input-evdev

2010-01-18 Thread Benjamin Tissoires

Peter Hutterer wrote:

> On Thu, Jan 14, 2010 at 11:23:23AM +0100, Bradley T. Hughes wrote:
>> Why do you think it would require protocol changes? For the new
>> event type? If I understand it correctly, events for devices can
>> contain any number of valuators... is it possible to have x1,y1
>> x2,y2 x3,y3 and so-on?
>
> correct, there's a valuator limit of 36 but even that should be fine for a
> single device. with axis labelling it's now even possible to simply claim:
> hey, here's 24 axes but they represent 12 different touchpoints.
> I hadn't really thought about this approach yet because IMO touch is more
> than just simply pairs of coordinates and that's what I'd eventually like to
> get in. As an intermediate option your approach would definitely work, it'd
> be easy to integrate and should hold up with the current system.
>
> bonus point is also that the core emulation automatically works on the first
> touchpoint only, without any extra help.
>
> And whether more is needed (i.e. complex touch events) is something that can
> be decided later when we have a bit more experience on what apps may need.
> Stephane, do you have opinions on this?

  
I agree we can have multiple XI2 valuators to keep the touches packed.
However, how can we tell the toolkit a touch started/ended or has been
aborted (I think that is the behavior on MacOS, but I'm not sure)? I
don't think we can rely on button events, as the other layers would
receive button up/down events that have no meaning.


Maybe a solution would be to have a third valuator for each touch with 
the tracking id, for instance:

-1 means no value
-2 means error (or aborted)
> 0 means started and active
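
Expressed as code, that convention for the third per-touch valuator might look
like this (the names are purely illustrative):

/* Proposed meaning of the per-touch "tracking id" valuator. */
enum {
    TOUCH_ID_NONE    = -1,  /* no value: slot currently unused          */
    TOUCH_ID_ABORTED = -2   /* error, or the touch sequence was aborted */
    /* any value > 0 means the touch has started and is active */
};

static int touch_is_active(int tracking_id)
{
    return tracking_id > 0;
}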


>> From what I can tell, there are a number of challenges that would
>> require driver and server changes. In particular, being able to do
>> collaborative work on a large multi-touch surface requires the
>> driver (or server, not sure which makes most sense) to be able to
>> somehow split the touch points between multiple windows. This is
>> something that we had to do in Qt at least.



> packing all coordinates into one event essentially makes it a single
> point with auxiliary touchpoints. This is useful for a number of things
> (most gestures that we see today like pinch and rotate should work in this
> approach) but not useful once you have truly independent data from multiple
> hands or users. that's when you have to do more flexible picking in the
> server and that requires more work.
>
> given the easy case of a single user interacting with a surface:
> with Qt as it is now, if you get all the coordinates in a single XI2 event,
> would that work for you? or is there extra information you need?
>
> Cheers,
>   Peter

For the question of multiple users, if we consider that the toolkit has
to do the job for the gestures, it can also control a new cursor thanks
to XTest. However, that will introduce some lag into the loop. Maybe we
want an intermediate solution: the toolkit specifies which track has to be
split off from the rest of the events into a new device (in the same way my
patch works). It's just an idea.


Re: [PATCH v3] Allow driver to call DeleteInputDeviceRequest during UnInit

2010-01-18 Thread Peter Hutterer
On Sun, Jan 17, 2010 at 05:59:03PM +0100, Oldřich Jedlička wrote:
> When the input driver (like xf86-input-wacom) removes its devices
> during a call to UnInit, the CloseDownDevices() cannot handle it. The
> "next" variable can become a pointer to freed memory.
> 
> The patch introduces an order-independent device freeing mechanism by
> remembering the already freed device ids. The devices can reorder any
> time during freeing. No device will be double-freed - if the removing
> failed for any reason; some implementations of DeleteInputDeviceRequest
> don't free the devices already.
> 
> Signed-off-by: Oldřich Jedlička 
> ---
>  dix/devices.c |   44 +---
>  1 files changed, 33 insertions(+), 11 deletions(-)
> 
> diff --git dix/devices.c dix/devices.c
> index 245a95b..ef199b7 100644
> --- dix/devices.c
> +++ dix/devices.c
> @@ -878,13 +878,43 @@ CloseDevice(DeviceIntPtr dev)
>  }
>  
>  /**
> + * Shut down all devices of one list and free all resources.
> + */
> +static
> +void
> +CloseDeviceList(DeviceIntPtr *listHead)
> +{
> +/* Used to mark devices that we tried to free */
> +Bool freedIds[MAXDEVICES];
> +DeviceIntPtr dev;
> +int i;
> +
> +if (listHead == NULL)
> +return;
> +
> +for (i = 0; i < MAXDEVICES; i++)
> +freedIds[i] = FALSE;
> +
> +dev = *listHead;
> +while (dev != NULL)
> +{
> +freedIds[dev->id] = TRUE;
> +DeleteInputDeviceRequest(dev);
> +
> +dev = *listHead;
> +while (dev != NULL && freedIds[dev->id])
> +dev = dev->next;
> +}
> +}
> +
> +/**
>   * Shut down all devices, free all resources, etc.
>   * Only useful if you're shutting down the server!
>   */
>  void
>  CloseDownDevices(void)
>  {
> -DeviceIntPtr dev, next;
> +DeviceIntPtr dev;
>  
>  /* Float all SDs before closing them. Note that at this point resources
>   * (e.g. cursors) have been freed already, so we can't just call
> @@ -897,16 +927,8 @@ CloseDownDevices(void)
>  dev->u.master = NULL;
>  }
>  
> -for (dev = inputInfo.devices; dev; dev = next)
> -{
> - next = dev->next;
> -DeleteInputDeviceRequest(dev);
> -}
> -for (dev = inputInfo.off_devices; dev; dev = next)
> -{
> - next = dev->next;
> -DeleteInputDeviceRequest(dev);
> -}
> +CloseDeviceList(&inputInfo.devices);
> +CloseDeviceList(&inputInfo.off_devices);
>  
>  CloseDevice(inputInfo.pointer);
>  CloseDevice(inputInfo.keyboard);
> -- 
> 1.6.6

Reviewed-by: Peter Hutterer 

thanks, merged.

Cheers,
  Peter


Re: [PATCH v2] Allow driver to call DeleteInputDeviceRequest during UnInit

2010-01-18 Thread Simon Thum
Peter Hutterer wrote:
> On Fri, Jan 15, 2010 at 10:40:02AM +0100, Simon Thum wrote:
>> Oldřich Jedlička wrote:
>>> When the input driver (like xf86-input-wacom) removes its devices
>>> during a call to UnInit, the CloseDownDevices() cannot handle it. The
>>> "next" variable can become a pointer to freed memory.
>>>
>>> The patch introduces an order-independent device freeing mechanism by
>>> remembering the already freed device ids. The devices can reorder any
>>> time during freeing. No device will be double-freed - if the removing
>>> failed for any reason; some implementations of DeleteInputDeviceRequest
>>> don't free the devices already.
>>>
>>> Signed-off-by: Oldřich Jedlička 
>>> ---
>>>  dix/devices.c |   42 +++---
>>>  1 files changed, 31 insertions(+), 11 deletions(-)
>>>
>>> diff --git dix/devices.c dix/devices.c
>>> index 245a95b..dce0f12 100644
>>> --- dix/devices.c
>>> +++ dix/devices.c
>>> @@ -878,13 +878,41 @@ CloseDevice(DeviceIntPtr dev)
>>>  }
>>>  
>>>  /**
>>> + * Shut down all devices of one list and free all resources.
>>> + */
>>> +static
>>> +void
>>> +CloseListDevices(DeviceIntPtr *listHead)
>>> +{
>>> +/* Used to mark devices that we tried to free */
>>> +Bool freedIds[MAXDEVICES] = { 0 };
>> This would only work if UnInit()s removals are somehow forced onto
>> freedIds. In any case the freed array is a sort of context that's not
>> easy to get right. I'm CC'ing Peter, maybe he's got a better idea.
> 
> yes, you're right. this solution could work though if the ID array was
> shared between CloseDevice and this function here. Any device calls
> CloseDevice at some point, so this is the best point you can fill in the ID
> array.
Yes, that's what I thought initially. I rethought because CD may be
called twice on any ID if it was reused during server lifetime.

But more importantly, I'm not right. Oldrich's solution works because he
always re-traverses the list from the start, which should properly exclude
devices removed in between. Heck, it would probably even work for
devices added during UnInit().
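
Put differently, the rule the patch implements is: never carry a "next"
pointer across a call that may unlink arbitrary devices; restart from the head
and skip the ids already handled. A generic standalone sketch of that idiom
(names are illustrative, not the server's):

#include <stdio.h>
#include <stdlib.h>

#define MAXDEVICES 40

struct dev { int id; struct dev *next; };

/* Stand-in for DeleteInputDeviceRequest: may unlink any device, not just d. */
static void delete_request(struct dev **head, struct dev *d)
{
    struct dev **p = head;
    while (*p && *p != d)
        p = &(*p)->next;
    if (*p)
        *p = d->next;
    free(d);
}

static void close_list(struct dev **head)
{
    int freed[MAXDEVICES] = { 0 };
    struct dev *d = *head;

    while (d) {
        freed[d->id] = 1;           /* remember what we already handled     */
        delete_request(head, d);    /* the list may be reshuffled behind us */

        d = *head;                  /* restart from the head...             */
        while (d && freed[d->id])   /* ...skipping everything done so far   */
            d = d->next;
    }
}

int main(void)
{
    struct dev *head = NULL;
    int id;

    for (id = 3; id >= 0; id--) {
        struct dev *d = malloc(sizeof(*d));
        d->id = id;
        d->next = head;
        head = d;
    }
    close_list(&head);
    printf("list empty: %s\n", head == NULL ? "yes" : "no");
    return 0;
}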

I already apologized and added my reviewed-by in another mail, which is
probably elsewhere in your queue ;)
> 
> Cheers,
>   Peter
> 
