Re: [PATCH] Allow uploading a keymap to a single device

2010-10-29 Thread Florian Echtler
On Fri, 2010-10-29 at 09:49 +0200, Dirk Wallenstein wrote:
> Oops, that's an xkbcomp patch. Sorry, forgot to fill in the label.
Ha - thanks! This patch solves exactly the problem I described here:
http://lists.x.org/archives/xorg/2010-October/051583.html

So it seems like it was an xkbcomp issue.

Thanks again,
Florian
-- 
0666 - Filemode of the Beast



Re: X Gesture Extension protocol - draft proposal v1

2010-08-17 Thread Florian Echtler
On Mon, 2010-08-16 at 22:27 +0200, Simon Thum wrote:
> On 16.08.2010 21:41, Chase Douglas wrote:
>> Also, we think that there's a case to be made for environmental gestures
>> that should override gestures recognized by clients. This is provided by
>> the mutual exclusion flag when selecting for events. Again, this
>> wouldn't be possible without integrating it into the X propagation
>> mechanism.
> I like it, if only because it resembles what I described on the list
> earlier this year :) The protocol's probably tricky to get race-free,
> but surely worth it. I'll have a more thorough look at it this week.
> I'm cc'ing Florian in case he's still working on the issue.

Here's the old discussion:
http://lists.x.org/archives/xorg-devel/2010-March/006388.html

Thanks for mentioning me, Simon - I'm very excited to see this; it's really
quite similar to my own work.

Douglas, allow me to point you to my thesis from last year at
http://mediatum2.ub.tum.de/node?id=796958
Chapter 3 in particular deals with very similar issues; I'd be happy if
you could give it a quick glance and tell me your opinion. Or have you
already read it? The similarities, as I said, are quite
striking :-)

There is also a cross-platform implementation of my concepts, available at
http://tisch.sf.net/ and https://launchpad.net/~floe/+archive/libtisch .

One thing I would suggest as a future extension is something along the
lines of less monolithic gestures, i.e. composing them out of still
smaller primitives such as finger count, hold time, pressure, distance
change, angle change, and so on.
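
To make that concrete, here is a purely hypothetical C sketch (invented
names, not an existing API) of a gesture declared as a conjunction of
such primitives:

/* Hypothetical sketch: a gesture as a conjunction of small primitives,
 * each with boundary values. All names are invented for illustration. */
typedef enum { PRIM_FINGER_COUNT, PRIM_HOLD_TIME, PRIM_PRESSURE,
               PRIM_DISTANCE_CHANGE, PRIM_ANGLE_CHANGE } PrimType;

typedef struct { PrimType type; double min, max; } Primitive;

/* "two-finger pinch": exactly two contacts whose distance is shrinking */
static const Primitive pinch[] = {
    { PRIM_FINGER_COUNT,    2.0,   2.0  },
    { PRIM_DISTANCE_CHANGE, -1.0, -0.05 },
};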

Florian



[ANN] libTISCH 1.1.0 (with XInput 2 support)

2010-07-28 Thread Florian Echtler
Hello everyone,

I'd like to make a short announcement - I've recently published the 1.1
release of libTISCH, our multitouch development platform. Source code is
available at http://sf.net/projects/tisch/, while PPA packages for Ubuntu
10.04 & 9.10 are at https://launchpad.net/~floe/+archive/libtisch .

I'm posting this here because libTISCH is, to the best of my knowledge,
one of the very first GUI development libraries to natively support
XInput 2 and therefore input with multiple mice. You do need a patched
version of FreeGLUT for this to work, but I've provided suitable binary
packages in the PPA. The patch itself is part of the source distribution
and is also available directly on the SF.net site.

If you manage to spare some minutes and give libTISCH a try, I'd be glad
to hear about your opinions.

Thanks,
Florian



Generating HDMI 1.4 compliant 3D images?

2010-07-15 Thread Florian Echtler
Hello everyone,

Here at the university, we've recently acquired a 3D plasma display
which supports 3D data compliant with HDMI 1.4. Sending 3D data in 1080p
side-by-side format is pretty simple; however, you lose half of the
horizontal resolution with that approach. The full-resolution 3D formats
have some pretty weird dimensions, e.g. 1920x2205 (= 2x1080 + 45). (*)
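
For reference, such a mode can in principle be described with a custom
modeline; the following is an untested sketch derived from the standard
CEA 1080p24 timings with the vertical total doubled (the output name
HDMI-1 is just a placeholder):

# 1080p24 frame packing: 1920x2205 active (1080 + 45 blanking + 1080),
# 2750x2250 total, 148.5 MHz pixel clock -> 24 Hz
xrandr --newmode "1920x2205_24" 148.50 \
       1920 2558 2602 2750  2205 2209 2214 2250 +hsync +vsync
xrandr --addmode HDMI-1 "1920x2205_24"
xrandr --output HDMI-1 --mode "1920x2205_24"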

Has anybody had any experience in generating these formats with an
Xorg-based setup?

Thanks in advance, Florian

(*) There are 45 black lines of blanking space in between the two images.
WTH? This is a digital format, right?
-- 
0666 - Filemode of the Beast



Re: Multitouch wiki pages?

2010-05-26 Thread Florian Echtler
> Would it make sense to create a page or a few pages on the x.org wiki to
> summarize the current multitouch concept, plan, and possibly status?
Has anybody actually created that page yet? I was looking for it to add
a bit about my plans, but couldn't find anything...

Florian
-- 
0666 - Filemode of the Beast



Re: Multitouch followup: gesture recognition?

2010-04-02 Thread Florian Echtler
>>>> Just specifying what gestures a specific window would be interested in
>>>> wouldn't usually be live, would it? That's something defined at
>>>> creation time and maybe changed occasionally over the lifetime, but not
>>>> constantly.
>>> Which is why a declarative approach is OK for that. It's the dynamics
>>> that make it harder. More specifically, the dynamic part of your
>>> formalism likely needs tailored requests.
>> The reason for this being that the special client won't be notified of
>> property changes on other client windows, correct?
> Not quite, the sgc could probably register for prop changes. By
> 'dynamics' I was referring to cancelling a gesture or other gesture
> state feedback a client may want to send. Props aren't good for that,
> but requests are.
> In requests, you're free to define semantics, whereas props are limited
> and quite racy.
OK, I see. I'll stay with properties for the first attempt (the
protocol used in my userspace lib doesn't require any such real-time
callbacks right now), and I'll blatantly ignore _any_ performance-related
issues in the prototype, just to get a general feel for the problem.
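
As a rough illustration, such a property-based registration could look
like this (a hypothetical sketch; the atom name and encoding are
invented, and display and win are assumed to exist):

/* Hypothetical sketch: a client advertises its gesture interest through
 * a window property; atom name and encoding invented for illustration. */
Atom prop = XInternAtom( display, "_GESTURE_INTEREST", False );
unsigned long gestures[] = { 1 /* pinch */, 2 /* rotate */ };  /* hypothetical IDs */
XChangeProperty( display, win, prop, XA_CARDINAL, 32, PropModeReplace,
                 (unsigned char*)gestures, 2 );
/* The special gesture client can select PropertyChangeMask on client
 * windows and re-read the property with XGetWindowProperty(). */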

>>> If you want to try a special client, it's therefore sensible to divide
>>> your requests and events into route-through (client -> special gesture
>>> client or sgc -> client) and server-processed (server -> sgc or
>>> sgc -> server), if possible.
>> As far as I understand the architecture, everything except the plain
>> input events would be just routed through the server between the two
>> clients. In fact, I guess that after defining some custom events in
> Yes, part of the idea is that the server provides only the
> infrastructure. Routing, simple state tracking, somesuch.
Good - seems I've finally understood that part :-)

>> inputproto, it should be possible to send them through
>> XSend{Extension}Event?
> At first glance it looks suitable, but I'm not convinced it is
> appropriate. You'll want the server to select which clients get events,
> as is done with Xi event masks. This way, the gesture client doesn't
> need to know about all the windows out there.
> Also, I recall Xi2 and Xi1 (XSendExtensionEvent) shouldn't be mixed.
I've had a brief look at the code in libXi, and AFAICT there's nothing
to prevent this from working with any custom event, as long as
_XiEventToWire is adapted, too. Peter, maybe you could comment on this?

>> // select motion events for entire screen
>> XIEventMask mask;
>> mask.deviceid = XIAllDevices;
>> mask.mask_len = XIMaskLen( XI_LASTEVENT );
>> mask.mask = (unsigned char*)calloc( mask.mask_len, sizeof(char) );
>>
>> XISetMask( mask.mask, XI_Motion );
>> XISetMask( mask.mask, XI_ButtonPress );
>> XISetMask( mask.mask, XI_ButtonRelease );
>>
>> XISelectEvents( display, DefaultRootWindow(display), &mask, 1 );
>> free( mask.mask );
>>
>> to capture all XInput events; however, I believe that's also quite
>> flawed. What other options exist?
> To me it seems sane.
> This replication of all input is one of the reasons for the 'special' in
> 'special gesture client'. Whatever it shall be, it should probably be
> part of Xi2. What leads you to think the above is flawed?
The main reason why this code isn't yet sufficient, IMHO, is that I
haven't yet found out how to get some additional data from the received
events, particularly
a) which client window the event is actually targeted at, and
b) what the position in window-relative coordinates is.

These are probably related; can you give me a hint on how to retrieve
this information?
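
For reference, XI2 carries both in every device event; here is a minimal
sketch of reading them from the event cookie, assuming xi_opcode was
obtained via XQueryExtension:

XEvent xev;
XNextEvent( display, &xev );

XGenericEventCookie* cookie = &xev.xcookie;
if (XGetEventData( display, cookie ) &&
    cookie->type == GenericEvent && cookie->extension == xi_opcode)
{
    XIDeviceEvent* ev = (XIDeviceEvent*)cookie->data;
    /* ev->event is the window the event was delivered to; ev->event_x/y
     * are relative to it, ev->root_x/y to the root window. */
    printf( "win 0x%lx child 0x%lx at %.1f/%.1f (root %.1f/%.1f)\n",
            ev->event, ev->child, ev->event_x, ev->event_y,
            ev->root_x, ev->root_y );
}
XFreeEventData( display, cookie );

Note that with a root-window selection ev->event is just the root, so
finding the real target window still means descending through ev->child,
e.g. with XTranslateCoordinates.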

Florian
-- 
0666 - Filemode of the Beast



Re: Multitouch followup: gesture recognition?

2010-03-31 Thread Florian Echtler
>>> This seems essential to your approach, so the feasibility of a server
>>> extension (or anything else, but an extension incurs overhead) depends a
>>> fair bit on the dynamics of your gesture customization.
>> Just specifying what gestures a specific window would be interested in
>> wouldn't usually be live, would it? That's something defined at
>> creation time and maybe changed occasionally over the lifetime, but not
>> constantly.
> Which is why a declarative approach is OK for that. It's the dynamics
> that make it harder. More specifically, the dynamic part of your
> formalism likely needs tailored requests.
The reason for this being that the special client won't be notified of
property changes on other client windows, correct?

> If you want to try a special client, it's therefore sensible to divide
> your requests and events into route-through (client -> special gesture
> client or sgc -> client) and server-processed (server -> sgc or
> sgc -> server), if possible.
As far as I understand the architecture, everything except the plain
input events would be just routed through the server between the two
clients. In fact, I guess that after defining some custom events in
inputproto, it should be possible to send them through
XSend{Extension}Event?

>>> So whether a special client detects gestures or the server itself, the
>>> server needs to deliver events, and the client needs to be able to
>>> receive them. This is where XGE and libXi kick in.
>> Okay, it seems I'm slowly getting it. Please have a look at the attached
>> PDF - this should illustrate the combination of 2/4, correct? (The
>> normal XInput events should probably be duplicated to the clients in the
>> classical manner.)
> Yes, that's very much the picture I have in mind. For completeness'
> sake, you might want libXi or libXgesture in clients.
> At any rate, it shouldn't matter to clients what instance (server
> component, special client, *) actually detects gestures. They only see
> the server extension, or a prototype XInput + gesture requests/events.
Right, that's probably more or less transparent from the client's
point of view.

One other question, though: how would the special client go about
receiving copies of the input events destined for the regular clients?
AFAICT a pointer grab is quite the wrong way; I've already used
something along the lines of

// select motion events for entire screen
XIEventMask mask;
mask.deviceid = XIAllDevices;
mask.mask_len = XIMaskLen( XI_LASTEVENT );
mask.mask = (unsigned char*)calloc( mask.mask_len, sizeof(char) );

XISetMask( mask.mask, XI_Motion );
XISetMask( mask.mask, XI_ButtonPress );
XISetMask( mask.mask, XI_ButtonRelease );

// XISelectEvents expects a pointer to (an array of) masks
XISelectEvents( display, DefaultRootWindow(display), &mask, 1 );
free( mask.mask );

to capture all XInput events; however, I believe that's also quite
flawed. What other options exist?

Many thanks,
Florian
-- 
0666 - Filemode of the Beast



Multitouch followup: gesture recognition?

2010-03-21 Thread Florian Echtler
Hello everybody,

[ I just came across the multitouch discussion thread from last week.
I'm starting a new thread, as what I'm thinking about is only indirectly
related to the old one. ]

I've briefly seen an X gesture library mentioned in that context. As one
core topic of my PhD thesis ([1], see chapter 3) was a generic gesture
recognition engine, I'm very much interested in this aspect.

I'd like to give a very brief outline of the concepts I've developed
during my thesis (for an implementation and some more details, please
see [2]):

- The core elements of the entire concept are /features/. Every feature
is a single atomic property of the raw input data, such as the motion
vector, the number of points, or the relative rotation angle.

- One or more features, together with optional boundary values, compose
a /gesture event/. When all features match their respective boundary
values, the event is triggered.

- Gesture events are attached to /regions/, which are more or less like
X windows, with the important difference that they can have an arbitrary
shape (polygons). This is needed because input event capture
happens /before/ the interpretation into gesture events, so
common event bubbling would be quite difficult.

As I said, this was just a very brief outline. These concepts are proven
to work and allow for things such as on-the-fly reconfiguration of
gestures or portability across widely different input devices.
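
For the sake of discussion, a hypothetical C sketch of these three
concepts (invented names, not the actual libTISCH API):

/* Hypothetical sketch of the feature / gesture event / region model
 * described above -- invented names, not the actual libTISCH API. */
typedef enum { FEAT_MOTION, FEAT_POINT_COUNT, FEAT_ROTATION } FeatureType;

typedef struct {
    FeatureType type;
    double      min, max;       /* optional boundary values */
} Feature;

typedef struct {
    const char *name;           /* triggered when all features match */
    Feature    *features;
    int         num_features;
} GestureEvent;

typedef struct {
    XPoint       *polygon;      /* arbitrary shape, unlike a rectangular window */
    int           num_points;
    GestureEvent *events;       /* gesture events attached to this region */
    int           num_events;
} GestureRegion;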

Now, in an Xorg context, I'd very much like to hear your opinions on
these concepts. Would it make sense to build this into an X helper
library?

Many thanks for your opinions!
Florian

[1] http://mediatum2.ub.tum.de/node?id=796958
[2] http://tisch.sf.net/
-- 
0666 - Filemode of the Beast
