On 3/1/2010 4:56 PM, Bradley T. Hughes wrote:
On 03/01/2010 01:55 PM, ext Daniel Stone wrote:
I don't really see the conceptual difference between multiple devices
and multiple axes on a single device beyond the ability to potentially
deliver events to multiple windows. If you need the flexibility that
multiple devices offer you, then just use multiple devices and make your
internal representation look like a single device with multiple axes.
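Something like the following is roughly what I mean; a minimal sketch
assuming plain XInput2 is available, where the touchpoint struct and
the fold_event() helper are purely illustrative:

    #include <X11/Xlib.h>
    #include <X11/extensions/XInput2.h>

    struct touchpoint { int deviceid; double x, y; }; /* one axis pair */

    static void select_motion(Display *dpy, Window win)
    {
        unsigned char mask[XIMaskLen(XI_LASTEVENT)] = { 0 };
        XIEventMask evmask;

        XISetMask(mask, XI_Motion);
        XISetMask(mask, XI_ButtonPress);
        XISetMask(mask, XI_ButtonRelease);

        evmask.deviceid = XIAllDevices; /* fold every device client-side */
        evmask.mask_len = sizeof(mask);
        evmask.mask = mask;
        XISelectEvents(dpy, win, &evmask, 1);
    }

    /* In the event loop, treat (deviceid, event_x, event_y) as one
     * entry of the logical multi-axis device. */
    static void fold_event(struct touchpoint *points, int npoints,
                           const XIDeviceEvent *ev)
    {
        int i;
        for (i = 0; i < npoints; i++) {
            if (points[i].deviceid == ev->deviceid) {
                points[i].x = ev->event_x;
                points[i].y = ev->event_y;
                return;
            }
        }
    }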

This is where the context confusion comes in. How do we know what the
user (or users) is trying to do based solely on a set of x/y/z/w/h
coordinates? In some cases, a single device with multiple axes is
enough, but in other cases it is not.

On a side note, I have a feeling this is why things like the iPhone/iPad
are full-screen only, and Windows 7 is single-window multi-touch only.

Given that no-one's been able to articulate in much detail what any
other proposed solution should look like or how it will actually work
in the real world, I'm fairly terrified of it.

Can you guys (Bradley, Peter, Matthew) think of any specific problems
with the multi-layered model? Use-cases as above would be great, bonus
points for diagrams. :)

I'm concerned about the event routing and implicit grabbing behaviour,
specifically. I don't know enough about the internals to really put my
concerns into words or link to code in the server.

Use-cases? Collaboration is the main use-case. Classrooms, meeting
rooms, conferences are ones that I often think about. Think about the
GIMP having multi-user and multi-touch support so that art students
could work together on a multi-touch table top. I think the MS Surface
marketing videos are a good indication of what could be done as well.

GIMP and MS Surface are really single-application use cases. Are there
any examples of collaboration where multiple applications are involved?

One thing that we definitely want is for normal button and motion
events to be delivered for one of the active touch-points over a client
window. As Peter
pointed out, we shouldn't have to rewrite the desktop to support
multi-touch. In addition to specialized applications like I described
above, we definitely want "normal" applications to remain usable in such
an environment (I can definitely see someone bringing up a terminal
and/or code editor just for themselves to try out an idea that they get
while in a meeting).
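
To make that concrete, one way to keep "normal" applications working is
to let the first active touch-point drive core pointer emulation while
the rest stay multi-touch-only. A rough sketch, assuming an external
helper built on XTest (a real server would do this routing internally;
first_touch_id and the touch_* handlers are hypothetical):

    #include <X11/Xlib.h>
    #include <X11/extensions/XTest.h>

    static int first_touch_id = -1; /* device currently emulating core */

    static void touch_down(Display *dpy, int deviceid, int x, int y)
    {
        if (first_touch_id == -1) {       /* claim core emulation */
            first_touch_id = deviceid;
            XTestFakeMotionEvent(dpy, -1, x, y, 0);
            XTestFakeButtonEvent(dpy, 1, True, 0);
            XFlush(dpy);
        }
    }

    static void touch_motion(Display *dpy, int deviceid, int x, int y)
    {
        if (deviceid == first_touch_id) {
            XTestFakeMotionEvent(dpy, -1, x, y, 0);
            XFlush(dpy);
        }
    }

    static void touch_up(Display *dpy, int deviceid)
    {
        if (deviceid == first_touch_id) { /* release core emulation */
            XTestFakeButtonEvent(dpy, 1, False, 0);
            XFlush(dpy);
            first_touch_id = -1;
        }
    }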

(Sorry for the lack of diagrams, my ascii-art kung-fu is non-existent.
How about a video? http://vimeo.com/4990545)

It looks like an example of a single client handling all the low-level
touch events. I really doubt that all the images in the video are
different windows :) All the grabs and gestures seem to be implemented
on the client side as well.
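
(For what it's worth, that sort of thing is easy enough to do
client-side; a pinch gesture over two touch-points is pure geometry,
for example. A hypothetical sketch, reusing the illustrative touchpoint
struct from the earlier snippet:)

    #include <math.h>

    /* Scale factor of a pinch: compare the distance between two
     * touch-points at the previous frame (a0, b0) with the distance
     * at the current frame (a1, b1). */
    static double pinch_scale(const struct touchpoint *a0,
                              const struct touchpoint *b0,
                              const struct touchpoint *a1,
                              const struct touchpoint *b1)
    {
        double d0 = hypot(b0->x - a0->x, b0->y - a0->y);
        double d1 = hypot(b1->x - a1->x, b1->y - a1->y);
        return (d0 > 0.0) ? d1 / d0 : 1.0; /* >1 zoom in, <1 zoom out */
    }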

Thanks,

Artem