Re: gitlab.fd.o financial situation and impact on services

2020-02-28 Thread The Rasterman
On Thu, 27 Feb 2020 22:27:04 +0100 Daniel Vetter  said:

Might I suggest that, given the kind of expenses detailed here, literally buying
1 - 4 reasonably specced boxes and hosting them at OSUOSL would be vastly
cheaper? (we (enlightenment.org) have been doing so for years on a single
box). We farm out CI to travis via github mirrors as it's not considered
an essential core service (unlike mailing lists, git and phabricator, which we
still run - we can live without CI for a while and find other ways).

The cost is the odd HDD replacement every few years and maybe every 10y or so a
new box. That's a massively lower cost than you are quoting below.

OSUOSL provide bandwidth, power, rack space etc. for free. They have been
fantastic IMHO and the whole "no fat bills" thing is awesome, and you get a full
system to set up any way you like. You just bring the box. That should drop cost
through the floor. It will require some setup and admin though.

> Hi all,
> 
> You might have read the short take in the X.org board meeting minutes
> already, here's the long version.
> 
> The good news: gitlab.fd.o has become very popular with our
> communities, and is used extensively. This especially includes all the
> CI integration. Modern development process and tooling, yay!
> 
> The bad news: The cost in growth has also been tremendous, and it's
> breaking our bank account. With reasonable estimates for continued
> growth we're expecting hosting expenses totalling 75k USD this year,
> and 90k USD next year. With the current sponsors we've set up we can't
> sustain that. We estimate that hosting expenses for gitlab.fd.o
> without any of the CI features enabled would total 30k USD, which is
> within X.org's ability to support through various sponsorships, mostly
> through XDC.
> 
> Note that X.org no longer sponsors any CI runners themselves;
> we've stopped that. The huge additional expenses are all just in
> storing and serving build artifacts and images to outside CI runners
> sponsored by various companies. A related topic is that with the
> growth in fd.o it's becoming infeasible to maintain it all on
> volunteer admin time. X.org is therefore also looking for admin
> sponsorship, at least medium term.
> 
> Assuming that we want cash flow reserves for one year of gitlab.fd.o
> (without CI support) and a trimmed XDC and assuming no sponsor payment
> meanwhile, we'd have to cut CI services somewhere between May and June
> this year. The board is of course working on acquiring sponsors, but
> filling a shortfall of this magnitude is neither easy nor quick work,
> and we therefore decided to give an early warning as soon as possible.
> Any help in finding sponsors for fd.o is very much appreciated.
> 
> Thanks, Daniel
> -- 
> Daniel Vetter
> Software Engineer, Intel Corporation
> +41 (0) 79 365 57 48 - http://blog.ffwll.ch
> ___
> xorg-devel@lists.x.org: X.Org development
> Archives: http://lists.x.org/archives/xorg-devel
> Info: https://lists.x.org/mailman/listinfo/xorg-devel
> 


-- 
- Codito, ergo sum - "I code, therefore I am" --
Carsten Haitzler - ras...@rasterman.com

___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: https://lists.x.org/mailman/listinfo/xorg-devel


Re: Composite redraw speedup?

2020-02-12 Thread The Rasterman
On Sat, 8 Feb 2020 15:46:36 +0100 Egil Möller  said:

> Hi!
> 
> I have for a time now been working on a new compositing window manager to try
> out a few UX ideas (https://redhog.github.io/InfiniteGlass videos:
> https://www.youtube.com/watch?v=vbt7qtwiLiM
> https://www.youtube.com/watch?v=E8f2KwgvxK4).
> 
> However, I'm having a performance problem in my redraw loop: When a lot is
> going on, e.g. during a continuous stream of mouse events and/or PropertyNotify
> events + property gets, DamageNotify events for windows are often queued up
> and e.g. animations or video appear choppy.
> 
> To solve that, I start a redraw loop with a certain framerate when the first
> damage event comes in, to run for a set time. During this redraw loop, any
> window that has had DamageNotify events also reloads its contents into its
> texture on every frame, using glXCreatePixmap() / glXBindTexImageEXT().
> This works fairly well, but when windows are large, this does mean that my
> renderer process starts eating a lot of CPU. Worse, it does this for a short
> amount of time _after_ all window changes have ceased.

hint: don't glXCreatePixmap() every time you render. do it only when you first
start compositing a window (i.e. on map) and if the window resizes. it's a slow
path to keep doing this every time. the bind is needed for changing gl state to
tell it that that pixmap is the texture source. also don't re-get the pixmap
every frame/damage. get it once as well, at the same time you do
glXCreatePixmap(). the pixmap id may change as the xserver may allocate a new
pixmap for the newly sized window, but there is no need to keep doing round
trips to the server to get it every damage.
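
roughly, the setup looks like this (an illustrative sketch only - the struct and
function names are mine, error handling is left out, and it assumes you already
picked a GLXFBConfig with GLX_BIND_TO_TEXTURE_RGBA_EXT, redirected the window
with XCompositeRedirectWindow(), and fetched glXBindTexImageEXT via
glXGetProcAddress):

#include <X11/Xlib.h>
#include <X11/extensions/Xcomposite.h>
#include <GL/gl.h>
#include <GL/glx.h>

typedef void (*BindTexImageFn)(Display *, GLXDrawable, int, const int *);

typedef struct {
   Pixmap    pixmap;     /* the window's backing pixmap */
   GLXPixmap glxpixmap;  /* glx wrapper used as the texture source */
   GLuint    tex;
} WinTex;

/* do this ONCE per window - on map, and again when it resizes */
static void wintex_setup(Display *dpy, GLXFBConfig fbconfig, Window win, WinTex *wt)
{
   static const int attrs[] = {
      GLX_TEXTURE_TARGET_EXT, GLX_TEXTURE_2D_EXT,
      GLX_TEXTURE_FORMAT_EXT, GLX_TEXTURE_FORMAT_RGBA_EXT,
      None
   };
   wt->pixmap    = XCompositeNameWindowPixmap(dpy, win);
   wt->glxpixmap = glXCreatePixmap(dpy, fbconfig, wt->pixmap, attrs);
   glGenTextures(1, &wt->tex);
}

/* per frame, only for windows that actually had damage since the last draw */
static void wintex_bind(Display *dpy, WinTex *wt, BindTexImageFn glx_bind_tex_image)
{
   glBindTexture(GL_TEXTURE_2D, wt->tex);
   glx_bind_tex_image(dpy, wt->glxpixmap, GLX_FRONT_LEFT_EXT, NULL);
}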

also... don't handle every damage event as a draw. accumulate them for some
time (until the next frame tick/draw) and then draw it all in one go. also
consider using the buffer age extension to limit redraws only to updated
regions where you can assume the gl back buffer has content from N frames ago
(the buffer age tells you). use a scissor clip and just update the regions you
need (do the usual merging of nearby regions into single larger super-regions to
trade off the number of render passes against a bit more overdraw).
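
the damage side ends up looking something like this (again just a rough sketch
of the idea, my names, and it skips proper region merging and the buffer-age
bookkeeping for regions damaged in the frames the back buffer is behind by):

#include <GL/gl.h>
#include <GL/glx.h>

#define MAX_RECTS 256

typedef struct { int x, y, w, h; } Rect;

static Rect pending[MAX_RECTS];
static int  npending = 0;
static int  overflow = 0;   /* too many rects this frame: just redraw it all */

/* DamageNotify handler: only record the rect, never draw here */
static void damage_add(int x, int y, int w, int h)
{
   if (npending < MAX_RECTS) {
      Rect r = { x, y, w, h };
      pending[npending++] = r;
   } else overflow = 1;
}

/* frame tick (select() timeout or vblank event): draw everything in one go */
static void frame_draw(Display *dpy, GLXDrawable drawable, int screen_w, int screen_h)
{
   if (!npending && !overflow) return;              /* nothing changed - skip */
   if (overflow) {
      Rect all = { 0, 0, screen_w, screen_h };
      pending[0] = all;
      npending = 1;
   }
   glEnable(GL_SCISSOR_TEST);
   for (int i = 0; i < npending; i++) {
      /* ideally merge nearby rects into larger super-regions before this loop */
      glScissor(pending[i].x, screen_h - pending[i].y - pending[i].h,
                pending[i].w, pending[i].h);        /* gl scissor origin is bottom-left */
      /* ... repaint whatever intersects this rect ... */
   }
   glDisable(GL_SCISSOR_TEST);
   glXSwapBuffers(dpy, drawable);
   npending = 0;
   overflow = 0;
}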

also just avoid doing any xlib calls that require round trips unless you have
to. e.g. only get properties when you have been notified by events that they
changed, then store the properties locally and use your local versions until
you're told that they have changed ... etc.

follow the model of "i shadow copy/store all the server state locally whenever i
can and use what i have stored locally on my side until i am told it has changed
or circumstances are that what i have is invalid and needs to be thrown out or
updated". 
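
e.g. for a single CARD32 property the shadow-copy pattern is roughly this
(sketch only, names are mine; the one round trip happens only when the server
tells you the property changed):

#include <X11/Xlib.h>
#include <X11/Xatom.h>

typedef struct {
   Atom          atom;
   unsigned long value;   /* local shadow copy */
   int           valid;
} CachedCard32;

/* one round trip, done only when a PropertyNotify told us it changed */
static void cache_refresh(Display *dpy, Window win, CachedCard32 *c)
{
   Atom type; int fmt; unsigned long n, left; unsigned char *data = NULL;
   c->valid = 0;
   if (XGetWindowProperty(dpy, win, c->atom, 0, 1, False, XA_CARDINAL,
                          &type, &fmt, &n, &left, &data) == Success && data) {
      if (type == XA_CARDINAL && fmt == 32 && n >= 1) {
         c->value = *(unsigned long *)data;
         c->valid = 1;
      }
      XFree(data);
   }
}

/* in the event loop: refresh the cache, everyone else just reads c->value */
static void handle_event(Display *dpy, Window root, XEvent *ev, CachedCard32 *c)
{
   if (ev->type == PropertyNotify &&
       ev->xproperty.window == root &&
       ev->xproperty.atom == c->atom)
      cache_refresh(dpy, root, c);
}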

> Is there a way around this? Is the real solution to not use the X protocol
> (or properties) for e.g. controlling animations?

get the fd for the xlib server connection and use select() to listen on it for
input available to read. when there is then fetch events until no more events
exist (while xpending() { xnextevent()}). and use select timeouts as your
animation trigger. if you get your timeouts right you can set it to wake up
fairly smoothly e.g. at 60hz. ... if you don't want to animate any frames,
don't have a timeout and select "forever" until something comes in from the
server. adjust the timeout based on how much time you spent processing events
since the last time you went into select. even better - if the /dev/dri/card0
device exists, dlopen libdrm, get some symbols from it and... use it to request
that the drm device send you vsync events so you can use the vsync interrupt
as your frame event. this will be another fd to listen on in select() and of
course you can turn this vblank event stream on and off.

this way your single select loop can multiplex all your i/o with timeouts for
animation or vsync events for screen refresh rate bound animation etc.
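
the skeleton of such a loop is roughly this (illustrative only - the handle()
and tick() callbacks are placeholders, and a real loop would subtract the time
already spent handling events from the timeout and could add a drm fd next to
the x fd):

#include <X11/Xlib.h>
#include <sys/select.h>
#include <stddef.h>

static void run_loop(Display *dpy, int animating,
                     void (*handle)(XEvent *), void (*tick)(void))
{
   int xfd = ConnectionNumber(dpy);   /* fd of the X server connection */

   for (;;) {
      /* drain everything already buffered before going to sleep */
      while (XPending(dpy)) {
         XEvent ev;
         XNextEvent(dpy, &ev);
         handle(&ev);
      }
      XFlush(dpy);

      fd_set rfds;
      FD_ZERO(&rfds);
      FD_SET(xfd, &rfds);

      struct timeval tv = { 0, 16667 };   /* ~60hz tick when animating */
      int n = select(xfd + 1, &rfds, NULL, NULL, animating ? &tv : NULL);

      if (n == 0 && animating) tick();    /* timeout fired: draw a frame */
      /* n > 0: loop back around and read the X events that woke us up */
   }
}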

> Would it help to have multiple connections to the X server, and subscribe
> only to damage events on one of them?
> 
> I know that damages are not for the whole window but for a certain region,
> but I don't think there's a way to update parts of a texture in OpenGL?

you don't have to. if you stop creating a pixmap every time... it's zero-copy.
or should be. you want to follow my above advice to minimize redraw regions.

> Thanks in advance,
> Egil

just an aside. you want to make a "minimal compositing wm" and also want it
fast and efficient and also want to do smooth animation and so on. this is why
toolkits exist that have already solved these problems for you. :) they have
gone and implemented these select based main loops and have glued in xlib for
you and have implemented all the pixmap compositing glue to do it efficiently
and have handled "only redraw the regions that update" and glued in vsync if
available (or otherwise use the clock-based select timeouts) for animation. as
you do more and more you will re-implement more and more of this. you might want

Re: RFC: automatic _NET_WM_PID setting for local clients

2019-07-23 Thread The Rasterman
On Tue, 23 Jul 2019 13:57:01 -0700 x...@pengaru.com said:

it'd be much more reliable to set _NET_STARTUP_ID to the content of whatever
the DESKTOP_STARTUP_ID env var has and enforce this in xlib itself. this can be
inherited down the chain through your launching/containers/whatever and passed
in for that launch instance. assuming your wm of course can do this and track
every launch instance it started off and map it back to that instance... but it
can know reliably then "THIS action of launching here resulted in that window
over there". much better than _NET_WM_PID because the pid here may not be the
pid of whatever was forked - but some other child process or even unrelated
other pid.
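
for illustration, what a toolkit can do today without any server change is
roughly this (sketch only, untested - copy the env var onto the toplevel before
mapping it so the wm can map the window back to the launch instance):

#include <X11/Xlib.h>
#include <stdlib.h>
#include <string.h>

static void set_startup_id(Display *dpy, Window toplevel)
{
   const char *id = getenv("DESKTOP_STARTUP_ID");
   if (!id || !*id) return;   /* not launched via a startup-notification-aware launcher */

   Atom net_startup_id = XInternAtom(dpy, "_NET_STARTUP_ID", False);
   Atom utf8_string    = XInternAtom(dpy, "UTF8_STRING", False);

   XChangeProperty(dpy, toplevel, net_startup_id, utf8_string, 8,
                   PropModeReplace, (const unsigned char *)id, (int)strlen(id));
}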

> Hello folks,
> 
> I'd like to propose that Xorg set the _NET_WM_PID property on new
> windows for local clients @ window create time whenever possible.  
> 
> This is something I added locally years ago to more reliably have this
> property set with uncooperative clients that didn't set it.  My window
> manager integrates client process monitoring and relies on this property
> for acquiring the PID of connected clients.
> 
> At the time, it was just a few X clients that were problematic, stuff
> like xpdf and other smaller programs using less popular toolkits or no
> toolkit at all.  It wasn't such a big deal, so I promptly forgot about
> it and stopped building my own Xorg debs with the patch, living with the
> absent monitoring overlays on those windows.
> 
> Fast-forward to today; I'm using systemd-nspawn for running X clients -
> particularly network-facing clients like FireFox where I _strongly_
> prefer isolating the client from accessing things like my home directory
> for obvious reasons (ssh keys, etc).
> 
> These programs are cooperative and set _NET_WM_PID, but the PID they set
> is from the perspective of the container namespace.  The display server
> is running in the global host namespace, where this PID has zero
> relevance.  The same goes for my window manager, it too runs in the
> host's namespace, so when it gets this PID and tries to monitor the
> process subtree rooted at that PID in /proc, it either finds nothing or
> the entirely wrong tree.
> 
> So again I'm wishing the display server would just set this property for
> local clients immediately when creating the window, which would not only
> make the property more reliable but now it would also set it from the
> PIDNS of the Xorg server, which I would argue is far more meaningful.
> 
> I happened to still have the old patch I was using to do this back in
> the day, and have attached it as-is for discussion purposes.
> 
> Thanks,
> Vito Caputo


-- 
- Codito, ergo sum - "I code, therefore I am" --
Carsten Haitzler - ras...@rasterman.com

___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: https://lists.x.org/mailman/listinfo/xorg-devel

Re: Antialiasing

2017-01-30 Thread The Rasterman
On Sun, 29 Jan 2017 23:34:08 -0700 SRC SRC <sr...@rocketmail.com> said:

> Hi,
> 
> I was able to successfully create a translucent window using xlib and then
> set the transparency using the XRender extension, and make the window have
> rounded corners with the Xshape extension and the following code:
> 
> void rectAttributes(XRectangle *rect, int width, int height, int x, int y) {
>   rect->height = height;
>   rect->width = width;
>   rect->x = x;
>   rect->y = y;
> }
> 
> void createRoundedEdges(int radius, Display *display, Window window) {
>   XWindowAttributes attr;
>   XGetWindowAttributes(display, window, &attr);
>   int windowHeight = attr.height,
>       windowWidth = attr.width,
>       cornerCoordinates[radius],
>       x_bar = windowWidth - radius;
>   XRectangle rect[(radius*4)]; // *4 because of 4 corners
>   /* Populate the cornerCoordinates array */
>   for (int i = radius; i != 0; i--) {
>     int y = i,
>         x = sqrt((radius*radius) - (y*y));
>     cornerCoordinates[radius-i] = x;
>   }
>   for (int i = 0, j = radius, k = 2*j, l = 3*j; i < radius; i++, j++, k++, l++) {
>     rectAttributes(&rect[i], windowWidth, 1,
>                    x_bar + cornerCoordinates[i], i);                // top-right
>     rectAttributes(&rect[j], windowWidth, 1,
>                    x_bar + cornerCoordinates[(2*radius)-1-j],
>                    windowHeight - (2*radius) + j);                  // bottom-right
>     rectAttributes(&rect[k], windowWidth, 1,
>                    radius - windowWidth - cornerCoordinates[i], i); // top-left
>     rectAttributes(&rect[l], windowWidth, 1,
>                    rect[k].x, windowHeight - 1 - i);                // bottom-left
>   }
>   int sizeOfArray = sizeof(rect)/sizeof(rect[0]);
>   XShapeCombineRectangles(display, window, ShapeBounding, 0, 0,
>                           rect, sizeOfArray, ShapeSubtract, Unsorted);
> }
> 
> This is what the above code generates:
> 
> If you look closely at the rounded corners, they seem to be not smooth but
> rather jagged, and to make those corners smooth I’d have to do anti-aliasing
> on the window. I could do that in two ways, one is by writing the
> anti-aliasing algorithm and placing pixels with differing alpha values along
> the edges using Xrender, or by finding an extension that does the
> anti-aliasing on the window. So, I was wondering if you could point me to an
> anti-aliasing extension that I can use to make the corners smooth mostly
> because I really don’t want to spend the time writing an antialiasing
> algorithm from scratch if an extension already exists that does just that.

well first... the shape extension is not going to help you at all. it's
boolean. server-side it boils down to a list of rectangles your window occupies
(visibly and event-wise). this is by nature either "inside or outside" your
window on a pixel basis and thus you'll get jaggies.

that's why you have an argb window... you can draw rounded corners with alpha
channels. but you still need a compositor to composite them together... and you
say you cannot have one by definition. the shape extension works without a
wm/compositor as it's the server literally clipping your window to this
rectangle set.

your choices are:

1. hand write pixels yourself one at a time with xrender.
2. save time and just use cairo to draw a rect with rounded corners (see the
sketch below).
3. pre-draw the rounded corners as images and have a library (xrender, cairo,
imlib2 - which, based on your previous mail, can do that too) render them for
you after turning them into a pixmap.
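
for option 2, the cairo side is roughly this (an illustrative sketch only - it
assumes you already created a cairo xlib surface for your 32-bit argb window,
and you still need a compositor running for the alpha to actually show):

#include <cairo/cairo.h>
#include <math.h>

static void draw_rounded_rect(cairo_surface_t *surface,
                              double w, double h, double r)
{
   cairo_t *cr = cairo_create(surface);

   /* start fully transparent */
   cairo_set_operator(cr, CAIRO_OPERATOR_SOURCE);
   cairo_set_source_rgba(cr, 0, 0, 0, 0);
   cairo_paint(cr);

   /* rounded-rect path: four arcs at the corners */
   cairo_new_sub_path(cr);
   cairo_arc(cr, w - r, r,     r, -M_PI / 2, 0);            /* top-right    */
   cairo_arc(cr, w - r, h - r, r, 0,         M_PI / 2);     /* bottom-right */
   cairo_arc(cr, r,     h - r, r, M_PI / 2,  M_PI);         /* bottom-left  */
   cairo_arc(cr, r,     r,     r, M_PI,      3 * M_PI / 2); /* top-left     */
   cairo_close_path(cr);

   /* opaque white content; the edges get antialiased with partial alpha */
   cairo_set_operator(cr, CAIRO_OPERATOR_OVER);
   cairo_set_source_rgba(cr, 1, 1, 1, 1);
   cairo_fill(cr);

   cairo_destroy(cr);
}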

you really want to just start using other libraries to do this work for you and
get yourself a compositor/wm. unless you have studied/learned enough about x11
to do it all yourself of course... in which case you wouldn't be asking
here. :)

that *IS* what libraries are for. to encapsulate such knowledge and save you
the pain. of course if you wish to learn... you are likely wagging the dog by
its tail with where you are starting from... (based on your previous
stackoverflow post) and doing things by pretty much poking in the dark. you are
better off going FROM a higher level working environment TO a simpler one and
learning pieces of the puzzle along the way rather than building up from nil.

-- 
- Codito, ergo sum - "I code, therefore I am" --
The Rasterman (Carsten Haitzler)ras...@rasterman.com

___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: https://lists.x.org/mailman/listinfo/xorg-devel

Re: Create a 32-bit root window

2017-01-08 Thread The Rasterman
On Sat, 7 Jan 2017 15:29:41 -0700 SRC SRC <sr...@rocketmail.com> said:

> Hi,
> 
> This is my first email to this mailing list and I was wondering if anyone
> could tell me how to draw a root window on the display using xlib with a
> 32-bit color depth so that I can use RGBA colors on the child windows. As far
> as I know, I could be wrong though, the child window cannot use the alpha
> channel if it is not a 32-bit color depth window, and for it to be a 32-bit
> depth window it's parent has to also be a 32-bit depth window. Basically what
> I'm trying  to do is I'm trying to set a root window with a color depth of
> 32-bit that has a scaled 24-bit image loaded as the background, and I want
> another window that will be the child of this parent, or root, window, which
> will have a translucent effect similar to the window borders in Microsoft
> windows 7. if you need more details I have posted a similar question on stack
> overflow with more details on the code that I'm using to create the root
> window. Hopefully I was clear on my question.

no. you don't have to have the root window be depth 32 for children to be depth
32 too. otherwise how on earth do you think windows can have alpha channels in
a compositor? how do you think this:

http://www.enlightenment.org/ss/e-5872c6ec3ddce1.54730231.png

is possible without 2 of those windows having 32bit depth? (the 2 on the left -
clock and translucent terminal). :)

the way translucency WORKS is that a compositor intervenes (these days usually
your window manager) and composites the 32bit windows on top (it also possibly
deals with redrawing the root window at the bottom - that may depend though).

so what you want really is a compositor AND 32bit windows. :) either use a
compositing window manager and then create 32bit windows, OR run a separate
compositor and your existing wm, or write your own compositor... (not going to
be much fun to get this right AND fast)...
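
creating such a depth-32 child is roughly this (sketch only - note that you
MUST set the colormap and border_pixel yourself or you get a BadMatch when the
depth differs from the parent's):

#include <X11/Xlib.h>
#include <X11/Xutil.h>

static Window create_argb_window(Display *dpy, int x, int y,
                                 unsigned int w, unsigned int h)
{
   int screen = DefaultScreen(dpy);
   XVisualInfo vinfo;

   /* ask for a 32-bit TrueColor visual; the root window's depth is irrelevant */
   if (!XMatchVisualInfo(dpy, screen, 32, TrueColor, &vinfo)) return None;

   XSetWindowAttributes attrs;
   attrs.colormap         = XCreateColormap(dpy, RootWindow(dpy, screen),
                                            vinfo.visual, AllocNone);
   attrs.border_pixel     = 0;   /* must be set explicitly for a non-default depth */
   attrs.background_pixel = 0;   /* 0 = fully transparent in premultiplied argb */

   return XCreateWindow(dpy, RootWindow(dpy, screen), x, y, w, h, 0,
                        vinfo.depth, InputOutput, vinfo.visual,
                        CWColormap | CWBorderPixel | CWBackPixel, &attrs);
}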

> Thanks!
> Sajeeb Roy
> 
> Stack overflow link:
> http://stackoverflow.com/questions/41524843/create-a-32-bit-root-window-in-xlib

-- 
----- Codito, ergo sum - "I code, therefore I am" --
The Rasterman (Carsten Haitzler)ras...@rasterman.com

___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: https://lists.x.org/mailman/listinfo/xorg-devel

Re: Hints for debugging a weird sw-cursor issue ?

2016-08-31 Thread The Rasterman
On Thu, 1 Sep 2016 06:36:32 +1000 Dave Airlie <airl...@gmail.com> said:

> On 31 August 2016 at 22:10, Hans de Goede <hdego...@redhat.com> wrote:
> > Hi All,
> >
> > I've noticed a weird sw-cursor issue when a slave-output is active
> > (I believe this is a sw-cursor issue because show_cursor never
> >  gets called when a slave-output is active at server start).
> >
> > I'm seeing this with both 1.18 and master and with both the
> > intel and the modesetting drivers (running gnome3 / gnome-shell).
> >
> > Everything is fine until I start glxgears (*) then the cursor becomes
> > invisible (flickering on the intel driver) when it is near the
> > top of the screen. Basically there is a horizontal bar where
> > the cursor does not show. Note this goes for the entire monitor,
> > not just where glxgears is running. This bar is near the top of
> > the screen, but not completely at the top, there is a small area
> > near the top where the cursor still shows.
> >
> > Interestingly enough if I disable vblank for glxgears the problem
> > remains, but the bar becomes smaller (less high).
> >
> > So anyone got any clues for debugging this ?
> 
> Sounds like some weirdass tearing,
> 
> since swcursor has to paint the cursor on the screen, you might
> be seeing frames where the cursor hasn't been painted yet.

that'd likely be it. remember that on an update of the screen, if the update
draws where the cursor is, the cursor is "destroyed" and then some time after
this it's repainted on top directly to the fb. that's how sw cursors have been
done in x as long as i can remember. when the cursor paints, it also makes a
copy of the region it paints over into an offscreen buffer area, so if the
cursor itself moves/changes it pastes that back on again to wipe the cursor,
then does the above "draw it to the fb" again.

there's a good chance the cursor draw is being hooked to some vblank interrupt
and thus if the cursor is close to the top the draw is not done yet when the
screen scans out... thus the bar at the top. i assume your display is likely
composited, right? which means it may be that that area is being drawn
regardless of where glxgears is - the compositor is drawing it.

good luck with this one. i have an idea that'd make it better but not perfect.
your solutions are not going to be pretty with various downsides but they may
fix the flickering/invisible thing. :)

> Dave.
> ___
> xorg-devel@lists.x.org: X.Org development
> Archives: http://lists.x.org/archives/xorg-devel
> Info: https://lists.x.org/mailman/listinfo/xorg-devel

-- 
- Codito, ergo sum - "I code, therefore I am" --
The Rasterman (Carsten Haitzler)ras...@rasterman.com

___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: https://lists.x.org/mailman/listinfo/xorg-devel

Re: Receiving PropertyNotify events for the root window.

2015-05-15 Thread The Rasterman
On Fri, 15 May 2015 13:19:44 +0300 Yaron Cohen-Tal yaro...@gmail.com said:

 Hi,
 
 I'd like to be notified about PropertyNotify events of the root window,
 for example to be notified when its _NET_CURRENT_DESKTOP property is
 changed. As I didn't create the root window, I don't know how to be
 notified of events related to it, of if it's at all possible.

XSelectInput(disp, window_id, PropertyChangeMask);

(and OR (|) into PropertyChangeMask whatever other event masks you want to
listen to. window_id would be the root window id. disp is your Display * handle.)
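
a complete minimal example would look roughly like this (illustrative only):

#include <X11/Xlib.h>
#include <stdio.h>

int main(void)
{
   Display *disp = XOpenDisplay(NULL);
   if (!disp) return 1;

   Window root = DefaultRootWindow(disp);
   Atom net_current_desktop = XInternAtom(disp, "_NET_CURRENT_DESKTOP", False);

   XSelectInput(disp, root, PropertyChangeMask);

   for (;;) {
      XEvent ev;
      XNextEvent(disp, &ev);
      if (ev.type == PropertyNotify &&
          ev.xproperty.atom == net_current_desktop)
         printf("_NET_CURRENT_DESKTOP changed\n");   /* re-read the property here */
   }
}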

 Thanx,
 Yaron.


-- 
- Codito, ergo sum - I code, therefore I am --
The Rasterman (Carsten Haitzler)ras...@rasterman.com

___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: http://lists.x.org/mailman/listinfo/xorg-devel

Re: Migrating away from using cpp for startx and xinitrc in xinit

2015-02-10 Thread The Rasterman
On Mon, 9 Feb 2015 23:39:06 -0800 Jeremy Huddleston Sequoia
jerem...@freedesktop.org said:

 It seems that using cpp for startx and xinitrc in the xinit port is coming
 back to bite us now as different C preprocessors don't exactly process non-C
 files in ways that we might want.
 
 https://trac.macports.org/ticket/46811#comment:4
 
 Does anyone have any strong opinions about this state of affairs and how we
 should address it?  If not, I'll mull it over for a while and try to figure
 something out.

we had this problem long ago and solved it by shipping our own cpp:

http://git.enlightenment.org/core/efl.git/tree/src/bin/edje/epp

it's not actually ours - it's an ancient stand-alone gpl cpp that we KNOW works
and outputs exactly what we expect/want. you could do the same.


-- 
- Codito, ergo sum - I code, therefore I am --
The Rasterman (Carsten Haitzler)ras...@rasterman.com

___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: http://lists.x.org/mailman/listinfo/xorg-devel

Re: Additive (not linear) compositing when using GLX

2010-10-18 Thread The Rasterman
On Sun, 17 Oct 2010 18:03:28 -0500 Rendaw ren...@zarbosoft.com said:

x uses a premultiplied alpha ARGB colorspace (google for the info), but the
simple thing here is:

  r = r * a,  g = g * a,  b = b * a

that means r can NEVER be bigger than a, and the same goes for g and b.
otherwise rendering booboos happen, as you saw. you are rendering with opengl -
but you are not producing a premultiplied alpha result. fix that and presto. :)
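
concretely, with the GL_ONE / GL_ONE_MINUS_SRC_ALPHA blending the compositor
expects, that just means multiplying each channel by alpha before it goes to gl
- e.g. (a tiny sketch of mine, not from the code below): with your ClearColor
of {1, 0, 0, 0} the premultiplied clear becomes {0, 0, 0, 0}, i.e. genuinely
transparent instead of additive red:

#include <GL/gl.h>

static void clear_premultiplied(const float rgba[4])
{
   glClearColor(rgba[0] * rgba[3],   /* r * a */
                rgba[1] * rgba[3],   /* g * a */
                rgba[2] * rgba[3],   /* b * a */
                rgba[3]);
   glClear(GL_COLOR_BUFFER_BIT);
}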


   I'm using xcompmgr for compositing.  A window I paint using OpenGL is 
 much brighter than a window painted the same way using Cairo.  
 Specifically, with Red=1, Alpha=0, everything has full red.  Using 
 Cairo, there is no coloration.
 
 I'm not using glxcompmgr, but I noticed the line glBlendFunc(GL_ONE, 
 GL_ONE_MINUS_SRC_ALPHA); in the code.  Compositing like that would cause 
 the output I showed above.
 
 Does anyone know from which package this behavior might stem?  Since 
 xcompmgr is pretty small, I doubt it is differentiating between GL 
 rendered windows and xrender/whatever rendered ones, but that would have 
 been my first guess.
 
 (I'm using Arch linux: xcompmgr 1.1.5-1, xf86-video-intel 2.12.0-1, 
 xorg-server 1.8.1.902-1)
 
 Here's the code to reproduce.  You'll have to change TargetVisual to use 
 some RGBA supporting visual.
 #include <GL/glx.h>
 #include <GL/gl.h>
 #include <cairo/cairo-xlib.h>
 #include <unistd.h>
 #include <cassert>
 #include <string>
 
 const float ClearColor[4] = {1, 0, 0, 0.0f};
 int TargetVisual = 0x5e;
 
 void CairoDraw(Display *Context, Window Canvas, XVisualInfo *VisualData)
 {
  cairo_surface_t *BaseSurface = cairo_xlib_surface_create(Context,
 Canvas, VisualData->visual, 100, 100);
  cairo_t *CairoContext = cairo_create(BaseSurface);
 
  cairo_set_operator(CairoContext, CAIRO_OPERATOR_SOURCE);
  cairo_set_source_rgba(CairoContext, ClearColor[0], ClearColor[1],
 ClearColor[2], ClearColor[3]);
  cairo_paint(CairoContext);
 
  cairo_surface_write_to_png(BaseSurface, "Test.png");
 }
 
 void GLDraw(Display *Context, Window Canvas, XVisualInfo *VisualData)
 {
  GLXContext GLContext = glXCreateContext(Context, VisualData, NULL,
 GL_TRUE);
  glXMakeCurrent(Context, Canvas, GLContext);
 
  glClearColor(ClearColor[0], ClearColor[1], ClearColor[2],
 ClearColor[3]);
  glClear(GL_COLOR_BUFFER_BIT);
  glXSwapBuffers(Context, Canvas);
 }
 
 int main(int argc, char **argv)
 {
  Display *Context = XOpenDisplay(NULL);
 
  XVisualInfo Criteria, *FoundVisuals;
  int FoundVisualCount;
  Criteria.visualid = TargetVisual;
  FoundVisuals = XGetVisualInfo(Context, VisualIDMask, &Criteria,
 &FoundVisualCount);
  assert(FoundVisualCount != 0);
 
  XSetWindowAttributes NewAttributes;
  NewAttributes.colormap = XCreateColormap(Context,
 RootWindow(Context, FoundVisuals->screen), FoundVisuals->visual, AllocNone);
  NewAttributes.border_pixel = 0;
  NewAttributes.event_mask = StructureNotifyMask;
 
  Window TestWindow = XCreateWindow(Context, RootWindow(Context,
 FoundVisuals->screen),
  0, 0, 100, 100, 0,
  FoundVisuals->depth, InputOutput, FoundVisuals->visual,
  CWBorderPixel | CWColormap | CWEventMask,
 &NewAttributes);
  XMapWindow(Context, TestWindow);
 
  XEvent Poll;
  do XNextEvent(Context, &Poll); while (Poll.type != MapNotify);
 
  if ((argc >= 2) && (std::string(argv[1]) == "cairo"))
  CairoDraw(Context, TestWindow, FoundVisuals);
  else GLDraw(Context, TestWindow, FoundVisuals);
 
  sleep(10);
 }
 
 ___
 xorg-devel@lists.x.org: X.Org development
 Archives: http://lists.x.org/archives/xorg-devel
 Info: http://lists.x.org/mailman/listinfo/xorg-devel
 


-- 
- Codito, ergo sum - I code, therefore I am --
The Rasterman (Carsten Haitzler)ras...@rasterman.com

___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: http://lists.x.org/mailman/listinfo/xorg-devel


Re: Additive (not linear) compositing when using GLX

2010-10-18 Thread The Rasterman
On Mon, 18 Oct 2010 12:57:25 -0500 Rendaw ren...@zarbosoft.com said:

   On 10/18/2010 02:49 AM, Carsten Haitzler (The Rasterman) wrote:
  you are not producing a premultiplied alpha result. fix that and presto. :)
 I guess I sort of expected glx to handle that, since, should it not, I'd 
 have to change all my calls to glBlendFunc/glColor, glClearColor, and 

yup. you need to change them. as such it won't hurt - you should use premul
argb anyway. there is no other sane way to do destination alpha rendering and
keep rendering correct - so if you ever render to an fbo for example - you'll
need this mode if you ever want that fbo to have an alpha channel of its own
etc.

 all my shader code, right?  Or is there an easier way to get 
 premultiplication?  Well, I guess transparent applications aren't 
 exactly common, anyways.

well that depends on what you are doing in your shaders :) and also what your
textures are. as such i moved my entire rendering pipeline to premul ages ago -
textures are premul argb, as are fbo's, buffers and everything. once you are
there life is easy.

 Anyways, thanks, I got it working!  Thank goodness I was wrapping all my 
 OpenGL calls and not using any shaders! Or something.



-- 
- Codito, ergo sum - I code, therefore I am --
The Rasterman (Carsten Haitzler)ras...@rasterman.com

___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: http://lists.x.org/mailman/listinfo/xorg-devel


Re: X and gestures architecture review

2010-08-27 Thread The Rasterman
 in a way they always
get delivered to that client, regardless of target window), and now that you
can have a conflict with events, a way as above, of resolving that conflict
sanely.

 However, what's the best way to resolve point 2?

i don't see it as an issue - you most likely will have 2 layers checking for
gestures. clients themselves (targets for mt) and some wm for handling
screen-wide gestures. this is the majority case by a long shot. gesture
recognition is not hard for handling swipes and rotates and so on. it's pretty
lean cpu-wise. it's hard to do so it guesses the intended gesture just right
every time though - that's the trick, and then comes filtering of input. it's
cheap and easy to do this. (but remember i compare this to the cost in managing
the scene graph of a ui, and then rendering its updates pixel by pixel in the
cpu... and that can be done realtime with smooth framerates even on
embedded-level cpu's (arm/low level atoms). so in the scheme of things gesture
processing overhead is minimally intrusive (if done efficiently). admittedly
the only gestures i bother handling right now are swipes for scrolling with
fingers + momentum etc.

 We have not yet begun development for the next Ubuntu cycle, but we will
 be tackling it shortly. We are open to trying out any approach that
 seems reasonable, whether it's client side or server side recognition.
 At this point I'm on the fence due to the amount of work required to
 implement the X gesture extension vs the potential latencies encountered
 by gesture recognition being performed twice.
 
 Thanks,
 
 -- Chase
 
 ___
 xorg-devel@lists.x.org: X.Org development
 Archives: http://lists.x.org/archives/xorg-devel
 Info: http://lists.x.org/mailman/listinfo/xorg-devel
 


-- 
- Codito, ergo sum - I code, therefore I am --
The Rasterman (Carsten Haitzler)ras...@rasterman.com

___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: http://lists.x.org/mailman/listinfo/xorg-devel


Re: [RFC] Multitouch support, step one

2010-03-16 Thread The Rasterman
On Tue, 16 Mar 2010 22:10:57 +0100 Henrik Rydberg rydb...@euromail.se said:

 Olivier Galibert wrote:
  On Tue, Mar 16, 2010 at 02:42:15PM +0100, Henrik Rydberg wrote:
  1. User space wants details, but also consistent behavior for all devices
  supporting multitouch.
  
  On that aspect, wouldn't it make sense to have a user-side gesture
  manager process with the same kind of status the window manager has?
  It could generate synthetic events with new types such as zoom in/out,
  rotate, etc, but also decide when to forward simple clics, drags...
  
  Ultimately, it's where multiplayer experiments could be done.
  
OG.
 
 That would be interesting, yes. Regarding the gestures, there seems to be a
 general consensus that a library and/or various drivers/managers can work
 together to produce all sorts of fun multitouch and multiuser effects. We only
 need to get the underlying contact interface in place first -- in one form or
 the other.

actually it makes no sense doing this in a generic sense - not as a gesture
manager. it would need LOTS of context. what parts of a window are sensitive to
which gestures, events etc. etc. etc. and what they mean, how and so on - and
that is intrinsically linked TO content. a library toolkits can use to help
recognise gestures - sure. a toolkit - or a whole app knows its content well
enough to make these decisions, but you can't do the gesture stuff without
having context.

-- 
- Codito, ergo sum - I code, therefore I am --
The Rasterman (Carsten Haitzler)ras...@rasterman.com

___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: http://lists.x.org/mailman/listinfo/xorg-devel


Re: [RFC] Multitouch support, step one

2010-03-16 Thread The Rasterman
On Wed, 17 Mar 2010 03:22:20 +0200 Daniel Stone dan...@fooishbar.org said:

 Hi,
 
 On Mon, Mar 15, 2010 at 04:56:05PM +1000, Peter Hutterer wrote:
  Core requires us to always send x/y
 
 Er, I don't think it does _always_ require it.
 
  hence for core emulation we should
  always include _some_ coordinates that are easily translated. While the
  server does caching of absolute values, I think it would be worthwile to
  always have an x/y coordinate _independent of the touchpoints_ in the event.
  The driver can decide which x/y coordinates are chosen if the first
  touchpoint becomes invalid.
 
 Why not just use the first touchpoint for x/y, and when it goes away, no
 more core x/y is sent? Principle of least surprise and all that.

that's what i'd expect. first touchpoint == core emulation. even if apps are
listening to both core and xi2 - they will expect core emulation and can
either choose to ignore core events OR ignore first touch point and let core
events work for that (this is the least surprising for toolkits and apps as
they all already handle core events etc.)

  Hence, the example with 4 valuators above becomes a device with 6 valuators
  instead. x/y and the two coordinate pairs as mentioned above. If extra data
  is provided by the kernel driver, these pairs are simple extended into
  tuples of values, appropriately labeled.
 
 Yep.
 
 This all looks fine, really, and doesn't require terribly much work.
 Multi-focus multi-touch is a bit further off, but hey, none of us can
 even particularly describe how it should work, so I don't think it's
 such a pressing issue. :)

agreed. right now we have a definite use case and requirement for mt for 1
client. first touch sets window grab for all mt input and that window gets
delivered the xi2 events for all the mt input - regardless of where extra touches
happen. that works for me at any rate. i'm happy with that.

-- 
- Codito, ergo sum - I code, therefore I am --
The Rasterman (Carsten Haitzler)ras...@rasterman.com

___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: http://lists.x.org/mailman/listinfo/xorg-devel


Re: [RFC] Multitouch support, step one

2010-03-16 Thread The Rasterman
On Mon, 15 Mar 2010 12:55:11 +0100 Benjamin Tissoires tisso...@cena.fr said:

  XI2 allows devices to change at runtime. Hence a device may add or remove
  valuators on-the-fly as touchpoints appear and disappear. There is a
  chance of a race condition here. If a driver decides to add/remove
  valuators together with the touchpoints, a client that skips events may
  miss out. e.g. if a DeviceChanged event that removes an axis is followed
  by one that adds an axis, a client may only take the second one as
  current, thus thinking the axis was never removed. There is nothing in
  the XI2 specs that prohibits this. Anyways, adding removing axes together
  with touchpoints seems superfluous if we use the presence of an axis as
  indicator for touch. Rather, I think a device should be set up with a
  fixed number of valuators describing the default maximum number of
  touchpoints. Additional ones can be added at runtime if necessary.
 
  agreed. i really see this having a fixed # of touch points - and not
  changing - unless you literally unplug/plug in new hardware that has
  different features (has more or less in the way of touch point support).
 
 We can have a fixed number of touch point but send only the required 
 ones. So agreed too. The point is: how many touch point do we have. The 
 kernel knows how many touches a device can send as the data are not 
 serialized. But after that, we have no idea of how many touches the 
 device support.
 
 With the mask system (or the packing of the touches at the beginning), 
 we will send only the right number of touches, but the description will 
 be very heavy. If each point has 5 axes (trackingID, x, y, width, height 
 for instance) we will have 50 valuators if we support 10 touches ;-) By 
 the way, it's not the point here.

this is what i'm concerned about. there are touch surfaces that now support 10
touches - AND they can give you not just x,y but radius (x and y) too. so 10
touch points - 40 valuators. already blown the 36 valuator limit - and we
haven't added yet more things like tracking id, separate pressure, angle etc.
properties. multiple devices with valuators for these touch params will handle
this. anyway - that's my concern for now.

-- 
- Codito, ergo sum - I code, therefore I am --
The Rasterman (Carsten Haitzler)ras...@rasterman.com

___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: http://lists.x.org/mailman/listinfo/xorg-devel


Re: [RFC] Multitouch support, step one

2010-03-15 Thread The Rasterman
, adding removing axes together with touchpoints
 seems superfluous if we use the presence of an axis as indicator for touch.
 Rather, I think a device should be set up with a fixed number of valuators
 describing the default maximum number of touchpoints. Additional ones can be
 added at runtime if necessary.

agreed. i really see this having a fixed # of touch points - and not changing -
unless you literally unplug/plug in new hardware that has different features
(has more or less in the way of touch point support).

 Work needed:
 - drivers: updated to parse ABS_MT_FOO and forward it on.
 - X server: the input API still uses the principle of first + num_valuators
   instead of the bitmask that the XI2 protocol uses. These calls need to be
   added and then used by the drivers.
 - Protocol: no protocol changes are necessary, though care must be taken in
   regards to XI1 clients. 
   Although the XI2 protocol does allow device changes, this is not specified
   in the XI1 protocol, suggesting that once a device changes, potential XI1
   clients should be either ignored or limited to the set of axes present
   when they issued the ListInputDevices request. Alternatively, the option
   is to just encourage XI1 clients to go the way of the dodo.
 
 Corner cases:
 We currently have a MAX_VALUATORS define of 32. This may or may not be
 arbitrary and interesting things may or may not happen if we increase that.

another problem - no ability to do pressure here. ie have each touch point
have a radius for example (x and y radius) etc. etc. ??? what happened to that?

 A device exposing several axes _and_ multitouch axes will need to be
 appropriately managed by the driver. In this case, the right thing to do
 is likely to expose non-MT axes first and tack the MT axes onto the back.
 Some mapping may need to be added.

you mean axes like each touchpoint width/hight (radius) etc. ?

 The future addition of real multitouch will likely require protocol changes.
 These changes will need to include a way of differentiating a device that
 does true multitouch from one that does single-point multi-touch.
 
 That's it, pretty much (well, not much actually). Feel free to poke holes
 into this proposal.

*poke* :)

-- 
- Codito, ergo sum - I code, therefore I am --
The Rasterman (Carsten Haitzler)ras...@rasterman.com

___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: http://lists.x.org/mailman/listinfo/xorg-devel


Re: multitouch

2010-02-08 Thread The Rasterman
On Mon, 8 Feb 2010 16:16:35 +1000 Peter Hutterer peter.hutte...@who-t.net
said:

 my apologies for the late answer to this whole thing, but this is sort-of a
 reply to all three emails by you guys.
 
 On Tue, Jan 19, 2010 at 01:00:27PM +0100, Simon Thum wrote:
  Bradley T. Hughes wrote:
   On 01/18/2010 11:54 PM, ext Carsten Haitzler (The Rasterman) wrote:
   hey guys (sorry for starting a new thread - i only just subscribed -
   lurking on xorg as opposed to xorg-devel).
  
   interesting that this topic comes up now... multitouch. i'm here @
   samsung and got multi-touch capable hardware - supports up to 10
   touchpoints, so need support.
  
   now... i read the thread. i'm curious. brad - why do u think a single
   event (vs multiple) means less context switches (and thus less power
   consumption, cpu used etc.)?
   
   Even though the events may be buffered (like you mention), there's no 
   guarantee that they will fit nicely into the buffer. I'm not say that
   this will always be the case, but I can foresee the need to write code
   that scans the existing event queue, possibly flushes and rereads, scans
   again, etc. to ensure that the client did actually get all of the events
   that it was interested in.

even then you'd be begging for bugs if you handle events massively out-of-order
(eg several mouse moves/downs/ups between xi2 events). anyway not sure
power here is a good argument - there are other ones that are better :)

 The other guys at my workplace do a touchtable, so I'm not particularly
  qualified. But there's 10 points in carsten's HW already, and from what
  I know it's not hard to imagine pressure or what not to become
  important. That's 30 axes and a limit of 36 axes - if that's not easy to
  lift I'd be wary of such an approach.
 
 the 36 axis limit is one defined in XI1. arguably, no sane multi-touch
 application should be using XI1 anyway. XI2 has a theoretical 16-bit limit
 on axis numbers, so that should be sufficient for devices in the near
 future. Yes, there are some limitations in the server but they can be fixed.

good to hear :)

   There's also the fact that the current approach that Benjamin suggested 
   requires an extra client to manage the slave devices.
  
  OTOH, if you're getting serious, there needs to be an instance
  translating events into gestures/metaphors anyway. So I don't see the
  point of avoiding an instance you're likely to need further on.
 
 A gesture recogniser instance will be mandatory. However, a client that
 modifies the list of input devices on demand and quite frequently hopefully
 won't. Benjamin's approach puts quite a load on the server and on all
 clients (presence events are sent to every client), IMO unnecessarily.

why should one be at the xi2 event level? i'm dubious of this. i've thought it
through a lot - you want gesture recognition happening higher up in the toolkit
or app. you need context - does that gesture make sense? if one gesture was
started but it ended in a way that the gesture changed, you need to cancel the
previous action etc. imho multitouch etc. should stick to delivering as much
info as the HW provides, as cleanly and simply as possible, via xi2 with
minimal interruption of existing app functionality.

 The basic principle for the master/slave division is that even in the
 presence of multiple physical devices, what really counts in the GUI is the
 virtual input points. This used to be a cursor, now it can be multiple
 cursors and with multitouch it will be similar. Most multitouch gestures
 still have a single input point with auxiliary information attached.
 Prime example is the pinch gesture with thumb and index - it's not actually
 two separate points, it's one interaction. Having two master devices for
 this type of gesture is overkill. As a rule of thumb, each hand from each
 user usually constitutes an input point and thus should be represented as a
 master device.

well that depends - if i take both my hands with 2 fingers each and draw things
with both left and right hand... i am using my hands as 2 independent core
devices. the problem is - the screen can't tell the difference - neither can
the app. i like 2 core devices - it means u can emulate multitouch screens
with mice... you just need N mice for N fingers. :) this is a good way to
encourage support in apps and toolkits as it can be more widely used.

 So all we need is hardware that can tell the difference between hands :)

aaah we can wish :)

 An example device tree for two hands would thus look like this:
 
 MD1 - MD XTEST device
     - physical mouse
     - right hand touch device - thumb subdevice
                                - index subdevice
 MD2 - MD XTEST device
     - physical trackball
     - left hand touch device  - thumb subdevice
                                - index subdevice
                                - middle finger subdevice
 
 Where the subdevices are present on demand and may disappear. They may not
 even be actual devices

multitouch

2010-01-18 Thread The Rasterman
hey guys (sorry for starting a new thread - i only just subscribed - lurking on
xorg as opposed to xorg-devel).

interesting that this topic comes up now... multitouch. i'm here @ samsung and
got multi-touch capable hardware - supports up to 10 touchpoints, so need
support.

now... i read the thread. i'm curious. brad - why do u think a single event (vs
multiple) means less context switches (and thus less power consumption, cpu
used etc.)?

as such your event is delivered along with possibly many others in a buffer - x
protocol is buffered and thus a read will pull as much data as it can into the
buffer and process it. this means your 2, 3, 4, 5 or more touch events should
get read (and written from the server side) pretty much all at once and get put
into a single buffer, then XNextEvent will just walk the buffer processing the
events. even if by some accident they don't end up in the same read and buffer
and you do context switch, you won't save battery as the cpu will have never
gone idle enough to go into any low power mode. but as such you should be
seeing all these events alongside other events (core mousepress/release/motion
etc. etc. etc.). so i think the argument for all of it in 1 event from a
power/cpu side is a bit specious. but... do you have actual data to
show that such events actually don't get buffered in x protocol as they should
be and don't end up getting read all at once? (i know that my main loop will
very often read several events from a single select wakeup before going back to
sleep, as long as the events come in faster than they can be acted on, as they
also get processed and batched into yet another queue before any rendering
happens at the end of that queue processing).

but - i do see that if osx and windows deliver events as a single blob for
multiple touches, then if we do something different, we are just creating work
for developers to adapt to something different. i also see the argument for
wanting multiple valuators to deliver the coords of multiple fingers for things
like pinch, zoom, etc. etc. BUT this doesn't work for other uses - eg a virtual
keyboard where i am typing with 2 thumbs - my presses are actually independent
presses, like 2 core pointers in mpx.

so... i think the multiple valuators vs multiple devices for mt events is moot
as you can argue it both ways and i dont think either side has specifically a
stronger case... except doing multiple events from multiple devices works
better with mpx-aware apps/toolkits, and it works better for the more complex
touch devices that deliver not just x,y but x, y, width, height, angle,
pressure, etc. etc. per point (so each point may have a dozen or more valuators
attached to it), and thus delivering a compact set of points in a single event
makes life harder for getting all the extra data for the separate touch events.

so i'd vote for how tissoires did it as it allows for more information per
touch point to be sanely delivered. as such that's how we have it working right
now. yes - the hw can deliver all points at once but we produce n events. but
what i'm wondering is... should we

1. have 1, 2, 3, 4 or more (10) core devices, each one is a touch point.
2. have 1 core with 9 slave devices (core is first touch and core pointer)
3. have 1 core for first touch and 9 floating devices for the other touches.

they have their respective issues. right now we do #3, but #2 seems very
logical. #1 seems a bit extreme.

remember - we need to keep compatibility with single touch (mouse only) events and
apps as well as expand to be able to get the multi-touch events if wanted.

-- 
- Codito, ergo sum - I code, therefore I am --
The Rasterman (Carsten Haitzler)ras...@rasterman.com

___
xorg-devel mailing list
xorg-devel@lists.x.org
http://lists.x.org/mailman/listinfo/xorg-devel