Re: Collaboration on standard Wayland protocol extensions
On Tue, 29 Mar 2016 08:11:03 -0400 Drew DeVault said:

> > what is allowed. eg - a whitelist of binary paths. i see this as a lesser
> > chance of a hole.
>
> I see what you're getting at now. We can get the pid of a wayland
> client, though, and from that we can look at /proc/cmdline, from which
> we can get the binary path. We can even look at /proc/exe and produce a
> checksum of it, so that programs become untrusted as soon as they
> change.

you can do that... but there are race conditions. a pid can be recycled. imagine
some client, just before it exits, sends some protocol to request doing something
"restricted". maybe you even check on connect, but let's say this child exits and
you haven't gotten the disconnect on the fd yet because there is still data to
read in the buffer. you get the pid while the process is still there, then it
happens to exit... NOW you check /proc/PID ... but in the meantime the PID was
recycled with a new process that is "whitelisted", so you check this new
replacement /proc/PID/exe, find it's ok, and ok the request from the old dying
client... BOOM. hole.

it'd be better to use something like smack labels - but this is not commonly
used in linux. you can check the smack label on the connection and auth by that;
the smack label can then be in a db of "these guys are ok if they have smack
label x" and there is no race here. smack labels are like containers and also
affect all sorts of other access like files, network etc.

but the generic solution without relying on smack would be to launch the client
yourself - socketpair + pass fd. :) it has the lowest chance of badness. this
works if the client is a regular native binary (c/c++) or if it's a script,
because the fd will happily pass on even if it's a wrapper shell script that
then runs a binary.

> > i know - but for just capturing screencasts, adding watermarks etc. - all
> > you need is to store a stream - the rest can be post-processed.
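the "socketpair + pass fd" launch could look something like the sketch below. libwayland-client really does honor the WAYLAND_SOCKET environment variable for an inherited fd; everything else here (the function name, how the compositor decides what is trusted, what it does with its end of the pair) is illustrative, and error handling is trimmed for brevity:

```c
/* Sketch of a compositor launching a trusted client over a socketpair.
 * The child inherits an already-authorized fd across exec, so there is
 * no pid lookup and no /proc recycling race. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/socket.h>

/* Returns the compositor-side fd, or -1 on failure. */
static int
launch_trusted_client(const char *path)
{
	int sv[2];
	char fdstr[16];
	pid_t pid;

	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
		return -1;

	pid = fork();
	if (pid < 0) {
		close(sv[0]);
		close(sv[1]);
		return -1;
	}
	if (pid == 0) {
		/* child: keep only our end and advertise it to
		 * libwayland-client via WAYLAND_SOCKET */
		close(sv[0]);
		snprintf(fdstr, sizeof fdstr, "%d", sv[1]);
		setenv("WAYLAND_SOCKET", fdstr, 1);
		execl(path, path, (char *)NULL);
		_exit(127);
	}
	/* parent/compositor: this end would be handed to
	 * wl_client_create() and flagged as privileged */
	close(sv[1]);
	return sv[0];
}
```

the key point is that authorization happens *before* exec: the compositor never has to map a pid back to a binary, so there is nothing to race against.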
> Correct, if you record to a file, you can deal with it in post. But
> there are other concerns, like what output format you'd like to use and
> what encoding quality you want to use to consider factors like disk
> space, cpu usage, etc. And there still is the live streaming use-case,
> which we should support and which your solution does not address.

given high enough quality, any post-process can also transcode to another
format/codec/quality level while adding watermarks etc. a compositor able to
stream out video (to a file or whatever) would of course have options for
basics like quality/bitrate etc. - the codec libraries will want this info
anyway...

> > let's talk about the actual apps surfaces and where they go - not
> > configuration of outputs. :)
>
> No, I mean, that's what I'm getting at. I don't want to talk about that
> because it doesn't make sense outside of e. On Sway, the user is putting
> their windows (fullscreen or otherwise) on whatever output they want
> themselves. There aren't output roles. Outputs are just outputs and I
> intend to keep it that way.

enlightenment ALSO puts windows "on the current screen" by default, and you can
move them to another screen, desktop etc. as you like. hell, it has the ability
to remember screen, desktop, geometry, and all sorts of other state and re-apply
it to the same window when it appears again. i use this often myself to force
apps to do what i want when they keep messing up. i'm not talking about manually
moving things or the ability for a compositor/wm to override and enforce its
will. i am talking about situations where you want things to "just work" out of
the box as they might be intended to, without forcing the user to go manually
say "hey no - i want this". i'm talking about a situation like
powerpoint/impress/whatever where, when i give a presentation, on ONE screen i
have a smaller version of the slide, i also have the preview of the next slide,
a count-down timer for the slide talk, etc.
and on the "presentation screen" i get the actual full presentation. I should
not have to "manually configure this". impress/ppt/whatever should be able to
open up 2 windows and appropriately tag them for their purposes, and the
compositor then KNOWS which screen they should go onto. impress etc. also needs
to know that a presentation screen exists, so it knows to open up a special
"presentation window" and a "control window" vs just a presentation window.
these windows are of course fullscreen ones - i think we don't disagree there.

the same might go for games - imagine a nintendo DS setup. the game has a
control window (on the bottom screen) and a "game window" on the top, similar
to impress presentation vs control windows. imagine a laptop with 2 screens:
one in the normal place and one where your keyboard would be... similar to the
DS. maybe we can talk flight simulators, which may want to span 3 monitors
(left/middle/right); due to different screens able to do different refresh
rates etc. you really likely want to have 3
Re: [PATCH] protocol: Add summaries to event parameters
On Tue, Mar 29, 2016 at 05:17:06PM -0700, Bryce Harrington wrote:
> On Tue, Mar 29, 2016 at 06:17:38PM -0500, Yong Bakos wrote:
> > > From: Yong Bakos
> > >
> > > All event arg elements now have an appropriate summary attribute.
> > > This was conducted mostly in response to the undocumented parameter
> > > warnings generated during 'make check'.
> > >
> > > Signed-off-by: Yong Bakos
> > > ---
> >
> > Sorry I borked the subject line. Should be [PATCH wayland]...
> > Let me know if I should re-send.
>
> It's fine, I forget to do this myself from time to time.
> Fwiw 'wayland' is implied when one omits specifying the project in the
> subject line, anyway.
>
> I've learned you can put this in your weston checkout's .git/config:
>
> [format]
>     pretty = fuller
>     subjectprefix = "PATCH weston"
>
> But I find in practice I usually have to override it anyway with
> --subject-prefix to add the patchset version numbers.

fwiw, I've started running this in any new repo, it's pretty much universally
applicable:

    git config --add format.subjectprefix "PATCH `basename $PWD`"

Cheers,
Peter

___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/wayland-devel
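Peter's one-liner can be sanity-checked in a throwaway repository; the directory name `subjectprefix-demo` below is made up for illustration, and the prefix it produces always tracks the repo's directory name:

```shell
# Try the suggestion from the mail in a scratch repository.
mkdir -p /tmp/subjectprefix-demo
cd /tmp/subjectprefix-demo
git init -q .
git config --add format.subjectprefix "PATCH `basename $PWD`"
# git format-patch will now generate subjects like
# "[PATCH subjectprefix-demo 1/3] ..."
git config format.subjectprefix
```

as Bryce notes, version numbers still need `--subject-prefix` (or `-v`) at format-patch time; the config only covers the static part.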
Re: [PATCH] protocol: Add summaries to event parameters
On Tue, Mar 29, 2016 at 06:17:38PM -0500, Yong Bakos wrote:
> > From: Yong Bakos
> >
> > All event arg elements now have an appropriate summary attribute.
> > This was conducted mostly in response to the undocumented parameter
> > warnings generated during 'make check'.
> >
> > Signed-off-by: Yong Bakos
> > ---
>
> Sorry I borked the subject line. Should be [PATCH wayland]...
> Let me know if I should re-send.

It's fine, I forget to do this myself from time to time. Fwiw 'wayland' is
implied when one omits specifying the project in the subject line, anyway.

I've learned you can put this in your weston checkout's .git/config:

    [format]
        pretty = fuller
        subjectprefix = "PATCH weston"

But I find in practice I usually have to override it anyway with
--subject-prefix to add the patchset version numbers.

Bryce

> yong
>
> >  protocol/wayland.xml | 128 ++-
> >  1 file changed, 65 insertions(+), 63 deletions(-)
> >
> > diff --git a/protocol/wayland.xml b/protocol/wayland.xml
> > index 8739cd3..dffb708 100644
> > --- a/protocol/wayland.xml
> > +++ b/protocol/wayland.xml
> > [quoted diff hunks omitted: the archive's HTML stripping removed the
> > XML arg tags, leaving the hunks unreadable]
Re: [PATCH wayland 0/5] add api to inspect the compositor state
On Mon, Mar 07, 2016 at 06:31:30PM +0100, Giulio Camuffo wrote:
> These patches add several new functions to:
> - get the list of current clients for a wl_display
> - get notified of new clients
> - get the list of resources for a wl_client
> - get notified of new resources for a client
> - get the interface of a resource
>
> I'm working on a tool to inspect the internal state of a compositor, and
> these new functions allow it to, once it has retrieved the wl_display, show
> all the protocol objects that are active.
> I would also like to add a wl_resource *wl_resource_get_parent() to get the
> tree of resources, but I'm not sure what the best way to achieve that would
> be, so for now I believe these patches stand on their own and they can be
> pushed once R-b-ed.

Just from a technical review perspective, the set looks ok to me.

Reviewed-by: Bryce Harrington

> Giulio Camuffo (4):
>   Add API to retrieve the interface of a wl_resource
>   Add API to get the list of connected clients
>   Add a resource creation signal
>   Add API to retrieve and iterate over the resources list of a client
>
> Sungjae Park (1):
>   server: add listener API for new clients
>
>  src/wayland-server-core.h |  32
>  src/wayland-server.c      | 129 ++
>  2 files changed, 161 insertions(+)
>
> --
> 2.7.2
Re: [PATCH wayland 1/5] server: add listener API for new clients
On Mon, Mar 07, 2016 at 06:31:31PM +0100, Giulio Camuffo wrote:
> From: Sungjae Park
>
> Using the display object, emit a signal when a new client is created.
>
> On the server side, we can get the destroy event of a client,
> but there is no way to get the created event of it.
> Of course, we can get the client object from the global registry
> binding callbacks, but those can be called several times with the
> same client object. And even if a client creates a display object
> (so there is a connection), the server could not know that.
> There could be more use-cases, not only this.
>
> Signed-off-by: Sung-jae Park

Reviewed-by: Bryce Harrington

> ---
>
> This is the v2 of the patch by Sung-jae, I applied the incremental diff
> on the first version.
>
>  src/wayland-server-core.h |  4
>  src/wayland-server.c      | 22 ++
>  2 files changed, 26 insertions(+)
>
> diff --git a/src/wayland-server-core.h b/src/wayland-server-core.h
> index e8e1e9c..1bc4d6b 100644
> --- a/src/wayland-server-core.h
> +++ b/src/wayland-server-core.h
> @@ -156,6 +156,10 @@ void
>  wl_display_add_destroy_listener(struct wl_display *display,
>                                  struct wl_listener *listener);
>
> +void
> +wl_display_add_client_created_listener(struct wl_display *display,
> +                                       struct wl_listener *listener);
> +
>  struct wl_listener *
>  wl_display_get_destroy_listener(struct wl_display *display,
>                                  wl_notify_func_t notify);
> diff --git a/src/wayland-server.c b/src/wayland-server.c
> index ae9365f..2857b1d 100644
> --- a/src/wayland-server.c
> +++ b/src/wayland-server.c
> @@ -96,6 +96,7 @@ struct wl_display {
>          struct wl_list client_list;
>
>          struct wl_signal destroy_signal;
> +        struct wl_signal create_client_signal;
>
>          struct wl_array additional_shm_formats;
>  };
> @@ -448,6 +449,8 @@ wl_client_create(struct wl_display *display, int fd)
>
>          wl_list_insert(display->client_list.prev, &client->link);
>
> +        wl_signal_emit(&display->create_client_signal, client);
> +
>          return client;
>
>  err_map:
> @@ -864,6 +867,7 @@ wl_display_create(void)
>          wl_list_init(&display->registry_resource_list);
>
>          wl_signal_init(&display->destroy_signal);
> +        wl_signal_init(&display->create_client_signal);
>
>          display->id = 1;
>          display->serial = 0;
> @@ -1353,6 +1357,24 @@ wl_display_add_destroy_listener(struct wl_display *display,
>          wl_signal_add(&display->destroy_signal, listener);
>  }
>
> +/** Registers a listener for the client connection signal.
> + *  When a new client object is created, \a listener will be notified,
> + *  carrying a pointer to the new wl_client object.
> + *
> + *  \ref wl_client_create
> + *  \ref wl_display
> + *  \ref wl_listener
> + *
> + *  \param display The display object
> + *  \param listener Signal handler object
> + */
> +WL_EXPORT void
> +wl_display_add_client_created_listener(struct wl_display *display,
> +                                       struct wl_listener *listener)
> +{
> +        wl_signal_add(&display->create_client_signal, listener);
> +}
> +
>  WL_EXPORT struct wl_listener *
>  wl_display_get_destroy_listener(struct wl_display *display,
>                                  wl_notify_func_t notify)
> --
> 2.7.2

___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/wayland-devel
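for readers who haven't looked inside libwayland: the signal being added here is the usual intrusive-list listener pattern. The sketch below is a self-contained re-creation of that machinery (the real definitions live in wayland-util.h and wayland-server-core.h; the names match the library but this is an illustrative reimplementation, not the library code), showing what wl_signal_emit(&display->create_client_signal, client) ends up doing for each registered listener:

```c
#include <stddef.h>

/* Stand-ins for the real wayland-util.h types. */
struct wl_list { struct wl_list *prev, *next; };
struct wl_listener;
typedef void (*wl_notify_func_t)(struct wl_listener *listener, void *data);
struct wl_listener { struct wl_list link; wl_notify_func_t notify; };
struct wl_signal { struct wl_list listener_list; };

static void
wl_list_init(struct wl_list *list)
{
	list->prev = list;
	list->next = list;
}

static void
wl_list_insert(struct wl_list *list, struct wl_list *elm)
{
	elm->prev = list;
	elm->next = list->next;
	list->next = elm;
	elm->next->prev = elm;
}

static void
wl_signal_init(struct wl_signal *signal)
{
	wl_list_init(&signal->listener_list);
}

/* Append at the tail, so listeners fire in registration order. */
static void
wl_signal_add(struct wl_signal *signal, struct wl_listener *listener)
{
	wl_list_insert(signal->listener_list.prev, &listener->link);
}

/* Walk the list and call each listener with the emitter's data. */
static void
wl_signal_emit(struct wl_signal *signal, void *data)
{
	struct wl_list *pos;

	for (pos = signal->listener_list.next;
	     pos != &signal->listener_list;
	     pos = pos->next) {
		struct wl_listener *l = (struct wl_listener *)
			((char *)pos - offsetof(struct wl_listener, link));
		l->notify(l, data);
	}
}

/* Tiny demo: two listeners, one emit -> both fire. */
static int call_count;

static void
count_notify(struct wl_listener *listener, void *data)
{
	(void)listener; (void)data;
	call_count++;
}

static int
demo(void)
{
	struct wl_signal sig;
	struct wl_listener a, b;

	call_count = 0;
	wl_signal_init(&sig);
	a.notify = count_notify;
	b.notify = count_notify;
	wl_signal_add(&sig, &a);
	wl_signal_add(&sig, &b);
	wl_signal_emit(&sig, NULL);
	return call_count;
}
```

a compositor using the new API would embed a wl_listener in its own struct, set its notify callback, and pass it to wl_display_add_client_created_listener(); the new wl_client then arrives as the callback's data pointer.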
Re: [PATCH] protocol: Add summaries to event parameters
> > From: Yong Bakos
> >
> > All event arg elements now have an appropriate summary attribute.
> > This was conducted mostly in response to the undocumented parameter
> > warnings generated during 'make check'.
> >
> > Signed-off-by: Yong Bakos
> > ---

Sorry I borked the subject line. Should be [PATCH wayland]... Let me know if I
should re-send.

yong

>  protocol/wayland.xml | 128 ++-
>  1 file changed, 65 insertions(+), 63 deletions(-)
>
> diff --git a/protocol/wayland.xml b/protocol/wayland.xml
> index 8739cd3..dffb708 100644
> --- a/protocol/wayland.xml
> +++ b/protocol/wayland.xml
> [quoted diff hunks omitted: the archive's HTML stripping removed the
> XML arg tags, leaving the hunks unreadable]
[PATCH] protocol: Add summaries to event parameters
From: Yong Bakos

All event arg elements now have an appropriate summary attribute. This was
conducted mostly in response to the undocumented parameter warnings generated
during 'make check'.

Signed-off-by: Yong Bakos
---
 protocol/wayland.xml | 128 ++-
 1 file changed, 65 insertions(+), 63 deletions(-)

diff --git a/protocol/wayland.xml b/protocol/wayland.xml
index 8739cd3..dffb708 100644
--- a/protocol/wayland.xml
+++ b/protocol/wayland.xml
[diff hunks omitted: the archive's HTML stripping removed the XML arg tags,
leaving the hunks unreadable]
Re: [PATCH wayland] Add API to install protocol loggers on the server wl_display
On Wed, Mar 23, 2016 at 04:31:16PM +0200, Giulio Camuffo wrote:
> The new wl_display_add_protocol_logger allows to set a function as
> a logger, which will get called when a new request is received or an
> event is sent.
> This is akin to setting WAYLAND_DEBUG=1, but more powerful because it
> can be enabled at run time and allows to show the log e.g. in a UI view.
>
> Signed-off-by: Giulio Camuffo

Hi Giulio,

This looks pretty interesting but I have a few questions.

This patch allows adding an arbitrary number of watchers to tap the
client/server communication stream at the wayland display layer. The data is
rendered down into strings formatted in the same style as wayland's current
logging output.

You mention a use case of a UI view, so first question is: what is the use
case for allowing multiple loggers? Obviously the implementation would be
simpler if you just have a single logger (plus stderr).

It seems to me that for a UI log browser you're going to want functionality to
filter or reformat the raw data, so my second question is: why does the API
render all the data down to a string, rather than just copying the data into a
structure and passing that? A UI log browser could then format the data as it
wished after applying its own filters. This would also obviate the need for
wl_closure_format(), so wl_closure_print() could stay as it is.

Although, I wonder (third question), if this logging system is in place, could
wl_closure_print() be redone to be a logger function? This would simplify the
code somewhat, and also make it possible to turn stderr logging on/off at
runtime, which I suppose could be handy when debugging in certain
circumstances.
Bryce > --- > src/connection.c | 117 > ++ > src/wayland-private.h | 4 ++ > src/wayland-server-core.h | 13 ++ > src/wayland-server.c | 94 ++--- > 4 files changed, 192 insertions(+), 36 deletions(-) > > diff --git a/src/connection.c b/src/connection.c > index c0e322f..f6447c0 100644 > --- a/src/connection.c > +++ b/src/connection.c > @@ -1181,71 +1181,128 @@ wl_closure_queue(struct wl_closure *closure, struct > wl_connection *connection) > return result; > } > > -void > -wl_closure_print(struct wl_closure *closure, struct wl_object *target, int > send) > +static inline int > +min(int a, int b) > +{ > + return a < b ? a : b; > +} > + > +/** Formats the closure and returns a static const char * with the value, > + * in the 'interface@id.message_name(args)' format. > + * DO NOT free or mess with the returned pointer. > + */ > +const char * > +wl_closure_format(struct wl_closure *closure, struct wl_object *target) > { > + static char *buffer = NULL; > + static size_t buf_size; > int i; > struct argument_details arg; > const char *signature = closure->message->signature; > - struct timespec tp; > - unsigned int time; > + size_t size = 0; > > - clock_gettime(CLOCK_REALTIME, ); > - time = (tp.tv_sec * 100L) + (tp.tv_nsec / 1000); > + if (!buffer) { > + buf_size = 128; > + buffer = malloc(buf_size); > + } > > - fprintf(stderr, "[%10.3f] %s%s@%u.%s(", > - time / 1000.0, > - send ? 
" -> " : "", > - target->interface->name, target->id, > - closure->message->name); > + size = snprintf(buffer, buf_size, "%s@%u.%s(", > + target->interface->name, target->id, > + closure->message->name); > > for (i = 0; i < closure->count; i++) { > signature = get_next_argument(signature, ); > if (i > 0) > - fprintf(stderr, ", "); > + size += snprintf(buffer + size, > + buf_size - min(size, buf_size), > + ", "); > > switch (arg.type) { > case 'u': > - fprintf(stderr, "%u", closure->args[i].u); > + size += snprintf(buffer + size, > + buf_size - min(size, buf_size), > + "%u", closure->args[i].u); > break; > case 'i': > - fprintf(stderr, "%d", closure->args[i].i); > + size += snprintf(buffer + size, > + buf_size - min(size, buf_size), > + "%d", closure->args[i].i); > break; > case 'f': > - fprintf(stderr, "%f", > - wl_fixed_to_double(closure->args[i].f)); > + size += snprintf(buffer + size, > + buf_size - min(size, buf_size), "%f", > + >
Re: Introduction and updates from NVIDIA
Hi Andy,

On 23 March 2016 at 00:12, Andy Ritger wrote:
> Thanks for the thorough responses, Daniel.

No problem; as I said, I'm actually really happy to see an implementation out
there.

> On Tue, Mar 22, 2016 at 01:49:59PM +0000, Daniel Stone wrote:
>> On 21 March 2016 at 16:28, Miguel Angel Vico wrote:
>> > Similarly, EGLOutput will provide means to access different
>> > portions of display control hardware associated with an EGLDevice.
>> >
>> > For instance, EGLOutputLayer represents a portion of display
>> > control hardware that accepts an image as input and processes it
>> > for presentation on a display device.
>>
>> I still struggle to see the value of what is essentially an
>> abstraction over KMS, but oh well.
>
> The intent wasn't to abstract all of KMS, just the surface presentation
> aspect where EGL and KMS intersect. Besides the other points below,
> an additional motivation for abstraction is to allow EGL to work with
> the native modesetting APIs on other platforms (e.g., OpenWF on QNX).

Fair enough. And, ah, _that's_ where the OpenWF implementation is - I was
honestly unsure for years, since the last implementation I saw was from the
ex-Hybrid NVIDIA guys in Helsinki, back when it was aimed at Series 60.

>> Firstly, again looking at the case where a Wayland client is a stream
>> producer and the Wayland compositor is a consumer, we move from a
>> model where references to individual buffers are explicitly passed
>> through the Wayland protocol, to one where those buffers merely carry a
>> reference to a stream. Again, as stated in the review of 4/7, that
>> looks like it has the potential to break some actual real-world cases,
>> and I have no idea how to solve it, other than banning mailbox mode,
>> which would seem to mostly defeat the point of Streams (more on that
>> below).
>
> Streams are just a transport for frames.
> The client still explicitly
> communicates when a frame is delivered through the stream via wayland
> protocol, and the compositor controls when it grabs a new frame, via
> eglStreamConsumerAcquireKHR(). Unless there are bugs in the patches,
> the flow of buffers is still explicit and fully under the wayland protocol
> and compositor's control.

Right, I believe if you have FIFO mode and strictly enforce synchronisation to
wl_surface::frame, then you should be safe. Mailbox mode or any other kind of
SwapInterval(0) equivalent opens you up to a series of issues.

> Also, mailbox mode versus FIFO mode should essentially equate to Vsync
> off versus Vsync on, respectively. It shouldn't have anything to do
> with the benefits of streams, but mailbox mode is a nice feature for
> benchmarking games/simulations or naively displaying your latest &
> greatest content without tearing.

I agree it's definitely a nice thing to have, but it does bring up the
serialisation issue: we expect any configuration performed by the client (say,
wl_surface::set_opaque_area to let the compositor know where it can disable
blending) to be fully in line with buffer attachment. The extreme case of this
is resize, but there are quite a few valid cases where you need serialisation.
I don't know quite off the top of my head how you'd support mailbox mode with
Streams, given this constraint - you need three-way feedback between the
compositor (recording all associated surface state, including subsurfaces),
clients (recording the surface state valid when that buffer was posted), and
the Streams implementation (determining which frames to dequeue, which to
discard and return to the client, etc).

>> Secondly, looking at the compositor-drm case, the use of the dumb
>> buffer to display undefined content as a dummy modeset really makes me
>> uneasy,
>
> Yes, the use of dumb buffer in this patch series is a kludge.
> If we
> were going to use drmModeSetCrtc + EGLStreams, I think we'd want to
> pass no fb to drmModeSetCrtc, but that currently gets rejected by DRM.
> Are surface-less modesets intended to be allowable in DRM? I can hunt
> that down if that is intended to work. Of course, better to work out
> how EGLStreams should cooperate with atomic KMS.
>
> It was definitely an oversight to not zero-initialize the dumb buffer.

Right, atomic allows you to separate pipe/CRTC configuration from plane/overlay
configuration. So you'd have two options: one is to use atomic and require the
CRTC be configured with planes off before using Streams to post flips, and the
other is to add KMS configuration to the EGL output.

Though, now I think of it, this effectively precludes one case, which is
scaling a Streams-sourced buffer inside the display controller. In the GBM
case, the compositor gets every buffer, so it can configure the plane scaling
in line with buffer display. I don't see how you'd do that with Streams.

There's another hurdle to overcome too, which would currently preclude avoiding
the intermediate dumb buffer at all. One of the
Re: Introduction and updates from NVIDIA
Hi,

On 28 March 2016 at 19:12, Daniel Vetter wrote:
> On Thu, Mar 24, 2016 at 10:06:04AM -0700, Andy Ritger wrote:
>> eglstreams or gbm or any other implementation aside, is it always _only_
>> the KMS driver that knows what the optimal configuration would be?
>> It seems like part of the decision could require knowledge of the graphics
>> hardware, which presumably the OpenGL/EGL driver is best positioned
>> to have.
>
> Android agrees with that and stuffs all these decisions into hwc. And I
> agree that there's cases with combinations of display block, 2d engine
> and 3d engine where that full-system overview is definitely necessary. But
> OpenGL still doesn't look like the right place to me. Something in-between
> everything else, like hwc+gralloc on android (which has its own issues),
> makes a lot more sense imo in a world where you can combine things wildly.

Right. Samsung decided that answer was correct, and Tizen has the Tizen Buffer
Manager, which started off life as GBM with the copyright notices filed off[0],
plus the addition of separate allocation intended-use flags for 2D/blit and
media decode engines. So for them, GBM has mutated from the thing that knows
about the intersection of GPU + display, to the gralloc-like thing that can
determine optimal allocation strategies.

Unfortunately I don't expect to ever get meaningful input there, as I only
discovered its existence by semi-accident, back when you needed a Tizen login
to access it as well. It's only ever really been mentioned in passing, and has
no users outside Tizen (and I still don't know what exactly uses their
'surface queue'). Oh well.

> I do believe though that with just kms + sensible heuristics to allocate
> surfaces to hw planes + some semi-clever fallback mechanism/hints (which
> is what we currently lack) it should be possible to pull something off
> without special-case vendor magic in hwc for every combination. That's
That's > purely a conjecture though on my part, otoh no one has ever really tried > all that hard yet. Another fun suggestion that came back would be feedback from the atomic ioctl: when rejecting a configuration, optionally return a list of property changes with which a future configuration would have a larger chance of success (e.g. wider stride, different tiling mode). Plumbing that back through to clients isn't beyond the realm of reason, though it would require more user-visible API. This is something that would fit in quite nicely with the Weston atomic KMS implementation, where we attempt to enlarge the configuration one plane at a time: start with the primary plane, and attempt to place every other scanout target on a plane, seeing at every turn if they succeed or need to be punted down to GPU composition. Cheers, Daniel [0]: tbm_bo_handle must be copied from gbm_bo_handle, because to write that even once makes no sense; to write it independently is so improbable as to be impossible. https://review.tizen.org/git/?p=platform/core/uifw/libtbm.git;a=blob;f=src/tbm_bufmgr.h;h=7bf2597f3fee53d3b00ca7ba760675c977ba4435;hb=ecc409c142cd77b1d92cb35f444099e2c782b6ad ___ wayland-devel mailing list wayland-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/wayland-devel
Re: Collaboration on standard Wayland protocol extensions
On Tue, 29 Mar 2016 08:11:03 -0400 Drew DeVault wrote: > On 2016-03-29 3:10 PM, Carsten Haitzler wrote: > > > I don't really understand why forking from the compositor and bringing > > > along the fds really gives you much of a gain in terms of security. Can > > > > why? > > > > there is no way a process can access the socket with privs (even know the > > extra protocol exists) unless it is executed by the compositor. the > > compositor > > can do whatever it deems "necessary" to ensure it executes only what is > > allowed. eg - a whitelist of binary paths. i see this as a lesser chance of > > a > > hole. > > I see what you're getting at now. We can get the pid of a wayland > client, though, and from that we can look at /proc/cmdline, from which > we can get the binary path. We can even look at /proc/exe and produce a > checksum of it, so that programs become untrusted as soon as they > change. That means you have to recognize all interpreters, or you suddenly just authorized all applications running with /usr/bin/python or such. The PID -> /proc -> executable thing works only for a limited set of things. However, forking in the compositor is secure against that. Assuming the compositor knows what it wants to run, it creates a connection *before* launching the app, and the app just inherits an already authorized connection. The general solution is likely with containers, as you said. That thing I agree with. Thanks, pq
Re: Collaboration on standard Wayland protocol extensions
On Tue, 29 Mar 2016 07:41:10 -0400 Drew DeVault wrote: > Thus begins my long morning of writing emails: > > On 2016-03-29 12:01 PM, Jonas Ådahl wrote: > Not everyone has dbus on their system and it's not among my goals to > force it on people. I'm not taking a political stance on this and I > don't want it to devolve into a flamewar - I'm just not imposing either > side of the dbus/systemd argument on my users. If you don't use what others use, then you use something different. As you don't want to use what other people use, it goes the same way for other people to not want to use what you use. Whatever the reasons for either party. Wayland upstream/community/whatever can force neither you nor them to act against their will. So your only hope is to compete with technical excellence and popularity. > > Pinos communicates via D-Bus, but pixels/frames are of course never > > passed directly, but via shared memory handles. What a screen > > cast/remote desktop API would do is more or less to start/stop a pinos > > stream and optionally inject events, and let the client know what stream > > it should use. > > Hmm. Again going back to "I don't want to make the dbus decision for my > users", I would prefer to find a solution that's less dependent on it, > though I imagine taking inspiration from Pinos is quite reasonable. Up to you, indeed, on what you force down your users' throats, but the fact is, you will always force something on them. Your users don't have the freedom of choice to use your compositor without Wayland either. You chose Wayland, your users chose your software. > > Sorry, I don't see how you make the connection between "Wayland" and > > "screen capture" other than that it may be implemented in the same > > process. Wayland is meant to be used by clients to be able to pass > > content to and receive input from the display server. It is not > > intended to be a catch-all IPC replacing D-Bus. > > DBus is not related to Wayland. 
DBus is not _attached_ to Wayland. DBus > and Wayland are separate, unrelated protocols and solving Wayland > problems with DBus is silly. Correct. Use each to its best effect; not all problems are nails. If there already is a DBus based solution that just works, why would someone write a new solution to replace that? There has to be a benefit for replacing the old for the people using the old solution. It could be a benefit for the end users of the old, or for the developers of the old, but if the only benefit is for "outsiders", it gives no motivation. Thanks, pq
Re: Collaboration on standard Wayland protocol extensions
Hi, On 29 March 2016 at 13:24, Drew DeVault wrote: > On 2016-03-29 11:45 AM, Daniel Stone wrote: >> Firstly, >> https://www.redhat.com/archives/fedora-devel-list/2008-January/msg00861.html >> is a cliché, but the spirit of free software is empowering people to >> make the change they want to see, rather than requiring the entire >> world be perfectly isolated and abstracted along inter-module >> boundaries, freely mix-and-matchable. > > I should rephrase: it's against the spirit of Unix. Simple, composable > tools, that Do One Thing And Do It Well, is the Unix way. Our desktop > environments needn't and shouldn't be much different. And yet the existence and dominant popularity of large integrated environments (historically beginning with Emacs) suggests that the pithy summary is either wrong, or no longer applicable. Ditto the relative successes of Plan 9 and microkernels compared to other OSes. >> Secondly, you talk about introducing all these concepts and protocols >> as avoiding complexity. Nothing could be further from the case. That >> X11 emulates this model means that it has Xinerama, XRandR, >> XF86VidMode, the ICCCM, and NetWM/EWMH, as well as all the various >> core protocols. You're not avoiding complexity, but simultaneously >> shifting and avoiding it. You're not avoiding policy to create >> mechanism; the structure and design of the mechanism is a policy in >> itself. > > I disagree. I think this is just a fundamental difference of opinion. I really do not see how you can look at ICCCM/EWMH and declare it to be a victory for simplicity and ease of implementation. >> Thirdly, it's important to take a step back. 'Wayland doesn't support >> middle-button primary selections' is a feature gap compared to X11; >> 'Wayland doesn't have XRandR' is not. Sometimes it seems like you miss >> the forest of user-visible behaviour for the trees of creating >> protocol. > > I think you're missing what users are actually using. 
You'd be surprised > at how many power users are comfortable working with tools like xrandr > and scripting their environments. This is about more than just > xrandr-like support, too. There's definitely a forest of people using > screen capture for live streaming, for instance. Yes, screen capture is vital to have. Providing some of the functionality (application fullscreening, including to potentially different sizes/modes than are currently set; user display control) that RandR does, is also vital. Providing an exact clone of XRandR ('let's provide one protocol that allows any arbitrary application to do what it likes'), much less so. I also posit that anyone suggesting that providing the full XRandR suite to arbitrary users makes implementation more simple, has never been on the sharp end of that implementation. >> Fourthly, I think you misunderstand the role of what we do. If you >> want to design and deploy a modular framework for Legoing your own >> environment together, by all means, please do that. Give it a go, see >> what falls out, see if people creating arbitrary external panels and >> so find it useful, and then see if you can convince the others to >> adopt it. But this isn't really the place for top-down design where we >> dictate how all environments based on Wayland shall behave. > > I've already seen this. It's been around for a long time. I don't know > if you live in a "desktop environment bubble", but there's a LOT of this > already in practice in the lightweight WM world. Many, many users, are > using software like i3 and xmonad and herbstluftwm and openbox and so on > with composable desktop tools like dmenu and i3bar and lemonbar and so > on _today_. Yes I know, as a former long-term Awesome/OpenBox/etc etc etc etc etc user. > This isn't some radical experiment in making a composable > desktop. It's already a well proven idea, and it works great. Again, I don't know in what parallel universe ICCCM+EWMH are 'great', but OK. 
> I would > guess that the sum of people who are using a desktop like this > perhaps outnumbers the total users of, say, enlightenment. I'm just > bringing the needs of this group forward. I would suggest the total number of users of these 'power' environments allowing full flexibility and arbitrary external control (but still via entirely standardised protocols) is several orders of magnitude smaller than the combined total of Unity, GNOME and KDE, but I don't think this thread really needs any more value judgements. My point is that there is no solution for this existing _on Wayland_ today, something which I would've thought to be pretty inarguable, since that's what this entire thread is ostensibly about. I know full well that this exists on X11, and that there are users of the same, but again, you are talking about creating the same functionality as a generic Wayland protocol, so it's pretty obvious that it doesn't exist today. What I was trying to get at, before this devolved into angrily trying to create
Re: Collaboration on standard Wayland protocol extensions
On 2016-03-29 11:45 AM, Daniel Stone wrote: > Firstly, > https://www.redhat.com/archives/fedora-devel-list/2008-January/msg00861.html > is a cliché, but the spirit of free software is empowering people to > make the change they want to see, rather than requiring the entire > world be perfectly isolated and abstracted along inter-module > boundaries, freely mix-and-matchable. I should rephrase: it's against the spirit of Unix. Simple, composable tools, that Do One Thing And Do It Well, is the Unix way. Our desktop environments needn't and shouldn't be much different. > Secondly, you talk about introducing all these concepts and protocols > as avoiding complexity. Nothing could be further from the case. That > X11 emulates this model means that it has Xinerama, XRandR, > XF86VidMode, the ICCCM, and NetWM/EWMH, as well as all the various > core protocols. You're not avoiding complexity, but simultaneously > shifting and avoiding it. You're not avoiding policy to create > mechanism; the structure and design of the mechanism is a policy in > itself. I disagree. I think this is just a fundamental difference of opinion. > Thirdly, it's important to take a step back. 'Wayland doesn't support > middle-button primary selections' is a feature gap compared to X11; > 'Wayland doesn't have XRandR' is not. Sometimes it seems like you miss > the forest of user-visible behaviour for the trees of creating > protocol. I think you're missing what users are actually using. You'd be surprised at how many power users are comfortable working with tools like xrandr and scripting their environments. This is about more than just xrandr-like support, too. There's definitely a forest of people using screen capture for live streaming, for instance. > Fourthly, I think you misunderstand the role of what we do. If you > want to design and deploy a modular framework for Legoing your own > environment together, by all means, please do that. 
Give it a go, see > what falls out, see if people creating arbitrary external panels and > so find it useful, and then see if you can convince the others to > adopt it. But this isn't really the place for top-down design where we > dictate how all environments based on Wayland shall behave. I've already seen this. It's been around for a long time. I don't know if you live in a "desktop environment bubble", but there's a LOT of this already in practice in the lightweight WM world. Many, many users are using software like i3 and xmonad and herbstluftwm and openbox and so on with composable desktop tools like dmenu and i3bar and lemonbar and so on _today_. This isn't some radical experiment in making a composable desktop. It's already a well proven idea, and it works great. I would guess that the sum of people who are using a desktop like this perhaps outnumbers the total users of, say, enlightenment. I'm just bringing the needs of this group forward. Some of your email is just griping about the long life of this thread, and you're right. I think I've got most of what I wanted from this thread; I'm going to start proposing some protocols in new threads next. -- Drew DeVault
Re: Collaboration on standard Wayland protocol extensions
On Tue, Mar 29, 2016 at 07:41:10AM -0400, Drew DeVault wrote: > Thus begins my long morning of writing emails: > > On 2016-03-29 12:01 PM, Jonas Ådahl wrote: > > > I prefer to think of it as "who has logical ownership over this resource > > > that they're providing". The compositor has ownership of your output and > > > input devices and so on, and it should be responsible for making them > > > available. > > > > I didn't say the display server shouldn't be the one exposing such an > > API, I just think it is a bad idea to duplicate every display server > > agnostic API for every possible display server protocol. > > Do you foresee GNOME on Mir ever happening? We're trying to leave X > behind here. There won't be a Wayland replacement for a while. The > Wayland compositor has ownership over these resources and the Wayland > compositor is the one managing these resources - and it speaks the > Wayland protocol, which is extensible. GNOME's mutter already works as a compositor for two separate protocols: X11 and Wayland. Whenever possible, I by far prefer deprecating the old way and replacing it with a display server protocol agnostic solution, rather than having a duplicate implementation for every such thing. > > > > I know that Gnome folks really love their DBus, but I don't think that > > > it makes sense to use it for this. Not all of the DEs/WMs use dbus and > > > it would be great if the tools didn't have to know how to talk to it, > > > but instead had some common way of getting pixels from the compositor. > > > > So if you have a compositor or a client that wants to support three > > display server architectures, it needs to implement all those three > > API's separately? Why can't we provide an API ffmpeg etc can use no > > matter if the display server happens to be the X server, sway or > > Unity-on-Mir? > > See above Most if not all clients will for the foreseeable future most likely need to support at least three protocols on Linux: X11, Wayland and Mir. 
I don't see any of these going away any time soon, and I don't see any reason to have three separate interfaces doing exactly the same thing. > > > I don't see the point of not just using D-Bus just because you aren't > > using it yet. It's already there, installed on your system, it's already > > used by various other parts of the stack, and it will require a lot less > > effort by clients and servers if they want to support more than > > just Wayland. > > Not everyone has dbus on their system and it's not among my goals to > force it on people. I'm not taking a political stance on this and I > don't want it to devolve into a flamewar - I'm just not imposing either > side of the dbus/systemd argument on my users. > > > Pinos communicates via D-Bus, but pixels/frames are of course never > > passed directly, but via shared memory handles. What a screen > > cast/remote desktop API would do is more or less to start/stop a pinos > > stream and optionally inject events, and let the client know what stream > > it should use. > > Hmm. Again going back to "I don't want to make the dbus decision for my > users", I would prefer to find a solution that's less dependent on it, > though I imagine taking inspiration from Pinos is quite reasonable. We are not going to reimplement anything like Pinos via Wayland protocols, so any client/compositor that want to do anything related to stream casting (anything that doesn't just make the content end up directly on the filesystem) will either need to reimplement their own private solution, or depend on something like Pinos which will itself depend on D-Bus. > > > Sorry, I don't see how you make the connection between "Wayland" and > > "screen capture" other than that it may be implemented in the same > > process. Wayland is meant to be used by clients to be able to pass > > content to and receive input from the display server. It is not > > intended to be a catch-all IPC replacing D-Bus. > > DBus is not related to Wayland. 
DBus is not _attached_ to Wayland. DBus > and Wayland are separate, unrelated protocols and solving Wayland > problems with DBus is silly. So is screen casting/recording/sharing. It's a feature of a compositor, not a feature of Wayland. Screen casting in the way you describe (pass content to some client) will most likely have its frames passed via D-Bus, so you'd still force your user to use D-Bus anyway. Jonas > > -- > Drew DeVault
Re: Collaboration on standard Wayland protocol extensions
Hi, On 29 March 2016 at 13:11, Drew DeVault wrote: >> or just have the compositor "work" without needing scripts and users to have >> to >> learn how to write them. :) > > Never gonna happen, man. There's no way you can foresee and code for > everyone's needs. I'm catching on to this point you're heading towards, > though: e doesn't intend to suit everyone's needs. If a compositor implementation can never be sufficient to express peoples' needs, how could an arbitrary protocol be better? Same complexity problem. (And, as far as the 'but what if a compositor implementation isn't good' argument goes - don't use bad compositors.) Cheers, Daniel
Re: fullscreen shell is irrelevant to this (Re: Collaboration on standard Wayland protocol extensions)
This was a mistake on my part. I mixed up the two protocols; I don't intend to make any changes to fullscreen-shell. Sorry for the confusion.
Re: Collaboration on standard Wayland protocol extensions
On 2016-03-29 10:25 AM, Martin Graesslin wrote: > > - More detailed surface roles (should it be floating, is it a modal, > > does it want to draw its own decorations, etc) > > Concerning own decoration we have implemented https://quickgit.kde.org/? p=kwayland.git=blob=8bc106c7c42a40f71dad9a884824a7a9899e7b2f=818e320bd99867ea9c831edfb68c9671ef7dfc47=src %2Fclient%2Fprotocols%2Fserver-decoration.xml Excellent. The protocol looks like it'll do just fine. > I think especially for compositors like sway that can be very helpful. For Qt > we implemented support in our QPA plugin for Plasma. So if sway wants to use > it I can give you pointers on how to use it in your own QPA plugin (if you > don't have one yet how to create it) and to use it to force QtWayland to not > use the client side decorations. I would love to see something like that. Can we work on a model that would avoid making users install qt to install Sway? Honestly I'd like to just set an environment variable to turn off CSD where possible, for both Qt and GTK. I'm still trying to avoid forcing a toolkit on users. -- Drew DeVault
Re: Collaboration on standard Wayland protocol extensions
On 2016-03-29 10:20 AM, Martin Graesslin wrote: > > - Output configuration > > we have our kwin-kscreen specific protocol for this. You can find it at: > https://quickgit.kde.org/? > p=kwayland.git=blob=9ebe342f7939b6dec45e2ebf3ad69e772ec66543=818e320bd99867ea9c831edfb68c9671ef7dfc47=src > %2Fclient%2Fprotocols%2Foutput-management.xml > > and > > https://quickgit.kde.org/? > p=kwayland.git=blob=747fc264b7e6a40a65a0a04464c2c98036a84f0f=818e320bd99867ea9c831edfb68c9671ef7dfc47=src > %2Fclient%2Fprotocols%2Foutputdevice.xml > > It's designed for our specific needs in Plasma. If it's useful for others, we > are happy to share and collaborate. It looks like something I could use in Sway. I like it. I'm going to see how well it integrates with Sway and probably write a command line tool to interface with it. I think that it would be useful to put this under the permissions system, though, once that's put together. -- Drew DeVault
Re: Collaboration on standard Wayland protocol extensions
On 2016-03-29 3:10 PM, Carsten Haitzler wrote: > > I don't really understand why forking from the compositor and bringing > > along the fds really gives you much of a gain in terms of security. Can > > why? > > there is no way a process can access the socket with privs (even know the > extra protocol exists) unless it is executed by the compositor. the compositor > can do whatever it deems "necessary" to ensure it executes only what is > allowed. eg - a whitelist of binary paths. i see this as a lesser chance of a > hole. I see what you're getting at now. We can get the pid of a wayland client, though, and from that we can look at /proc/cmdline, from which we can get the binary path. We can even look at /proc/exe and produce a checksum of it, so that programs become untrusted as soon as they change. > i know - but for just capturing screencasts, adding watermarks etc. - all you > need is to store a stream - the rest can be post-processed. Correct, if you record to a file, you can deal with it in post. But there are other concerns, like what output format you'd like to use and what encoding quality you want to use to consider factors like disk space, cpu usage, etc. And there still is the live streaming use-case, which we should support and which your solution does not address. > why do we need the fullscreen shell? that was intended for environments where > apps are only ever fullscreen from memory. xdg shell has the ability for a > window to go fullscreen (or back to normal) this should do just fine. :) sure > - > let's talk about this stuff - fullscreening etc. I've been mixing up fullscreen-shell with that one thing in xdg-shell. My bad. > let's talk about the actual apps surfaces and where they go - not > configuration of outputs. :) No, I mean, that's what I'm getting at. I don't want to talk about that because it doesn't make sense outside of e. On Sway, the user is putting their windows (fullscreen or otherwise) on whatever output they want themselves. 
There aren't output roles. Outputs are just outputs and I intend to keep it that way. > > Troublemaking software is going to continue to make trouble. Further > > news at 9. That doesn't really justify making trouble for users as well. > > or just have the compositor "work" without needing scripts and users to have > to > learn how to write them. :) Never gonna happen, man. There's no way you can foresee and code for everyone's needs. I'm catching on to this point you're heading towards, though: e doesn't intend to suit everyone's needs. > > Here's the wayland screenshot again for comparison: > > > > https://sr.ht/Ai5N.png > > > > Most apps are fine with being told what resolution to be, and they > > _need_ to be fine with this for the sake of my sanity. But I understand > > that several applications have special concerns that would prevent this > but for THEIR sanity, they are not fine with it. :) Nearly all toolkits are entirely fine with being any size, at least above some sane minimum. A GUI that cannot deal with being a user-specified size is a poorly written GUI. > no. this has nothing to do with floating. this has to do with minimum and in > this case especially - maximum sizes. it has NOTHING to do with floating. you > are conflating sizing with floating because floating is how YOU HAPPEN to want > to deal with it. Fair. Floating is how I would deal with it. But maybe I'm missing something: where do the min/max size hints come from? All I seem to know of is the surface geometry request, which isn't a hint so much as it's something every single app does. If I didn't ignore it, all windows would be fucky and the tiling layout wouldn't work at all. Is there some other hint coming from somewhere I'm not aware of? > you COULD deal with it as i described - pad out the area or > scale retaining aspect ratio - allow user to configure the response. 
if i had > a > small calculator on the left and something that can size up on the right i > would EXPECT a tiling wm to be smart and do: >
> +---++
> | ||
> |:::||
> |:::||
> |:::||
> | ||
> +---++
Eh, this might be fine for a small number of windows, and maybe even is the right answer for Sway. I'm worried about it happening for most windows and I don't want to encourage people to make their applications locked into one aspect ratio and unfriendly to tiling users. > > What I really want is _users_ to have control. I don't like it that > > compositors are forcing solutions on them that doesn't allow them to be > > in control of how their shit works. > they can patch their compositors if they want. if you are forcing users to > write scripts you are already forcing them to "learn to code" in a simple way. > would it not be best to try and make things work without needing > scripts/custom > code per user and have features/modes/logic that "just work" ? There's a huge difference between the
Re: Collaboration on standard Wayland protocol extensions
On 2016-03-29 8:25 AM, Giulio Camuffo wrote: > If the client just binds the interface the compositor needs to > immediately create the resource and send the protocol error, if the > client is not authorized. It doesn't have the time to ask the user for > input on the matter, while my proposal gives the compositor that. Understood. I'm on board.
Re: Collaboration on standard Wayland protocol extensions
Thus begins my long morning of writing emails: On 2016-03-29 12:01 PM, Jonas Ådahl wrote: > > I prefer to think of it as "who has logical ownership over this resource > > that they're providing". The compositor has ownership of your output and > > input devices and so on, and it should be responsible for making them > > available. > > I didn't say the display server shouldn't be the one exposing such an > API, I just think it is a bad idea to duplicate every display server > agnostic API for every possible display server protocol. Do you foresee GNOME on Mir ever happening? We're trying to leave X behind here. There won't be a Wayland replacement for a while. The Wayland compositor has ownership over these resources and the Wayland compositor is the one managing these resources - and it speaks the Wayland protocol, which is extensible. > > I know that Gnome folks really love their DBus, but I don't think that > > it makes sense to use it for this. Not all of the DEs/WMs use dbus and > > it would be great if the tools didn't have to know how to talk to it, > > but instead had some common way of getting pixels from the compositor. > > So if you have a compositor or a client that wants to support three > display server architectures, it needs to implement all those three > API's separately? Why can't we provide an API ffmpeg etc can use no > matter if the display server happens to be the X server, sway or > Unity-on-Mir? See above > I don't see the point of not just using D-Bus just because you aren't > using it yet. It's already there, installed on your system, it's already > used by various other parts of the stack, and it will require a lot less > effort by clients and servers if they they want to support more than > just Wayland. Not everyone has dbus on their system and it's not among my goals to force it on people. 
I'm not taking a political stance on this and I don't want it to devolve into a flamewar - I'm just not imposing either side of the dbus/systemd argument on my users. > Pinos communicates via D-Bus, but pixels/frames are of course never > passed directly, but via shared memory handles. What a screen > cast/remote desktop API would do is more or less to start/stop a pinos > stream and optionally inject events, and let the client know what stream > it should use. Hmm. Again going back to "I don't want to make the dbus decision for my users", I would prefer to find a solution that's less dependent on it, though I imagine taking inspiration from Pinos is quite reasonable. > Sorry, I don't see how you make the connection between "Wayland" and > "screen capture" other than that it may be implemented in the same > process. Wayland is meant to be used by clients to be able to pass > content to and receive input from the display server. It is not > intended to be a catch-all IPC replacing D-Bus. DBus is not related to Wayland. DBus is not _attached_ to Wayland. DBus and Wayland are separate, unrelated protocols and solving Wayland problems with DBus is silly. -- Drew DeVault
fullscreen shell is irrelevant to this (Re: Collaboration on standard Wayland protocol extensions)
On Tue, 29 Mar 2016 00:01:00 -0400 Drew DeVault wrote: > On 2016-03-29 11:31 AM, Carsten Haitzler wrote: > > my take on it is that it's premature and not needed at this point. in fact i > > wouldn't implement a protocol at all. *IF* i were to allow special access, > > i'd > > simply require to fork the process directly from compositor and provide a > > socketpair fd to this process and THAT fd could have extra capabilities > > attached to the wl protocol. i would do nothing else because as a > > compositor i > > cannot be sure what i am executing. i'd hand over the choice of being able > > to > > execute this tool to the user to say ok to and not just blindly execute > > anything i like. > > I don't really understand why forking from the compositor and bringing > along the fds really gives you much of a gain in terms of security. Can > you elaborate on how this changes things? I should also mention that I > don't really see the sort of security goals Wayland has in mind as > attainable until we start doing things like containerizing applications, > in which case we can eliminate entire classes of problems from this > design. > I'm snipping out a lot of the output configuration related stuff from > this response. I'm not going to argue very hard for a common output > configuration protocol. I've been trying to change gears on the output > discussion towards a discussion around whether or not the > fullscreen-shell protocol supports our needs and whether or how it needs > to be updated wrt permissions. I sense there is a misunderstanding here that I want to correct. The fullscreen-shell protocol is completely irrelevant here. It has been designed to be mutually exclusive to a desktop protocol suite. The original goal for the fullscreen-shell is to be able to use a ready-made compositor, e.g. Weston in particular, as a hardware abstraction layer for a single application. We of course have some demo programs to use it so we can test it. 
That single application would often be a DE compositor, perhaps a small project which does not want to deal with all the KMS and other APIs but concentrate on making a good DE at the expense of the slight overhead that using a middle-man compositor brings. Now that we have decided that libweston is a good idea, I would assume this use case may disappear eventually. There are also no permission issues wrt the fullscreen shell protocol. The compositor exposing the fullscreen shell interface expects only a single client ever, or works a bit like the VTs in that only a single client can be active at a time. Ordinarily you set up the application such that the parent compositor is launched as part of the app launch, and nothing else can even connect to the parent compositor. Fullscreening windows on a desktop has absolutely nothing to do with the fullscreen shell. Fullscreen shell is not available on compositors configured for desktop. This is how it was designed and meant to be. Thanks, pq ___ wayland-devel mailing list wayland-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/wayland-devel
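[Editor's illustration] The launch-it-yourself model Carsten describes at the top of this message (compositor creates a socketpair, forks, and execs the trusted client with one end of the pair) can be sketched as below. This is only a sketch of the mechanism, not code from any real compositor; the one real convention it leans on is the WAYLAND_SOCKET environment variable, which libwayland's wl_display_connect() checks for an inherited connection fd. Python is used purely to keep it short and runnable.

```python
import os
import socket

def spawn_trusted_helper() -> bytes:
    # Compositor side: one end of the pair stays in the compositor, the
    # other end is inherited by the helper it launches itself.
    comp_end, helper_end = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

    pid = os.fork()
    if pid == 0:
        # Helper side. A real compositor would exec() a whitelisted
        # binary here, after os.set_inheritable(helper_end.fileno(), True)
        # and setting WAYLAND_SOCKET, which wl_display_connect() honours.
        # We just talk over the fd to keep the sketch self-contained.
        comp_end.close()
        os.environ["WAYLAND_SOCKET"] = str(helper_end.fileno())
        helper_end.sendall(b"privileged-hello")
        os._exit(0)

    # Back in the compositor: only a client on this fd can ever see the
    # privileged protocol; no other process can even connect to it.
    helper_end.close()
    greeting = comp_end.recv(64)
    os.waitpid(pid, 0)
    comp_end.close()
    return greeting
```

The point of the design is visible in the shape of the code: the privileged channel exists only as an inherited fd, so there is nothing on the filesystem for an untrusted client to connect to, and no pid-based lookup to race.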
Re: Collaboration on standard Wayland protocol extensions
On Tue, 29 Mar 2016 12:01:52 +0800 Jonas Ådahl wrote: > On Mon, Mar 28, 2016 at 11:33:15PM -0400, Drew DeVault wrote: > > On 2016-03-29 10:30 AM, Jonas Ådahl wrote: > > > I'm just going to put down my own personal thoughts on these. I mostly > > > agree with Carsten on all of this. In general, my opinion is that it is > > > completely pointless to add Wayland protocols for things that have > > > nothing to do with Wayland whatsoever; we have other display protocol > > > agnostic methods for that that fit much better. > > > > I think these features have a lot to do with Wayland, and I still > > maintain that protocol extensions make sense as a way of doing it. I > > don't want to commit my users to dbus or something similar and I'd > > prefer if I didn't have to make something unique to sway. It's probably > > going to be protocol extensions for some of this stuff and I think it'd > > be very useful for the same flexibility to be offered by other > > compositors. > > > > > As a rule of thumb, whether a feature needs a Wayland protocol or not, > > > one can consider whether a client needs to reference a client side > > > object (such as a surface) on the server. If it needs it, we should add > > > a Wayland protocol; otherwise not. Another way of seeing it would be > > > "could this be shared between Wayland/X11/Mir/... then don't do it in > > > any of those". > > > > I prefer to think of it as "who has logical ownership over this resource > > that they're providing". The compositor has ownership of your output and > > input devices and so on, and it should be responsible for making them > > available. > > I didn't say the display server shouldn't be the one exposing such an > API, I just think it is a bad idea to duplicate every display server > agnostic API for every possible display server protocol. > > > > > > > - Screen capture > > > Why would this ever be a Wayland protocol? 
If a client needs to capture > > > its own content it doesn't need to ask the compositor; otherwise it's > > > the job of the compositor. If there needs to be a complex pipeline setup > > > that adds subtitles, muxing, sound effects and what not, we should make > > > use of existing projects that intend to create inter-process video > > > pipelines (pinos[0] for example). > > > > > > FWIW, I believe remote desktop/screen sharing support partly falls under > > > this category as well, with the exception that it may need input event > > > injection as well (which of course shouldn't be a Wayland protocol). > > > > > > As a side note, for GNOME, I have been working on an org.gnome prefixed > > > D-Bus protocol for remote desktop that enables the actual remote desktop > > > things to be implemented in a separate process by providing pinos > > > streams, and I believe that at some point it would be good to have an > > > org.freedesktop.* (or equivalent) protocol doing that in a more desktop > > > agnostic way. Such a protocol could just as well be read-only, and > > > passed to something like ffmpeg (maybe can even pipe it from gst-launch > > > directly to ffmpeg if you so wish) in order to do screen recording. > > > > I know that Gnome folks really love their DBus, but I don't think that > > it makes sense to use it for this. Not all of the DEs/WMs use dbus and > > it would be great if the tools didn't have to know how to talk to it, > > but instead had some common way of getting pixels from the compositor. > > So if you have a compositor or a client that wants to support three > display server architectures, it needs to implement all those three > APIs separately? Why can't we provide an API ffmpeg etc can use no > matter if the display server happens to be the X server, sway or > Unity-on-Mir? > > I don't see the point of not just using D-Bus just because you aren't > using it yet. 
It's already there, installed on your system, it's already > used by various other parts of the stack, and it will require a lot less > effort by clients and servers if they want to support more than > just Wayland. > > > > I haven't heard of Pinos before, but brief searches online make it look > > pretty useful for this purpose. I think it can be involved here. > > Pinos communicates via D-Bus, but pixels/frames are of course never > passed directly, but via shared memory handles. What a screen > cast/remote desktop API would do is more or less to start/stop a pinos > stream and optionally inject events, and let the client know what stream > it should use. > > > > > > I don't think we should start writing Wayland protocols for things that > > > have nothing to do with Wayland, only because the program where it is > > > going to be implemented may already be doing Wayland things. > > > There simply is no reason for it. > > > > > > We should simply use the IPC system that we already have that we already > > > use for things like this (for example color management, inter-process
Re: Collaboration on standard Wayland protocol extensions
Hi, On 29 March 2016 at 05:01, Drew DeVault wrote: > You don't provide any justification for this, you just say it like it's > gospel, and it's not. I will again remind you that not everyone wants to > buy into a desktop environment wholesale. They may want to piece it > together however they see fit and it's their god damn right to. Anything > else is against the spirit of free software. I only have a couple of things to add, since this thread is so long, so diverse, and so shouty, that it's long past the point of usefulness. Firstly, https://www.redhat.com/archives/fedora-devel-list/2008-January/msg00861.html is a cliché, but the spirit of free software is empowering people to make the change they want to see, rather than requiring the entire world be perfectly isolated and abstracted along inter-module boundaries, freely mix-and-matchable. Secondly, you talk about introducing all these concepts and protocols as avoiding complexity. Nothing could be further from the case. That X11 emulates this model means that it has Xinerama, XRandR, XF86VidMode, the ICCCM, and NetWM/EWMH, as well as all the various core protocols. You're not avoiding complexity, but simultaneously shifting and avoiding it. You're not avoiding policy to create mechanism; the structure and design of the mechanism is a policy in itself. Thirdly, it's important to take a step back. 'Wayland doesn't support middle-button primary selections' is a feature gap compared to X11; 'Wayland doesn't have XRandR' is not. Sometimes it seems like you miss the forest of user-visible behaviour for the trees of creating protocol. Fourthly, I think you misunderstand the role of what we do. If you want to design and deploy a modular framework for Legoing your own environment together, by all means, please do that. Give it a go, see what falls out, see if people creating arbitrary external panels and so on find it useful, and then see if you can convince the others to adopt it. 
But this isn't really the place for top-down design where we dictate how all environments based on Wayland shall behave. I don't really hold out hope for this thread, but would be happy to pick up separate threads on various topics, e.g. screen capture/streaming to external apps. Cheers, Daniel
Re: Collaboration on standard Wayland protocol extensions
On Mon, 28 Mar 2016 09:08:55 +0300 Giulio Camuffo wrote: > 2016-03-27 23:34 GMT+03:00 Drew DeVault : > > Greetings! I am the maintainer of the Sway Wayland compositor. > > > > http://swaywm.org > > > > It's almost the Year of Wayland on the Desktop(tm), and I have > > reached out to each of the projects this message is addressed to (GNOME, > > Kwin, and wayland-devel) to collaborate on some shared protocol > > extensions for doing a handful of common tasks such as display > > configuration and taking screenshots. Life will be much easier for > > projects like ffmpeg and imagemagick if they don't have to implement > > compositor-specific code for capturing the screen! > > > > I want to start by establishing the requirements for these protocols. > > Broadly speaking, I am looking to create protocols for the following > > use-cases: > > > > - Screen capture > > - Output configuration > > - More detailed surface roles (should it be floating, is it a modal, > > does it want to draw its own decorations, etc) > > - Input device configuration > > > > I think that these are the core protocols necessary for > > cross-compositor compatibility and to support most existing tools for > > X11 like ffmpeg. Considering the security goals of Wayland, it will also > > likely be necessary to implement some kind of protocol for requesting > > and granting sensitive permissions to clients. > > On this, see > https://lists.freedesktop.org/archives/wayland-devel/2015-November/025734.html > I have not been able to continue on that, but if you want to feel free > to grab that proposal. Hi, I may have had negative opinions related to some things on Giulio's proposal, but I have changed my mind since then. I'd be happy to see it developed further, understanding that it does not aim to solve the question of authentication but only communicating the authorization, for now. 
Thanks, pq
Re: Collaboration on standard Wayland protocol extensions
On Tue, 29 Mar 2016 08:25:19 +0300 Giulio Camuffo wrote: > 2016-03-29 6:23 GMT+03:00 Drew DeVault : > > On 2016-03-29 2:15 AM, Martin Peres wrote: > >> I was proposing for applications to just bind the interface and see if it > >> works or not. But Giulio's proposal makes sense because it could be used to > >> both grant and revoke rights on the fly. > > > > I think both solutions have similar merit and I don't feel strongly > > about either one. > > If the client just binds the interface the compositor needs to > immediately create the resource and send the protocol error, if the > client is not authorized. It doesn't have the time to ask the user for > input on the matter, while my proposal gives the compositor that. More precisely, you cannot gracefully fail to use an interface exposed via wl_registry. It either works, or the client gets disconnected. A protocol error always means disconnection, and wl_registry has no other way to communicate a "no, you can't use this". Checking "whether an interface works or not" is also not trivial. It would likely lead to adding a "yes, this works" event to all such interfaces, since anything less explicit is harder than necessary. But why do that separately in every interface rather than in a common interface? Btw. I did say in the past that I didn't quite understand or like Giulio's proposal, but I have come around since. For the above reasons, it does make sense on a high level. Thanks, pq
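[Editor's illustration] Pekka's point about wl_registry can be restated with a toy model (plain Python, not libwayland): a global either appears in the registry or it does not, and trying to use one the compositor did not advertise is a fatal protocol error, not a polite refusal.

```python
class Disconnected(Exception):
    """Stands in for a Wayland protocol error: the connection is gone."""

class Registry:
    # Toy stand-in for wl_registry: the compositor chooses which globals
    # a given client gets to see at all.
    def __init__(self, advertised):
        self.advertised = set(advertised)

    def bind(self, interface):
        if interface not in self.advertised:
            raise Disconnected(interface)
        return interface  # stand-in for a bound proxy object

# Gating by advertisement works: an unauthorized client simply never
# sees the privileged global in the first place...
registry = Registry({"wl_compositor", "wl_shm"})

# ...but "bind it and see if it works" cannot fail gracefully; the probe
# costs the client its whole connection.
try:
    registry.bind("hypothetical_screen_capture_v1")
except Disconnected:
    pass
```

The interface names are illustrative: "hypothetical_screen_capture_v1" is made up, and real binds reference numeric names announced in wl_registry.global events.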
Re: Collaboration on standard Wayland protocol extensions
On Sunday, March 27, 2016 4:34:37 PM CEST Drew DeVault wrote: > Greetings! I am the maintainer of the Sway Wayland compositor. > > http://swaywm.org > > It's almost the Year of Wayland on the Desktop(tm), and I have > reached out to each of the projects this message is addressed to (GNOME, > Kwin, and wayland-devel) to collaborate on some shared protocol > extensions for doing a handful of common tasks such as display > configuration and taking screenshots. Life will be much easier for > projects like ffmpeg and imagemagick if they don't have to implement > compositor-specific code for capturing the screen! > > I want to start by establishing the requirements for these protocols. > Broadly speaking, I am looking to create protocols for the following > use-cases: > > - Screen capture > - Output configuration > - More detailed surface roles (should it be floating, is it a modal, > does it want to draw its own decorations, etc) Concerning own decorations we have implemented https://quickgit.kde.org/? p=kwayland.git=blob=8bc106c7c42a40f71dad9a884824a7a9899e7b2f=818e320bd99867ea9c831edfb68c9671ef7dfc47=src %2Fclient%2Fprotocols%2Fserver-decoration.xml We would be very happy to share this one. It's already in use in Plasma 5.6 and so far we are quite satisfied with it. It's designed with convergence in mind so that it's possible to easily switch the modes (e.g. decorated on Desktop, not decorated on phone, no decorations for maximized windows, etc.). I think especially for compositors like sway that can be very helpful. For Qt we implemented support in our QPA plugin for Plasma. So if sway wants to use it I can give you pointers on how to use it in your own QPA plugin (or, if you don't have one yet, how to create one) and to use it to force QtWayland to not use the client side decorations. Cheers Martin
Re: Collaboration on standard Wayland protocol extensions
On Sunday, March 27, 2016 4:34:37 PM CEST Drew DeVault wrote: > Greetings! I am the maintainer of the Sway Wayland compositor. > > http://swaywm.org > > It's almost the Year of Wayland on the Desktop(tm), and I have > reached out to each of the projects this message is addressed to (GNOME, > Kwin, and wayland-devel) to collaborate on some shared protocol > extensions for doing a handful of common tasks such as display > configuration and taking screenshots. Life will be much easier for > projects like ffmpeg and imagemagick if they don't have to implement > compositor-specific code for capturing the screen! > > I want to start by establishing the requirements for these protocols. > Broadly speaking, I am looking to create protocols for the following > use-cases: > > - Screen capture > - Output configuration we have our kwin-kscreen specific protocol for this. You can find it at: https://quickgit.kde.org/? p=kwayland.git=blob=9ebe342f7939b6dec45e2ebf3ad69e772ec66543=818e320bd99867ea9c831edfb68c9671ef7dfc47=src %2Fclient%2Fprotocols%2Foutput-management.xml and https://quickgit.kde.org/? p=kwayland.git=blob=747fc264b7e6a40a65a0a04464c2c98036a84f0f=818e320bd99867ea9c831edfb68c9671ef7dfc47=src %2Fclient%2Fprotocols%2Foutputdevice.xml It's designed for our specific needs in Plasma. If it's useful for others, we are happy to share and collaborate. Cheers Martin
Re: Collaboration on standard Wayland protocol extensions
On Tuesday, March 29, 2016 9:23:13 AM CEST Peter Hutterer wrote: > On Sun, Mar 27, 2016 at 04:34:37PM -0400, Drew DeVault wrote: > > Greetings! I am the maintainer of the Sway Wayland compositor. > > > > http://swaywm.org > > > > It's almost the Year of Wayland on the Desktop(tm), and I have > > reached out to each of the projects this message is addressed to (GNOME, > > Kwin, and wayland-devel) to collaborate on some shared protocol > > extensions for doing a handful of common tasks such as display > > configuration and taking screenshots. Life will be much easier for > > projects like ffmpeg and imagemagick if they don't have to implement > > compositor-specific code for capturing the screen! > > > > I want to start by establishing the requirements for these protocols. > > Broadly speaking, I am looking to create protocols for the following > > use-cases: > > > > - Screen capture > > - Output configuration > > - More detailed surface roles (should it be floating, is it a modal, > > > > does it want to draw its own decorations, etc) > > > > - Input device configuration > > a comment on the last point: input device configuration is either extremely > simple ("I want tapping enabled") or complex ("This device needs feature A > when condition B is met"). There is very little middle ground. > > as a result, you either have some generic protocol that won't meet the niche > cases or you have a complex protocol that covers all the niche cases but > ends up being just a shim between the underlying implementation and the > compositor. Such a layer provides very little benefit but restricts what > the compositor can add in the future. It's not a good idea, imo. I agree. I think that's something best left to the respective compositor specific configuration modules. Cheers Martin
Re: Collaboration on standard Wayland protocol extensions
On Tue, 29 Mar 2016 00:01:00 -0400 Drew DeVault said: > On 2016-03-29 11:31 AM, Carsten Haitzler wrote: > > my take on it is that it's premature and not needed at this point. in fact i > > wouldn't implement a protocol at all. *IF* i were to allow special access, > > i'd simply require to fork the process directly from compositor and provide > > a socketpair fd to this process and THAT fd could have extra capabilities > > attached to the wl protocol. i would do nothing else because as a > > compositor i cannot be sure what i am executing. i'd hand over the choice > > of being able to execute this tool to the user to say ok to and not just > > blindly execute anything i like. > > I don't really understand why forking from the compositor and bringing > along the fds really gives you much of a gain in terms of security. Can why? there is no way a process can access the socket with privs (even know the extra protocol exists) unless it is executed by the compositor. the compositor can do whatever it deems "necessary" to ensure it executes only what is allowed. eg - a whitelist of binary paths. i see this as a lesser chance of a hole. > you elaborate on how this changes things? I should also mention that I > don't really see the sort of security goals Wayland has in mind as > attainable until we start doing things like containerizing applications, > in which case we can eliminate entire classes of problems from this > design. certain os's do this already - tizen does. we use smack labels. this is why i care so much about application isolation and not having anything exposed to an app that it doesn't absolutely need. :) so i am coming from the point of view of "containering is solved - we need to not break that in wayland" :) > > all a compositor has to do is be able to capture a video stream to a file. > > you can ADD watermarking, sepia, and other effects later on in a video > > editor. 
next you'll tell me gimp is incapable of editing image files so we > > need programmatic access to a digital cameras ccd to implement > > effects/watermarking etc. on photos... > > I'll remind you again that none of this supports the live streaming > use-case. i know - but for just capturing screencasts, adding watermarks etc. - all you need is to store a stream - the rest can be post-processed. > > > currently possible with ffmpeg. How about instead we make a simple > > > wayland protocol extension that we can integrate with ffmpeg and OBS and > > > imagemagick and so on in a single C file. > > > > i'm repeating myself. there are bigger fish to fry. > > I'm repeating myself. Fry whatever fish you want and backlog this fish. > > > eh? ummm that is what happens - unless you close the lid, then internal > > display is "disconnected". > > I'm snipping out a lot of the output configuration related stuff from > this response. I'm not going to argue very hard for a common output > configuration protocol. I've been trying to change gears on the output > discussion towards a discussion around whether or not the > fullscreen-shell protocol supports our needs and whether or how it needs > to be updated wrt permissions. I'm going to continue to omit large parts > of your response that I think are related to the resistance to output > configuration, let me know if there's something important I'm dropping > by doing so. why do we need the fullscreen shell? that was intended for environments where apps are only ever fullscreen from memory. xdg shell has the ability for a window to go fullscreen (or back to normal) this should do just fine. :) sure - let's talk about this stuff - fullscreening etc. > > a protocol with undefined metadata is not a good protocol. 
it's now goes > > blobs of data that are opaque except to specific implementations., this > > will mean that other implementations eventually will do things like strip > > it out or damage it as they don't know what it is nor do they care. > > It doesn't have to be undefined metadata. It can just be extensions. A > protocol with extensions built in is a good protocol whose designers had > foresight, kind of like the Wayland protocol we're all already making > extensions for. yeah - but you are creating objects (screens) with no extended data - or modifying them. you don't have or lose the data. :) let's talk about the actual apps surfaces and where they go - not configuration of outputs. :) > > but it isn't the user - it's some game you download that you cannot alter > > the code or behaviour of that then messes everything up because its creator > > only ever had a single monitor and didn't account for those with 2 or 3. > > But it _is_ the user. Let the user configure what they want, however > they want, and make it so that they can both do this AND deny crappy > games the right to do it as well. This applies to the entire discussion > broadly, not necessarily just to the output configuration bits (which I > retract). > > > because things like