On Mon, Oct 28, 2013 at 9:10 AM, Neil Roberts <n...@linux.intel.com> wrote:
> Just to be clear, I think the discussion about whether to queue release
> events is no longer related to implementing eglSwapInterval(0), so it
> shouldn't hold up the patch. As far as I understand, if you want to
> render at the maximum rate you need four buffer slots, and being able
> to disable the queuing mechanism isn't going to affect that. If the
> device can't handle four buffers then applications simply can't use
> eglSwapInterval(0) when rendering fullscreen. Increasing the number of
> buffer slots doesn't affect the number of buffers that will actually
> be used in the normal case of non-fullscreen applications, which
> should continue to use just two buffers.
>
> I agree that we should probably do something about the event queuing
> anyway. Currently, if a fullscreen application goes idle after drawing
> its last frame, it will never get the release event for the buffer
> because nothing will flush the queue. This would deny the application
> a chance to free the buffer. However, I don't know if having a
> mechanism to explicitly disable the queuing is the right answer in
> this case; it might make more sense for the compositor to ensure that
> it always eventually flushes the queue instead of keeping it
> indefinitely. My previous patch to disable the queuing when there are
> no frame callbacks could still be a good way to achieve that.

Yes, I think we should find some way to ensure that the queue gets
flushed eventually. Your patches may work for this. Another approach
would be to simply force-flush the queue of every client every once in
a while. In any case, I think that's a Weston implementation issue, not
a fundamental protocol issue.

Another option would be to detect when an application goes from being
in its own plane to the primary plane (or any other operation that
would cause the number of required buffers to decrease) and make sure
the buffer releases get posted in that case.

> Regards,
> - Neil
>
> Daniel Stone <dan...@fooishbar.org> writes:
>
> > Hi,
> >
> > On 28 October 2013 11:19, Tomeu Vizoso <to...@tomeuvizoso.net> wrote:
> >> I'm still concerned about platforms with high-resolution displays
> >> but relatively little memory.
> >>
> >> I'm thinking of the RPi, but my understanding is that Android goes
> >> to great lengths to reduce the number of buffers that clients have
> >> to keep, because of general memory consumption, but also because
> >> scanout buffers are precious when you try to get the smoothest
> >> experience possible on these phones.
> >>
> >> I think we should still consider adding a flag through which the
> >> client can tell the compositor to send the release events
> >> immediately instead of queuing them.
> >>
> >> Otherwise, the compositor is making a very broad assumption about
> >> the client's inner workings if it assumes that release events can
> >> be queued without a negative impact on performance.

I fail to see how this is such a broad assumption. If we don't want to
make assumptions, we should just post the event every time. Admittedly,
I don't know what the inside of your RPi EGL implementation looks like.
However, it costs almost nothing to call wl_display.sync after every
attach/commit, and it *guarantees* that you get the events. You don't
have to continuously sync, just sync after every attach/commit. While
it may be somewhat non-obvious, I don't see how calling sync once per
frame is any worse than setting some flag somewhere. Unless, of course,
you want the event absolutely as soon as it happens.
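For illustration, here is a minimal sketch of that sync-after-commit
pattern against the stock libwayland-client API. The helper and
listener names (register_buffer, swap_buffer, buffer_release,
sync_done) are hypothetical, and display setup and error handling are
omitted:

#include <stdint.h>
#include <wayland-client.h>

static int buffer_busy;

/* wl_buffer.release may be queued by the compositor; the sync
 * round-trip in swap_buffer() ensures it is eventually delivered
 * even if the client then goes idle. */
static void buffer_release(void *data, struct wl_buffer *buffer)
{
    buffer_busy = 0;
}

static const struct wl_buffer_listener buffer_listener = {
    .release = buffer_release,
};

static void sync_done(void *data, struct wl_callback *callback,
                      uint32_t serial)
{
    wl_callback_destroy(callback);
}

static const struct wl_callback_listener sync_listener = {
    .done = sync_done,
};

/* Called once, right after the wl_buffer is created; a listener can
 * only be added to a proxy a single time. */
static void register_buffer(struct wl_buffer *buffer)
{
    wl_buffer_add_listener(buffer, &buffer_listener, NULL);
}

static void swap_buffer(struct wl_display *display,
                        struct wl_surface *surface,
                        struct wl_buffer *buffer)
{
    struct wl_callback *cb;

    buffer_busy = 1;
    wl_surface_attach(surface, buffer, 0, 0);
    wl_surface_commit(surface);

    /* One sync per frame: by the time sync_done fires, any events
     * the compositor had queued for this client (including
     * wl_buffer.release) have been dispatched. */
    cb = wl_display_sync(display);
    wl_callback_add_listener(cb, &sync_listener, NULL);
    wl_display_flush(display);
}

The extra request costs one round trip's worth of protocol traffic per
frame, which is the overhead being weighed here against adding a flag
to the protocol.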
I'd like to see an actual use-case where doing so would save you a
buffer.

Thanks,
--Jason Ekstrand