Raster,

On Mon, Sep 2, 2013 at 8:19 PM, Carsten Haitzler <ras...@rasterman.com> wrote:
> On Mon, 2 Sep 2013 11:27:16 -0300 Ulisses Furquim <uliss...@gmail.com> said:
>
>> Raster,
>>
>> On Sat, Aug 31, 2013 at 1:31 AM, Carsten Haitzler <ras...@rasterman.com>
>> wrote:
>> > On Fri, 30 Aug 2013 10:58:55 -0300 Ulisses Furquim <uliss...@gmail.com>
>> > said:
>> >
>> >> Raster,
>> >>
>> >> On Fri, Aug 30, 2013 at 5:11 AM, Carsten Haitzler <ras...@rasterman.com>
>> >> wrote:
>> >> > On Thu, 29 Aug 2013 10:26:50 -0300 Ulisses Furquim <uliss...@gmail.com>
>> >> > said:
>> >> >
>> >> >> Raster,
>> >> >>
>> >> >> On Thu, Aug 29, 2013 at 9:18 AM, Carsten Haitzler  - Enlightenment Git
>> >> >> <no-re...@enlightenment.org> wrote:
>> >> >> > raster pushed a commit to branch master.
>> >> >> >
>> >> >> > commit 42a46214c4f9b35c0e1f5a84c56ea76ba2235eae
>> >> >> > Author: Carsten Haitzler (Rasterman) <ras...@rasterman.com>
>> >> >> > Date:   Thu Aug 29 21:18:04 2013 +0900
>> >> >> >
>> >> >> >     other async render issue - sync ALL rendering canvases, not just one
>> >> >> > ---
>> >> >> >  src/lib/evas/canvas/evas_render.c          |  2 ++
>> >> >> >  src/lib/evas/common/evas_thread_render.c   | 21 ++++++++++++++++++++-
>> >> >> >  src/lib/evas/include/evas_common_private.h |  4 +++-
>> >> >> >  3 files changed, 25 insertions(+), 2 deletions(-)
>> >> >> >
>> >> >> > diff --git a/src/lib/evas/canvas/evas_render.c b/src/lib/evas/canvas/evas_render.c
>> >> >> > index b04d606..4fbe9e2 100644
>> >> >> > --- a/src/lib/evas/canvas/evas_render.c
>> >> >> > +++ b/src/lib/evas/canvas/evas_render.c
>> >> >> > @@ -2235,6 +2235,7 @@ _canvas_render_dump(Eo *eo_e EINA_UNUSED, void *_pd, va_list *list EINA_UNUSED)
>> >> >> >     Evas_Public_Data *e = _pd;
>> >> >> >     Evas_Layer *lay;
>> >> >> >
>> >> >> > +   evas_thread_queue_block();
>> >> >> >     evas_render_rendering_wait(e);
>> >> >> >     evas_cache_async_freeze();
>> >> >>
>> >> >> This might deadlock. If the queue is not empty and the commands for
>> >> >> e are still in there, then we'll wait forever in _rendering_wait() in
>> >> >> the main thread and the render thread will sleep forever trying to
>> >> >> acquire the evas_thread_block_lock. Right?
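
To make the ordering I was describing concrete, here is a minimal standalone
sketch of the deadlock (simplified stand-ins, not the actual evas code; the
poll loop just stands in for the real wait):

    #include <pthread.h>
    #include <stdbool.h>
    #include <unistd.h>

    static pthread_mutex_t block_lock = PTHREAD_MUTEX_INITIALIZER;
    static volatile bool rendering = true;   /* stand-in for e->rendering */

    static void *render_thread(void *arg)
    {
       (void)arg;
       pthread_mutex_lock(&block_lock);   /* sleeps forever: main thread holds it */
       rendering = false;                 /* e's queued commands would finish here */
       pthread_mutex_unlock(&block_lock);
       return NULL;
    }

    int main(void)
    {
       pthread_t t;
       pthread_mutex_lock(&block_lock);   /* like evas_thread_queue_block() */
       pthread_create(&t, NULL, render_thread, NULL);
       while (rendering) usleep(1000);    /* like evas_render_rendering_wait(e) */
       pthread_mutex_unlock(&block_lock); /* never reached */
       pthread_join(t, NULL);
       return 0;
    }

Neither thread can make progress: the main thread waits on a flag that only
the render thread can clear, and the render thread waits on a lock that only
the main thread can release.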
>> >> >
>> >> > fixed. i didn't notice evas->rendering was set in evas_render as opposed
>> >> > to the rendering handle in the async thread. :)
>> >>
>> >> Well, yes, your change solves the deadlock. But I want to know what you
>> >> really want. Do you want a call to sync all canvases? This lock is not
>> >> doing that. We'd need to wait for all canvases' rendering flags to be
>> >> set to false and then return. That way we wouldn't have any pending
>> >> commands in the render thread.
>> >
>> > in doing the async rendering, the code to dump data was totally disabled.
>> > image data with refcounts > 0 was now never dumped. why? as a fix for "i
>> > have a background thread still using that data to render while my
>> > mainloop is dumping". the "wait for evas to finish" waits for the GIVEN
>> > evas to finish... ONLY that one. but the background async renderer may
>> > march on to the NEXT canvas wanting a render in the queue, and it doesn't
>> > wait for that. it continues, and we just dumped images from memory it
>> > needs, as the data load is done before the async renderer begins and it
>> > assumes the data is there and ready. so this block basically forces the
>> > dump call to wait for the async renderer to finish going through ALL
>> > queued rendering so far - not just one canvas, but all of them, since
>> > it's a mutex lock around that block of code. this then allows all image
>> > data to be dumped from memory (even refcount > 0), which is what the
>> > original intent and code was doing. now it's safe, and any new renders
>> > come from the mainloop after the dump and they can/will re-load data from
>> > files if needed for the async renderer to work properly. no - it's not
>> > pretty, but it fixed the dump functionality.
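
For reference, this is roughly the shape I understand that fix to have
(simplified stand-ins, not the actual evas_thread_render.c code; queue_pop()
and do_render() are hypothetical helpers):

    #include <pthread.h>

    static pthread_mutex_t evas_thread_block_lock = PTHREAD_MUTEX_INITIALIZER;

    /* called from the main loop around the dump */
    void evas_thread_queue_block(void)   { pthread_mutex_lock(&evas_thread_block_lock); }
    void evas_thread_queue_unblock(void) { pthread_mutex_unlock(&evas_thread_block_lock); }

    /* hypothetical stand-ins for the real command queue handling */
    static void *queue_pop(void)      { return (void *)0; }
    static void  do_render(void *cmd) { (void)cmd; }

    /* render thread: queued work is processed inside the lock, so a dump
     * that holds the lock keeps the thread from starting the next canvas
     * until the dump is finished (and vice versa). */
    void *render_thread_loop(void *arg)
    {
       void *cmd;
       (void)arg;
       while ((cmd = queue_pop()))
         {
            pthread_mutex_lock(&evas_thread_block_lock);
            do_render(cmd);
            pthread_mutex_unlock(&evas_thread_block_lock);
         }
       return NULL;
    }

Which is where my worry below comes from: the lock only excludes whatever the
render thread is doing at that instant, it doesn't drain the whole queue.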
>>
>> This problem is now hard to reproduce, but we still have a race with
>> your fix. That lock doesn't guarantee what you want. If you happen to
>> acquire the lock in the main thread just before the render thread tries
>> to acquire it, then after you clean up and release the lock the render
>> thread will acquire it and we have a problem. Even though it's still
>> difficult to trigger, this race is more likely to happen if the canvas
>> you passed to _rendering_wait() is not rendering at that moment.
>>
>> Moreover, I don't like this lock in the render thread and being able
>> to block/unblock it. It's a deadlock waiting to happen. I'll see what
>> I can do here, but we need a proper "sync all canvases" function. It
>> should be simple if we can just wait on the rendering flags for all
>> canvases.
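
Something along these lines is what I had in mind (a sketch with made-up
names - canvases, canvas->rendering, render_done_cond; the real thing would
hook into evas' own canvas list and locking):

    #include <pthread.h>
    #include <stdbool.h>

    /* hypothetical per-canvas state and global list, guarded by render_lock */
    typedef struct _Canvas { bool rendering; struct _Canvas *next; } Canvas;

    static Canvas         *canvases = NULL;
    static pthread_mutex_t render_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  render_done_cond = PTHREAD_COND_INITIALIZER;

    /* render thread: flag a canvas as done and wake any waiter */
    void canvas_render_done(Canvas *c)
    {
       pthread_mutex_lock(&render_lock);
       c->rendering = false;
       pthread_cond_broadcast(&render_done_cond);
       pthread_mutex_unlock(&render_lock);
    }

    /* main loop: sleep until no canvas is rendering any more */
    void evas_sync_all_canvases(void)
    {
       pthread_mutex_lock(&render_lock);
       for (;;)
         {
            bool busy = false;
            for (Canvas *c = canvases; c; c = c->next)
              if (c->rendering) { busy = true; break; }
            if (!busy) break;
            pthread_cond_wait(&render_done_cond, &render_lock);
         }
       pthread_mutex_unlock(&render_lock);
    }

No lock is held across the render thread's actual work, there is just a
condition for the main loop to sleep on.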
>
> that was my other option for the fix. but then i'd have to track a list of
> all canvases etc. etc., so this way was simpler. :) i was taking the view
> that since it is the mainloop that triggers the render and also does the
> dump - any rendering will have been triggered long before this. we could
> add a counter to the cond wakeups. ie if the mainloop issues a cond wakeup,
> the render thread needs to decrement it when it's done and back waiting
> (another lock plus counter). then we lock+unlock (to force a big sleeping
> block if rendering is busy) and just check the counter is 0. since the
> mainloop is the only one that ++'s the counter, it can only go down now...
> until it hits 0.
> - if not, we have a race

I just did that, committed and tested a bit. Let me know if you bump
into any problems. The sync I added waits until the render queue is
empty. Right now that's the only way to do it properly, and sleeping is
good so the render thread can finish its work and we can move on in the
main thread.
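
Roughly the shape of it, if you want to sanity-check the idea without digging
into the commit (simplified names - render_queue_len, queue_lock, queue_cond -
not the code I pushed verbatim):

    #include <pthread.h>

    /* hypothetical queue bookkeeping; the real one lives in evas_thread_render.c */
    static int             render_queue_len = 0;
    static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  queue_cond = PTHREAD_COND_INITIALIZER;

    /* main loop: enqueue a render and wake the render thread */
    void render_queue_append(void)
    {
       pthread_mutex_lock(&queue_lock);
       render_queue_len++;
       pthread_cond_broadcast(&queue_cond);
       pthread_mutex_unlock(&queue_lock);
    }

    /* render thread: after finishing a queued render, drop the count
     * and wake anyone waiting for the queue to drain */
    void render_queue_done(void)
    {
       pthread_mutex_lock(&queue_lock);
       render_queue_len--;
       pthread_cond_broadcast(&queue_cond);
       pthread_mutex_unlock(&queue_lock);
    }

    /* the sync: sleep until the render thread has drained everything */
    void render_queue_sync(void)
    {
       pthread_mutex_lock(&queue_lock);
       while (render_queue_len > 0)
         pthread_cond_wait(&queue_cond, &queue_lock);
       pthread_mutex_unlock(&queue_lock);
    }

Only the mainloop increments, only the render thread decrements, and the sync
just sleeps on the condition until the count reaches zero.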

Regards,

-- Ulisses
