On 16.6.2015 10:36, Jakub Jermar wrote:
> Hi Jan,
> 
> On 5.6.2015 9:38, Jan Mareš wrote:
>> Any input on that piece of code I sent in the previous message?
>>
>> On 4. 6. 2015 at 15:24, "Jan Mareš" <[email protected]> wrote:
>>
>>     I see; yes, I realized my suggestion was wrong yesterday when I
>>     had a better look at it after I returned to my PC.
>>
>>     Anyway, I promised you that I would try to reproduce the problems
>>     with threads and fibrils on plain fibrils. I think I have it, unless
>>     there is a bug in my code. Have a look at [1]. It's the smallest
>>     extract I was able to create to reproduce the race condition I'm
>>     running into. If I set PREEMPTIVNESS to 0, everything works fine;
>>     if I set it to 1, I start to get page faults. It seems to me that
>>     async_futex is not doing its job, given the stack traces from
>>     taskdump. But stack traces can be misleading as well.
>>
>>     [1] http://bazaar.launchpad.net/~maresja1/helenos/qemu_porting/view/2212/uspace/app/posixtest/posixtest.c
> 
> It appears that the problem is caused by the fact that, for some
> reason, not just two manager fibrils are created, but literally
> hundreds of them. And each of them ups the async_futex, thus rendering
> it useless for mutual exclusion. For example, in async_manager_fibril,
> I am observing an up count of 180 for async_futex, while the number of
> manager fibrils is 338 and growing.

Alright, this turned out to be a red herring caused by running printf()
from a manager fibril. Please disregard. Real fix coming soon.
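
For the record, here is why the numbers quoted above looked so
alarming. A futex used as a mutex starts with a count of 1, so every
extra up() inflates the count and lets that many additional fibrils
through down() without ever blocking. A minimal standalone sketch of
that arithmetic (a toy model only, not the actual libc futex code;
toy_futex_t, toy_trydown() and toy_up() are made up for illustration):

#include <stdio.h>

/* Toy semaphore-style futex: count > 0 means the lock is free. */
typedef struct {
    int count;
} toy_futex_t;

/* Try to acquire; returns 1 on success, 0 where a real futex would block. */
static int toy_trydown(toy_futex_t *f)
{
    if (f->count > 0) {
        f->count--;
        return 1;
    }
    return 0;
}

/* Release (or, here, over-release) the futex. */
static void toy_up(toy_futex_t *f)
{
    f->count++;
}

int main(void)
{
    toy_futex_t async_futex = { .count = 1 };

    /* Each spurious manager fibril performs one extra up()... */
    for (int i = 0; i < 180; i++)
        toy_up(&async_futex);

    /* ...so this many lockers now enter without ever blocking. */
    int inside = 0;
    while (toy_trydown(&async_futex))
        inside++;

    printf("lockers inside the critical section: %d\n", inside);
    return 0;
}

This prints 181: the one legitimate holder plus the 180 spurious ups.
With mutual exclusion broken to that degree, page faults like the ones
Jan reported would have been the expected symptom, had the observation
been real.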

Jakub


